Do all smooth functions have a Taylor expansion?
The propositions are:
1. $f(x)$ has a Taylor expansion if and only if the remainder of the Taylor polynomial converges to $0$.
2. A smooth function such as $f(x) = e^{-1/x^2}$ for $x>0$, $f(x) = 0$ for $x \leqslant 0$, does not have a Taylor expansion near $0$.
However, when I think about these propositions I find:
smooth $\Rightarrow$ a Taylor polynomial exists $\Rightarrow$ choose the Peano remainder $o(x^n)$ $\Rightarrow$ the remainder must converge to $0$ $\Rightarrow$ a smooth function must have a Taylor expansion near $0$.
I would appreciate it if someone could point out the error in this inference.
Sarakio
$\begingroup$ All smooth functions $f$ (even your example in $2.$) have a Taylor series: it is $\sum_{k \geq 0} \frac{f^{(k)}(0)}{k!}x^k$. But it need not converge to $f$ itself, as your example in $2.$ shows. In fact, it need not be convergent on any open interval. $\endgroup$
– Didier
$\begingroup$ The Taylor expansion of $e^{-x^2}$ is indeed $0$. It does exist. The remainder is exactly $e^{-x^2}$. $\endgroup$
For every smooth function $f$, you can consider the Taylor series around $0$ $$\sum_{n \geq 0} \frac{f^{(n)}(0)}{n!}x^n$$
However, there are in general two possible obstructions to linking this series directly to the function:
It is possible that the radius of convergence is $0$. Actually, one can prove that for every sequence $(a_n)$ there exists a smooth function $f$ satisfying $f^{(n)}(0)=a_n$. Therefore the Taylor series can be any series whatsoever, so it can be a series which converges nowhere.
It is possible that the Taylor series converges everywhere, but is not equal to $f$ in a neighbourhood of $0$. The classical example is the one you mention with $e^{-1/x^2}$: its Taylor series is the null series, yet the function is not identically $0$ on any neighbourhood of $0$.
Finally, you have the following criterion for a function to be analytic on an interval $I$: a smooth function $f : I \rightarrow \mathbb{R}$ is analytic on $I$ iff for every $[a,b]\subset I$ there exist $M \in \mathbb{R}$ and $\alpha > 0$ such that for all $n \in \mathbb{N}$, $$\|f^{(n)}\|_{\infty,[a,b]} \leq Mn!\alpha^n$$
TheSilverDoe
$\begingroup$ Thanks for your reply! It is proved that not every smooth function has a Taylor expansion. However, I still have doubts. Please see my answer below. $\endgroup$
– Sarakio
$\begingroup$ Take $f(x) = e^{-x^2}$ for $x \not= 0$, $f(0) = 0$ as an example. Consider $R_{n}(x) = f(x) - \sum_{k = 0}^n \frac{f^{(k)}(0)}{k!}x^k$. On one hand, $f^{(k)}(0) = 0$, so $R_{n}(x) = f(x)$, and then $\lim\limits_{n\to+\infty}R_{n}(x) = f(x) \not= 0$ if $x \not= 0$; on the other hand, the smooth function $f(x)$ has a Maclaurin formula, so $R_{n}(x) = o(x^n)$ for every $n$, and then $\lim\limits_{n\to+\infty}R_{n}(x) = \lim\limits_{n\to+\infty}o(x^n) = 0$ if $|x| < 1$. Why are there two different results? $\endgroup$
$\begingroup$ The $o(x^n)$ is considered when $x$ tends to $0$, not when $n$ tends to $+\infty$. As you said, $R_n(x)=f(x)$ for every $n$, and $f(x)$ is indeed a $o(x^n)$ when $x$ tends to $0$. $\endgroup$
– TheSilverDoe
$\begingroup$ Thanks! Then the Peano remainder is not supposed to be used to decide whether a function has a Taylor expansion. $\endgroup$
$\begingroup$ You mean $e^{-1/x^2}$, rather than $e^{-x^2}$, which is obviously analytic. $\endgroup$
– pyon
Assume $f(x)$ is a smooth function and consider its Maclaurin formula;
then for every $n$: $f(x) = \sum_{k = 0}^n \frac{f^{(k)}(0)}{k!}x^k + o(x^n)$
Select a sufficiently small neighborhood of zero; then: $\lim\limits_{n\to+\infty}R_{n}(x) = \lim\limits_{n\to+\infty}(f(x) - \sum_{k = 0}^n \frac{f^{(k)}(0)}{k!}x^k)=\lim\limits_{n\to+\infty}o(x^n)=0$
Then the smooth function $f(x)=\lim\limits_{n\to+\infty}\sum_{k = 0}^n \frac{f^{(k)}(0)}{k!}x^k=\sum_{n \geq 0} \frac{f^{(n)}(0)}{n!}x^n$,
so the smooth function $f(x)$ has a Taylor expansion.
The above conclusion is wrong, because $e^{-1/x^2}$ is smooth and does not have a Taylor expansion. So what is the error in my inference?
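To see the two limits concretely, here is a small numeric sketch in Python (the function and variable names are illustrative). For the flat function, every remainder $R_n(x)$ equals $f(x)$, so it does not vanish as $n\to+\infty$ for fixed $x\neq 0$; yet $R_n(x)/x^n \to 0$ as $x\to 0$ for each fixed $n$, which is all the Peano form asserts.

import math

def f(x):
    # all derivatives of f vanish at 0, so every Maclaurin coefficient is 0
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# For fixed x, R_n(x) = f(x) for every n: the remainder never shrinks.
x = 0.5
print(f(x))  # this is R_n(0.5) for all n

# Peano's condition is about x -> 0 with n FIXED: R_n(x)/x^n -> 0.
for n in (1, 5, 10):
    for x in (1e-1, 1e-2, 1e-3):
        print(n, x, f(x) / x**n)  # tends to 0 as x shrinks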
Submissions 2106.10265v2
SciPost Submission Page
Liberating Confinement from Lagrangians: 1-form Symmetries and Lines in 4d N=1 from 6d N=(2,0)
by Lakshya Bhardwaj, Max Hubner, Sakura Schafer-Nameki
As Contributors: Lakshya Bhardwaj · Max Hubner
Arxiv Link: https://arxiv.org/abs/2106.10265v2 (pdf)
Date accepted: 2021-12-08
Submitted by: Hubner, Max
Submitted to: SciPost Physics
Academic field: Physics
High-Energy Physics - Theory
Approach: Theoretical
We study confinement in 4d N=1 theories obtained by deforming 4d N=2 theories of Class S. We argue that confinement in a vacuum of the N=1 theory is encoded in the 1-cycles of the associated N=1 curve. This curve is the spectral cover associated to a generalized Hitchin system describing the profiles of two Higgs fields over the Riemann surface upon which the 6d (2,0) theory is compactified. Using our method, we reproduce the expected properties of confinement in various classic examples, such as 4d N=1 pure Super-Yang-Mills theory and the Cachazo-Seiberg-Witten setup. More generally, this work can be viewed as providing tools for probing confinement in non-Lagrangian N=1 theories, which we illustrate by constructing an infinite class of non-Lagrangian N=1 theories that contain confining vacua. The simplest model in this class is an N=1 deformation of the N=2 theory obtained by gauging $SU(3)^3$ flavor symmetry of the $E_6$ Minahan-Nemeschansky theory.
Publication decision taken: accept
Editorial decision: For Journal SciPost Physics: Publish
(status: Editorial decision fixed and (if required) accepted by authors)
Submission & Refereeing History
Submission 2106.10265v2 on 3 September 2021
Report 1 submitted on 2021-10-03 17:12 by Anonymous
Author Reply by Dr Hubner on 2021-11-02
Reports on this Submission
Anonymous Report 1 on 2021-10-3 (Invited Report)
The paper introduces a method to detect confinement for some 4d N=1 theories obtained from class S theories. Since the method is based on the study of the geometry of the compactification, it could be applied also to non-Lagrangian theories.
Very technical presentation throughout the paper.
The paper addresses the problem of detecting confinement using a geometric method. More specifically, the authors consider four-dimensional N=1 deformations of N=2 class S theories, and argue that the behavior of vevs of the line operators of the resulting theory is encoded in the 1-cycles of the N=1 curve associated to the chosen vacuum. Most of the paper is devoted to checking that the geometric analysis leads to the same results as the field theory one. Following the formal definition of confinement for a gauge theory, they then conclude that this provides a method for detecting the "confining phase" of non-Lagrangian theories.
After minor revisions, I would recommend this paper for publication.
Requested changes
1) In Section 1, the authors state that their "considerations apply to general class S setup". Is it obvious that a more general setup would have the same behavior?
2) The authors propose a method for detecting confinement, and check in detail that it reproduces the results, known from a field-theoretical analysis, for the cases of $\mathfrak{su}(n)$ N=1 SYM and the theory studied by Cachazo-Seiberg-Witten. Both these theories have confining vacua. Would it be a good proof of concept for their method to consider a case where it is known from field theory that the N=1 deformation does not admit a confining vacuum?
3) Some minor additional points:
- There are a few mathematical notations/quantities that are used but then defined only later on. For instance, $\Lambda_{\mathcal{N}=2}$ is first used in p. 8, and then defined at p. 36. Similarly, the Pontryagin dual is already used at p. 15, but only defined at p. 18.
- The contents of footnote 3 p. 8 are not common knowledge for a non-expert reader. Could the authors provide a reference?
- At p. 17: "we will only study theories for which $\mathcal{L}$ is an abelian group under OPE of line operators". Is there a case in which it is not?
- The text around (3.5) p. 19 contains repetitions and colloquialisms: "is modded out by modding out... that are modded out..."
- Minor typos (understandable in a 83-page paper): p. 9 "led", p. 18 "where $\hat{\Lambda}$ is the Pontryagin dual of $\Lambda$", p. 18 footnote 9 "The Pontryagin dual group..."
validity: high
significance: high
originality: high
clarity: high
formatting: perfect
grammar: excellent
Author: Max Hubner on 2021-11-02 [id 1904]
(in reply to Report 1 on 2021-10-03)
We thank the referee for their valuable feedback and suggestions.
Regarding the first point raised in the report, we just wanted to highlight that one can also include regular punctures, and our considerations would apply in the same fashion as for irregular punctures. We have modified the sentence to make this clearer.
The second remark suggests adding non-confining examples. The current version of the paper deals in multiple instances (e.g. SO(3) SYM) with theories that have both confining and non-confining vacua. As for theories with exclusively non-confining vacua, a few examples like SU(N) with fundamental chirals come to mind, but such theories have a trivial 1-form symmetry to start with, and hence there is no scope for confinement. As far as theories having non-trivial 1-form symmetry and exclusively non-confining vacua are concerned, we are afraid we do not know any Lagrangian examples that 1. lie in the class of theories we consider, and 2. do not already arise by choosing a polarization in su(n) SYM.
Regarding footnote 3 on p. 8, we added a reference. Regarding the sentence on p. 17, we have added footnote 8. Finally, the referee kindly pointed out many typos, which have been corrected.
We hope that the revisions will make the paper suitable for publication.
Fluids Questions
1) A large piece of cork weighs $0.285\,{\rm N}$ in air. When held submerged underwater by a spring scale, the scale reads $0.855\,{\rm N}$. Find the density of the cork.
Draw a free body diagram as shown in the figure:
Apply Newton's 2${}^{nd}$ law to the cork : $\Sigma F_y=0\Rightarrow B=mg+T$
Where $T$ is tension in the spring scale and $B$ is buoyancy force.
Buoyancy force: when a rigid object is submerged in a fluid (completely or partially), there exists an upward force on the object that is equal to the weight of the fluid displaced by the object, i.e. $B=\rho_{fluid}V_{submerged}\,g$
The mass of the cork is known (from the weight measured in air), so using the buoyancy force we can find its volume.
\[B=mg+T=\rho_{water}V_{cork}g\]
\[\Rightarrow 0.285+0.855=1000\times 9.8\times V_{cork}\]
\[V_{cork}=1.16\times {10}^{-4}\ {{\rm m}}^{{\rm 3}}\]
Given the $V_{cork}$, one can determine the density of the cork by $$\rho_{cork}=\frac{m_{cork}}{V_{cork}}=\frac{W_{cork}}{gV_{cork}}$$
\[\rho_{cork}=\frac{0.285}{9.8\times 1.16\times {10}^{-4}}=250.7\ \frac{{\rm kg}}{{{\rm m}}^{{\rm 3}}}\]
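As a quick numerical check of this solution (a Python sketch; the variable names are mine):

g, rho_w = 9.8, 1000.0        # gravity (m/s^2), water density (kg/m^3)
W, T = 0.285, 0.855           # weight in air, spring-scale reading (N)

B = W + T                     # buoyant force balances weight plus tension
V = B / (rho_w * g)           # submerged volume = volume of the cork
print(V, (W / g) / V)         # ~1.16e-4 m^3 and ~250 kg/m^3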
2) An object of volume $0.2\, {{\rm m}}^{{\rm 3}}$ is completely submerged in a fluid with density $500\, {\rm kg/}{{\rm m}}^{{\rm 3}}$. Compute the buoyant force exerted by the fluid on the object.
By the Archimedes principle, the buoyant force is equal to the weight of the displaced fluid. In this case, since the object is totally submerged, the volume of the displaced fluid equals the volume of the object. Recall that the weight of an object is its mass times $g$, and the mass is the density times the volume. So the buoyant force is
\[F_B=\underbrace{\rho_{fluid}V_{submerged}}_{M}g=500\times 0.2\times 9.8=980\ {\rm N}\]
3) A $1.5\, {\rm kg}$ block of wood floats on water with $68$ percent of its volume submerged. A lead block is placed on the wood, fully submerging the wood to a depth where the lead remains entirely out of the water. Find the mass of the lead block.
We must use the buoyancy force and $\Sigma F_y=0$ in two stages:
Stage $1$: before loading the lead block
$\Sigma F_y=0\Rightarrow F_B-m_{block}g=0\Rightarrow \rho_{water}V_{sub}g=m_{block}g$
\[\Rightarrow \ \rho_{water}\left(0.68\ V_{block}\right)=m_{block}\]
\[\rho_{water}V_{block}=\frac{m_{block}}{0.68}\]
Stage $2$: after loading
\[\Sigma F_y=0\Rightarrow \left(m_{lead}+m_{block}\right)g=F_B=\rho_{water}V_{block}g\]
\[\Rightarrow m_{lead}=\rho_{water}V_{block}-m_{block}\]
Now by substituting from stage $1$ into the stage $2$, we obtain
\[\Rightarrow m_{lead}=\frac{m_{block}}{0.68}-m_{block}=1.5\times 0.47=0.71\ {\rm kg}\]
4) A spherical shell of copper with an outer diameter of $12\, {\rm cm}$ floats on water with half its volume above the water's surface. Determine the inner diameter of the shell. The cavity inside the spherical shell is empty. [$\rho_{water}=1000\,\frac{{\rm kg}}{{{\rm m}}^{{\rm 3}}}$, $\rho_{copper}=8940\,\frac{{\rm kg}}{{{\rm m}}^{{\rm 3}}}$]
The volume of displaced water is equal to the volume of the object that is submerged i.e. $\frac{1}{2}\left(\frac{4}{3}\pi R^3_1\right)$.
Because the object is motionless (in equilibrium), $\Sigma F_y=0$
\[mg=F_B=\rho_wV_{subm}g\]
\[\rho_{copper}V_{shell}g=\rho_wV_{subm}g\]
\[\rho_{copper}\frac{4}{3}\pi\left(R^3_1-R^3_2\right)=\rho_w\frac{1}{2}\left(\frac{4}{3}\pi R^3_1\right)\Rightarrow \ R^3_2=R^3_1\left(1-\frac{1}{2}\frac{\rho_w}{\rho_{copper}}\right)\]
\[\therefore \frac{R_2}{R_1}={\left(1-\frac{1}{2}\frac{\rho_w}{\rho_{copper}}\right)}^{\frac{1}{3}}=0.981\]
Here $\frac{4}{3}\pi(R^3_1-R^3_2)$ is the volume of the spherical shell with outer radius $R_1$ and inner radius $R_2$. Since the diameters are in the same ratio as the radii, the inner diameter is $d_2=0.981\times 12\ {\rm cm}\approx 11.8\ {\rm cm}$.
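The same arithmetic as a short Python check (a sketch; names are mine):

rho_w, rho_cu = 1000.0, 8940.0
d1 = 0.12                                   # outer diameter (m)
d2 = d1 * (1 - 0.5 * rho_w / rho_cu) ** (1 / 3)
print(d2)                                   # ~0.118 m, i.e. ~11.8 cm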
5) A solid cylinder (radius $=0.150\,{\rm m}$, height $=0.120\,{\rm m}$) has a mass of $6.5\,{\rm kg}$. This cylinder is floating in water. Then oil ($\rho=725\, {\rm kg/}{{\rm m}}^{{\rm 3}}$) is poured on top of the water until the situation shown in the drawing results. How much of the height of the cylinder is in the oil?
Here there are two buoyant forces acting on the cylinder, which must be balanced by the weight of the cylinder (since the cylinder is in equilibrium), i.e. $\Sigma F_y=0$
\[F_{Bo}+F_{Bw}=W_{cyl}\]
\[\rho_oV_{sub,o}g+\rho_wV_{sub,w}g=m_{cyl}g\]
Where the above volumes are submerged volumes of the cylinder in the oil and water.
Because the cylinder is totally submerged in the fluids, $V_{sub,o}+V_{sub,w}=V_{cyl}$.
Therefore from above equations, we obtain
\[\rho_o\left(V_{cyl}-V_{sub,w}\right)+\rho_wV_{sub,w}=m_{cyl}\]
\begin{align*}
\Rightarrow V_{sub,w}&=\frac{m_{cyl}-\rho_oV_{cyl}}{\rho_w-\rho_o}\\
&=\frac{6.5-725\times \left(\pi{\left(0.15\right)}^2\times 0.12\right)}{1000-725}\\
&=1.27\times {10}^{-3}{{\rm m}}^{{\rm 3}}
\end{align*}
Use the definition of the volume of the cylinder to find its height
\[h_w=\frac{V_{sub,w}}{\pi r^2}=\frac{1.27\times {10}^{-3}}{\pi\times {\left(0.15\right)}^2}=0.018\ {\rm m}\]
The sum of the submerged heights in the oil and water is the height of the cylinder.
\[h_w+h_o=H_{cyl}\Rightarrow h_o=0.12-0.018=0.102\ {\rm m}\]
$h_o$ is the height of the cylinder floating in the oil.
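A quick numerical check of both heights (a Python sketch; names are mine):

import math

g, rho_w, rho_o = 9.8, 1000.0, 725.0        # SI units
r, H, m = 0.150, 0.120, 6.5                 # cylinder radius, height, mass

V = math.pi * r**2 * H                      # cylinder volume
V_w = (m - rho_o * V) / (rho_w - rho_o)     # volume submerged in water
h_w = V_w / (math.pi * r**2)
print(h_w, H - h_w)                         # ~0.018 m in water, ~0.102 m in oil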
6) Suppose that the radius of the pulmonary artery decreases to $80\%$ of the original radius owing to cholesterol buildup. By how much would the pressure difference change to maintain the original flow rate of blood through the artery? (A pressure difference of $450\, {\rm Pa}$ exists across the length of the pulmonary artery.)
Because the flow rate through the artery must stay the same, if we call $I_V$ the volume flow rate then Poiseuille's law states
\[\Delta P=\frac{8\eta L}{\pi r^4}I_V\]
\[{\left(I_V\right)}_{obst}={\left(I_V\right)}_{free}\to \frac{\pi R^4_{obst}\Delta P_{obst}}{8\eta L}=\frac{\pi R^4_{free}\Delta P_{free}}{8\eta L}\]
\[R^4_{obst}\Delta P_{obst}=R^4_{free}\Delta P_{free}\]
\[\Delta P_{obst}=\Delta P_{free}{\left(\frac{R_{free}}{R_{obst}}\right)}^4=450{\left(\frac{1}{0.8}\right)}^4\approx 1098\ {\rm Pa}\]
\[\therefore \Delta P_{obst}-\Delta P_{free}=\left(1098-450\right)=648\ {\rm Pa}\]
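Checking the numbers (a Python sketch):

dP_free = 450.0                     # Pa, across the unobstructed artery
ratio = (1 / 0.8) ** 4              # (R_free / R_obst)^4 from Poiseuille's law
dP_obst = dP_free * ratio
print(dP_obst, dP_obst - dP_free)   # ~1098.6 Pa total, ~648.6 Pa increase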
7) The aorta has an inner radius of approximately $0.25\, {\rm cm}$. The average speed of blood through the aorta is $1.0\, {\rm m/s}$. A capillary has an inner radius of approximately $5$ microns ($5\times {10}^{-6}\, {\rm m}$), and the average speed of blood through a capillary is $1.0\, {\rm cm/s}$. Assuming that all the blood flowing through the aorta flows through the capillaries, determine the number of capillaries in the circulatory system.
Because all the blood flowing through the aorta flows through the capillaries, the volume flux is conserved. By definition, the volume flux ($I_V$) is equal to the cross-sectional area of the tube times the speed of the fluid through it.
Thus, in this case we have
\[{\left(I_V\right)}_{aorta}={\left(I_V\right)}_{capillaries}\]
\[{\left(A\times v\right)}_{aorta}=N{\left(A\times v\right)}_{capillary}\]
\[N_{capillary}=\frac{A_{aorta}\times v_{aorta}}{A_{capillary}\times v_{capillary}}=\frac{\left(\pi r^2_{aorta}\times v_{aorta}\right)}{\pi r^2_{capillary}\times v_{capillary}}\]
\[=\frac{{\left(0.25\times {10}^{-2}\right)}^2\times 1}{{\left(5\times {10}^{-6}\right)}^2\times {10}^{-2}}=2.5\times {10}^7\ \]
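The same computation in a few lines of Python (a sketch):

r_a, v_a = 0.25e-2, 1.0        # aorta: radius (m) and mean speed (m/s)
r_c, v_c = 5e-6, 1e-2          # capillary: radius (m) and mean speed (m/s)

N = (r_a**2 * v_a) / (r_c**2 * v_c)   # conservation of volume flux
print(N)                              # ~2.5e7 capillaries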
8) The pulmonary artery, which connects the heart to the lungs, is $8.5\, {\rm cm}$ long and has an inner radius of $2.4\, {\rm mm}$. If a pressure difference of $450\, {\rm Pa}$ exists across the length of the pulmonary artery, what is the average speed of the blood flowing through it? ($\eta_{Blood}=0.004\ {\rm Pa\cdot s}$)
Use Poiseuille's law to determine how the pressure drop relates to the flow rate $I_V$:
\[I_V=Av=\frac{\pi R^4\Delta P}{8\eta L}\]
\[v=\frac{\pi R^4\Delta P}{8\eta L}\frac{1}{A}=\frac{\pi R^4\Delta P}{8\eta L}\frac{1}{\pi R^2}=\frac{R^2\Delta P}{8\eta L}\]
\[=\frac{{\left(2.4\times {10}^{-3}\right)}^2\times 450}{8\times 4\times {10}^{-3}\times 8.5\times {10}^{-2}}\approx 0.95\ \frac{{\rm m}}{{\rm s}}\]
9) Water flows straight downward out of the end of a circular pipe of radius $r=2\,{\rm mm}$ with velocity $v_0=1\, {\rm m/s}$. It flows in a continuous column, without breaking into drops.
(a) What is the flow rate in this column?
(b) What is the radius of the column a distance $H=\frac{1}{2}\,{\rm m}$ below the pipe?
(a) In a incompressible fluid, the volume flow rate is defined as $I_V=Av$, where $v$ is the velocity of the fluid perpendicular to the cross-sectional area $A$. Therefore,
\[I_V=A_{disk}v=\left(\pi r^2\right)v=\pi{\left(0.002\right)}^2\left(1\right)=1.26\times {10}^{-5}\ \frac{{{\rm m}}^{{\rm 3}}}{\rm s}\]
(b) Use the Bernoulli equation, $P+\rho gh+\frac{1}{2}\rho v^2={\rm Constant}$, at two points to find the velocity of the water a distance $H$ below the pipe; then, given that the volume flow rate is constant, determine the desired radius.
\[\left\{ \begin{array}{lcl}
at\ the\ pipe\ exit,\ h=H\ & : & \ \ P_H+\rho gH+\frac{1}{2}\rho v^2_0=Constant \\
at\ a\ distance\ H\ below,\ h=0 & : & \ \ P_0+\frac{1}{2}\rho v^2=Constant \end{array}
\right.\]
Since both points are open to the air, the pressure at them is the same, that is $P_H=P_0$. Therefore
\[\Rightarrow \ \ P_H+\rho gH+\frac{1}{2}\rho v^2_0=P_0+\frac{1}{2}\rho v^2\]
\[\Rightarrow v={\left(v^2_0+2gH\right)}^{\frac{1}{2}}={\left(1^2+2\times 9.8\times 0.5\right)}^{\frac{1}{2}}=3.286\ \frac{{\rm m}}{{\rm s}}\]
\[{\left(I_V\right)}_{ini}={\left(I_V\right)}_{fin}\Rightarrow A_iv_i=A_fv_f\Rightarrow \left(\pi r^2_i\right)v_i=\left(\pi r^2_f\right)v_f\]
\[r_f={\left(\frac{v_i}{v_f}\right)}^{\frac{1}{2}}r_i=\left(0.002\right)\sqrt{\frac{1}{3.286}}=1.1{\rm mm}\]
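All three results in one Python sketch (names are mine):

import math

g, r0, v0, H = 9.8, 2e-3, 1.0, 0.5
Q = math.pi * r0**2 * v0            # (a) flow rate, ~1.26e-5 m^3/s
v = math.sqrt(v0**2 + 2 * g * H)    # Bernoulli: speed after falling H
r = r0 * math.sqrt(v0 / v)          # (b) continuity: pi r0^2 v0 = pi r^2 v
print(Q, v, r)                      # ~3.29 m/s and ~1.1 mm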
10) Two identical vertical tubes are connected by a small horizontal tube as shown in the figure. The tubes are open at the top. Equal volumes of liquids with densities $\rho_1=4800\, {\rm kg/}{{\rm m}}^{{\rm 3}}$ and $\rho_2=800\,{\rm \ kg/}{{\rm m}}^{{\rm 3}}$ are sitting at rest in the tubes. The height $H=2\, {\rm m}$ and $1\,{\rm atm}=1.013\times {10}^{{\rm 5}}\,{\rm Pa}$.
(a) What is the pressure at the bottom of fluid $2$?
(b) What is the height $h_A$?
(c) A ball of radius $R=1\, {\rm cm}$ and density $\rho_b=1200\, {\rm kg/}{{\rm m}}^{{\rm 3}}$ is dropped into the left tube. The viscosity of the top fluid is $\eta_2=0.03\ {\rm Pa.s}$. While the ball is in fluid $2$, what is its terminal velocity?
(a) The pressure at a point $h$ below the free surface of a fluid is given by $P=P_0+\rho gh$, where $P_0$ is the air pressure. Therefore
\[P=1.013\times {10}^5+800\left(9.8\right)\left(2\right)=1.17\times {10}^5\ {\rm Pa}\]
(b) Since the fluids are in equilibrium (motionless), the pressure at the bottom of the two tubes must be the same, so
\[P_1=P_2\Rightarrow P_0+\rho_1gh_A=P_0+\rho_2gH+\rho_1gh\]
\[\rho_1h_A=\rho_2H+\rho_1h\Rightarrow h_A=\frac{\rho_2}{\rho_1}H+h\]
Since the volumes are equal, $V_1=V_2\Rightarrow HA=\left(h_A+h\right)A\Rightarrow H=h_A+h$
\[\therefore h_A=\frac{\rho_2}{\rho_1}H+h\ \xrightarrow{H=h_A+h} h_A=\frac{\rho_2}{\rho_1}H+\left(H-h_A\right)\Rightarrow h_A=\frac{1}{2}H\left(\frac{\rho_2}{\rho_1}+1\right)\]
\[h_A=\frac{1}{2}\left(2\right)\left(\frac{800}{4800}+1\right)=1.167\ {\rm m}\]
(c) The three forces acting on the ball are the buoyancy force $F_b$, the drag force $F_D$, and gravity. Terminal velocity means that the acceleration of the ball is zero, so the net force on the ball is zero, i.e. $\Sigma F_{net}=0$
\[\Sigma F_y=0\Rightarrow F_D+F_b-mg=0\]
\[\Rightarrow \ \ 6\pi\eta Rv+\rho_2V_{sub}g-mg=0\Rightarrow v_T=\frac{m-\rho_2V_{sub}}{6\pi\eta R}g\]
\[\Rightarrow v_T=\frac{\left(\rho_b-\rho_2\right)\left(\frac{4}{3}\pi R^3\right)}{6\pi\eta R}g=\frac{2}{9}\left(\rho_b-\rho_2\right)\frac{R^2g}{\eta}=\frac{2}{9}\left(1200-800\right)\frac{{\left(0.01\right)}^2\left(9.8\right)}{0.03}=2.90\frac{{\rm m}}{{\rm s}}\]
where we have substituted $\rho_bV$ for the mass of the ball $m$.
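Part (c) as a Python check (a sketch):

g, eta = 9.8, 0.03                    # gravity, viscosity of fluid 2 (Pa s)
rho_b, rho_2, R = 1200.0, 800.0, 0.01

v_T = 2 / 9 * (rho_b - rho_2) * R**2 * g / eta   # Stokes drag balance
print(v_T)                            # ~2.9 m/s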
11) A spherical ball has a mass of $2.66\, {\rm kg}$ and a radius of $3.34\, {\rm cm}$. This ball is suspended from a weight scale.
(a) What would be the weight of this ball as measured in the air?
(b) What is the volume of this ball in ${{\rm m}}^{{\rm 3}}$?
(c) The ball, still suspended from the scale, is now immersed in fresh water. Calculate the effective weight of the ball for this situation. ($\rho_w=1000\ {\rm kg/}{{\rm m}}^{{\rm 3}}$)
(a) The weight of an object that is suspended from a string (scale) equals the tension in the string i.e. $\Sigma F_y=0\Rightarrow T-W=0$
\[\Rightarrow \ \ T=mg=2.66\times 9.8=26.1\ {\rm N}\]
(b) The volume of the sphere is defined as
\[V=\frac{4}{3}\pi r^3=\frac{4}{3}\pi{\left(3.34\times {10}^{-2}\ {\rm m}\right)}^3=1.56\times {10}^{-4}\ {{\rm m}}^{{\rm 3}}\]
(c) There is a buoyancy force acting on objects in fluids. This force equals the weight of the displaced fluid. Therefore, \begin{align*}
\Sigma F_y=0 &\Rightarrow \ T+F_b-mg=0\\
&\Rightarrow T=mg-\rho V_{sub}g\\
&=26.1-\left({10}^3\times 1.56\times {10}^{-4}\right)\left(9.8\right)\\
&=24.57\ {\rm N}
\end{align*}
where $V_{sub}$ is the volume submerged in the water.
12) A water tank is filled to a height of $h=2.2\, {\rm m}$. The air in the tank above the water is kept at a constant pressure of $P=2.0\, {\rm atm}$. The diameter of the U-shaped pipe is $3.0\, {\rm cm}$. Initially a weight is put on the open end of the pipe, which temporarily prevents the water from flowing out. ($\rho_w=1000\, {\rm kg/}{{\rm m}}^{{\rm 3}}$, $1\, {\rm atm}={{\rm 10}}^{{\rm 5}}\ {\rm Pa}$, $P_0=1\,{\rm atm}$)
(a) What is the pressure at the bottom of the tank?
(b) Calculate the minimum value of the weight $W$, so that it isn't pushed away by the pressure of the water.
(c) The weight is removed, and the water starts to flow out of the open end of the pipe. Determine the velocity $v$ of the water just after it leaves the pipe.
(d) What is the flow rate?
(a) The pressure at depth $h$ below the surface of a fluid is given by $P=P_0+\rho gh$ where $P_0$ is the air pressure. So
\[P_2=P+\rho gh=\left(2\times {10}^5\right)+1000\times 9.8\times 2.2=2.21\times {10}^5\ {\rm Pa}\]
(b) The forces acting on the open end of the tube due to the water in the tank ($P_2A$) and the air ($P_0A$) must be balanced by the weight $W$. Using the definition of pressure we have
\begin{align*}
F_{net} &=A\Delta P=A\left(P_2-P_0\right)=\pi{\left(\frac{d}{2}\right)}^2\left(P_2-P_0\right)\\
&\Rightarrow F_{net}=\pi{\left(\frac{0.03}{2}\right)}^2\left(2.21-1\right)\times {10}^5=86.2\ {\rm N}
\end{align*}
\[\therefore W=F_{net}=86.2\, {\rm N}\to {\rm \ \ W=mg}\Rightarrow {\rm m=8.8\ kg}\]
(c) Use Bernoulli's equation at points A and B:
\[P_A+\rho gh_A+\frac{1}{2}\rho v^2_A=P_B+\rho gh_B+\frac{1}{2}\rho v^2_B\]
Let the base be as shown in the figure, so $h_A=h_B=0=v_B$
\begin{align*}
&\Rightarrow \frac{1}{2}\rho v^2_A=P_B-P_A\\
&\Rightarrow v_A=\sqrt{\frac{2\left(P_B-P_A\right)}{\rho}}=\sqrt{\frac{2\left(2.21-1\right)\times {10}^5}{1000}}\\
&=15.55\ \frac{{\rm m}}{{\rm s}}
\end{align*}
(d) The flow rate is defined as $Q=Av$ so
\begin{align*}
Q &=\pi r^2v=\pi{\left(\frac{d}{2}\right)}^2v\\
&=\pi{\left(\frac{0.03}{2}\right)}^2\times 15.6\\
&=0.011\,\frac{{{\rm m}}^{{\rm 3}}}{\rm s}
\end{align*}
13) A cubic block of ice (each side $s=1\, {\rm m}$) floats in a lake. The density of ice is $917\, {\rm kg/}{{\rm m}}^{{\rm 3}}$ and the density of lake water is $1000\,{\rm \ kg/}{{\rm m}}^{{\rm 3}}$.
(a) How far ($s-x_0$) does the top of the ice block lie above the water level?
(b) Now assume that the block has been pushed further $x$ distance into the water. When released, the block will bob up and down. What is the net force on the ice block when it is $x$ distance below its equilibrium position?
(c) Using the analogy to a spring force, what is the angular frequency of bobbing?
(a) Two forces act on the ice block: buoyancy and gravity. These forces must be in balance (the ice block does not accelerate), so $F_B=W$
\begin{align*}
\rho_wV_{sub}g=\underbrace{\rho_{ice}V_{ice}g}_{W}&\Rightarrow \rho_wx_0A=\rho_{ice}As\\
\Rightarrow x_0&=\frac{\rho_{ice}}{\rho_w}s=\frac{917}{1000}\left(1\right)=0.917\ {\rm m}\\
\therefore s-x_0&=1-0.917=0.083\ {\rm m}=8.3\ {\rm cm}
\end{align*}
In the above, $V_{sub}$ is the submerged volume of the ice in the water and $A=s^2$ is the cross sectional area of the ice block.
(b) In this case, since the ice block has been moved away from equilibrium, there is a net force on the block:
\[F_{net}=F_B-W=\rho_wA\left(x_0+x\right)g-\rho_{ice}Asg\]
From part (a) we know that $\rho_wx_0A=\rho_{ice}As$ so
\[F_{net}=-\rho_ws^2gx\ \]
We have inserted a minus sign to show that the force and displacement are in opposite directions. In other words, $F_{net}$ is a restoring force like the spring force in Hooke's law.
(c) As mentioned in part (b), the $F_{net}$ is similar to the spring force $F_S=-kx$ so
\[F_{net}=F_S\Rightarrow \ -\rho_ws^2gx=-kx\Rightarrow k=\rho_ws^2g\]
For a spring, the angular frequency is
\begin{align*}
\omega&=\sqrt{\frac{k}{m}}=\sqrt{\frac{\rho_ws^2g}{\rho_{ice}s^3}}=\sqrt{\frac{\rho_wg}{\rho_{ice}s}}\\
\therefore \ \omega&=\sqrt{\frac{1000\times 9.8}{917\times 1}}=3.26\ \frac{{\rm rad}}{{\rm s}}
\end{align*}
where $m=\rho_{ice}V=\rho_{ice}s^3$ is the mass of the ice block.
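Parts (a) and (c) as a Python check (a sketch; names are mine):

import math

g, rho_w, rho_ice, s = 9.8, 1000.0, 917.0, 1.0
x0 = rho_ice / rho_w * s                      # submerged depth at equilibrium
omega = math.sqrt(rho_w * g / (rho_ice * s))  # spring-like restoring force
print(s - x0, omega)                          # ~0.083 m above water, ~3.27 rad/s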
14) A water tank ($\rho_{water}=1000\,{\rm \ kg/}{{\rm m}}^{{\rm 3}}$) is placed on top of a hill, as shown in the figure. The atmospheric pressure is $P_{atm}=1.013\times {10}^5\ {\rm N/}{{\rm m}}^{{\rm 2}}$. The level of the water in the tank is $5.00\ {\rm m}$ high. The length of the pipe is $100\, {\rm m}$, and the inclination is $\theta\ =\ 60.0{}^\circ$.
(a) Determine the pressure in the pipe at the bottom of the hill. Assume that the water is not flowing through the pipe (static case).
(b) At what velocity will the water exit from a faucet on the sixth floor, $41\, {\rm m}$ above the ground?
(a) The pressure at depth $h$ in a fluid with density $\rho$ is given by $P=P_0+\rho gh$. In this problem, the distance between the bottom of the pipe and the free surface of water is
\begin{align*}
h_{tot}&=h_{tank}+l_{pipe}\,{\sin 60{}^\circ \ }\\
&=5+100\,{\sin 60{}^\circ \ }\\
&=91.6\ {\rm m}
\end{align*}
\begin{align*}
P_{Bottom} &=P_0+\rho gh_{tot}\\
&=1.013\times {10}^5+\left({10}^3\right)\left(9.8\right)\left(91.6\right)\\
&=9.99\times {10}^5\ \frac{{\rm N}}{{{\rm m}}^{{\rm 2}}}
\end{align*}
(b) Use Bernoulli's equation between the sixth floor and the water surface at the top. Since the faucet is open to the atmosphere, $P_6=P_0=1\,{\rm atm}$
\[P_0+\frac{1}{2}\rho v^2_0+\rho gh_0=P_6+\frac{1}{2}\rho v^2_f+\rho gh_6\]
\[g(\underbrace{h_0-h_6}_{\Delta h})=\frac{1}{2}v^2_f\Rightarrow \]
\[v_f=\sqrt{2g(h_0-h_6)}=\sqrt{2\times 9.8\times (91.6-41)}=31.49\ \frac{{\rm m}}{{\rm s}}\]
where $\Delta h$ is the distance shown in the figure. (Since the water level in the tank changes very slowly, $v_0\approx 0$.)
15) A man is sitting in a boat on a swimming pool. In the boat there is a rock of mass $M =100\, {\rm kg}$ and density ${\rho }_{{\rm rock}}=5.00\times{10}^{3}\,{\rm \ kg/}{{\rm m}}^{{\rm 3}}$. He throws the rock into the water. By what amount and in what direction will the water level of the pool change? The density of water is ${\rho }_{{\rm water}}=10^{3}\,{\rm kg/}{{\rm m}}^{{\rm 3}}$ and the area of the pool surface is ${\rm 50.0\ }{{\rm m}}^{{\rm 2}}$.
Notes: a floating object displaces its own weight of water, and a submerged object displaces its own volume of water. Therefore, when the rock is in the boat, the volume of displaced water is
\[\Delta V_{in}=\frac{m_{rock}}{\rho_w}=\frac{100}{1000}=0.1\ {{\rm m}}^{{\rm 3}}\]
And when the rock is thrown into the water
\[\Delta V_{out}=\frac{m_{rock}}{\rho_{rock}}=\frac{100}{5\times {10}^3}=0.02\ {{\rm m}}^{{\rm 3}}\]
Thus the total change in the water level is $\Delta V_{tot}=\Delta V_{in}-\Delta V_{out}$.
\[\Delta {{\rm V}}_{{\rm tot}}{\rm =0.1-0.02=0.08\ }{{\rm m}}^{{\rm 3}}\]
Using the definition of the volume of a rectangular shape, we obtain the change in the water level of the pool
\[\Delta V_{tot}=A\Delta h\Rightarrow \Delta h=\frac{\Delta V_{tot}}{A}=\frac{0.08}{50}=1.6\times {10}^{-3}\ {\rm m}\]
Since less water is displaced once the rock is in the water, the water level of the pool falls by $1.6\,{\rm mm}$.
16) A meter stick with a density of $700\, {\rm kg/}{{\rm m}}^{{\rm 3}}$ is lowered end-on into a swimming pool filled with water.
(a) How much of the meter stick is above the water when the meter stick reaches static equilibrium with the water?
(b) If the meter stick is raised $10\, {\rm cm}$ above its equilibrium position and released, with what period does it oscillate?
(c) For the condition above, what is the maximum speed of the oscillation of the meter stick?
(a) Static equilibrium is reached when the buoyant force and the gravitational force acting on the object balance each other. By definition, the buoyant force equals the weight of the fluid displaced by the submerged part of the object. Let the length of the meter stick above the water be $\Delta L$ and its total length be $L$. Therefore,
\begin{align*}
F_g=F_B&\to \underbrace{\rho V}_{M}g=\underbrace{\rho_wV_{sub}}_{M_{sub}}g\\
\rho ALg=\rho_wA\left(L-\Delta L\right)g&\Rightarrow \Delta L=\frac{\rho_w-\rho}{\rho_w}L\\
\Delta L&=\frac{1000-700}{1000}\left(1\right)=0.3\ {\rm m}=30\ {\rm cm}
\end{align*}
where $A$ is the cross-sectional area of the meter stick and $M_{sub}$ is the mass of the water displaced by the submerged part of the stick.
(b) When the meter stick is moved away from its equilibrium position, there is a net force on it: if the submerged length is greater than the equilibrium length ($L-\Delta L$), then $F_B>mg$ and the stick accelerates upward, and vice versa. Take the $y$ axis pointing upward and let $y(t)$ be the extra length submerged beyond equilibrium, so that the buoyant force exceeds the weight and the net force pushes the stick back towards equilibrium. Using Newton's second law, we obtain
\begin{align*}
\Sigma F=Ma&\to F_B\hat{j}+W\left(-\hat{j}\right)=Ma\left(-\hat{j}\right)\\
\rho_wA\left(L-\Delta L+y(t)\right)g-\rho ALg&=-\rho AL\frac{d^2y(t)}{dt^2}
\end{align*}
From part (a), we can rearrange the relation above as follows
\[\Rightarrow \rho AL\frac{d^2y(t)}{dt^2}=-\rho_wAy(t)g\Rightarrow \frac{d^2y(t)}{dt^2}=-\frac{\rho_wg}{\rho L}y(t)\]
This is similar to the equation of simple harmonic motion, $\frac{d^2x}{dt^2}+\omega^2x=0$, where $\omega$ is the angular frequency, related to the period of the motion by $\omega=2\pi/T$, so
\[T=2\pi\sqrt{\frac{\rho L}{\rho_wg}}=2\pi\sqrt{\frac{700\times 1}{1000\times 9.8}}=1.68\ {\rm s}\]
(c) As above, the equation of motion of the meter stick is that of harmonic motion, $y\left(t\right)=A\,{\sin \omega t\ }$, where $A$ is the maximum displacement of the body from the equilibrium position, in this case $10\, {\rm cm}$. Taking the time derivative of $y(t)$, we find the velocity of the object, $v\left(t\right)=A\omega\,{\cos \omega t\ }$, where $A\omega$ is the maximum speed. So in this case
\begin{align*}
v_{max}&=A\omega=0.1\times \sqrt{\frac{\rho_wg}{\rho L}}\\
&=0.1\times \sqrt{\frac{1000\times 9.8}{700\times 1}}\\
&=0.374\ \frac{{\rm m}}{{\rm s}}
\end{align*}
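The three answers as a Python check (a sketch):

import math

g, rho_w, rho, L = 9.8, 1000.0, 700.0, 1.0
A0 = 0.10                                            # initial displacement (m)

dL = (rho_w - rho) / rho_w * L                       # (a) length above water
T = 2 * math.pi * math.sqrt(rho * L / (rho_w * g))   # (b) period
v_max = A0 * math.sqrt(rho_w * g / (rho * L))        # (c) max speed
print(dL, T, v_max)                                  # ~0.30 m, ~1.68 s, ~0.37 m/s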
17) Four liters of liquid helium are stored at their boiling-point temperature of $4.2\, {\rm K}$ in a spherical container of radius $0.1\, {\rm m}$. The container is a perfect black-body absorber. The container is surrounded by a spherical shield whose temperature is $77\ {\rm K}$; this shield is also a perfect black body. A vacuum exists in the space between the container and the shield. $L_V=2.1\times {10}^4\, {\rm J/kg}$ and the helium density is $\rho_{He}=125\, {\rm kg/}{{\rm m}}^{{\rm 3}}$.
(a) What volume of liquid helium boils away through the venting valve in one hour?
(b) If instead the absorbing surface of the liquid-helium container is aluminized, how much helium boils off in one hour? The emissivity of aluminum is $\epsilon_{Al}=0.1$.
(a) Since there is a vacuum between the container and the shield, heat must be transferred to the helium via radiation. Recall that the Stefan–Boltzmann law states that the power radiated by an object with absolute temperature $T$ is given by $P_r=e\sigma AT^4$, where $\sigma$ is Stefan's constant and $e$ is the emissivity of the radiating surface. The net heat absorbed by the helium is
\begin{align*}
P_{net}&=\frac{Q_{abs}}{t}=e\sigma A\left(T^4_{shield}-T^4_{He}\right)\\
\Rightarrow Q_{abs}&=4\pi r^2e\,t\sigma\left(T^4_{shield}-T^4_{He}\right)
\end{align*}
\begin{align*}
Q_{abs}&=4\pi{\left(0.1\right)}^2(1)(60\times 60)(5.67\times {10}^{-8})({77}^4-{\left(4.2\right)}^4)\\
&=901.68\ {\rm J}
\end{align*}
This absorbed heat causes a mass $m$ of helium to boil away through the valve, so
\[Q_{abs}=mL_V\Rightarrow m=\frac{901.68}{21000}=0.042\ {\rm kg}\]
Use the formula for density to determine the desired volume:
\[V=\frac{m}{\rho}=3.43\times {10}^{-4}\ {{\rm m}}^{{\rm 3}}\]
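Part (b), which the text leaves unanswered, follows by scaling the absorbed power by the aluminium emissivity $\epsilon_{Al}=0.1$: everything is linear in $e$, so a tenth as much helium boils off. Both parts in a Python sketch (names are mine):

import math

sigma = 5.67e-8                  # Stefan-Boltzmann constant (W m^-2 K^-4)
r, t = 0.1, 3600.0               # container radius (m), one hour (s)
T_sh, T_He = 77.0, 4.2           # shield and helium temperatures (K)
L_V, rho_He = 2.1e4, 125.0       # latent heat (J/kg), helium density (kg/m^3)

def boiloff(e):
    # net radiative heat absorbed over time t, then mass and volume boiled
    Q = e * sigma * 4 * math.pi * r**2 * t * (T_sh**4 - T_He**4)
    return Q / L_V / rho_He      # volume of liquid helium (m^3)

print(boiloff(1.0))              # (a) black body:  ~3.4e-4 m^3
print(boiloff(0.1))              # (b) aluminized:  ~3.4e-5 m^3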
Most useful formulas in fluids:
Density of a substance:
\[\rho=\frac{m}{V}\]
Pressure in a fluid:
\[P=\frac F A\]
Pressure in a static liquid:
\[P=P_0+\rho gh\]
$SI$ unit of pressure:
\[1\,\mathrm {Pa}=1\, \mathrm {N/m^2}\]
Common units of pressure:
\begin{align*}
1\,\mathrm {atm}&=101.325\,\mathrm {kPa}=760\,\mathrm {mmHg}=760\,\mathrm {torr} \\
&\approx 14.70\,\mathrm {lb/in^2}
\end{align*}
Bernoulli equation:
\[P+\rho gh+\frac 1 2 \rho v^2= \text{constant}\]
Transfer principle
From Encyclopedia of Mathematics
A principle that allows one to transfer assertions from one algebraic system to another. The completeness of an elementary theory $T$ implies a transfer principle for the models of $T$: every elementary sentence (i.e., closed formula in the first-order language of $T$) is true in all models of $T$ if it is true in at least one model. For example, the completeness of the theory of algebraically closed fields of fixed characteristic means that every elementary sentence in the language of fields which holds in one algebraically closed field will also hold in all other algebraically closed fields of the same characteristic. This is an elementary version of the Lefschetz principle, which was introduced and partially proved by S. Lefschetz and A. Weil and states (roughly speaking) that algebraic geometry over all algebraically closed fields of a fixed characteristic is the same ("there is but one algebraic geometry in characteristic $p$") (cf. [a1]). But Lefschetz and Weil did not have in mind only the elementary sentences. That is why Weil worked with universal domains, that is, algebraically closed fields of infinite transcendence degree over their prime field. So the conjecture was that there is but one algebraic geometry over universal domains of fixed characteristic. A satisfactory formalization and model-theoretic proof is due to P. Eklof [a2]. It uses the infinitary language $L_{\infty\omega}$, which admits infinite conjunctions and disjunctions in one sentence. With such a sentence one can express the fact that a field has infinite transcendence degree over its prime field. This cannot be done with a single elementary sentence: indeed, algebraically closed fields are elementarily equivalent to the algebraic closure of their prime field, even if they have infinite transcendence degree.
The analogue of the elementary Lefschetz principle for real algebraic geometry is the Tarski principle (the completeness of the elementary theory of real closed fields). A similar principle is known for $p$-adically closed fields (cf. $p$-adically closed field). The Ax–Kochen–Ershov principles in the model theory of valued fields can be viewed as conditioned transfer principles.
[a1] G. Cherlin, "Model theoretic algebra" J. Symb. Logic , 41 (1976) pp. 537–545 MR0539999 MR0411961 Zbl 0338.02029 Zbl 0332.02056
[a2] P.C. Eklof, "Lefschetz's principle and local functors" Proc. Amer. Math. Soc. , 37 (1973) pp. 333–339 MR325389
Transfer principle. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Transfer_principle&oldid=39820
This article was adapted from an original article by F.-V. Kuhlmann (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
An Exercise in Irrelevance
Knowledge, Biology and Ontologies
Tawny-OWL 1.1.0
I am pleased to announce the second full release of Tawny-OWL, my library for fully programmatic development of OWL ontologies. Tawny-OWL now has a fairly large feature set and is becoming a rich development environment.
Perhaps the biggest single change in this release in terms of code base is the least immediately obvious from the user perspective. Previously, a large part of the code base was using Java reflection and was therefore quite slow. I have now type-hinted all the namespaces, meaning that Tawny should never reflect. The practical upshot of this is that Tawny runs faster; in the most extreme case, tawny.render is about 5x faster.
The most difficult change for me has been the regularisation of the :subclass and :subproperty keywords. The reasons behind this have been described in great detail previously (http://www.russet.org.uk/blog/2985). This was not an easy change to make as it breaks the syntax significantly; I should have made the change before Tawny 1.0, but I didn't. I am hopeful that there will not be similar changes in future.
The roadmap for Tawny 1.2 is relatively simple; currently there is no good way to search over an ontology and to extract classes fulfilling certain requirements (short of direct invocation of the OWL API). I now have a simple implementation of search facilities operating over OWL using a combination of core.logic and tawny.query; hardening and extending this will be the next logical (ahem!) step.
Posted by Phillip Lord on April 29, 2014 at 5:51 pm under Tawny-OWL.
Tawny and Protege
Tawny-OWL (http://www.russet.org.uk/blog/2366) enables a rich programmatic interface to OWL and ontology building. To an extent, I wrote Tawny because I wanted to get away from the use of Protege (http://protege.stanford.edu/) as an ontology editor. The relationship between Protege and Tawny is similar to that between Excel and R: if the former does what you need, then it's fine, but it's hard to extend. So it is with Tawny — it is simple to add patterns, new syntaxes, new capabilities. And I have access to all the standard tools that I expect with any programmatic environment; I can use versioning, build tools and test harnesses.
Having said all of this, Tawny-OWL comes with some cost. Although most IDEs have good capabilities for jumping to definitions and the like, they are limited compared to the display capabilities of Protege (http://protege.stanford.edu/): the ability to navigate quickly through an ontology, or to use tools like OWLViz to get a broad overview of the ontology structure.
Even if I feel that Protege is limited as an editor, I would still like to use its visualisation capabilities; it would be unfortunate if, in choosing Tawny-OWL, I had to abandon Protege. This is not, however, necessary. It is possible to use Protege to visualise an ontology created by Tawny, with synchronisation; changes are displayed by Protege immediately, as it is displaying the live data models that Tawny is manipulating. This is achieved by Protege-Nrepl; in this post, I describe the implementation behind it.
Tawny is implemented in Clojure which is a lisp that compiles down to Java bytecodes; the OWL functionality comes from the OWL API which is the same API that Protege uses. In an abstract sense, then it should be possible to plug the two together; to have Tawny operate over the same data structures that Protege is displaying.
There are a number of ways to connect a Clojure process to an IDE, but the most common is with a relatively recent tool called nrepl. This is a protocol, and a tool implementing this protocol, which allows communication with a Clojure process. There are now quite a few tools which have implemented clients for this protocol.
Protege-Nrepl
I was fortunate that Clojure provided most of the tools that I needed. Protege-Nrepl is a Protege plugin which places a single menu item into the Protege frame. This then launches an internal Clojure process, which in turn launches an nrepl socket. As it stands, Protege-Nrepl is not specific to Tawny — it simply provides a Clojure process. On the top of this, there is a small bridge package called Tawny-Protege which links together the data structures of Tawny, and Protege.
From a practical point-of-view, this means that I can launch protege, then connect to it from Emacs (or any other Clojure IDE). The IDE then operates in the same way as if Clojure were launched internally.
In theory, the process is very simple: I chose to implement the plugin itself in Java because this seemed easiest, not least because Protege provides a standard maven file to build plugins (initially, I used the older ant build, but the dependencies were a pain). Protege is an OSGI application; I have little knowledge of OSGI, so not having to work this part out was a relief. Java side the relevant code, looks like this:
RT.loadResourceScript("protege/dialog.clj");
RT.loadResourceScript("protege/nrepl.clj");
Var init = RT.var("protege.nrepl","init");
init.invoke();
// and later
Var newDialog = RT.var("protege.dialog", "new-dialog-panel");
Additionally there is some glue to implement the plugin interface, and some threading (loading Clojure in the paint thread is not a good idea). The protege.nrepl/init function loads a user config file, while protege.dialog/new-dialog-panel creates a GUI which starts the nrepl server.
That should have been the process complete, but in my hands it failed; the problem is that OSGI requires me to pre-declare all the packages that I want to import within a bundle, so they get into the classpath. In this case, I included all the dependencies transitively anyway; the whole point of the plugin was to package Clojure up for Protege, so there was little point adding it independently. Protege classes (for the plugin) need to come from the Protege environment, as do the OWL API classes, or I would not be able to manipulate objects created by Protege with Tawny, as they would be different classes (of the same name, but a different classloader).
For reasons that I could not determine, the OSGI manifest plugin also inserted a large number of dependency packages, including javax.servlet, junit, and some sun.misc classes; these are not available meaning that, even though they are not actually used, unless they are excluded specifically they make the plugin crash. All of this was achieved with the following modifications to the maven-bundle plugin.
<instructions>
<Bundle-ClassPath>.</Bundle-ClassPath>
<Bundle-SymbolicName>${project.artifactId};singleton:=true</Bundle-SymbolicName>
<Bundle-Vendor>Phil Lord</Bundle-Vendor>
<!-- We exclude a bunch of things here which otherwise get
into the import list and are not provided from anywhere. How
do they get there? No idea! -->
<Import-Package>
!javax.servlet*,!junit.*,!org.junit*,!org.apache.*,
!org.testng.*,!sun.misc.*,*
</Import-Package>
<Include-Resource>plugin.xml,{maven-resources}</Include-Resource>
<Embed-Transitive>true</Embed-Transitive>
<Embed-Dependency>*;scope=compile</Embed-Dependency>
<Require-Bundle>
org.protege.editor.core.application,
org.protege.editor.owl,
org.semanticweb.owl.owlapi
</Require-Bundle>
</instructions>
On the clojure side, the final addition was Pomegranate; enabling Clojure in Protege is fairly useless without being able to add new dependencies (such as Tawny!), but I did not want to add these to the maven build. Pomegranate allows me to add new dependencies on the fly.
As I always use Tawny, I add the following to ~/.protege-nrepl/init.clj so that Tawny is loaded alongside Protege. I may change this so it happens automatically; if anyone wanted to use protege-nrepl without Tawny they could still do so.
(ns init
(:require
[cemerick.pomegranate]
[protege model nrepl]))
;; force loading of tawny
(cemerick.pomegranate/add-dependencies
:coordinates '[[uk.org.russet/tawny-protege "1.1.0-SNAPSHOT"]]
:repositories (merge cemerick.pomegranate.aether/maven-central
{"clojars" "http://clojars.org/repo"}))
;; and monkey patch the thing
(require 'tawny.protege-nrepl)
;; initing the dialog takes ages -- so auto connect
(dosync (ref-set protege.model/auto-connect-on-default true))
Lein-Sync
When launched from within Protege, the Clojure process will be running independently of a Maven or Leiningen project. If, for example, I try to load tawny.pizza/pizza, Clojure will fail, as it cannot find the local resources, nor any dependencies.
To handle this situation, I have created lein-sync — this is a leiningen plugin which is run in the project directory, which creates a .sync.clj file which contains all the Pomegranate code needed to extend the local classpath. For instance, this file generated for the tawny.pizza looks like this:
;; This file is auto-generated by lein sync
(require 'cemerick.pomegranate)
(cemerick.pomegranate/add-dependencies
 :coordinates
 '[[uk.org.russet/tawny-owl "1.0-SNAPSHOT"]
   [org.clojure/tools.nrepl
    :exclusions
    ([org.clojure/clojure])]
   [clojure-complete/clojure-complete]
   [ritz/ritz-nrepl-middleware "0.7.0"]
   [org.clojure/tools.trace "0.7.5"]
   [compliment/compliment "0.0.1"]]
 :repositories
 '[["central"
    {:snapshots false, :url "http://repo1.maven.org/maven2/"}]
   ["clojars" {:url "https://clojars.org/repo/"}]])
(cemerick.pomegranate/add-classpath
 "/home/phillord/src/knowledge/ontology-clj/tawny-pizza/src")
(cemerick.pomegranate/add-classpath
 "/home/phillord/src/knowledge/ontology-clj/tawny-pizza/dev-resources")
(cemerick.pomegranate/add-classpath
 "/home/phillord/src/knowledge/ontology-clj/tawny-pizza/resources")
(.println System/out "Loaded .sync in pizza")
Some of these dependencies (compliment, tools.trace) come from my local Leiningen configuration. Loading this file ensures that an nrepl launched from within Protege behaves in the same way as a locally launched nrepl. Currently, classpath extension uses fully qualified paths, which obviously requires the same (or a shared) file system between the Leiningen instance generating .sync.clj and Protege; I may address this later as it would enable me to run Protege on a different machine from the IDE.
Finally, I have written some Emacs to connect to the nrepl server and automatically run .sync.clj on connection; adding something similar for other IDEs would be straight-forward, although manual use of the repl is also possible.
Given the availability of the tools, building protege-nrepl was conceptually straightforward. In practice, it was made somewhat more complex through a combination of ClassLoaders, OSGI and the need to dynamically extend the classpath in a running JVM. In particular, my experience of OSGI has not been positive; I spent a substantial amount of time chasing down a very strange bug caused by an inconsistency between the OWL API and Protege. Combined with the strange behaviour of the Maven plugin, which I only solved by multiple trial-and-error restarts, it all added a lot of complexity. Currently, I am using a pre-release version of Protege, as this has been ported to Maven; this requires a local build, which I realize is not an end-user experience.
The end product, however, was worth the effort. Despite my criticisms of Protege, it remains an excellent tool; having a running Protege updating live is a considerable advance over the old "save and reload" workflow that I used previously. I look forward to the next release of Protege, as this use of Tawny-OWL, protege-nrepl and Protege will increase the attractiveness of Tawny considerably.
Further Experiments with Literate Programming
Literate programming comes in many forms and disguises but is essentially the notion that the documentation and programmatic code should be written together, so that the documentation supports the code and vice versa. In this post, I discuss some of the problems with literate programming, my early attempts to circumvent these with respect to ontology development. Finally, I finish up with a description of some new technology which, I think, offers a solution.
Literate Programming for Ontologies
The reality is, I think, that literate programming has never really taken off; there are a large number of reasons for this, of course. Code does not naturally have a linear narrative and is not necessarily read in this way: rather, when read by an experienced programmer, they often track the flow of execution through the code (http://synesthesiam.com/posts/modeling-how-programmers-read-code.html). A secondary problem is apparently quite trivial, but the editing environment for literate programs tends to be poor. I cannot find any good research on this, but this is both my experience and that of others (http://unspecified.wordpress.com/2010/06/04/literate-programming-is-a-terrible-idea).
For ontology development, I think a literate approach makes more sense. Again, in my experience, ontologies do have a somewhat more narrative structure than code — at least in the sense that they lack loops and the like.
Initial Approaches
I have now been experimenting with literate techniques since 2009. The first version used a single LaTeX file, and pulled the ontology out into a Manchester syntax file (http://www.russet.org.uk/blog/1213). This worked quite nicely but suffered from the poor editor problem: I was building ontologies embedded in LaTeX, so I lacked even the basic features (such as syntax highlighting) that I got when editing Manchester syntax files directly. This was a problem even with the very limited feature set from tools like omn-mode.el. The disadvantage would have been worse if I had been used to a richer environment for Manchester syntax.
My second attempt was took the opposite approach; now I used two files — a Manchester syntax file and a LaTeX one with a method for referring between the two (http://www.russet.org.uk/blog/1258). This worked okay but had a poor implementation which I later refined (http://www.russet.org.uk/blog/1269).
These approaches have their advantages, but both suffer from a poor editing environment, either in having two files to switch and link between, or in favouring documentation over ontology or vice versa. They also suffered from a secondary issue, which is that they are based around Manchester syntax. While this is nice enough, since writing Tawny-OWL (1303.0213), this style of ontology development just no longer feels rich enough.
One of the declared advantages of using a real programming language as the basis for Tawny-OWL was the ability to use the tools from that language; I have used a number of these both within Tawny-OWL and with ontologies written with Tawny: most obviously the test environment, but also serialisation, properties support and, of course, the entire editing environment.
This raises the question as to whether I could use literate programming tools from Clojure as well. To my knowledge, the only real option in town here is Marginalia. Marginalia uses markdown as the documentation format and builds a nice presentation with code on one side, and comments on the other.
However, it has problems. Firstly, it presents all comments as text — you cannot comment out the comments, as it were, which is irritating for boilerplate such as licence text. Secondly, the side-by-side presentation breaks the flow of reading, as you have to move your eyes around the screen all the time. And, finally, it's Markdown. While Markdown is nice at what it does, it's very limited, and I missed the extra power of something like LaTeX.
The main difficulty, though, remains the editing environment. Without special support, the comments show up while editing as just comments. I can never remember the order of brackets in Markdown links — I rely on syntax highlighting to tell me that I have it correct.
Is there a way that Clojure and LaTeX can be made to work together?
LaTeX experiments: line comments
My first thought, in experimenting with LaTeX was a remarkably cheap and cheerful one. Consider a document such as this:
;; \documentclass{article}
;; \begin{document}
;; \begin{code}
(println "hello world")
;; \end{code}
;; \end{document}
This is a valid Clojure file, and is nearly valid LaTeX as well. The only illegal part is that ;; occurs before the \documentclass macro, although, in practice, having ;; appear randomly throughout the document would not be ideal either.
Now, LaTeX as an embedded markup language has a very plastic syntax, and I have used that here. It is actually very easy to ignore the ; character entirely through the use of catcodes; we can put this into a driver file which then inputs our Clojure file like so:
\catcode`;=9
\input{file.clj}
This way we maintain the validity of our Clojure file (otherwise the first line would be illegal). This is a remarkably cheap and cheerful way of achieving our aims; albeit at the cost of losing the ability to use semi-colons in our writing.
Indirect-buffers
What, however, about the editing environment? My own preferred environment — Emacs — has nice modes for editing both LaTeX and Clojure code, and it is possible to switch between the two when I want to move between editing code and editing documentation. This is quite clunky, but there is a second option: "indirect-buffers". This is a piece of Emacs arcana where two buffers share some of the same data structures but not all, which means that they can have different modes. Unfortunately, my experience is that the buffers share too much — as well as the text, they also share "text-properties", which unfortunately both the LaTeX and Clojure modes use. In practice, this means syntax highlighting fails (or rather, the two representations fight with each other). As a second problem, although the file is valid LaTeX it is not normal LaTeX; simple things like wrapping text into paragraphs fail because of the ;; comments at the beginning of each line.
So, this experiment fails the editing environment test.
LaTeX experiments: block comments
My next attempt was to use block comments. Consider this file, which is valid Lisp using #| and |# block comments.
#|
\begin{code}
|#
(println "hello world")
#|
\end{code}
|#
We can use a similar (but not identical) trick with catcodes to make this valid LaTeX as well:
\catcode`#=\active
\def#{\catcode`#=6}
\catcode`|=\active
\def|{\catcode`|=12}
\input{hello_world.lisp}
The first call makes the # character active — that is, definable as a macro. We then define # as a macro which sets the catcode of # back to 6 (its default). Then we do the same with |. The practical upshot is that the opening #| does nothing other than reset everything set up in the driver file; effectively, it is ignored.
This actually works quite nicely in the editing environment; the opening #| makes effectively no difference to Emacs, and the mode works well. The only real disadvantage is that every code block needs two delimiters — one to open the code block in LaTeX, and one to end the comment in Lisp.
Now, there are various multi-mode tools around for Emacs which should help solve the otherwise clunky editing environment, although even here I am not convinced that this is the right route. Multi-mode tools are complex and, to some extent, are not what I want — when editing code I want to suppress the documentation, giving it a low visual immediacy, and when editing documentation, I want the reverse.
There is, however, a bigger problem — while the last example is valid Common Lisp, Clojure does not have block comments, nor does the programmer have the ability to extend the reader in this way. So, while this seems a nice solution, it depends on a specific language feature which Clojure lacks.
Emacs Experiments: formats
My next idea was to use formats. Emacs allows transformations to happen between the text that is visualised on screen and how it is saved to file. The main reason for this is to support the many non-ASCII text encodings that exist. But it is (perhaps unsurprisingly) fully extensible within Emacs and could be used for any purpose. So, why not convert line-commented Clojure on file into block comments on screen? This would give editable LaTeX on screen, valid Clojure on file and, with a driver file, valid LaTeX on file as well.
Unfortunately, it fails. While Emacs' LaTeX support is file based, Clojure (and specifically CIDER) has a tighter integration; it can communicate the contents of a buffer without saving to file. This circumvents the formatting — the block comments are sent to the Clojure process, which complains.
Emacs Experiments: linked-buffer
I am now experimenting with another option. Indirect-buffers place exactly the same text (and text-properties) into two buffers. Instead of sharing all the text, why not have two buffers with a function that can transform the text bi-directionally between the two? The practical result is two views over the same content. Surprisingly, this works pretty well, as you can see here, even though my current implementation is very simple — the whole buffer is copied on every keypress. We could achieve the same thing with indirect-buffers but, as well as simple copying, we can also transform the text on the fly so that both buffers are valid for their respective modes.
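To make the idea concrete, here is a hedged sketch of such a transformation, written as pure Clojure functions over strings rather than as the actual Emacs Lisp implementation in linked-buffer, for the line-comment document format shown earlier:
(require '[clojure.string :as str])

;; Clojure -> LaTeX view: drop the ";;" prefix from documentation
;; lines; code lines pass through untouched.
(defn clojure->latex [s]
  (->> (str/split-lines s)
       (map #(str/replace % #"^;;( |$)" ""))
       (str/join "\n")))

;; LaTeX -> Clojure view: re-add the ";; " prefix to every line
;; outside a \begin{code}...\end{code} environment, including the
;; \begin{code} and \end{code} lines themselves.
(defn latex->clojure [s]
  (->> (str/split-lines s)
       (reduce (fn [[lines in-code?] line]
                 (cond
                   (str/starts-with? line "\\begin{code}")
                   [(conj lines (str ";; " line)) true]

                   (str/starts-with? line "\\end{code}")
                   [(conj lines (str ";; " line)) false]

                   in-code? [(conj lines line) true]
                   :else    [(conj lines (str ";; " line)) false]))
               [[] false])
       first
       (str/join "\n")))
Conceptually, each keypress re-runs the appropriate function and replaces the other buffer's contents wholesale, which is exactly the "copy the whole buffer" behaviour described above.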
The broad idea is not that new — it is similar to WEB/weave or Sweave, for instance, except that it is embedded into the editor; this means we can take advantage of the editor's existing support for both languages, and the author gets immediate feedback about the transformation — so messing up the syntax is pretty obvious.
It also provides a superset of the functionality provided by other techniques: indirect-buffers, as mentioned previously; shadowfile.el (which creates a second copy of a file somewhere else on every save); and it could also mimic shadow.el, which generates a secondary file by a command invocation on every save (although invoking an external command on every keypress would probably not be performant).
The first release of linked-buffer was a month ago. I am currently unhappy with the configuration and will change it, so the code is in flux at the moment, but I am using it in anger, which is a good sign. Currently, it does a latex <-> clojure transformation, but I will add a few more as time goes on.
It has taken me quite a while to get to this stage, and a number of experiments along the way, but my feeling is that I now have a workable literate environment. It also validates my decision to build Tawny (http://www.russet.org.uk/blog/2962). Having a rich textual language for building ontologies is a bit of a game changer; providing programmatic extensions to the language has been helpful, but the access to other tools (git, travis, tests and a repl) has really made the difference. Now, adding a literate environment to this as well changes the way that I can use ontologies and is a paradigm shift in their development.
Posted by Phillip Lord on March 14, 2014 at 12:05 pm under Emacs, Tawny-OWL.
Tawny 1.0
I am pleased to announce the first full release of Tawny-OWL, my library for fully programmatic development of OWL ontologies. The library now has a fairly large feature set:
Complete support for OWL2
Integrated support for reasoning with HermiT or ELK
Profile checking
Fixtures and support macros for unit testing
Use of external ontologies available only as OWL files
Rendering of OWL API objects to Tawny code
Support for generating and using ontologies with numeric IDs
Support for multilingual labels
Additionally, I now have initial integration with Protege, described later.
The library is now available from clojars or on github.
Feedback is welcome at tawny-owl@googlegroups.com.
A little over a year ago, I first described my experiments with building a programmatic environment for ontology construction (http://www.russet.org.uk/blog/2214). The need arose out of frustration with existing ontology tools; Protege, for example, provides a nice graphical environment, but it has many limitations. It does not easily allow automated generation of ontology entities, for example, and it also does not provide access to tools which are commonplace in an IDE: versioning tools, diffs, test cases and so forth. While ontology-specific variations of these tools do exist, they were not as good as the ones I was used to using when programming.
Tawny seeks to bridge this gap, by using a full programmatic environment to generate OWL ontologies. I chose Clojure because of its syntactic plasticity; at its simplest, when using tawny, it does not feel like a programming language, just a syntax and evaluation engine for writing ontologies. However, the full power of the programming language is there and can be used when necessary (http://www.russet.org.uk/blog/2366).
Since the first blog post, there have been a further eight, as well as three papers, describing tawny itself (http://www.russet.org.uk/blog/2366), the karyotype ontology (1305.3758) and our use of patterns: higher-level abstractions within the karyotype ontology, also applied to SIO. From an initial experiment, tawny has become a useful tool which we are using on a daily basis.
In Early Release
Included in this release is our initial integration with Protege. Tawny builds on the OWL API, which is also the basis for Protege. I had always assumed that Protege would be used to view OWL files generated by Tawny, but it is actually possible to integrate them much more comprehensively than this. It is now possible to directly manipulate the data structures of Protege using Tawny; in short, Protege can display whatever tawny has generated immediately, and without a file in between.
We have achieved this in two ways: firstly, with protege-tawny, which provides a command line environment directly inside Protege. This is useful, but does not provide the rich programmatic IDE that I want. However, the protege-nrepl environment allows exactly this; Protege launches Clojure and starts an nREPL server to which you can connect with Emacs, Eclipse or any of the other Clojure IDEs. Finally, lein-sync allows syncing classpaths and dependencies with an existing Clojure project. The practical upshot can be seen in the screencast; tawny can be used as normal, with Protege following.
Currently, Protege and Tawny use different versions of the OWL API, so while protege-nrepl can be used with the current release of Protege, periodic crashes happen. In the meantime, a hand-built distribution of Protege including nREPL is available.
I have three main aims for the next few releases of Tawny. First, we need to provide access to the explanation code; currently, this has to be accessed within Protege, which is less than ideal for a process that can take many minutes to run. I wish to integrate this with the Clojure unit test environment so that explanations will be generated by failing test cases.
Second, tawny currently allows the development of ontologies, but does not allow easy querying over them. I have several possibilities here: integration of a SPARQL engine; the current rendering engine combined, perhaps, with core.match; or, finally, fully-fledged support for core.match directly.
Finally, I wish to experiment with and add support for connection points (http://www.russet.org.uk/blog/2955), to better enable modular ontology development.
Posted by Phillip Lord on November 14, 2013 at 2:54 pm under Tawny-OWL.
Tawny 0.12
I am pleased to announce the release of tawny-owl, Version 0.12.
This package allows users to construct OWL ontologies in a fully programmatic environment, namely Clojure. This means the user can take advantage of a programming language to automate and abstract the ontology over the development process; also, rather than requiring the creation of ontology-specific development environments, a normal programming IDE can be used; finally, a human-readable text format means that we can integrate with the standard tooling for versioning and distributed development.
OWL is a W3C standard ontology representation language; an ontology is a fully computable set of statements describing things and their relationships. These statements can be reasoned over, inferences made and contradictions detected automatically using an off-the-shelf reasoner.
0.12 is planned to be the final, feature-complete release before the 1.0 release. New features will not be added before 1.0. Key new features in 0.12 are:
Complete support for OWL 2, including data types
OWL documentation can be queried as normal Clojure metadata
New namespaces: query and fixture
Completion of rendering functionality
Regularisation of interfaces: where relevant, functions now take an ontology as the first argument
Updated to HermiT 1.3.7.3 and OWL API 3.4.5
Tawny is available at https://github.com/phillord/tawny-owl, or as a maven artifact from http://clojars.org. The development of tawny-owl is documented in my journal at http://www.russet.org.uk/blog/category/all/professional/tech/tawny-owl
Posted by Phillip Lord on August 1, 2013 at 2:14 pm under Tawny-OWL.
Data Properties in Tawny
Although it appears fairly innocuous, the last commit to tawny-owl seems momentous to me. While I still need to go through the spec line-by-line, and the code needs some clean-up, this commit essentially represents the completion of the tawny.owl namespace; the addition of data properties and data types was the last part of the spec that I had to fulfil.
When I started the tawny-owl library in October (http://www.russet.org.uk/blog/2214) I was most interested in getting a test environment and the ability to use a normal editor. Subsequently, and particularly in the course of writing my first paper on this library (http://www.russet.org.uk/blog/2366), it became obvious to me that I needed to support all of OWL2. I think I have achieved my original design motivations and some more besides. I have also learned a lot about OWL, the OWL API and Manchester syntax. It is also a strange project, because it is the first time I have fulfilled a specification in quite this way. I cannot recall the last time I could reasonably be said to have finished something, as research is generally open-ended.
I did not, however, start with a regular syntax in mind. In general, the conversion to lisp has worked reasonably well: the object side of OWL in particular falls into a prefix, lisp syntax very naturally; the individual side less so. The data side of OWL had another surprise in store for me: it looks very similar to the object side, so I wanted to share syntax. However, all the Java method calls are named differently and take different types and numbers of parameters.
In the end, I have supported this through a multi-method and some heuristics to guess which call is wanted. For instance, with these two calls from the pizza ontology:
(owlsome hasTopping CheeseTopping)
(owlsome hasCalorificContentValue (span =< 400))
we generate quite different types of OWL object. The owlsome method defers to either object-some or data-some respectively, and these can also be used directly. In this case, the difference is obvious; however, tawny also accepts strings in most of these places; in that case, we convert the string to an IRI and first check whether it exists in the ontology, or in any ontology we know about. I suspect that these heuristics will work in most cases, but fail in some; only time and experience will tell me about these.
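Where the heuristic guess is not wanted, the type-specific variants can be named explicitly. A minimal sketch, reusing the pizza entities from the example above (it assumes the tawny.owl namespace and those ontology entities are already loaded):
;; Bypass the guessing multi-method by naming the variant directly.
(object-some hasTopping CheeseTopping)
(data-some hasCalorificContentValue (span =< 400))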
Before the next release, 0.12, I will finish the inline function documentation and update the tutorial. After this I plan to sit on the API for a while, and think about the functions and the syntax to make sure I am happy; the release after should be 1.0 and, as is the way of these things, I will be stuck with the appearance of the API for quite a while. This also allows me to avoid a 0.13 release without accusation of superstition.
There are still many parts of tawny that I wish to improve; in particular, I need to extend the repl facilities with doc and apropos features — my attempts to hijack the native Clojure facilities have failed despite extensive efforts. And the explanation code needs to go in; currently, waiting for Protege to reason and produce these results is a soul-destroying experience; I want my continuous integration tests to automatically dump explanations whenever inconsistencies happen.
But new features are for the future; for this iteration, tawny-owl is finished and now will be polished.
Posted by Phillip Lord on June 10, 2013 at 5:45 pm under Tawny-OWL.
Nano Convergence
3D nanomolding and fluid mixing in micromixers with micro-patterned microchannel walls
Bahador Farshchian1,2,
Alborz Amirsadeghi1,
Junseo Choi1,
Daniel S. Park1,
Namwon Kim2 &
Sunggook Park1
Nano Convergence volume 4, Article number: 4 (2017)
Microfluidic devices whose microchannel walls were decorated with micro- and nanostructures were fabricated using 3D nanomolding. Using 3D molded microfluidic devices with microchannel walls decorated with microscale gratings, the fluid mixing behavior was investigated through experiments and numerical simulation. The use of microscale gratings in the micromixer was predicated on the fact that large obstacles in a microchannel enhance mixing performance. Slanted ratchet gratings on the channel walls resulted in a helical flow along the microchannel, thus increasing the interfacial area between fluids and cutting down the diffusion length. Increasing the number of walls decorated with continuous ratchet gratings intensified the strength of the helical flow, enhancing mixing further. When ratchet gratings on the surface of the top cover plate were aligned in a direction that breaks the continuity of the gratings from the other three walls, a stack of two helical flows was formed, one above the other. This work concludes that the 3D nanomolding process can be a cost-effective tool for scaling up the fabrication of microfluidic mixers with improved mixing efficiencies.
In this paper we show that a micromixer with patterned walls can be fabricated using 3D nanomolding and solvent-assisted bonding to manipulate the flow patterns to improve mixing.
Over the past two decades, microfluidic devices have received significant interest for analytical chemistry, clinical diagnosis, environmental monitoring and food analysis applications [1–7]. Microfluidics offers a variety of advantages such as low consumption of sample and reagents, low power consumption, reduced cost, high throughput, integration of various functionalities and automation [8]. Despite the progress made in recent years, mixing in microfluidics still remains a challenge, mostly arising from the low Reynolds number (Re) imposed by the small dimensions of microchannels and the limited range of velocities obtainable in operation. The flow regime is therefore mostly laminar and, consequently, in fluidic channels with smooth walls and no external disturbance source, the only mechanism through which mixing can occur is diffusion, which is an inherently slow process. The requirement of a long channel for complete mixing increases the device footprint significantly, making the device impractical for many lab-on-a-chip applications. In order to overcome this limitation, tremendous effort has been devoted to micromixers, as summarized in the following.
Micromixers can be classified as passive or active micromixers [9]. Detailed information on the design and operating principles of active and passive micromixers can be found in previous reviews [9–12]. In active micromixers, where the disturbance in the fluid flow is generated by an external source such as a magnetic field, an electric field, ultrasonic effects or a thermal medium, integration into microfluidic devices is challenging due to expensive and complex fabrication protocols and the supporting external equipment required.
Passive micromixers, which do not require an external source, can be categorized further into two types: geometrical modification of the microfluidic channel design [13–17] and modification of mixing channels by integration of additional structures [18–23]. The geometrical modifications of the first type include split-and-recombination of microchannels [13], hydraulic focusing using sheath flows provided through two other inlets [14], a modified Tesla structure that induces the Coanda effect to improve mixing [24], curved microchannels [16, 17] and 3D serpentine channels with recurring "C-shaped" units [15]. Their mixing is enhanced either by lamination of fluid streams or by inducing chaotic advection at corners. While their fabrication mostly involves the design of a photomask followed by a single-step photolithography, it is usually accompanied by an increase in the device footprint, which is not desired in many applications.
In the second group of passive mixers, the additional structures integrated into mixing microchannels can be obstacles, surface patterns on the channel walls, or a combination thereof [18–23]. This type of micromixer is particularly interesting because it is effective even for low-Re flows [18]. In seminal work by Stroock et al. [20], staggered herringbone structures and slanted well structures were formed on the bottom of the mixing channel using a two-step photolithography and PDMS casting. In Johnson et al. [19], a preformed T-micromixer imprinted in polycarbonate was post-modified with a pulsed UV excimer laser to form slanted wells at the junction. However, because laser milling is a serial fabrication method, only a small portion of the micromixer channel could be modified with slanted wells. After the initial work by Stroock et al. and Johnson et al., researchers added structures to more than one surface of the mixing microchannels to further improve mixing. In Yang et al. [22], a connected-groove micromixer whose bottom and side walls were patterned with connected grooves was fabricated via a two-step photolithography and PDMS casting. Sato et al. [23] demonstrated a 3D microchannel whose top and side walls were patterned with slanted microgrooves. However, their fabrication involves two inclined backside photolithography steps and two topside photolithography steps.
The literature indicates that mixing in microchannels can be further improved if a greater number of walls are decorated with additional microscale structures. However, the integration of additional structures on the walls of microchannels requires a number of additional micromachining steps or the use of high-end equipment, and thus increases the fabrication cost significantly [19–23]. Therefore, fabrication of such structures at low cost and with high throughput is a huge technological challenge.
Previously, we developed 3D nanomolding, a modified molding technique that allows micro/nanostructures to be fabricated along the surface of arbitrary microstructures in polymer substrates, with the help of a thin intermediate polydimethylsiloxane (PDMS) stamp introduced between the brass mold insert and the polymer substrate for molding [25, 26]. The objectives of this paper are two-fold. The first objective is to demonstrate the feasibility of the 3D nanomolding process for the simple production of microfluidic devices with fluidic walls decorated with micro- and nanoscale patterns. The other objective is to systematically investigate the improvement of mixing as different numbers of the microchannel walls are decorated with microgratings. Such a systematic investigation has been difficult to achieve due to the lack of a simple and low-cost fabrication method for sidewall patterns. Using the 3D nanomolding process, continuous microscale ratchet gratings were patterned on the sidewalls and bottom of the microchannel to realize 3D T-micromixer structures. By using a solvent-assisted bonding technique with a plain or micropatterned cover plate, enclosed T-micromixers with no side, one side, three sides and four sides patterned with microscale ratchet gratings were fabricated, and the 3D flow patterns induced by the surface structures at different locations along the microchannels were studied using confocal microscopy.
Brass mold fabrication and PMMA pre-patterning
Two different molds were used for pre-patterning of micro/nanostructures in the poly(methyl methacrylate) (PMMA) substrate. A Si stamp with an array of nanoholes of 100 nm diameter, 200 nm period and 100 nm height was used for nanopatterning, and a brass mold containing ratchet gratings with a period of 75 μm was used for micropatterning. Another brass mold with a T-junction protrusion was used to produce a micromixer. The width and depth of the microchannels for the T-junction micromixer were 50 and 70 μm, respectively. The length of the fluid inlet to the T-junction was 5 mm and the length from the T-junction to the fluidic outlet was 30 mm. The brass molds were fabricated with a KERN MMP2522 micro milling machine. The brass was rough cut with an 800 μm diameter end mill (PMT Tools) at 200 mm/min, followed by a finishing pass with a 100 μm diameter end mill (PMT Tools) at 75 mm/min. The spindle was run at 40,000 rpm for all passes. For ratchet fabrication, a jig was used to angle the brass surface off the horizontal. After fabrication, the Si or brass mold with nanoholes or ratchet gratings was imprinted into PMMA using a commercial nanoimprint lithography (NIL) machine (Obducat 6 inch NIL). Imprinting was performed at 160 °C and 30 bar for 10 min. The system was then cooled down to 70 °C and demolding was performed.
3D nanomolding
Figure 1 shows the process scheme for 3D nanomolding [26]. On the surface of a PMMA substrate pre-patterned with nanopillars or micro ratchet gratings, PDMS prepolymer was spin-coated at 2000 rpm for 40 s and cured in order to form a thin intermediate PDMS stamp. The thickness of the intermediate PDMS stamp was 41.6 ± 2.5 μm. This was followed by the primary molding step at 170 °C and 5 bar for 5 min, which was performed in the NIL machine using a brass mold having microfluidic protrusion structures. For the microfluidic devices used in the mixing experiments, the angle between the direction of the ratchet gratings in the pre-patterned PMMA substrate and the direction of the microchannel in the brass mold was set to ~45°, so that slanted ratchet gratings were formed on the sidewalls and bottom surface of the microchannel. After primary molding, demolding was performed in two steps at 70 °C: first, demolding of the brass mold, and second, peeling off the intermediate PDMS stamp from the 3D molded PMMA substrate. Even for fabricating plain microchannels, a thin intermediate PDMS stamp without any pre-patterned structures was used for primary molding in order to obtain a cross-section similar to those of the 3D microchannels decorated with ratchet gratings.
Schematic illustration of the 3D nanomolding process [26]
Solvent-assisted bonding
Before bonding a cover plate to the 3D molded substrate, holes were drilled in the inlet and outlet areas of the 3D channel. A solvent-assisted bonding technique developed by Brown et al. [27] for bonding PMMA chips was used. A few drops of a solvent mixture (47.5% dimethyl sulfoxide (DMSO), 47.5% water and 5% methanol) were spread over the cover plate, and then the 3D molded substrate was placed in conformal contact with the cover plate. The assembly of 3D molded substrate/cover plate was loaded in the NIL machine and brought to 92 °C and 10 bar for 30 min, which led to complete bonding. For the micromixers with no side and with three sides of the microchannel walls patterned, a plain cover plate without any patterns was used, while for the micromixers with one side and with four sides patterned, a cover plate patterned with ratchet gratings was used. For the micromixer with four sides patterned, the cover plate was aligned for bonding such that the ratchet gratings in the cover plate were parallel with the ratchet gratings in the bottom of the 3D microchannel.
Leakage testing
After solvent-assisted bonding, PEEK™ tubing capillaries (part number 1577-12x, IDEX, Oak Harbor, WA) with 795 μm (0.0313 in) outer diameter and 177 μm (0.007 in) inner diameter were inserted into the inlet/outlet ports of the chip, and a tiny amount of epoxy resin was applied to the capillary-chip interfaces to prevent leakage. Two 5 mL glass syringes (1005TLL, Hamilton, Reno, NV, USA) were filled with a fluorescein dye solution (fluorescein sodium salt in deionized (DI) water at a concentration of 3.75 × 10−2 g/L, F6377-100G, Sigma-Aldrich, St. Louis, MO, USA). The dye solution was injected at 40 μL/min into the inlets of the micromixer using syringe pumps (KDS220 multi-syringe pump, Kd Scientific Inc., MA, USA). The injected micromixer was imaged using an inverted fluorescence microscope (Eclipse, Nikon Instruments Inc., Melville, NY, USA) with a 10× objective lens for the leak test, and the images were captured by a digital camera (CoolSnapFX, Photometrics, Tucson, AZ, USA).
Mixing characterization
A scanning laser confocal microscope (Leica TCS SP2) was used to map out the 3D flow patterns induced by the surface structures on the microchannel walls at different locations along the microchannel (1, 3, 5, 10, 15, 20 and 28 mm from the T-junction). DI water and a solution of fluorescein dye in DI water were injected from separate inlets of the T-micromixer. Two flow rates, 10 and 40 μL/min, in the mixing microchannel were investigated. The results from confocal microscopy were a stack of images in the xy plane with a slice thickness of 383 μm. The stack of images was then assembled into a 3D image and transected in the desired area in the xz plane. Here x, y and z are the coordinates in the direction of the width, length and height of the channels, respectively.
To quantify the degree of mixing at a location in the 3D microchannels, the standard deviation of the normalized fluorescence intensity, i.e. the fluorescence intensity at a location with respect to the maximum fluorescence intensity at the inlet, was obtained from cross-sectional confocal microscopy images using \(\sigma = \langle (I - \langle I\rangle )^{2} \rangle^{1/2}\). Here σ is the standard deviation, I is the grayscale intensity value of a pixel with a value between 0 and 1, and 〈 〉 denotes an average over all pixels in the image. Therefore, σ is 0.5 for completely separated fluids and 0 for completely mixed fluids. Only the central 50% of the area of the images was used to calculate σ, to eliminate variations of the fluorescence intensity around the walls due to optical effects [20]. σ is also called the mixing index.
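As an illustration of this calculation (a minimal sketch, not the analysis code used for the paper), σ can be computed from a collection of normalized pixel intensities as follows:
(defn mixing-index
  "Standard deviation of normalized pixel intensities: returns 0.5 for
  completely separated fluids and 0 for completely mixed fluids."
  [intensities]
  (let [n    (count intensities)
        mean (/ (reduce + intensities) n)]
    (Math/sqrt
     (/ (reduce + (map (fn [i] (let [d (- i mean)] (* d d))) intensities))
        n))))

;; e.g. an unmixed cross-section, half pure dye (1.0) and half pure water (0.0):
(mixing-index [1.0 1.0 0.0 0.0]) ; => 0.5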
Numerical simulation of the mixing
A series of computational simulations was performed using the multiphysics software COMSOL for the four different models shown in Fig. 2. The models consisted of a T-micromixer with a 4000 μm long microchannel with a cross-section 60 μm wide and 30 μm high. The four models differed in the number of microchannel walls decorated with microscale ratchet gratings. The height and period of the ratchets were 6 and 75 μm, respectively. The ratchets were placed at a 45° angle to the long axis of the channel.
Schematic illustration of a the plain b one side patterned c three sides patterned and d four sides patterned micromixers. The ratchets patterned on the surface were aligned at a ~45° angle with respect to the direction of the channel
The models used for the simulation differed from the fabricated structures used for the mixing experiments in three aspects. First, the dimensions of the T-micromixer channels were reduced by a factor of ~2 with respect to those of the micromixers used for experiments. This means that the cross-sectional area of the long microchannel was reduced by a factor of ~4. The height of the ratchet gratings was also increased by a factor of 1.5. These dimensions were used to achieve simulation results in a reasonable computational time. Second, ratchet gratings in the models were formed on the walls of only a portion of the long microchannel: starting at a 150 μm distance from the T-junction, only 45 ratchets were placed on the walls. In the micromixers used for experiments, on the other hand, the entire microchannel walls were decorated with ratchet gratings. Third, the cross-section of the model microchannel was rectangular, while that of the 3D microchannels had rounded bottom corners, which were inevitably produced during the 3D molding process. A previous systematic study on 3D molding indicates that the roundness of corners in 3D microchannels can be reduced by using a thinner intermediate PDMS stamp and by changing the dimensions of the microchannels [25, 28].
Two liquids with the properties of water at 298 K were fed through the two inlets of the micromixer. One of the liquids contained 1 mol/m3 of an imaginary species with an arbitrary diffusion coefficient of 10−10 m2/s, while the other was pure water. The inlet boundary condition was set at a constant velocity (0.9 cm/s). A no-slip boundary condition was used for the walls. The outlet boundary condition was set at zero pressure. The geometry was discretized using the physics-controlled mesh module of the software. A multiphysics finite element model for incompressible steady-state flow, consisting of the Navier–Stokes equations in the laminar regime and convective diffusion, was solved by COMSOL at each node. Mixing was quantified by comparing the standard deviations of the concentration profiles.
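For reference, the governing equations for this setting are the standard ones (written here in their usual textbook form; the paper itself does not list them): steady incompressible laminar flow coupled to convection–diffusion of the species,
$$\rho(\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla p + \mu\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0, \qquad \mathbf{u}\cdot\nabla c = D\,\nabla^{2}c,$$
with no-slip walls, the fixed inlet velocity of 0.9 cm/s, zero outlet pressure, and D = 10−10 m2/s as stated above.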
Structural characterization of fabricated devices
Figure 3 shows scanning electron microscope (SEM) images of a 3D nanomolded PMMA substrate in which a sinusoidal microfluidic channel was decorated with nanopillars. The corners of the microchannel were slightly rounded, which was inevitably produced during the 3D nanomolding process. However, nanopillars were well formed over the entire surface of the microfluidic channel, which indicates that 3D nanomolding is a feasible method to produce nanostructures on the walls of microfluidic channels.
SEM micrographs of a 3D nanomolded sinusoidal microfluidic device in PMMA. a and b are top-view images at different magnifications; c, d and e are tilted-view images at different magnifications
Figure 4a shows SEM images of a 3D molded PMMA substrate in which ratchet microgratings were formed on the entire surface of the substrate. The 3D microchannel had the depth and width of 65.0 ± 2.0 μm and 129.5 ± 4.9 μm, respectively. The ratchet structures in the bottom center of the microchannel had the height of 3.5 ± 0.2 μm and were aligned at a ~45° angle with respect to the direction of the microchannel.
SEM micrographs of a a 3D channel fabricated via 3D molding, b the cross-section of the 3D channel after solvent-assisted bonding, c micro ratchets integrated inside the channel. d A photograph of the micromixer with patterned walls
It is challenging to form an enclosed fluidic chip by bonding a cover plate to the 3D molded substrate, due to the presence of microscale ratchet gratings on the top surface of the substrate. For this purpose, solvent-assisted bonding is preferred to thermal bonding, which is the bonding method used for most polymer microfluidic applications. In thermal bonding, an elevated temperature close to the PMMA glass transition temperature (Tg) of 105 °C is used [27]. At this temperature, PMMA chains are mobile and thus deformation of the molded microstructures can occur. In solvent-assisted bonding, on the other hand, a temperature lower than the Tg of PMMA (85–95 °C) is used. Even though exposure to a solvent enhances softening of PMMA at the surface, only the portion of the exposed PMMA cover plate in contact with the 3D molded PMMA substrate deforms during bonding, as demonstrated by Brown et al. [27]. Thus, ratchet gratings formed on the microchannel walls can survive bonding. Figure 4b, c show the cross-section of a 3D microchannel after bonding to a plain PMMA cover plate. The ratchet gratings on the bottom surface and sidewalls of the microchannel were clearly visible, indicating that the solvent-assisted bonding process did not produce significant deformation of the integrated ratchet structures. The height and width of the 3D channel after solvent-assisted bonding were measured to be 60.5 ± 0.7 μm and 121.5 ± 1.0 μm, respectively, and the height of the integrated ratchet gratings in the bottom center of the channel was 3.7 ± 0.3 μm. Compared to the dimensions of the 3D microchannels prior to bonding, the dimensional variations occurring during the bonding process were <8%. Figure 4d shows a photograph of a complete microfluidic chip with the 3D microchannel after connecting and gluing the capillary tubes to the chip. Leak test results showed no leakage around the 3D microchannel, which in turn confirms that solvent-assisted bonding is a suitable method to form an enclosed fluidic system from 3D molded PMMA substrates.
Fluid mixing in 3D microchannels
After fabrication, DI water and a solution of fluorescein dye in DI water were injected from separate inlets of the micromixers, and their mixing behavior was studied using confocal microscopy. Figure 5 shows cross-sectional confocal microscopy images taken at different locations (1, 3, 5, 10, 15, 20 and 28 mm from the T-junction) for four different mixing microchannels: (1) a plain channel and 3D channels with (2) one side patterned (top side), (3) three sides patterned (bottom and side walls) and (4) four sides patterned (top, bottom and side walls). The volumetric flow rate in the mixing microchannel was 10 μL/min, which corresponds to a Reynolds number of 1.85. For the plain channel, the dyed water and pure water moved side by side along the channel and thus mixing occurred mainly by diffusion, as can be seen in Fig. 5a. Advection was along the channel and did not contribute to transversal mixing.
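The quoted Reynolds number is consistent with a back-of-envelope estimate from the bonded-channel dimensions reported above, assuming water at room temperature (ρ ≈ 1000 kg/m3, μ ≈ 10−3 Pa s):
$$Re = \frac{\rho \bar{v} D_h}{\mu}, \qquad D_h = \frac{2hw}{h+w} \approx \frac{2(60.5)(121.5)}{60.5+121.5}\ \mu\text{m} \approx 81\ \mu\text{m},$$
where $\bar{v} = Q/(hw) \approx 2.3$ cm/s for Q = 10 μL/min, giving $Re \approx 1.8$.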
Confocal microscope images for cross-sections of a plain microchannel and 3D microchannels with b one side patterned (top side), c three sides patterned (bottom and side walls), and d four sides patterned (top, bottom and side walls) for different locations of 1, 3, 5, 10, 15, 20 and 28 mm from T-junction. Deionized water and a solution of fluorescein dye in deionized water were injected from separate inlets of the micromixer. The volumetric flow rate in the mixing microchannel was 10 μL/min. The scale bar is 20 μm
When one or more sides of the microchannel walls were patterned with slanted micro ratchet gratings, the mixing mechanism became a combination of diffusion and advection. Ratchet gratings formed at a ~45° angle relative to the microchannel direction produced a transversal component of advection for fluids adjacent to the gratings, which, in combination with the flow along the channel, led to the formation of a helical flow. This can be clearly seen in Fig. 5b–d. The transversal component of the flow increased the interfacial area between the fluids and cut down the diffusion length required for complete mixing. As a result, mixing was improved compared to that in the plain channel.
The helical flow became even more pronounced when three sides of the microchannel walls were patterned with continuous slanted ratchet gratings. Here, "continuous" means that unbroken line gratings would be formed if two or more consecutive sides of a microchannel were virtually unfolded onto a flat surface. Note that the gratings on the two opposite sidewalls formed by a single 3D molding step were perpendicular to each other (Fig. 5c). The enhanced helical flow occurred because the continuous ratchets on the left (right) sidewall in Fig. 5c also created a transversal downward (upward) flow, which helps the fluids rotate faster as they move along the channel.
For the microchannel with all four walls patterned, the ratchet gratings on the top and bottom surfaces were parallel to each other, while those on the two sidewalls were perpendicular to each other. Thus, the continuity of the gratings was broken at the top surface. Consequently, an initial stretch of dyed fluid toward the pure water side occurred at both the top and bottom surfaces, forming a stack of two helical flows, one above the other, rotating in opposite directions. The helical flow formed at the bottom of the channel was stronger than the one at the top, as it was strengthened by the sidewall patterns. In this microchannel, almost complete mixing was achieved at a distance of 10 mm from the T-junction.
We also studied the mixing behavior at a higher flow rate of 40 μL/min (Re = 7.4) and the fluorescence micrographs are shown in Fig. 6. A similar flow behavior was observed but the degree of mixing was lower compared to the results obtained at a 10 μL/min flow rate.
The degree of mixing can be quantified by taking the standard deviation of the normalized intensity from cross-sectional confocal images at different locations in the mixing microchannel. The standard deviation values for the different 3D microchannels at 10 μL/min are shown in Fig. 7a. In general, the degree of mixing was in the increasing order: plain microchannel < microchannel with one side patterned < microchannel with three sides patterned < microchannel with four sides patterned.
a The standard deviation (σ) for the normalized intensity in the cross-sectional confocal microscopy image versus the distance from T-junction for different micromixers (plain microchannel and microchannels with one side, three sides, four sides patterned) at a flow rate of 10 μL/min. b σ value versus the distance from T-junction for the plain channel and the 3D channels with three sides patterned at different flow rates of 10 and 40 μL/min
Figure 7b compares the standard deviation versus location from the T-junction for the plain channel and the 3D channel with three sides patterned at two different flow rates, 10 and 40 μL/min. An increased flow rate in both cases increased the standard deviation (i.e. decreased the degree of mixing) at the same distance from the T-junction. However, the degree of mixing in the 3D microchannel with three sides patterned at 40 μL/min was still significant and comparable to that in the plain microchannel at 10 μL/min, indicating that 3D microchannels are particularly useful for high flow rate microfluidic applications.
Comparing the position of the dyed water front in the microchannel with three sides patterned for the two different flow rates (Figs. 5c, 6c), the degree of spiral rotation induced by the surface ratchet gratings was not much changed by varying the flow rate. At a high flow rate, the time for the fluids to reach a given location in the microchannel, i.e. the time available for diffusion, was short for both the plain and 3D channels. However, as a result of the enhanced interfacial area between the two fluids produced by the transversal fluid motion in the 3D microchannels, the reduction in mixing with an increased flow rate is less significant than in the plain microchannel.
Comparison with simulation results
The experimental results were compared with results from numerical simulations. The differences between the models used for simulations and the actual structures used for experiments are described in Sect. 2.6. Despite the differences, the numerical simulations provide qualitative comparisons with the corresponding experiments. Figure 8 shows the concentration profile images for mixing of two water-based liquids at different locations along the plain and various 3D microchannels. Qualitatively, the simulation results were in good agreement with the experimental results in that surface ratchet gratings induced transversal motion of the fluids. The rotation of the fluids was enhanced when more sidewalls were patterned with continuous ratchet gratings (Fig. 8b, c). When ratchet gratings on the top surface were formed parallel to the ratchet gratings on the bottom surface, stretching of the fluids occurred at both the top and bottom surfaces, in agreement with the experimental results (Fig. 8d). However, the degree of stretching at the bottom surface relative to that at the top surface was significantly reduced. This can be seen by comparing the flow patterns for microchannels with four sides patterned (at 3 mm in Figs. 5d, 6d and at 1390 µm in Fig. 8d) with those at the distance showing a similar flow pattern for microchannels with three sides patterned (at 3 mm in Figs. 5c, 6c and at 1390 µm in Fig. 8c). Thus, in the simulated case, the top helical flow seems to hinder the stretching of fluids at the bottom surface, which is discussed further in the next section.
Simulated cross-sectional concentration profiles alongside mixing channels for a plain channel and channels with b one side, c three sides, d four sides patterned. A T-micromixer used for the simulation was composed of a 4000 μm long channel with a cross-section of 60 μm wide and 30 μm high. The ratchets with a height of 6 μm and a period of 75 μm were placed at a 45° angle to the long axis of the channel. Flow velocity was 0.9 cm/s
Figure 9 shows the standard deviation of the concentration profiles shown in Fig. 8. The addition of ratchet gratings improved mixing significantly. Due to the smaller dimensions of the mixing microchannel, the larger size of the integrated structures and the slower fluid velocity, mixing occurred over a shorter length than what we observed in the experiments. Mixing was most efficient when the sidewalls and bottom of the microchannel were patterned. However, incorporation of ratchet gratings on the top surface (microchannels with four sides patterned) did not improve mixing further with respect to the microchannel with three sides patterned, which differs from the experimental results. We attribute this to the use of a diffusion coefficient of 10−10 m2/s for the dyed water, which is lower than the ~10−9 m2/s typical for a small molecule in water at room temperature [29]. Thus, mixing by diffusion at the interface of the stretched liquids seems to be rate-limiting relative to the transverse flow of liquids induced by the surface ratchets. In this case, the helical flow formed by the top surface ratchets prevents the interfacial area of the two liquids from expanding further, resulting in a detrimental effect on mixing.
Standard deviation obtained from the simulated cross-sectional concentration profiles shown in Fig. 8
The deformation of microstructures on the surface of the microchannel walls during the fluidic experiments may be a concern due to the applied shear force. We have not performed an SEM investigation after the fluidic experiments. However, inspection with an optical microscope before and after the fluidic experiments gives no hint that the microstructures on the microchannel walls were deformed. We calculated the shear stress applied by the fluid using a simple Poiseuille model with parallel plates of infinite aspect ratio in the cross-sectional dimensions, where the shear stress is a function of the volumetric flow rate Q, the channel dimensions (height h, width w, and length L), and the fluid viscosity μ, as follows:
$$\tau = -\frac{12 Q \mu}{h^{2} w}.$$
Putting the experimental conditions used in this study into the equation (Q = 40 μL/min; μ ~1 Pa s; h = 65 μm; and w = 130 μm) gives a shear stress of the order of ~73,000 Pa. The actual shear stress at the microchannel wall should be even smaller than this value. Even though this is a rough estimate, the value is significantly lower than the tensile strength of PMMA, which is in the range of 48–76 MPa. Therefore, under the experimental conditions used in this study, the microgratings are not expected to deform during the fluidic experiments.
Finally, it should be noted that in most cases microfluidic designs are limited to the planar, layer-by-layer geometries imposed by current lithography-based microfabrication techniques [20]. Using the 3D molding process, 3D patterns can be imprinted easily in a wide range of thermoplastic polymers used for low-cost lab-on-a-chip applications, and enclosed microfluidic devices with 3D patterns can be formed via solvent-assisted bonding. The current 3D molding process time is limited by the curing of the PDMS to form an intermediate stamp. However, the process time can be significantly reduced by using other UV-curable polymers with similar cross-linking densities (and hence similar elastic properties), since UV curing is much faster than the thermal curing needed for PDMS. Various structures, such as hierarchical micro- and nanostructures with different geometries and dimensions, can be patterned on the walls of the microchannels and the cover plate, enabling manipulation of the flow patterns. The direction of the patterns can also be controlled by setting a different angle between the brass mold protrusions and the micropatterns on the surface of the thermoplastic polymer in the modified 3D molding process. Such advantages make the 3D molding process a suitable and powerful technique for fabricating micromixers.
We studied the effect of surface structures embedded on the microchannel walls of micromixers on the mixing behavior. Four different 3D micromixers with no side, one side, three sides and four sides patterned with microscale ratchet gratings were fabricated via 3D nanomolding and solvent-assisted bonding. In a plain channel, mixing occurs as a result of diffusion at the interface of fluids which move side by side along the channel. By adding ratchet gratings to the surfaces of the micromixers, the flow patterns could be manipulated. The strength of the helical flow induced by the slanted ratchet gratings was intensified by increasing the number of walls continuously patterned with ratchet gratings. In a micromixer whose walls were all patterned in such a way that the ratchet gratings on the top and bottom surfaces were parallel to each other and those on the two sidewalls were perpendicular to each other, a stack of two helical flows formed one above the other, causing one fluid to wrap around the other fluid and push it across the 3D channel.
D. Mark, S. Haeberle, G. Roth, F. von Stetten, R. Zengerle, Microfluidic lab-on-a-chip platforms: requirements, characteristics and applications. Chem. Soc. Rev. 39, 1153–1182 (2010)
For circular motion in a vertical plane, why does Net Force = Centripetal Force?
I'm struggling with some of the concepts pertaining to the forces and acceleration associated with circular motion in a vertical plane (only concerned with what happens at the 'top' and 'bottom' of the loop for now though).
Let's say that we have a roller-coaster going around a circular (vertical) track. I understand that at the bottom of the loop, the $F_{Net}=F_{Normal}-mg$, and this makes sense. But what is confusing me is why $F_{Net}=ma={mv^2\over r}$. I realise that ${mv^2\over r}$ is the centripetal force acting inwards (towards the centre of the loop), and $a$ is the centripetal acceleration, but why does this equal the net force?! Shouldn't we take into account the acceleration due to gravity?
eg. If we are trying to find the Net Force, why don't we find the Net acceleration, which would be something like "Centripetal acceleration - acceleration due to gravity" (at the bottom of the loop)?
So if I was to find the Net Force on a 60 kg person at the bottom of a roller coaster loop of radius 9 m, travelling at 18.8 $m/s$, I could just use $F_{Net}={mv^2\over r}$? This seems to me like we just ignored gravity :/
Any help much appreciated; I feel as though I know enough to harm myself but not enough to properly understand it.
Smeato
homework-and-exercises forces kinematics acceleration centripetal-force
Jacob Smeaton
The question you bring up is a very common one, and the source of some difficulty to novices. I know it's been addressed here, but I can't find a good presentation. What follows is not a good one, but it might be enough to answer the question.
Centripetal force describes a force (often the net of several forces). It does not name a force. Gravity is a force. Friction is a force. Centripetal is not a force. "Centripetal" describes a net force that points toward the center of a circle. In the event that the speed of the object is not changing, or if observing only over a very short period of time, then kinematics demands that $a_c=v^2/r$ and Newton's second law says that there must be a net force that produces that acceleration. We know that there is a centripetal force by observing the motion of the object, not by studying the interactions between objects as we would for gravity or friction. In a sense, "centripetal force" is more a statement of kinematics than dynamics.
The motion tells us that there must be a centripetal force. What the nature of that force is presents a different question, one that is answered by analyzing the actual real forces, those due to interactions between objects.
Kinematics tells us that the following must be true: $$F_\mathrm{net} = \frac{mv^2}{r}$$
Analysis of forces at the bottom of the loop tells us that $$F_\mathrm{net}=N-mg$$
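Setting these two expressions for $F_\mathrm{net}$ equal (the step the question is really asking about) gives the normal force at the bottom: $$N-mg=\frac{mv^2}{r}\quad\Rightarrow\quad N=m\left(g+\frac{v^2}{r}\right).$$ Gravity is not ignored; it is one of the real forces whose sum constitutes the centripetal force.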
garyp
The thing to understand about circular motion is that the centripetal acceleration is a purely kinematic fact.
Something moving along a curved path is accelerating because its velocity vector is not constant. This is just kinematics.
If that curved motion has a known, constant speed $v$ and known radius of curvature $r$, then the magnitude of the acceleration is $a_c = v^2/r$. This is still just kinematics.
It's a lot like looking at something that is not moving or is moving at constant velocity and saying "Hey, that thing is in equilibrium!". When we look at an object following a curved path we know it is accelerating just as we know that uniform motion implies equilibrium.
And just like the case of equilibrium we then use Newton's second law to connect net force to that acceleration. \begin{align*} F_\text{net,equilibrium} &= m a_\text{equilibrium}\\ &= 0\\ \\ F_\text{net,circular motion} &= m a_c \\ &= m\frac{v^2}{r} \;. \end{align*}
In other words, for objects in uniform circular motion it is always true that the net force acting on the object is "the centripetal force".
Something to watch out for: if the speed is not constant there is a component of acceleration tangential to the path as well and the net force is no longer the centripetal force.
I encourage students to think of "the centripetal acceleration" as the primary thing, and the forces that cause it as a kind of intermediate step that we don't dignify with a title. This reinforces the fact that finding which forces (or components thereof) combine to cause the centripetal acceleration is a to-be-worked-out feature of each problem.
dmckee ♦
Forget about the idea of having contributions to acceleration. An object has only one acceleration, not a sum of several.
There can be many forces, and they all combine into a net force. The object is being pushed a bit from this side, a bit from that side, a bit from behind, a bit from above and all in all, all these are summed up to a net force $F_{net}$.
But you are not accelerating a bit sideways and a bit to the other side and a bit backwards, which should then combine into one acceleration. Acceleration is not made of contributions. There can be several forces, but they together result in one acceleration - they don't cause an acceleration each which is then summed up to a "net" acceleration.
Newton's 2nd law does indicate this: The formula is not $\sum F=m\sum a$ but only $\sum F=ma$; the $F$ is a sum of many $F$'s, but the $a$ is not a sum of many $a$'s.
Therefore there is no such thing as "net" acceleration - it is just acceleration. And in the case of uniform circular motion, where this acceleration must be pointing inwards, this acceleration has been named: centripetal acceleration. It is not another "type" of acceleration - just a name we call it, when it causes a circular motion.
And for such inwards-pointing acceleration to be present, all forces acting on the object combined must cause it. At the bottom of the vertical loop, there is weight pulling down and normal force upwards, and those must together cause upwards acceleration. (You don't subtract the "acceleration contributions" from each other, you just combine the forces and see what acceleration that sum causes.)
If it wasn't upwards, it wouldn't be a uniform circular motion.
why $F_{Net}=ma=\frac{mv^2}r$
People have found that if a motion is uniformly circular, the acceleration is always $a=\frac{v^2}{r}$ and points towards the centre. This can be proven separately and has nothing to do with forces.
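(For completeness, the standard derivation of this kinematic fact: for $\vec r(t)=r(\cos\omega t,\ \sin\omega t)$ with constant angular speed $\omega$ and $v=\omega r$, differentiating twice gives $\vec a=-\omega^2\vec r$, so $|\vec a|=\omega^2 r=v^2/r$, always pointing towards the centre.)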
Newton's law always holds, $F_{net}=ma$, and so in the case of uniform circular motion, the $a$ in this law can be replaced with $a=\frac{v^2}{r}$.
So if I was to find the Net Force on a 60 kg person at the bottom of a roller coaster loop of radius 9 m, travelling at 18.8 $m/s$, I could just use $F_{Net}=\frac{mv^2}r$? This seems to me like we just ignored gravity :/
This isn't ignoring gravity - you just aren't done. Gravity will surely enter as a part of the net force $F_{Net}$. As you even correctly wrote yourself, the net force does consist of gravity downwards and normal force upwards, $F_{Net}=F_n-mg$. So plug this into the expression and the gravity influence (the weight $mg$) is indeed included:
$$F_{Net}=\frac{mv^2}r\quad\Leftrightarrow\quad F_n-mg=\frac{mv^2}r$$
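A quick numeric check with the figures from the question (taking the quoted 18.8 as a speed in m/s and $g\approx 9.8\ m/s^2$):
$$F_{Net}=\frac{60\times 18.8^2}{9}\approx 2.36\times 10^3\ N,\qquad F_n=F_{Net}+mg\approx 2356+588\approx 2.9\times 10^3\ N$$
Gravity enters through the normal force, which must exceed the weight for the net force to point upwards, towards the centre.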
Steeven
October 2018, 11(5): 1235-1253. doi: 10.3934/krm.2018048
Stability of traveling waves for nonlocal time-delayed reaction-diffusion equations
Yicheng Jiang and Kaijun Zhang,
School of Mathematics and Statistics, Northeast Normal University, Changchun, Jilin 130024, China
Received: June 2017; Revised: July 2017; Published: May 2018
Fund Project: The first author is supported by NSFC grant (No.11571066) and the second author is supported by NSFC grant (No.11771071)
This paper is concerned with the stability of noncritical/critical traveling waves for a nonlocal time-delayed reaction-diffusion equation. When the birth rate function is non-monotone, the solution of the delayed equation is proved to converge time-exponentially to some (monotone or non-monotone) traveling wave profile with wave speed $c>c_*$, where $c_*>0$ is the minimum wave speed, when the initial data is a small perturbation around the wave. For the critical traveling waves ($c = c_*$), however, only time-asymptotic stability is obtained, and the decay rate is not derived due to technical restrictions. The proof approach is based on the combination of the anti-weighted method and the nonlinear Halanay inequality, but with some new development.
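For orientation (this is standard background, not a statement taken from the paper): the classical linear Halanay inequality asserts that if $v(t)\ge 0$ satisfies $$v'(t)\le -a\,v(t)+b\sup_{s\in[t-\tau,t]}v(s),\qquad a>b>0,$$ then $v(t)\le Ke^{-\lambda t}$ for some $\lambda>0$ depending on $a$, $b$ and the delay $\tau$. Combined with the anti-weighted $L^2$-energy estimates, a nonlinear variant of this exponential-decay mechanism is what produces the time-exponential convergence rates for $c>c_*$.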
Keywords: Traveling wave, time delay, nonlocal reaction-diffusion equations, $L^2$-weighted energy, stability.
Mathematics Subject Classification: Primary: 35K57, 35C07; Secondary: 35K58, 92D25.
Citation: Yicheng Jiang, Kaijun Zhang. Stability of traveling waves for nonlocal time-delayed reaction-diffusion equations. Kinetic & Related Models, 2018, 11 (5) : 1235-1253. doi: 10.3934/krm.2018048
Reconstructing charge-carrier dynamics in porous silicon membranes from time-resolved interferometric measurements
Wei He¹, Rihan Wu², Igor V. Yurkevich³ (ORCID: 0000-0003-1447-8913), Leigh T. Canham² & Andrey Kaplan²
We performed interferometric time-resolved simultaneous reflectance and transmittance measurements to investigate the carrier dynamics in pump-probe experiments on thin porous silicon membranes. The experimental data was analysed by using a method built on the Wentzel-Kramers-Brillouin approximation and the Drude model, allowing us to reconstruct the excited carriers' non-uniform distribution in space and its evolution in time. The analysis revealed that the carrier dynamics in porous silicon, with ~50% porosity and native oxide chemistry, is governed by the Shockley-Read-Hall recombination process with a characteristic time constant of 375 picoseconds, whereas diffusion makes an insignificant contribution as it is suppressed by the high rate of scattering.
Since the 1990s, research interest in Porous Silicon (p-Si) has grown considerably, following reports that showed photoluminescence in the visible range1,2. Soon thereafter, research into this sponge-like material exploded as it seemingly offered new directions for the development of a wide range of optical and electro-optical applications such as optical interference filters3, solar panel enhancers4, all-optical modulator for far-IR5, multilayer periodic structure with photonic gap6, electroluminescence material7, Mach-Zehnder interferometer based sensors8, and optical biosensors9.
One of the distinctive features of p-Si is its rigid sponge-like skeleton made of silicon having nano-metric dimensions. The specific surface area of p-Si can reach 800 m²/cm³, since porosity significantly enlarges its surface-to-volume ratio10. This property provides a new potential for photoluminescence and light trapping11,12,13,14. To improve the efficiency of p-Si based electronic and electro-optical devices, greater understanding and control of the charge transport mechanism and recombination dynamics is needed. For example, improved transport is crucial for photovoltaic devices to allow the photo-excited carriers to reach electrodes before recombination happens. It is preferable for such devices to establish conditions which combine the provision of charge with a long lifetime and a high diffusion coefficient15,16. On the other hand, a fast relaxation into the conduction band minimum and recombination with holes is critical for the high performance of light modulation devices17,18. The investigation of these transport properties is a complex task as it involves a number of phenomena, such as scattering, recombination and diffusion, which are usually difficult to characterise in a single measurement. The most common experimental methods for investigating charge carrier transport are to measure current-voltage (I-V) characteristics or steady-state photoconductivity. However, the problem of fabrication of reproducible, low resistivity and stable contacts on a p-Si surface has probably influenced most studies to date19. Moreover, in many cases, p-Si samples are usually composed of a porous silicon layer on top of a crystalline silicon (c-Si) substrate, which might distort accurate estimation of the transport properties20,21,22. To avoid these problems in this work, we used free-standing thin membranes of p-Si and contactless ultrafast optical methods.
The main purpose of this work is to establish, as unambiguously as possible, the carrier dynamics and related constants of the moderately mesoporous material, with a stabilised native oxide protective shell. We used the time-resolved femtosecond pump-probe method to simultaneously measure reflectance and transmittance spectra in the temporal range up to 210 ps after the excitation by a femtosecond laser pulse. This experimental method uses the simultaneous recording of the time-resolved reflectance and transmittance interference fringes over the probe beam spectral range of 60 nm interacting with a thin optical p-Si membrane slab. The observation of the interferograms is not accidental: the membrane thickness was deliberately selected to provide a high fringe contrast. These well-resolved fringes enhance the optical response of the probe interacting with the optically excited membrane and improve the sensitivity of the experiment; without the fringes, on thick membranes the pump-induced change of the reflectance and transmittance could be too weak to analyse successfully.
Reflectance and transmittance were recorded simultaneously as a function of the wavelength in order to reduce the number of free parameters and increase the fidelity in the simulation of the experimental data. For the experimental data analysis, we used a recently developed method based on the Wentzel-Kramers-Brillouin (WKB) approximation23,24. Using this method, we retrieved a non-uniform spatial distribution of the excited charge carriers and their evolution as a function of time. We show that the carrier dynamics in our samples are governed exclusively by the recombination process, whereas the contribution of the diffusion is insignificant. From our measurements, we estimated the recombination time to be 375 ps.
Time resolved pump probe measurements and analysis
To evaluate the excited-carrier dynamics in the p-Si membrane, the time-resolved pump-probe transmission and reflection were measured simultaneously over the wavelength range between 765 and 820 nm. The pump fluence was fixed at about 1.5 mJ/cm². The time delay between the pump and probe was scanned from −20 to 210 ps, with 5 ps step size. Figure 1 shows the measurement results of ΔT/T0 and ΔR/R0 on the left and right panels, respectively. The signal at the negative delay times is set for the false-colour representation of the background values of ΔT/T0 and ΔR/R0. It can be seen that at the positive delay times both signals, ΔT/T0 and ΔR/R0, oscillate as a function of the wavelength. These oscillations represent the Fabry-Perot interference fringes of the probe beam propagating through the membrane while excited by the pump. The reason for the pump-induced fringes is the modification of the membrane dielectric function by the free carriers excited by the pump. This creates conditions at which the probe beam components, partially reflected and transmitted by the upper and lower boundaries of the membrane, interfere with one another and intensify or reduce the amount of reflected or transmitted light. The use of a thin membrane allows sensitivity to small (Δε/ε < 10⁻³) pump-induced changes of the dielectric function, through the measurements of the optical interference fringes in reflectance and transmittance spectra, and analysis using the thin-film optics equations23,25,26. The decrease of the fringe contrast as a function of time is related to the decay of the excitation, when the dielectric function altered by the pump returns to its initial value. Thus, simulating the fringes with a suitable optical model allows for the retrieval of the complex dielectric function and its evolution as a function of time. Once the dielectric function evolution is obtained, it can be used to reconstruct the corresponding development of the carrier density using a high-frequency conductivity model, such as Drude theory24,27.
Simultaneously recorded fractional change of the transmittance (a), ΔT/T0, and reflectance (b), ΔR/R0, as a function of the delay time, Δt, and probe wavelength.
To model the optical response of the p-Si membrane, it was considered as a uniform homogeneously mixed material5,27,28, consisting of silicon matrix skeleton and pores filled with air. The optical response of the material can be represented by the effective dielectric function εeff of a 2D composite material described by the following Maxwell-Garnett formula29,30:
$${\varepsilon }_{eff}={\varepsilon }_{m}+2p{\varepsilon }_{m}\frac{{\varepsilon }_{p}-{\varepsilon }_{m}}{{\varepsilon }_{p}+{\varepsilon }_{m}-p({\varepsilon }_{p}-{\varepsilon }_{m})},$$
where εm and εp represent the dielectric functions of the membrane constituents, silicon and air pores, respectively, and p is the volume fraction of the pores. The pores were assumed to be a dispersionless material with εp = 1, dielectrically softer than the silicon skeleton constituent. Since the diameter of the pores and silicon constituents are significantly smaller than the probe wavelength, the assumption of the homogeneous medium approximation and the application of the Maxwell-Garnett formulas are valid31,32.
The free carriers excited optically by the pump modify the dielectric function of the silicon skeleton according to the Drude theory of high frequency conductivity26,33,34:
$${\varepsilon }_{m}={\varepsilon }_{{Si}}-\frac{{\omega }_{p}^{2}}{{\omega }^{2}+i\omega \gamma },$$
where

$${\omega }_{p}^{2}=\frac{{e}^{2}}{{\varepsilon }_{0}}(\frac{N}{{m}_{eff}})$$
represents the plasma frequency and ω is the probe frequency; meff = 0.17 m₀ (with m₀ the free-electron mass) is the reduced mass of the optically excited electron-hole plasma27,35; e is the electron charge and ε0 is the vacuum permittivity; γ denotes the charge-carrier scattering rate; N is the density of the free carriers excited by the pump; εSi is the complex dielectric function of the crystalline silicon used to fabricate the samples, whose value was determined previously23. Substituting Eqs 2 and 3 into Eq. 1 leads to the dependence of the effective dielectric function, εeff, on the probe frequency ω, the scattering rate γ, and the free carrier density N. The decay of N(Δt) induces changes to εeff(Δt) and, consequently, to the contrast of the fringes of ΔT/T0(Δt) and ΔR/R0(Δt). To fully incorporate the excited carrier density decay into the optical model, it must also be considered as a function of the sample depth z. Assuming that the pump is linearly absorbed, immediately after the excitation, when the pump and probe temporally overlap, this function can be presented as \(N(\Delta t=0,z)={N}^{0}(\Delta t=0){e}^{-{\alpha }_{pump}z}\), where αpump = 450 cm⁻¹ is the effective absorption coefficient of the pump, which includes the absorption due to the properties of the material and internal multiple reflection from the sample boundaries; N0(Δt = 0) is the excited carrier density on the sample surface at zero delay. Therefore, the optical model calculating ΔT/T0(Δt) and ΔR/R0(Δt) is underpinned by the development of the carrier density in time and space, N(Δt,z). The same argument applies to the evolution of the effective dielectric function, εeff(Δt,z), in space and time. We note that the complexity of the spatial development in time of the carrier density is somewhat relaxed in this work, as the samples are quasi-one-dimensional, restricting the carrier movement along the interwoven wires of porous silicon.
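To make Eqs 1–3 concrete, the short sketch below (our illustration, not the authors' code; it assumes numpy, and the values used for εSi, the porosity and the carrier density are illustrative placeholders) evaluates the Drude-modified silicon dielectric function and the resulting 2D Maxwell-Garnett effective dielectric function.

```python
import numpy as np

# Physical constants (SI units)
e = 1.602e-19      # electron charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
m0 = 9.109e-31     # free-electron mass, kg

def drude_eps_m(eps_si, omega, N, gamma, m_eff=0.17):
    """Eqs 2-3: silicon dielectric function modified by free carriers.
    N in m^-3; omega and gamma in s^-1; m_eff in units of m0."""
    wp2 = e**2 * N / (eps0 * m_eff * m0)        # plasma frequency squared
    return eps_si - wp2 / (omega**2 + 1j * omega * gamma)

def maxwell_garnett_2d(eps_m, eps_p=1.0, p=0.5):
    """Eq. 1: 2D Maxwell-Garnett effective dielectric function
    for pore fraction p embedded in a silicon matrix."""
    return eps_m + 2 * p * eps_m * (eps_p - eps_m) / (
        eps_p + eps_m - p * (eps_p - eps_m))

# Illustrative numbers: 795 nm probe, N = 2e19 cm^-3 = 2e25 m^-3
omega = 2 * np.pi * 3e8 / 795e-9
eps_m = drude_eps_m(eps_si=13.6 + 0.048j, omega=omega, N=2e25, gamma=6.6e14)
print(maxwell_garnett_2d(eps_m, p=0.5))
```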
To account for non-uniform εeff (z), in calculations of the transmittance and reflectance we used a method based on Wentzel-Kramers-Brillouin (WKB) approximation36, which was developed and previously used in our works on similar porous silicon membranes but having different porosity23,24:
$$T={\left|\sqrt{\frac{q(0)}{q(d)}}\,\frac{(1+r(0))(1-r(d))}{{e}^{-i\psi }-r(0)r(d){e}^{i\psi }}\right|}^{2}$$

$$R={\left|\frac{r(0){e}^{-i\psi }-r(d){e}^{i\psi }}{{e}^{-i\psi }-r(0)r(d){e}^{i\psi }}\right|}^{2},$$
where r(0) and r(d) are the reflection coefficients of the front and rear sample boundaries, respectively; \(q(z)=\sqrt{\frac{{\omega }^{2}}{{c}^{2}}{\varepsilon }_{\mathrm{eff}}(z)-{k}_{x}^{2}}\) is the wavevector of the probe along the z coordinate; \({k}_{x}=\frac{\omega }{c}\sin (\theta )\) is the tangential component; θ is the incidence angle; \(\psi ={\int }_{0}^{d}\,dz\,q(z)\) is the cumulative complex phase of the probe traversing the sample of thickness d.
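A direct numerical transcription of Eqs 4–5 is straightforward once a depth profile εeff(z) is chosen. The sketch below is again our own illustration, not the authors' code: it evaluates ψ by trapezoidal quadrature and assumes simple s-polarised Fresnel coefficients against vacuum for r(0) and r(d); the profile used at the end is an illustrative placeholder.

```python
import numpy as np

c = 3.0e8  # speed of light, m/s

def wkb_RT(eps_of_z, d, wavelength, theta=np.pi / 4, nz=2000):
    """Evaluate Eqs 4-5 for a slab with depth-dependent eps_eff(z).
    Boundary coefficients: s-polarised Fresnel reflections against
    vacuum (an assumption of this sketch)."""
    omega = 2 * np.pi * c / wavelength
    kx = (omega / c) * np.sin(theta)            # tangential wavevector
    z = np.linspace(0.0, d, nz)
    q = np.sqrt((omega / c) ** 2 * eps_of_z(z) - kx**2 + 0j)
    q_vac = np.sqrt((omega / c) ** 2 - kx**2 + 0j)
    r0 = (q_vac - q[0]) / (q_vac + q[0])        # front boundary
    rd = (q[-1] - q_vac) / (q[-1] + q_vac)      # rear boundary
    psi = np.trapz(q, z)                        # cumulative complex phase
    den = np.exp(-1j * psi) - r0 * rd * np.exp(1j * psi)
    t_amp = np.sqrt(q[0] / q[-1]) * (1 + r0) * (1 - rd) / den
    r_amp = (r0 * np.exp(-1j * psi) - rd * np.exp(1j * psi)) / den
    return abs(r_amp) ** 2, abs(t_amp) ** 2

# Illustrative profile: exponentially decaying pump-induced absorption
eps = lambda z: 3.0 + 0.005j + 0.02j * np.exp(-4.5e4 * z)
print(wkb_RT(eps, d=13.5e-6, wavelength=795e-9))
```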
To perform the calculation, it was estimated that \({N}^{0}(\Delta t=0)=(1-{R}_{0})\frac{F}{\hslash \omega }{\alpha }_{pump}=2\times {10}^{19}\) cm⁻³ and the corresponding scattering rate was taken from our previous work5: γ0(Δt = 0) = 7 × 10¹⁴ s⁻¹. These values were used to estimate εeff(z), which determines the Fresnel coefficients r(0) and r(d), the wavevectors q(0) and q(d), and the phase ψ at zero delay time. A genetic algorithm was then used iteratively to find the function N(Δt,z) that best describes the experimentally measured changes of the reflectance and transmittance, ΔT/T0(Δt) and ΔR/R0(Δt), for each delay time, Δt. The scattering constant, γ, was readjusted to 6.6 × 10¹⁴ s⁻¹ to best fit the data, and it was kept constant as a function of time and space. We found that altering γ has a relatively weak effect on the simulation, suggesting that the carrier relaxation rate is saturated under these experimental conditions.
To illustrate the fitting results, the data at several different time delays (5, 55, 100, 150 and 200 ps) were picked out and shown in Fig. 2. It can be seen that the amplitudes of the fringes, shown as black dotted lines, gradually become weaker, and the fitting results, displayed as solid red lines, are a reasonable match for the data. Discrepancies of the fit for ΔT/T0 at longer delay times were difficult to resolve without changing the model, but these are not significant and are tolerable without altering our interpretation of the results.
Representative transient spectra of the transmittance change, ΔT/T0 (top row), and reflectance change, ΔR/R0 (bottom), for the delay times of 5, 55, 100, 150 and 200 ps; black - experimental data and red - theoretical simulation.
The fractional changes of the real and imaginary parts of the dielectric function, Δεr/εr and Δεi/εi, respectively, are shown in Fig. 3. Significantly, the change of the imaginary part is two orders of magnitude greater than that of the real part. This is consistent with the evolution of the shape of the fringes observed in Figs 1, 2. The spectral positions of the maxima and minima of the fringes, governed by the real part of the dielectric function, do not change. In contrast, the width of the troughs and peaks, controlled mostly by the imaginary part, noticeably decreases as the excitation decays as a function of both time and depth. Indeed, such behaviour is expected for a material where γ ~ ωp < ω, for which the Drude model predicts that the imaginary part can be approximated as \({\rm{\Delta }}{\varepsilon }_{i}\approx {\omega }_{p}^{2}\gamma /{\omega }^{3}\), while the real part is nearly constant37. In such conditions, the fractional change of the imaginary part, Δεi/εi, depends linearly on the free carrier density, N, and its change almost exclusively governs the observed changes of the reflectance and transmittance, ΔT/T0 and ΔR/R0, respectively. In fact, their change can be attributed exclusively to the pump-induced free-carrier absorption of the probe.
Fractional change of the imaginary, Δεi/εi (left panel), and real, Δεr/εr (right panel), parts of the effective dielectric constant as a function of the sample depth and delay time, Δt.
Solving the carrier dynamics
The free carrier density reconstructed from the simulation, N(Δt,z), is shown in Fig. 4. It can be seen that, to a large extent, it replicates the trend shown in Fig. 3. The N(Δt,z) function can be used to evaluate the dynamics of the free carriers using the following rate equation:
$$\frac{dN(z,t)}{dt}=D\frac{{\partial }^{2}N(z,t)}{\partial {z}^{2}}-\frac{N(z,t)}{\tau }$$
Left panel: reconstructed decay of the charge carrier density, N, inside silicon constituent of p-Si as a function of the sample depth and delay time, Δt. Right panel: representative decay curves of the charge carrier density, N, at three different distances from the sample surface; red - on the surface: z = 0, green - z = 6 and blue - z = 12 μm. Solid lines represent the model best-fitted to the experimental data, which are shown as dots. For convenience, the same lines are shown on the left panel as well. The black dashed line is the average carrier density, 〈N(t)〉.
In Eq. 6 the first term describes the one-dimensional diffusion along the depth coordinate, with D being the diffusion coefficient. In general, diffusion is a three-dimensional process, but in our samples of porous silicon, consisting of wires nearly aligned along the depth coordinate, z, the process can be assumed to be limited to one dimension only. The second term gives the recombination rate, 1/τ. In semiconductors, the recombination time, τ, can depend on N. The most common processes are Shockley-Read-Hall (SRH), bimolecular and Auger recombination, for which 1/τ is independent of, linearly dependent on and quadratically dependent on N, respectively38. In samples with a non-uniform carrier distribution, the determination of the prevalent recombination process can be quite a complex task, as τ might be spatially non-uniform because of its dependence on N. To resolve possible complications, we initially excluded the diffusion process from the rate equation. The average carrier density was estimated using \(\langle N(t)\rangle =(1/d){\int }_{0}^{d}\,dz\,N(z,t)\); in the integration, the diffusion term vanishes owing to zero-current boundary conditions at the edges of the sample. 〈N(t)〉 is shown in Fig. 4 along with the decay curves of the carrier density on the surface, z = 0, at the middle, z = 6 μm, and near the rear boundary, z = 12 μm, of the sample. All shown curves can be fitted by solving the ordinary differential equation \(\frac{dN(z;t)}{dt}=-\frac{N(z;t)}{\tau }\) with τ = 3.75 ± 0.15 × 10⁻¹⁰ seconds. This suggests that SRH is the main form of recombination and that diffusion is a much slower process, having no observable impact on the carrier dynamics. Indeed, to observe diffusion, the inequality \(D\ge 1/{\alpha }_{pump}^{2}\tau \) must hold; however, recombination in the investigated samples is so fast that this is nearly impossible. Therefore, we conclude that the main recombination process is independent of carrier density and that diffusion is effectively absent in the samples.
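Explicitly, averaging Eq. 6 over the membrane thickness with zero-current boundary conditions, \(\partial N/\partial z=0\) at \(z=0\) and \(z=d\), and assuming τ independent of N, gives $$\frac{d\langle N\rangle}{dt}=\frac{D}{d}{\left[\frac{\partial N}{\partial z}\right]}_{0}^{d}-\frac{\langle N\rangle}{\tau}=-\frac{\langle N\rangle}{\tau},$$ so the average density decays as a single exponential with the same τ regardless of the value of D.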
The origin of the SRH recombination is determined by the capture of carriers by the boron dopants and impurity states on the surface of the pores. The effective cross-section can be estimated according to \(\sigma =1/({N}_{imp}v\tau )\), where \(v={({k}_{B}T/m)}^{1/2}=1.64\times {10}^{7}\) cm/s is the carriers' thermal velocity at room temperature. For the impurity density of \({N}_{imp}=3\times {10}^{18}\) cm⁻³, the cross-section is \(\sigma =5.42\times {10}^{-17}\) cm², a typical value for silicon at room temperature which has been known for decades39. This observation suggests that the SRH in p-Si does not significantly deviate from the bulk counterpart, as suggested previously24.
In conclusion, we investigated the carrier dynamics using time-resolved reflectance and transmittance interferograms of the probe beam. We applied an analysis based on the approximation of the p-Si sample as an effective medium described by the 2D Maxwell-Garnett formula. The contribution of the free carriers to the optical response was described by Drude theory, and their non-uniform distribution by the Wentzel-Kramers-Brillouin approximation. Our study reveals that the carrier dynamics are dominated by recombination, while diffusion is undetectable. The simulation suggests that the excited-carrier scattering rate is γ = 6.6 × 10¹⁴ s⁻¹, implying that the diffusion constant \(D=\frac{{k}_{B}T}{{m}_{eff}}\frac{1}{\gamma } \sim 0.4\) cm²/s is much smaller than the recombination parameter \(1/{\alpha }_{pump}^{2}\tau =1.32\times {10}^{4}\) cm²/s. The excited-carrier spatial distribution and its decay, obtained from the experiment, indicate that the recombination time is independent of the carrier density, as would be expected for the Shockley-Read-Hall mechanism. We estimate the recombination time to be τ = 3.75 ± 0.15 × 10⁻¹⁰ seconds.
Time-resolved interferometric measurements
A Coherent ultrafast laser system was used for the femtosecond pump-probe setup. The system delivers 60-fs pulses at the repetition rate of 1 kHz and has an almost Gaussian-shaped spectrum centered around 795 nm. A beam splitter was used to split the laser into two parts: the pump and probe beams. The power ratio between the pump and probe was more than 100:1. A retroreflector delay stage was used to control the difference between the arrival times of the pump and probe pulses. A combination of a half-wave plate and Brewster angle reflection from a glass block was used to adjust the pump fluence. The polarization of the probe beam was adjusted to yield equal contributions of s and p components, while the pump beam was orthogonally polarized with respect to the probe beam to prevent interference between them. The incident angle of the probe beam was set to 45° and the angle difference between the pump and probe beams was ~20°. The probe and pump beam were focused to spot diameters of ~100 and ~300 μm, respectively, by using different focusing lenses. The noncollinear spatial overlap between the pump and probe spots was checked by a CCD camera equipped with a magnifying lens. The temporal overlap between the pump and probe pulses was identified by second-harmonic generation from a BBO crystal positioned at the sample position. The intensities of the reflected and transmitted probe beam were wavelength analyzed by two spectrometers of the same type (Ocean Optics QE65 Pro). The detected data were presented in the form of a fractional change of the reflectance and transmittance23: ΔR/R0 = (Rt − R0)/R0 and ΔT/T0 = (Tt − T0)/T0, where Rt and Tt are the reflectance and transmittance of the excited-state sample at a time delay t after the pump excitation, respectively, and R0 and T0 are the reflectance and transmittance of the sample without excitation. More details of the experimental setup, data analysis and measurements of R0 and T0 can be found elsewhere23,27,40.
Sample fabrication and characterisation
The investigated p-Si samples were fabricated by the electrochemical anodization of the surface of a 3′′-diameter (100) silicon wafer (boron-doped, 5–15 mΩ cm, corresponding to a dopant density of ~3 × 10¹⁸ cm⁻³), using an electrolyte comprised of methanol and 40% HF in a 1:1 ratio. A current density of 30 mA/cm² and an anodization time of 11 min were chosen to yield a layer with ~50% porosity (calculated by using a gravimetric calibration curve) and ~13.5 μm depth. This layer was detached from the underlying substrate after anodization by applying a 120 mA/cm² pulse (10 s) before being removed from the electrolyte; the free-standing membrane was then rinsed in methanol and air dried. Membranes were stored in ambient air for longer than 2 years, which ensured complete native oxide growth prior to evaluation. To verify the sample morphology, the porosity of >50% and thickness of ~13 μm of the p-Si membrane were estimated from SEM images and from optical characterization based on the transmittance T0 and reflectance R0 measurements and data analysis23. The samples used in this study do not show detectable luminescence, as p− silicon substrates are generally better suited to this purpose41. Instead, we used p+ substrates, which are a better choice to obtain relatively thick and optically uniform membranes28. The average diameters of the pores and silicon interwoven wires were about 40 and 20 nanometers, respectively, and they do not impose strong quantum confinement on free carriers. In our previous work we also investigated the optical constants of the membrane, which revealed the real and imaginary parts of the complex effective dielectric function to be weakly dispersive around the values of ~3 and ~0.005, respectively23. To avoid confusion, we note that the recently published work on the charge carrier dynamics in p-Si was carried out on samples with much higher porosity of >70% and using a rather different wavelength in the 3.5–5 μm range24. Hence, the results of that work should be compared with care, as it is very likely that the probe wavelength and porosity affect the observed carrier recombination times.
Canham, L. T. Silicon quantum wire array fabrication by electrochemical and chemical dissolution of wafers. Appl. Phys. Lett. 57, 1046 (1990).
Lehmann, V. & Gösele, U. Porous silicon formation: A quantum wire effect. Appl. Phys. Lett. 58, 856–858, https://doi.org/10.1063/1.104512 (1991).
Bilyalov, R. R., Stalmans, L., Schirone, L. & Levy-Clement, C. Use of porous silicon antireflection coating in multicrystalline silicon solar cell processing. IEEE Transactions on Electron Devices 46, 2035–2040, https://doi.org/10.1109/16.791993 (1999).
Razali, N. S. M., Rahim, A. F. A., Radzali, R. & Mahmood, A. Study of double porous silicon surfaces for enhancement of silicon solar cell performance. AIP Conf. Proc. 1885, 020261, https://doi.org/10.1063/1.5002455 (2017).
Park, S. J. et al. All-optical modulation in mid-wavelength infrared using porous si membranes. Sci. Reports 6, 30211 (2016).
Agarwal, V., Mora-Ramos, M. E. & Alvarado-Tenorio, B. Optical properties of multilayered period-doubling and rudin-shapiro porous silicon dielectric heterostructures. Photonics Nanostructures - Fundamentals Appl. 7, 63–68, http://www.sciencedirect.com/science/article/pii/S1569441008000473, https://doi.org/10.1016/j.photonics.2008.11.001 (2009)
Gelloz, B. Handbook of Porous Silicon (Springer International Publishing, Switzerland, 2014).
Kim, K. & Murphy, T. E. Porous silicon integrated mach-zehnder interferometer waveguide for biological and chemical sensing. Opt. Express 21, 19488–19497, http://www.opticsexpress.org/abstract.cfm?URI=oe-21-17-19488, https://doi.org/10.1364/OE.21.019488 (2013).
Dancil, K.-P. S., Greiner, D. P. & Sailor, M. J. A porous silicon optical biosensor: Detection of reversible binding of igg to a protein a-modified surface. J. Am. Chem. Soc. 121, 7925–7930, https://doi.org/10.1021/ja991421n (1999).
Hérino, R. Properties of Porous Silicon, chap. Pore size distribution in porous silicon, pp. 89–96. EMIS datareviews series, no. 18 (London, U.K. : IEE, INSPEC, 1997).
Sun, W., Kherani, N. P., Hirschman, K. D., Gadeken, L. L. & Fauchet, P. M. A three-dimensional porous silicon p-n diode for betavoltaics and photovoltaics. Adv. Mater. 17, 1230–1233 (2005).
Aroutiounian, V., Martirosyan, K. & Soukiassian, P. Almost zero reflectance of a silicon oxynitride/porous silicon double layer antireflection coating for silicon photovoltaic cells. J. Phys. D: Appl. Phys. 39, 1623 (2006).
Wolkin, M., Jorne, J., Fauchet, P., Allan, G. & Delerue, C. Electronic states and luminescence in porous silicon quantum dots: the role of oxygen. Phys. Rev. Lett. 82, 197 (1999).
Cullis, A., Canham, L. & Calcott, P. The structural and luminescence properties of porous silicon. J. Appl. Phys. 82, 909–965 (1997).
Nozik, A. J. Nanoscience and nanostructures for photovoltaics and solar fuels. Nano letters 10, 2735–2741 (2010).
Priolo, F., Gregorkiewicz, T., Galli, M. & Krauss, T. F. Silicon nanostructures for photonics and photovoltaics. Nat. nanotechnology 9, 19–32 (2014).
Gan, K.-G., Sun, C.-K., DenBaars, S. P. & Bowers, J. E. Ultrafast valence intersubband hole relaxation in ingan multiple-quantum-well laser diodes. Appl. physics letters 84, 4675–4677 (2004).
Williams, K. W., Monahan, N. R., Koleske, D. D., Crawford, M. H. & Zhu, X.-Y. Ultrafast and band-selective auger recombination in ingan quantum wells. Appl. Phys. Lett. 108, 141105 (2016).
Kanungo, J. & Basu, S. Handbook of Porous Silicon (Springer International Publishing, Switzerland, 2014).
Ram, S. Handbook of Porous Silicon (Springer International Publishing, Switzerland, 2014).
Ben-Chorin, M., Möller, F. & Koch, F. Band alignment and carrier injection at the porous-silicon–crystalline-silicon interface. J. applied physics 77, 4482–4488 (1995).
Ben-Chorin, M., Möller, F. & Koch, F. Nonlinear electrical transport in porous silicon. Phys. Rev. B 49, 2981 (1994).
He, W., Yurkevich, I. V., Canham, L. T., Loni, A. & Kaplan, A. Determination of excitation profile and dielectric function spatial nonuniformity in porous silicon by using wkb approach. Opt. express 22, 27123–27135 (2014).
Zakar, A. et al. Carrier dynamics and surface vibration-assisted Auger recombination in porous silicon. Phys. Rev. B 97, 155203, https://doi.org/10.1103/PhysRevB.97.155203 (2018).
Heavens, O. S. Optical Properties of Thin Solid Films (Dover Publications, 1985).
Downer, M. C. & Shank, C. V. Ultrafast heating of silicon on sapphire by femtosecond optical pulses. Phys. Rev. Lett. 56, 761–764 (1986).
He, W., Yurkevich, I. V., Zakar, A. & Kaplan, A. High-frequency conductivity of optically excited charge carriers in hydrogenated nanocrystalline silicon investigated by spectroscopic femtosecond pump–probe reflectivity measurements. Thin Solid Films, http://www.sciencedirect.com/science/article/pii/S004060901500228X, https://doi.org/10.1016/j.tsf.2015.03.023 (2015)
Campos, A., Torres, J. & Giraldo, J. Porous silicon dielectric function modeling from effective medium theories. Surf. Rev. Lett. 9, 1631–1635 (2002).
Schwarz, R. et al. Photocarrier grating technique in mesoporous silicon. Thin Solid Films 255, 23–26 (1995).
Sihvola, A. H. Electromagnetic mixing formulas and applications. No. 47 in IEE Electromagnetic Waves Series (The Institute of Elrctrical Engineers, London, UK, 1999).
Niklasson, G. A., Granqvist, C. G. & Hunderi, O. Effective medium models for the optical properties of in homogeneous materials. Appl. Opt. 20, 26 (1981).
Hashin, Z. & Shtrikman, S. A varitional approach to the theory of the effective magnetic permeability of multiphase materials. J. Appl. Phys. 33, 3125–3131 (1962).
Malý, P. et al. Picosecond and millisecond dynamics of photoexcited carriers in porous silicon. Phys. Rev. B 54, 7929 (1996).
Sokolowski-Tinten, K. & von der Linde, D. Generation of dense electron-hole plasmas in silicon. Phys. Rev. B 61, 2643–2650 (2000).
Sabbah, A. J. & Riffe, D. M. Measurement of silicon surface recombination velocity using ultrafast pump–probe reflectivity in the near infrared. J. Appl. Phys. 88, 6954 (2000).
Sakurai, J. J. Modern Quantum Mechanics (Addison-Wesley, 1993).
Ziman, J. M. Principles of the Theory of Solids (Cambridge University Press, 1972).
Capizzi, M. et al. Electron-hole plasma in direct-gap Ga₁₋ₓAlₓAs and k-selection rule. Phys. Rev. B 29, 2028–2035, https://doi.org/10.1103/PhysRevB.29.2028 (1984).
Abakumov, V. N., Perle, V. I. & Yassievich, I. N. Capture of carriers by attractive centers in semiconductors. Sov. Phys. Semicond. 12, 1 (1978).
Roger, T. W., He, W., Yurkevich, I. V. & Kaplan, A. Enhanced carrier-carrier interaction in optically pumped hydrogenated nanocrystalline silicon. Appl. Phys. Lett. 101, 141904 (2012).
Joo, J. et al. Enhanced quantum yield of photoluminescent porous silicon prepared by supercritical drying. Appl. Phys. Lett. 108, 153111 (2016).
The authors would like to thank Dr. Thomas Roger and Dr. Ammar Zakar for the fruitful discussion and support for the experimental setup. W.H. thanks the funding from the China Scholarship Council (CSC) for the support of his research.
College of Physics and Materials Science, Henan Normal University, Xinxiang, 453007, People's Republic of China
Wei He
School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom
Rihan Wu, Leigh T. Canham & Andrey Kaplan
Nonlinearity and Complexity Research Group, Aston University, Birmingham, B4 7ET, United Kingdom
Igor V. Yurkevich
W.H. conceived the experiments, W.H. and A.K. conducted the experiments, A.K., W.H., I.V.Y. and R.W. analysed the results. L.C. fabricated the samples. All authors participated in the writing and reviewing of the manuscript.
Correspondence to Andrey Kaplan.
He, W., Wu, R., Yurkevich, I.V. et al. Reconstructing charge-carrier dynamics in porous silicon membranes from time-resolved interferometric measurements. Sci Rep 8, 17172 (2018). https://doi.org/10.1038/s41598-018-35210-z
Porous Silicon
Carrier Dynamics
Effective Dielectric Function
Time-resolved Pump-probe Measurements
Probe Beam
The design of high affinity human PD-1 mutants by using molecular dynamics simulations (MD)
Jiangfeng Du¹, Yaping Qin¹, Yahong Wu¹, Wenshan Zhao¹, Wenjie Zhai¹, Yuanming Qi¹, Chuchu Wang¹ & Yanfeng Gao¹ (ORCID: 0000-0002-3289-9554)
Programmed cell death protein 1 (PD-1), a negative co-stimulatory molecule, plays crucial roles in immune escape. Blockade of the interaction between PD-1 and PD-L1 shows exciting clinical responses in a fraction of cancer patients, and this success makes PD-1 a valuable target in immune checkpoint therapy. For the rational design of PD-1-targeting modulators, the ligand binding mechanism of PD-1 must first be well understood.
In this study, we applied 50 ns molecular dynamics (MD) simulations to observe the structural properties of the PD-1 molecule in both the apo and ligand-bound states, and we studied the structural features of PD-1 in human and mouse, respectively.
The results showed that apo hPD-1 was more flexible than hPD-1 in the PD-L1-bound state. We unexpectedly found that K135 was important for the binding energy even though it is not located at the binding interface. Moreover, the residues that stabilized the interactions with PD-L1 were identified. Taking the dynamic features of these residues into account, we identified several residue sites where mutations may gain ligand-binding function. The in vitro binding experiments revealed that the mutants M70I, S87W, A129L, A132L, and K135M bound the ligand better than wild-type PD-1.
The structural information from MD simulation combined with in silico mutagenesis provides guidance to design engineered PD-1 mutants to modulate the PD-1/PD-L1 pathway.
T cell activation and exhaustion are precisely controlled by two signaling pathways in the immune system: the T cell receptor (TCR) pathway [1] and the checkpoint pathway [2]. TCR is expressed on the surface of T cells and recognizes epitope peptides presented by antigen presenting cells (APCs). The engagement of an epitope by TCR stimulates specific T cell clonal expansion, which protects us from infection and tumorigenesis. However, to prevent excessive immune responses and damage to normal tissue, the immune system has developed a series of negative regulatory pathways, in which programmed cell death protein 1 (PD-1) serves as one of the most important modulators.
Human PD-1 (hPD-1), a member of the CD28 family, is a type 1 transmembrane immunoglobulin with a total length of 268 amino acids. Its gene is located on the long arm of chromosome 2, the second largest chromosome, which suggests that the protein may be cross-linked with many other gene products and involved in several important diseases such as inflammation, cancer, and autoimmune diseases [3]. hPD-1 is composed of three domains from N to C terminus: the extracellular domain (ectodomain), the transmembrane region and the cytoplasmic domain. The ectodomain comprises 150 amino acids and contains four glycosylation sites (N49, N58, N74, and N116) and one disulfide bond (C54-C123) (Fig. 1a). This domain interacts with the ligand PD-L1, which is expressed on cells such as antigen presenting cells, lymphocytes, endothelial cells and fibroblasts (Fig. 1b and c). The helical transmembrane region (TM) of 21 amino acids (V171-I191) anchors into the membrane of immune cells and maintains the topology of the PD-1 structure [3]. The cytoplasmic domain recruits tyrosine phosphatases 1 and 2 (SHP-1 and SHP-2) and terminates TCR signal transduction to regulate the activity of T cells [4].
The topological and functional features of human PD-1. a The composition of the full-length human PD-1 domains, where PTM-modified residues are marked by red asterisks and the disulphide bond is indicated. b The interaction model of the extracellular domain of the human PD-1/PD-L1 complex (green: human PD-1; blue: human PD-L1). c The formation of the PD-1/PD-L1 complex triggers the negative signal for T cell exhaustion. d Sequence alignment between the human and mouse PD-1 molecules, with a sequence identity (ID) of 65%. Green triangles indicate sites located at the binding interfaces of both human and mouse PD-1, black asterisks indicate sites occurring only at the human PD-1 interface, and red asterisks indicate sites occurring only at the mouse PD-1 interface
The interaction of PD-1 with its ligand PD-L1 can promote T cell anergy, apoptosis and exhaustion (Fig. 1c) to prevent excessive T cell activation and maintain self-tissue tolerance [5]. Under physiological conditions, the PD-1/PD-L1 pathway plays a critical role in negatively regulating immune-mediated tissue damage [6,7,8,9]; otherwise, an excessive immune response may induce allergic responses [10] or even autoimmune diseases [11]. Cancer treatment by modulating the PD-1/PD-L1 axis has been highly promoted since PD-L1 was reported to be over-expressed in a wide variety of solid tumors [12]. These tumors are able to manipulate the PD-1/PD-L1 axis and in turn evade immune surveillance. Blocking the interaction between PD-1 and PD-L1 with antibody drugs (such as nivolumab and pembrolizumab) has shown exciting clinical benefits in a fraction of cancer patients and across broad cancer types. The success of these antibody drugs makes PD-1 a valuable target in the field of immune checkpoint therapy.
We sought to better understand the functionality of the PD-1 molecule and its ligand, PD-L1, using detailed 3D structures and their interactions in molecular dynamics simulations. These findings should facilitate the rational design of molecules that can modulate PD-1's pathways. To date, a series of experimentally determined structures have been reported for both hPD-1 and mouse PD-1 (mPD-1) molecules (Table 1); the two share a similar immunoglobulin topology in 3D and a sequence identity of 65% (Fig. 1d). Although these 3D structures revealed the structural basis of PD-1 molecules at the atomic level, several shortcomings may hamper our understanding of the structural features of the molecules and their binding mechanism. Firstly, many mutations occur in the crystal structures, such as N33M, C93S, C83Sm (mutation occurring in mPD-1), L128Rm, and A132Lm [13,14,15]. Secondly, X-ray structure models are not always complete and contain uncertainties in the determination of atom positions, especially in fractions with high temperature factors. For example, the fragments T59-E61, S73-N74, D85-D92 and A129-K131 could not be modeled in crystal structures of the PD-1 molecule [16,17,18,19,20]. Thirdly, special conditions such as high salt concentration, low temperature, particular pH values or special ions may be employed to crystallize a protein system, so a crystallized structure may differ from the one under physiological conditions. Fourthly, proteins are dynamic in solution, and these dynamical features facilitate PD-1/PD-L1 recognition and interaction, but X-ray models are not sufficient to study the movement of PD-1. Therefore, a thorough understanding of the PD-1/PD-L1 interaction requires its dynamical features in atomistic detail. Molecular dynamics (MD) simulations play an important role in understanding protein dynamics and work hand in hand with structural information from crystallography [21,22,23,24]. The approach can mimic atomic movements dynamically under a given condition and makes it possible to study residue flexibility, conformational movements, interactions, and binding energy distributions, all of which provide important hints for drug discovery [25]. In this work, we employed conventional molecular dynamics simulations using the GROMACS package (version 4.6) to study the structural properties and ligand binding mechanism of PD-1 molecules. We mainly aimed to observe the structural properties of PD-1 in different states, to identify the importance of individual residues in terms of binding energy, to perform guided in silico mutagenesis, and to measure the PD-L1 binding potency of the predicted mutants.
Table 1 List of the experimentally determined structures of the extracellular domain of PD-1
The residue numberings for the human and mouse PD-1 molecules used here are those of the mature, processed protein sequence. The beta strands are labeled A, B, C, D, E, F, G, H from N to C terminus in this study.
Construction of the apo hPD-1, apo mPD-1, and PD-1/PD-L1 complex systems
Four simulation systems (Additional file 1: Figure S1) were constructed to study the structural properties of PD-1's extracellular domain and its ligand binding mechanism. The protein structure for apo hPD-1 was retrieved from 3RRQ and ranged from N33 to A149, where E61 and D85-D92 were missing in the crystal structure. The structure of apo mPD-1 was from 1NPU, where C83 was mutated to S83. The coordinates of the human PD-1/PD-L1 (hPD-1/PD-L1) complex were retrieved from 4ZQK. In the complex, hPD-L1 comprised 115 amino acids from A18-A132, and hPD-1 contained 114 amino acids from N33 to E146, where the fragment D85-D92 was absent. Since there was no crystal structure for the mouse PD-1/PD-L1 (mPD-1/PD-L1) complex, we extracted the mPD-1 structure from 3BIK, a crystal structure of the complex of mPD-1 and human PD-L1 (hPD-L1). The structure of mPD-L1 was modeled by a homology modeling protocol (Molecular Operating Environment (MOE) package, version 2015.10) based on hPD-L1 (3SBW), which shared a sequence identity of 73%. Next, the modeled mPD-L1 was substituted for hPD-L1 in the structure of 3BIK using the alignment/superposition function in the MOE package, which created the mPD-1/PD-L1 complex. A 129-step energy minimization was performed in the MOE package to remove bumps and optimize the structure of the complex (mPD-1/PD-L1). The constructed mPD-1/PD-L1 complex contained a PD-1 molecule with a length of 133 amino acids (L25-S157m) and a PD-L1 molecule with a length of 221 amino acids (F19-H239m).
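The alignment/superposition step above was performed with MOE. As a minimal generic stand-in for that operation (not MOE's implementation), the same least-squares superposition can be written with the Kabsch algorithm; the numpy sketch below assumes two matched (N, 3) coordinate arrays and leaves file parsing to the reader.

```python
import numpy as np

def kabsch_superpose(mobile, target):
    """Least-squares superposition of matched (N, 3) coordinate arrays (Kabsch)."""
    pm, qm = mobile.mean(axis=0), target.mean(axis=0)
    P, Q = mobile - pm, target - qm
    U, _, Vt = np.linalg.svd(P.T @ Q)            # SVD of the 3x3 covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation
    return P @ R.T + qm                          # mobile mapped onto target

def rmsd(a, b):
    """RMSD between two matched (N, 3) coordinate arrays."""
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))
```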
All structures were protonated and optimized at physiological conditions (310 K, pH 7.0) in the MOE package.
Atomistic molecular dynamics simulation
GROMACS 4.6 [26] was used to perform the molecular dynamics simulations; the SPC/E water model was used, with the water density set to 1000 g/L. The simulation box was cubic, with the protein/complex located at the center of the box at a distance of 10 Å from the periodic boundary. The optimized potentials for liquid simulations all-atom (OPLS-AA) force field [27] was chosen to define the parameter sets in terms of atoms, bonds, protonation and energy functions. The systems were neutralized at the physiological concentration of 0.154 mol/L and pH 7.0 by adding sodium and chloride ions. Details of the box sizes, ion numbers, and water counts for each system are given in Additional file 1: Table S1.
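As a concrete illustration of the preparation steps above, the following Python sketch chains the GROMACS 4.6 command-line tools via subprocess. The file names and the ions.mdp parameter file are assumptions for illustration, and ion residue names and binary names vary between GROMACS and force-field versions; treat this as a hedged sketch, not the authors' exact pipeline.

```python
import subprocess

def run(cmd):
    """Run one GROMACS command and fail loudly on a non-zero exit code."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Topology with OPLS-AA and the SPC/E water model.
run(["pdb2gmx", "-f", "pd1_pdl1.pdb", "-o", "processed.gro",
     "-p", "topol.top", "-ff", "oplsaa", "-water", "spce"])
# 2. Cubic box with a 10 A (1.0 nm) solute-to-boundary distance.
run(["editconf", "-f", "processed.gro", "-o", "boxed.gro",
     "-c", "-d", "1.0", "-bt", "cubic"])
# 3. Solvation (GROMACS 4.6 used 'genbox'; later versions use 'gmx solvate').
run(["genbox", "-cp", "boxed.gro", "-cs", "spc216.gro",
     "-o", "solvated.gro", "-p", "topol.top"])
# 4. Neutralize at 0.154 mol/L NaCl (ion names depend on the force field).
run(["grompp", "-f", "ions.mdp", "-c", "solvated.gro",
     "-p", "topol.top", "-o", "ions.tpr"])
run(["genion", "-s", "ions.tpr", "-o", "system.gro", "-p", "topol.top",
     "-pname", "NA", "-nname", "CL", "-neutral", "-conc", "0.154"])
```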
Energy minimization (EM) was performed on each system to remove atom bumps and unfavorable interactions via a two-step procedure. In the first step, the protein and ions were restrained as fixed objects, and a steepest descent minimization algorithm with a step size of 0.01 ps and an update frequency of 1 fs was used to optimize the positions of the water molecules until the maximum force between any two atoms was less than 100 kJ mol−1 nm−1. In the second step, all atoms in the system were subjected to energy minimization with the conjugate gradient method until the maximum force in the system was less than 10 kJ mol−1 nm−1. The systems were then equilibrated via two simulation steps. First, the systems were gradually heated to 310 K via a 1 ns NVT ensemble simulation, where the Verlet scheme was chosen to control the temperature. Once the temperature was stabilized at 310 K, the systems were equilibrated by a 1 ns NPT ensemble simulation, where the Parrinello-Rahman barostat was chosen to control the pressure (constant at 1 bar) and the Verlet scheme to control the temperature (constant at 310 K). The PD-1/PD-L1 molecules in the systems were constrained by the LINCS method during the entire equilibration procedure.
Fifty-nanosecond (ns) production simulations were performed to observe the dynamics of the overall PD-1 structure and the atomistic interactions of PD-1/PD-L1 under physiological conditions. A leap-frog integrator with a time step of 2 fs was employed, the particle mesh Ewald (PME) method was selected to treat long-range electrostatics, and the van der Waals cutoff was set to 10 Å.
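Continuing the sketch above, a production run matching the stated parameters could be launched as below; md.mdp is a hypothetical parameter file assumed to request 25,000,000 leap-frog steps of 2 fs (i.e. 50 ns), PME electrostatics and a 1.0 nm van der Waals cutoff.

```python
import subprocess

# 50 ns / 2 fs per step = 25,000,000 steps, set inside the assumed md.mdp.
subprocess.run(["grompp", "-f", "md.mdp", "-c", "npt.gro",
                "-p", "topol.top", "-o", "md.tpr"], check=True)
subprocess.run(["mdrun", "-deffnm", "md"], check=True)
```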
Calculations of binding energy and the solvent accessible surface area (SASA)
The binding energies between PD-1 and PD-L1 in each complex were calculated using MM-PBSA, one of the most widely used methods for computing the interaction energy of biomolecular complexes. In this study, we employed the g_mmpbsa module for the binding energy calculations. The program analyzed the molecular dynamics trajectories and estimated the binding energy (ΔG) of PD-1 to its ligand PD-L1 by calculating four terms separately: the molecular mechanics energy in vacuum (EMM), the entropic contribution (ΔS), the polar solvation energy (ΔGp) and the non-polar solvation energy (ΔGap) [28]. The binding energy between the two components was estimated by the following formula (Formula 1):
$$ \Delta \mathrm{G}=<{E}_{MM}>+<\Delta {G}_p>+<\Delta {G}_{ap}>-T<\Delta S> $$
where T denotes the temperature (310 K) used in the simulation environment.
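For clarity, Formula 1 amounts to the simple bookkeeping below; the component values passed in would come from the g_mmpbsa output, and the numbers in the usage example are hypothetical, not the paper's results.

```python
def mmpbsa_delta_g(e_mm, dg_polar, dg_apolar, t=310.0, ds=0.0):
    """Formula 1: dG = <E_MM> + <dG_p> + <dG_ap> - T*<dS> (kJ/mol).

    As noted later in the Discussion, the entropy term is often omitted
    for a system of this size, which corresponds to leaving ds at 0.
    """
    return e_mm + dg_polar + dg_apolar - t * ds

# Hypothetical component values, for illustration only:
print(mmpbsa_delta_g(e_mm=-1150.0, dg_polar=300.0, dg_apolar=-60.3))
```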
The embedded program "gmx sasa" in GROMACS 4.6 (gmx sasa -s md.tpr -f md.trr -o sasa.xvg) was used to calculate the SAS area of the PD-1/PD-L1 complexes. The output over the whole trajectory was further averaged every 100 snapshots. Theoretically, the SASA of the complex is negatively related to the area of the binding interface. A simplified formula was applied to describe the relation between the SASA and the area of the binding interface (Formula 2),
$$ {\mathrm{SASA}}_{{\mathrm{T}}_1}-{\mathrm{SASA}}_{{\mathrm{T}}_0}=\frac{\left({\mathrm{A}}_{{\mathrm{IF}}_{{\mathrm{T}}_1}}-{\mathrm{A}}_{{\mathrm{IF}}_{{\mathrm{T}}_0}}\right)}{2} $$
where T0 and T1 denote the simulation time points; \( {\mathrm{SASA}}_{{\mathrm{T}}_0} \) and \( {\mathrm{SASA}}_{{\mathrm{T}}_1} \) are the solvent accessible surface areas of the PD-1/PD-L1 complex at those time points; \( {\mathrm{A}}_{{\mathrm{IF}}_{{\mathrm{T}}_1}} \) is the area of the binding interface of PD-1 at time point T1, and \( {\mathrm{A}}_{{\mathrm{IF}}_{{\mathrm{T}}_0}} \) is the area of the binding interface of PD-1 at time point T0.
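A small helper makes the bookkeeping of Formula 2 explicit. Since the text states that the complex SASA is negatively related to the interface area (surface buried at the interface is hidden on both partners), a drop in SASA is read here as interface growth; the sign convention follows that narrative and the factor of two follows Formula 2, so treat this as an interpretive sketch.

```python
def interface_area_change(sasa_t0, sasa_t1):
    """Interface-area change (A^2) implied by a SASA change between T0 and T1.

    A decrease in complex SASA (sasa_t1 < sasa_t0) maps to interface growth,
    per the negative SASA/interface relation stated above.
    """
    return 2.0 * (sasa_t0 - sasa_t1)

# e.g. a 300 A^2 SASA drop over the run (hypothetical absolute values):
print(interface_area_change(11300.0, 11000.0))
```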
In silico mutagenesis
The human PD-1/PD-L1 complex after the 50 ns simulation was used to perform in silico mutagenesis. The proposed residue sites were substituted with the 20 amino acids, and an ensemble of conformations (limited to 25) was generated for each mutant by LowModeMD, which uses implicit vibrational analysis to focus a 50 ps MD trajectory. MM/GBVI was applied to calculate the binding affinity between each conformation and the PD-L1 molecule. The conformation with the best binding affinity was selected as the final mutant structure. The force field used for the calculations was Amber10:EHT, and the implicit solvent was the reaction field (R-Field) model. All calculations were performed in the MOE package; a schematic of the scan loop is sketched below.
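The control flow of this residue scan can be summarized as follows. The three helper functions are hypothetical stand-ins for MOE's mutation, LowModeMD conformer generation and MM/GBVI scoring; only the loop structure reflects the protocol described above.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(structure, site, aa):           # hypothetical stand-in for MOE
    return (structure, site, aa)

def low_mode_conformers(mutant, n=25):     # hypothetical LowModeMD stand-in
    return [mutant] * n

def gbvi_binding_affinity(conformer):      # hypothetical MM/GBVI stand-in
    return random.uniform(-100.0, 0.0)

def residue_scan(structure, site):
    """Return the substitution whose best conformer scores lowest (strongest)."""
    best_aa, best_score = None, float("inf")
    for aa in AMINO_ACIDS:
        mutant = mutate(structure, site, aa)
        score = min(gbvi_binding_affinity(c) for c in low_mode_conformers(mutant))
        if score < best_score:
            best_aa, best_score = aa, score
    return best_aa, best_score
```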
Mutagenesis and expression of human PD-1 mutants
Human PD-1 expression vectors (pEGFP-N1-hPD-1) contained GFP fused in frame to the C terminus of wild-type or mutant PD-1. The mutants were generated by site-directed mutagenesis with the QuickChange kit (Thermo Fisher, US). The constructs in LB medium were subjected to DNA sequencing to confirm the correctness of the mutations. HEK-293T cells were transfected with the expression vector pEGFP-N1-hPD-1 by the CaCl2 method. The cells were harvested 36 h after transfection and incubated in flow cytometry buffer (PBS, 2% FBS); the expression level of PD-1 was then verified by staining with a PE-conjugated anti-human PD-1 antibody (eBioscience, US). The cells were washed and incubated with hPD-L1-Fc protein (Sino Biological Inc., China), and then stained with an APC-conjugated anti-human IgG antibody (Biolegend, US) on ice for 30 min. Next, the cells were acquired on a FACSCalibur flow cytometer (BD Biosciences, US) and analyzed with CELLQuest™ software. Data are represented as the mean fluorescence intensity (MFI).
The tertiary structures of PD-1 molecules in different states
Proteins are dynamic under physiological conditions in order to fulfill their functions, especially proteins engaged in protein-protein interactions. To understand the dynamical properties of hPD-1 in the apo and PD-L1-bound states, four 50 ns MD simulations under physiological conditions (pH 7.0, 310 K, 1 bar, 0.154 mol/L NaCl) were performed, one for each system: human PD-1 in the ligand-free state (hPD-1 apo state), human PD-1 in the PD-L1-bound state (hPD-1 bound state), mouse PD-1 in the ligand-free state (mPD-1 apo state), and mouse PD-1 in the PD-L1-bound state (mPD-1 bound state). The root mean square deviation (RMSD) curves of the four trajectories ascended gradually to a plateau, revealing that the PD-1 molecules reached a structurally stable state (Fig. 2a). The analysis of the MD trajectories showed that hPD-1 in the apo state was more flexible than in the PD-L1-bound state (Fig. 2a), which is reasonable, as the PD-1/PD-L1 interaction restricts the freedom of PD-1's movement. Apo PD-1 appeared to undergo transient conformational changes during 30–40 ns, and its RMSD value was 2.9 Å in the stable state (Fig. 2a). In the ligand-bound state, hPD-1 reached equilibrium relatively easily, and its RMSD value was 2.5 Å in the equilibrated state.
Flexibility of the PD-1 molecules during the molecular dynamics simulations. a Root mean square deviation (RMSD) curves of PD-1 in the four systems. Human PD-1 was less stable than mouse PD-1, and human PD-1 in the apo state was more flexible than in the bound state. b Differences in the Cα RMSD of hPD-1 between the apo and bound states in the most common structures from the MD simulation trajectories. P89 in the P-loop was the most flexible residue. c In the apo state of hPD-1, residues such as D85, D92 and R94 in the P-loop interacted with K78, R114 and D117. d In the bound state of hPD-1, the conformation of the P-loop was maintained by three inner interactions, between E84-R86 and Q91-C93
The MD simulation trajectories (apo hPD-1 and bound hPD-1) contained a list of structures evolving computationally from unstable to stable states. To obtain the most stable and most representative structures from the trajectories, the trajectories were clustered with a threshold of 10 Å. The trajectory of apo hPD-1 was clustered into 190 groups, of which the largest (group name: aG188) contained 672 structures (Additional file 1: Figure S2). The trajectory of bound hPD-1 was clustered into 8 groups, of which the largest (group name: bG7) contained 1612 structures (Additional file 1: Figure S2). The averaged structures of aG188 and bG7 were selected as the final structures for the apo and bound hPD-1 models, respectively. Detailed comparison of hPD-1 between the apo and bound states showed that the structures had an RMSD value of 3.14 Å over all Cα atoms, with a significant change in the loop region (P-loop) P83-R94, where the maximum Cα RMSD (at residue P89) was 16 Å, which altered the local interactions (Fig. 2b). In the apo state, D85, D92 and R94 in the P-loop were able to form 7 electrostatic interactions with K78 (strand D), R114 (strand F) and D117 (strand F) (Fig. 2c). For example, the interaction energy between D85 and K78 (strand D) was − 15.2 kcal/mol, as shown in Fig. 2c. R94 formed four interactions with D92 and D117, which formed two additional interactions with R114. However, in the bound state, the residues of the P-loop did not form any interactions with other regions of the molecule. The P-loop's conformation was maintained by three inner interactions: one between Q91-C93 and two between E84-R86 (Fig. 2d).
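The clustering step above (grouping trajectory frames under a 10 Å threshold) was done with GROMACS tooling; the numpy sketch below reproduces the same neighbor-counting idea on a precomputed pairwise RMSD matrix, purely to make the grouping criterion concrete.

```python
import numpy as np

def neighbor_cluster(rmsd_matrix, cutoff=10.0):
    """Greedy clustering of an (n, n) pairwise RMSD matrix (in angstrom).

    Repeatedly picks the frame with the most neighbors within the cutoff
    as a cluster center and removes that cluster, so the first clusters
    returned tend to be the largest (cf. groups aG188 and bG7 above).
    """
    remaining = set(range(len(rmsd_matrix)))
    clusters = []
    while remaining:
        idx = sorted(remaining)
        neigh = rmsd_matrix[np.ix_(idx, idx)] < cutoff
        center = idx[int(neigh.sum(axis=1).argmax())]
        members = [j for j in idx if rmsd_matrix[center, j] < cutoff]
        clusters.append((center, members))
        remaining.difference_update(members)
    return clusters
```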
The atomic fluctuation of each residue was evaluated during the simulation, and the results indicated that the hPD-1 molecule had different flexibility patterns in the two states (Fig. 3a). Several residues in the PD-L1 binding area (indicated by green rectangles in Fig. 3a) had different flexibility values between the apo and bound states; N74 was the most flexible (RMSF > 4.4 Å) in the apo state while it was almost rigid (RMSF < 2 Å) in the bound state (Fig. 3a). By comparing the interaction environments of N74, we found that N74 is located in a turn region with two inner hydrogen bonds (S71-Q75, S71-N74). In the apo state, N74 was slightly constrained by Q75 and had a weak hydrogen bond (− 0.5 kcal/mol) with solvent atoms, which left the residue flexible in the solvent (Fig. 3b). However, in the PD-L1-bound state, N74 was surrounded by a list of residues from both hPD-1 and hPD-L1 as well as water molecules. S71, S73 and Q75 together formed firm interactions with R125 (hPD-L1) and D26 (hPD-L1), which further gathered 5 water molecules and restrained N74 on one side. On the other side, M70, N74 and R139 were stabilized by five other water molecules (Fig. 3c). In addition to N74, other amino acids such as T59, P89, R104, and K131 also showed significant differences in RMSF values between the apo and bound states (Fig. 3a). The large differences in RMSF values between the apo and ligand-bound states led us to hypothesize that these sites (T59, N74, P89, R104 and K131) may influence PD-1/PD-L1 complex formation. To test our hypothesis, we additionally performed five in silico mutagenesis experiments at these sites (mutants T59A, N74A, P89A, R104A and K131A, respectively) and observed that the mutations at N74 and K131 impaired the hPD-1/PD-L1 interaction, whereas T59A, P89A and R104A had hardly any influence on the interaction (Additional file 1: Figure S3), which was partially supported by the mouse mutant K98Am (equivalent to K131Ah) [13].
The atomic fluctuation of the human PD-1 molecule. a Comparison of the root mean square fluctuation (RMSF) of each residue between the apo and bound states. The RMSF value of N74 was significantly influenced by the state (apo vs. bound). The green rectangles indicate the regions/residues with a distance of less than 4.5 Å to hPD-L1 in the MD simulation model. b N74 was slightly constrained by Q75 and a list of water molecules in the apo state. c N74 was strongly constrained on one side by S71, S73 and Q75 together with D26hPD-L1 and R125hPD-L1. Red dot: water molecule. The contact energies (kcal/mol) are shown by orange dashed lines
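The per-residue fluctuation analysis behind Fig. 3a reduces to the standard RMSF formula; a minimal numpy version, assuming frames already superposed on a reference (e.g. with the Kabsch routine sketched earlier), is:

```python
import numpy as np

def rmsf_per_atom(coords):
    """RMSF of each atom; coords has shape (n_frames, n_atoms, 3), in angstrom."""
    mean_pos = coords.mean(axis=0)                     # time-averaged positions
    return np.sqrt(((coords - mean_pos) ** 2).sum(axis=-1).mean(axis=0))
```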
The dynamical properties of the PD-L1 binding area
The biological function of PD-1 is to promote immune resistance via the interaction with PD-L1. Therefore, information about the ligand binding area, its volume, hot spot residues, and even the residue types should be well understood prior to rational drug discovery targeting the PD-1/PD-L1 axis. In this study, we monitored the changes of the solvent accessible surface area (SASA) of the PD-1/PD-L1 complexes during the MD simulations (Fig. 4a). The results showed that the SASA values tended to decrease in both the human and mouse systems (Fig. 4a). In the human complex, the SASA value decreased by 300 Å2 (Fig. 4a), and in the mouse complex, it decreased by 400 Å2 (Fig. 4a). A decrease in the total SASA value corresponds to an increase in the binding interface; therefore, the binding interface became larger in both the human and mouse systems. Based on Formula 2, the binding interface of hPD-1 increased from 220 Å2 to 440 Å2 during the MD simulation (Fig. 4b), which introduced extra contact residues (with a distance of less than 4.5 Å to the hPD-L1 molecule). For instance, the contact residues were Q75, T76, K78, D85, K131, A132 and E136 in the crystal structure (hPD-1/PD-L1, 4ZQK); after the MD simulation, however, N66, Y68 and K135 were recruited to the binding interface and became involved in the interaction with hPD-L1. To study the correlation between the SASA changes and the binding energy during the MD simulations, we evenly extracted 100 samples (500 ps per sample) from the MD trajectories to calculate the binding energies (Additional file 1: Figure S4). The results showed that the binding energies did not improve during the MD simulations in either hPD-1/PD-L1 or mPD-1/PD-L1, and the binding energies did not correlate with the SASA (Additional file 1: Figure S4 B/C), which indicates that not all contacts favored the binding energy and that the contact area of PD-1/PD-L1 alone should not serve as an indicator of the binding energy.
The changes of the solvent accessible surface (SAS) of the PD-1/PD-L1 complexes during the MD simulations. a The decrease in the solvent accessible surface area (SASA) of the complex indicated an increase in the binding interface of PD-1 during the simulation. The increasing trend of the binding interface was larger for mouse PD-1 than for human PD-1, as indicated by the SAS values. b The area of the binding interface of human PD-1 was 220 Å2 in the crystal structure (4ZQK) and increased to 440 Å2 after the MD simulation
The MD simulation showed that not all residues in the binding interface served constantly as contact residues over the entire trajectory, which indicates that some residues identified as contact residues in the crystal structure may not really contribute to ligand binding. From another point of view, residues identified as having no contribution to ligand binding may have the potential to gain ligand-binding function when a proper mutation occurs at these sites. Therefore, we proposed E61, M70, E84, S87, R112, G119, Y121, A129, and K135 (residues with distances between 4.5 Å and 6 Å to the hPD-L1 molecule) as candidate sites for mutagenesis, and in silico mutagenesis experiments together with binding energy calculations were performed at these sites.
Binding energy calculation and residual distributions
Binding energy, equivalent to the experimental Kd value, is of crucial importance for studying protein-protein interactions (PPIs) and biological processes. We investigated the binding free energy of PD-1 with PD-L1 in order to quantify the strength of the PD-1/PD-L1 complex. In this study, the binding energies between the PD-1 and PD-L1 molecules were estimated using the MM-PBSA module, which calculates four energy terms: van der Waals energy, electrostatics, polar solvation, and SASA energy. The results showed that the hPD-1/PD-L1 complex had a stronger energy than the mouse complex in every energy term (Fig. 5). The binding energy of hPD-1 and hPD-L1 was − 910.34 kJ/mol, whereas the binding energy of mPD-1/PD-L1 was relatively weak (− 593.29 kJ/mol), consistent with the experimental data (Kd values of 8.4 μM and 29.8 μM for human and mouse PD-1/PD-L1, respectively) [15]. We also found that electrostatics and polar solvation dominated the binding energy compared to the other energy terms (Fig. 5). To investigate the binding mechanism, a quantitative assessment of the binding energy at the individual-residue level was performed as well (Fig. 5). The results showed that the contributions of individual residues to the binding energy were uneven. In the hPD-1 protein, the positively charged residues K131, K135 and R104 were the key contributors to the binding energy, and the non-charged polar residues N33, Q75 and T76 contributed moderately to ligand binding, whereas the negatively charged residues E61 and D85 were adverse to the binding energy. K135 formed an ionic bond with D61 (hPD-L1), with a binding energy of − 12.2 kcal/mol (Fig. 6a). Q75 and T76 formed hydrogen bonds with Y123 and R125 in hPD-L1 (Fig. 6b). N33 did not interact directly with hPD-L1, but its side chain formed hydrogen bonds with S57 and N58. K131 and R104 provided relatively strong long-range electrostatic potentials and solvation energy to hold hPD-1 and hPD-L1 together. Similarly, in the mPD-1 protein, positively charged residues such as K131m, K78m, and R104m were the key contributors to ligand binding (Fig. 5). These individual contributors had on average three-fold higher binding energy than their counterparts in hPD-1. At the same time, however, more residues, especially negatively charged ones such as E135m, E138m, D105m, and D62m, were adverse to the ligand interactions in mPD-1, which overall made the binding energy of mPD-1 weaker than that of hPD-1 (Fig. 5). K131m interacted directly with mPD-L1 by forming an ionic bond with D73mPD-L1 and two hydrogen bonds with Q63mPD-L1 and Q66mPD-L1, respectively (Fig. 6c). K78m formed a firm ionic bond with F19mPD-L1 (Fig. 6d). To further study the importance of these residues for the protein-protein interaction (PPI), we also measured the distance variations of the residues involved in the interactions during the MD simulations (Fig. 7). The distance changes proved that some interactions contributed firmly to ligand binding, such as Y68-D122hPD-L1, Q75-R125hPD-L1, K78-F19hPD-L1, E136-R113hPD-L1, and E136-Y123hPD-L1. Interestingly, K135-D61hPD-L1 had the potential to become a main contributor to ligand binding, since the distance gradually decreased during the simulation (Fig. 7h).
Binding energy calculations for the human and mouse PD-1/PD-L1 complexes. a The total binding energy and the energy components were calculated by the MM-PBSA module. Human PD-1/PD-L1 had a stronger binding energy than the mouse model. Eele: electrostatic energy; Evdw: energy from van der Waals interactions; EPB: energy from the polar solvent effect; ESA: energy from the non-polar solvent effect; ΔGbind: the binding energy between PD-1 and PD-L1 in the complexes. b Decomposition of the binding energy into individual residues (human) and c decomposition of the binding energy into individual residues (mouse). The individual residues in the mouse model contributed on average 3-fold more to the binding energy than those in the human PD-1 model
Interactions between PD-1 (green) and PD-L1 (blue). The interactions are indicated by orange dashed lines and the interaction energies are shown in orange (kcal/mol). An interaction energy below − 5 kcal/mol was defined as a strong interaction. The interactions in the hPD-1/PD-L1 complex are shown in (a/b), and those in the mPD-1/PD-L1 complex in (c/d). a K135 formed a strong ionic bond with D61hPD-L1. E136 formed a weak interaction with R113hPD-L1. b Q75, T76 and E136 formed hydrogen bonds with Y123hPD-L1 and R125hPD-L1. c K131m formed a strong ionic bond with D73mPD-L1, and an interaction between Q66mPD-L1 and A132m was observed. d K78m formed a strong hydrogen bond with the carboxylic group of F19mPD-L1, and E77m interacted with K124mPD-L1
Distances of residues to their interaction partners in the hPD-1/PD-L1 complex during the MD simulation (a-i). These residues were the main contributors to the binding energy. An increasing distance during the MD simulation indicates that the interaction of the pair was unstable and weak, and vice versa. The interaction K135-D61 became stronger, as the distance between the pair decreased during the simulation
Hydrogen bonds (HBs) play a vital role in non-bonded interactions, and each HB contributes on average 5 kcal/mol to the binding energy. However, the contribution of hydrogen bonds is highly underestimated in the MM-PBSA module. To remedy this defect, we monitored the variation of the HB network at the binding interface during the simulation (Fig. 8). The initial structure of the hPD-1/PD-L1 complex under physiological conditions had 14 HBs with hPD-L1 and 18 HBs with the solvent. During the MD simulation, the number of HBs between hPD-1 and hPD-L1 remained relatively unchanged, but the number of HBs between the hPD-1 interface area and the solvent increased from 18 to 22. In the mouse complex, the total number of HBs was lower than in the human complex. The MD simulation of the mPD-1/PD-L1 complex increased the number of HBs between mPD-1 and mPD-L1 from 8 to 10, with the consequence that the HBs between mPD-1 and the solvent decreased from 21 to 17. The results showed that hPD-1 had more hydrogen bonds in the equilibrated state than its mouse equivalent (Fig. 8), which indicates that hydrogen bonds may dominate hPD-1/PD-L1 complex formation.
The variation of hydrogen bonds (HBs) during the MD simulation. The number of hydrogen bonds between the residues at the PD-1 interface and atoms from PD-L1 (a) or the solvent (b). The number of HBs formed with hPD-L1 remained stable (a, black line), while the number formed with the solvent in the hPD-1/PD-L1 system increased during the MD simulation (b, black). The number of HBs formed with mPD-L1 increased (a, blue), while the number formed with the solvent in the mPD-1/PD-L1 system decreased during the MD simulation (b, blue)
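Hydrogen-bond counts like those in Fig. 8 are typically produced by a geometric criterion. The sketch below uses the common defaults of a donor-acceptor distance below 3.5 Å and an H-donor-acceptor angle below 30°; these are conventional values assumed for illustration, not settings stated in the paper.

```python
import numpy as np

def count_hbonds(donors, hydrogens, acceptors, d_cut=3.5, ang_cut=30.0):
    """Count D-H...A contacts by a geometric criterion.

    donors/hydrogens are matched (N, 3) arrays of heavy-atom and hydrogen
    positions; acceptors is an (M, 3) array. Distances are in angstrom.
    """
    n = 0
    for d, h in zip(donors, hydrogens):
        vec_da = acceptors - d                          # donor -> acceptor
        dist = np.linalg.norm(vec_da, axis=1)
        vec_dh = h - d                                  # donor -> hydrogen
        cosang = (vec_da @ vec_dh) / (dist * np.linalg.norm(vec_dh) + 1e-12)
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        n += int(np.sum((dist < d_cut) & (ang < ang_cut)))
    return n
```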
Mutagenesis and design of engineered proteins
The averaged structure of group bG7 of the hPD-1/PD-L1 complex was the energetically favored conformation, and it was further used to discover high-affinity PD-1 mutants by a series of in silico approaches, including residue scanning, binding affinity estimation, and low-mode molecular dynamics simulations. Before performing the in silico mutagenesis, we verified the quality of the approach on several PD-1 mutants whose relative binding abilities had been experimentally measured by Zhang and coworkers; the data are shown in Additional file 1: Table S2 [13]. We calculated the binding energies of the PD-1 mutants to the ligand PD-L1 with the MM/GBVI scoring function, which was designed for protein-protein interaction calculations in the MOE package. The correlation between the predicted binding energy and the experimental relative binding value of each mutant was analyzed (Fig. 9a). The correlation coefficient was R2 = 0.83, which confirmed the quality of the approach (Fig. 9a). We then performed in silico mutagenesis at the sites that were either at a minimum distance of 4.5–6 Å to PD-L1 or identified as hot spot residues in the MD simulations. The 20 amino acids were modeled at each site one at a time, and the mutated hPD-1 molecules were then submitted for binding energy calculation with hPD-L1. Several mutants, such as E61V, M70I, E84F, S87W and K135M (Fig. 9b), with computationally improved binding affinity (Additional file 1: Figure S5) were identified.
In silico mutagenesis experiments were performed using the MM/GBVI scoring function based on the MD simulation model of hPD-1/PD-L1, as described in Materials and Methods. a Correlation between the experimental binding affinity and the calculated binding energy, with a correlation coefficient (R2) of 0.83. The x-axis indicates the relative binding ability of a mutant and the y-axis the calculated binding energy between the hPD-1 mutant and hPD-L1. The 15 datasets of relative binding ability were taken from the literature (ref. 13). b Mutants with computationally improved binding affinity and better stability than wild-type hPD-1. The minimum distances of the mutated sites to hPD-L1 were measured in the crystal structure (4ZQK) and the MD simulation model, respectively
Binding of PD-1 mutants to PD-L1 measured by FACS
Based on our predictions from the MD simulations and the in silico mutagenesis approach (Fig. 9a), we proposed a list of mutants (Fig. 9b) that might improve the binding affinity to the ligand hPD-L1. The mutants can be divided into three categories based on their distances to hPD-L1 in the crystal structure (4ZQK) (Fig. 9b). The mutated sites in Q75F, K78L, K78W and A132L had distances of less than 4.5 Å to hPD-L1, whereas the mutated sites in K135M, M70I, A129H, S87W and E84F had distances between 4.5 Å and 6 Å to hPD-L1 (Fig. 9b). The mutated residue in E61V was not able to interact with hPD-L1 because it was 10 Å away from hPD-L1. To investigate the ligand binding ability, the predicted mutants were expressed in HEK-293T cells and their hPD-L1 binding levels were measured (Fig. 10). We determined the hPD-L1 binding abilities of the hPD-1 mutants as described for the PD-1/PD-L1 binding experiment [29]. The binding abilities of each mutant and WT hPD-1 are indicated by the MFI values at different hPD-L1 concentrations, as shown in Fig. 10a and c. The experiments were performed four times to avoid random bias (Fig. 10d and e). The MFI value of each mutant in binding hPD-L1 was standardized to that of WT hPD-1, and the standardized MFI values are reported as the relative hPD-L1 binding potency (RP), the ratio of the averaged MFI value of an hPD-1 mutant to that of WT hPD-1 at 100 μM, where the averaged MFI value was calculated from four independent measurements (Fig. 10e). As shown in Fig. 10e, A132L and S87W had at least two-fold higher PD-L1 binding affinity than WT PD-1, with RP values of 2.9 and 2.0, respectively. The mutants K135M, A129H and M70I also improved hPD-L1 binding with p-values < 0.05 (Fig. 10e1), with RPs of 1.44, 1.23 and 1.19, respectively. However, five other mutants (E61V, Q75F, K78L, K78W, E84F) decreased the binding of the PD-1 variants to hPD-L1. Among them, the mutations at K78, located in the ligand binding interface, decreased hPD-L1 binding significantly at the p < 0.01 level. The RP differences between these mutants and WT PD-1 were statistically significant, which indicates that the predicted sites were important for the ligand binding of PD-1, even though one site (E61) was remote from PD-L1 in the crystal structure (Fig. 9b).
The hPD-L1 binding ability of the hPD-1 mutants. The binding of hPD-1 mutants to hPD-L1-Fc was measured by FACS. a, c Representative flow cytometry analyses of hPD-L1 binding to HEK-293T cells expressing WT hPD-1 or the mutants. b, d The binding affinity between hPD-1 mutants and hPD-L1 at different protein concentrations. Each point represents the mean ± S.E. of four independent measurements. e1, e2 Relative PD-L1 binding potency (RP) values of the hPD-1 mutants (mean ± S.E., n = 4). *, p < 0.05; **, p < 0.01 versus PD-1 (dashed line). RP is the ratio of the averaged MFI value of an hPD-1 mutant to that of WT hPD-1 at 100 μM. The averaged MFI value was calculated from four independent measurements
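The relative potency defined in the caption is a simple ratio of averaged MFIs; a minimal sketch with hypothetical readouts:

```python
import statistics

def relative_potency(mutant_mfi, wt_mfi):
    """RP at 100 uM: mean mutant MFI over mean WT MFI (four repeats each)."""
    return statistics.mean(mutant_mfi) / statistics.mean(wt_mfi)

# Hypothetical MFI values for one mutant vs. wild type (four repeats each):
print(round(relative_potency([290, 310, 305, 295], [100, 105, 98, 102]), 2))
```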
PD-1 has recently become one of the most successful clinical targets in immunotherapy [2], since modulation of the PD-1/PD-L1 pathway can significantly promote tumor clearance by the immune system across a broad range of cancer types. To date, five antibody drugs targeting the PD-1/PD-L1 axis have been approved by the FDA. Many peptide and even small-molecule modulators of the target are under development [30, 31]. Although PD-1/PD-L1-related drugs have been successfully applied in the clinic and several modulators have shown bioactivity, the structural properties of hPD-1/PD-L1 and its binding mechanism at the molecular level still need to be studied. For example, does the PD-1 molecule go through a conformational change from its apo state to the ligand-bound state? Which residues are responsible for the protein-protein interactions, or have the potential to be mutated for binding affinity enhancement? To elucidate these questions, we performed conventional molecular dynamics simulations of four different systems in the present study: hPD-1, mPD-1, the hPD-1/PD-L1 complex, and the mPD-1/PD-L1 complex.
Interactions to stabilize the integrity of the structures
The MD trajectories demonstrated that the overall conformation of hPD-1 was more flexible than that of mPD-1 in both the apo and ligand-bound states. This can be attributed to the number of intramolecular interactions in the PD-1 structures. In the hPD-1 molecule, only 3 pairs of interactions (E46-R115; R94-D117; D85-K78) had contact energies stronger than − 10 kcal/mol, whereas in the mPD-1 molecule there were 6 pairs of interactions (R94-D117m; R115-E146m; E46-R147m; R33-E135m; E46-R115m; E61-R103m) maintaining the stability of the structure. To observe the influence of these interactions on structural stability, several sites (E46Am, R94Am, R115Am, E135Am in mPD-1, and E46A, R94A in hPD-1) were mutated in silico, which did not alter the total net charge of the PD-1 molecules but broke the relevant interactions. The results showed that the structures of the mutants (E46A/R94A/R115A/E135Am and E46A/R94A) were unstable compared to the wild-type PD-1s (Additional file 1: Figure S6). The mutagenesis results confirmed that some charged intramolecular interactions contribute to structural stability. Therefore, considering the importance of these charged residues to structural integrity, mutagenesis at such sites should be avoided.
Residues for PD-L1 binding
The binding interface of the PD-1/PD-L1 complex has been well studied, since numerous crystal structures of the complex have been solved (Table 1), which makes it possible to map the binding interface. However, the binding interface, as part of a dynamic protein, keeps changing in size, shape and volume, especially when the protein is interacting with its ligands (Fig. 4). Therefore, some residues that were adjacent to PD-L1 in the crystal structures may drift away from PD-L1 during MD relaxation. Such residues may serve as potential candidates for mutagenesis in the design of gain-of-function mutants. To test this hypothesis, we computationally predicted a list of hPD-1 mutants at these sites (Fig. 9b). The predicted mutants were expressed in HEK-293T cells, and their binding affinities to hPD-L1 were measured by FACS in four repeats to avoid random bias (Fig. 10). All the mutations affected ligand binding (Fig. 10e), either enhancing or impairing the hPD-1/PD-L1 interaction. The mutated sites M70, E84, S87, A129 and K135 had distances of 4.5–6 Å to hPD-L1 in the complex, and therefore did not directly form intermolecular interactions (Additional file 1: Figure S5). The mutants at these sites enhanced PD-L1 binding affinity, except E84F (Fig. 10e), possibly by decreasing the distance of the mutated sites to hPD-L1. However, mutations at sites with distances of less than 4.5 Å to hPD-L1 mostly impaired ligand binding, as seen for Q75F, K78L and K78W. E61 was the only predicted site with a distance of more than 6 Å to hPD-L1, and the mutation at this solvent-exposed site (E61V) slightly impaired the binding affinity to hPD-L1 (Fig. 10). In the wild-type hPD-1 molecule, M70 interacted with both E136 and R139. The mutant M70I broke the interaction between these sites and gave E136 a chance to contact R113hPD-L1. Interactions between E84-S87 and Q133-K135 were observed in the wild type; however, the mutants S87W and K135M abolished these interactions, leaving E84 and Q133 free to contact hPD-L1. Mutant E84F also abolished the E84-S87 interaction, but this mutant moderately impaired hPD-L1 binding (Fig. 10). The mutations at Q75 and K78, located in the ligand binding interface, impaired the hPD-1/PD-L1 interaction, in agreement with our hypothesis that mutations at the binding interface have little chance of improving the ligand binding ability.
The experimental data (Fig. 10) indicated that in silico predictions combined with MD simulation are a powerful tool for identifying sites important for ligand binding. The method was also efficient in predicting gain-of-function mutations for sites 4.5–6 Å from hPD-L1. However, the method seemed unsuitable for predicting gain-of-function mutations for sites in the binding interface (residues with a distance of less than 4.5 Å to hPD-L1).
Multi-site mutagenesis
It is not rare that mutations at multiple sites improve the ligand binding ability, and multi-site mutations can in principle be designed in silico. However, several concerns prevented us from applying this approach. First, computational approaches need to substitute all 20 residue types at each site, and all rotamers of each mutation state should be evaluated by an energy minimization process to find the global-minimum-energy structure for a single mutation; the mutational space therefore grows too large to be handled at current computational cost [32]. Second, multi-site mutagenesis is in practice treated as a sum of single mutations. This process introduces numerous uncertainties and assumptions, which do not guarantee the accuracy of the binding affinity prediction.
To overcome these challenges, we propose a strategy for multi-site mutagenesis. First, it is suggested to identify candidate sites for mutation rather than scanning all sites. Several factors may help to identify the candidate sites: first, the most flexible and most rigid sites in the RMSF analysis, such as T59, N74, P89, and R104 in the hPD-1 molecule; second, the residues that are key contributors to the binding energy, such as N33, Q75, T76, R104, K131 and K135; third, it is better to avoid residues involved in intramolecular interactions or residues at the binding interface. In addition, it is recommended to combine the in silico approach with in vitro binding experiments such as surface plasmon resonance (SPR). For instance, a proper in silico approach serves to predict a list of single-site mutants, and the predicted mutants are then subjected to SPR measurement of the PD-1/PD-L1 binding affinity. The high-affinity mutants serve as starting points and are further submitted to in silico mutagenesis until the desired multi-site mutants are identified; a control-flow sketch of this loop is given below.
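The sketch below illustrates this iterate-and-verify strategy only. Both helper functions are hypothetical placeholders: one stands in for the in silico single-site scan and the other for the SPR affinity readout (returning a toy Kd), so the code shows the greedy accumulation loop, not a working designer.

```python
def predict_single_mutants(variant, sites):
    """Hypothetical in silico scan: extend the variant by one new site."""
    return [variant + (s,) for s in sites if s not in variant]

def measure_affinity(variant):
    """Hypothetical SPR readout: toy Kd (uM) that improves with more sites."""
    return 8.4 / (1 + len(variant))

def design_multi_site(sites, max_rounds=3):
    best, best_kd = tuple(), measure_affinity(tuple())
    for _ in range(max_rounds):
        candidates = predict_single_mutants(best, sites)
        if not candidates:
            break
        variant = min(candidates, key=measure_affinity)
        kd = measure_affinity(variant)
        if kd >= best_kd:
            break                    # stop when no mutation improves the Kd
        best, best_kd = variant, kd
    return best, best_kd

print(design_multi_site(["M70I", "S87W", "K135M"]))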
Binding energy between PD-1/PD-L1
The binding energy of a reaction is the single most important thermodynamic property, correlating the structure and function of a complex formation [33]. A wide range of methods are applied for binding energy calculation, such as free energy perturbation (FEP), umbrella sampling, thermodynamic integration (TI), Monte Carlo simulation, the Poisson-Boltzmann equation, and the microscopic all-atom linear response approximation (LRA) [34]. Among these approaches, FEP and TI require a molecular dynamics trajectory of a molecule from the initial state to the ligand-bound state, so calculations with such methods are computationally expensive. MM-PBSA has a lower computational cost than FEP and TI but can yield a more reliable free energy output than other scoring functions such as GBSA [35]. Therefore, MM-PBSA was chosen for the binding energy calculations in this study. Based on the concepts of molecular mechanics calculations and continuum solvation models [28], the MM-PBSA module performed well for the binding energy calculation in the PD-1/PD-L1 systems, and the calculated binding energies correlated with the experimental data. Although the results generated by the module were acceptable, it should be mentioned that the entropy was not calculated, since the PD-1/PD-L1 system was too big to estimate the entropy contribution. For the binding energy estimation, only every eighth snapshot of the MD trajectory was submitted to the module, rather than every snapshot, which may improve the accuracy of the binding energy estimation. It should be noted that the dielectric constant (DC) value influences the output of the binding energy calculation; in this study we empirically set the value to 4 for all proteins in the system, which generated reliable data. However, we suggest that a list of DC values such as 1, 2, 4, or 8 be carefully tested before a production MD simulation and MM-PBSA calculation are performed.
Hotspots detection
Hotspot residues have many definitions: residues that are highly conserved in sequence alignments or topologically similar in homologues, residues that contribute the most to the binding energy, or residues within an acceptable distance of the ligand have all been defined as hotspots [36,37,38]. Various algorithms such as Shannon entropy, Henikoff-Henikoff sequence weights and Bayesian networks have been developed to detect hotspots. Madej and his team analyzed 600 non-redundant crystal complexes and observed that small-molecule or peptide binding sites largely overlapped with hotspot residues [36]. Therefore, detecting the hotspot residues of the PD-1 molecule may be meaningful for drug development in cancer immunotherapy through modulation of the PD-1/PD-L1 pathway. The ligand binding area of PD-1 was deciphered by crystallography [16], but knowledge about its hotspots is still scarce. In this study, we proposed a list of residues as hotspots that either were key contributors to the binding affinity (R104, K131, K135), formed direct interactions with hPD-L1 (Q75, T76, K78, D85, E136), or were the most rigid residues (N74). The hotspot residues were important for hPD-L1 binding, and alterations at these sites may impair hPD-1/PD-L1 interactions, as partially proved by our experimental results for mutants such as Q75F, K78L and K78W (Fig. 10).
Programmed cell death protein 1 (PD-1) is an immune checkpoint expressed on a variety of immune cells, such as activated T cells, tumor-associated macrophages, dendritic cells and B cells. PD-1 serves as a negative regulator inducing immune tolerance by forming a complex with its ligand PD-L1. Characterizing the binding mechanism of PD-1/PD-L1, especially dynamically rather than as a snapshot, can help to elucidate protein function and provide knowledge for developing therapeutic modulators. In this study, we applied conventional molecular dynamics simulations to observe the structural properties of the PD-1 molecules. The 3D conformations of PD-1 in the ligand-bound and ligand-free (apo) states were different, which indicates that PD-1 changes its conformation during complex formation. For this reason, the apo structure of hPD-1, prior to hPD-1/PD-L1 complex formation, is recommended as the target for drug discovery. A comparison of the atomic fluctuations in the apo and bound states showed that N74, P89, R104, and K131 differed significantly between the states, and we studied the local interaction environments around these residues, which may influence the ligand binding ability of hPD-1 and may serve as candidates for drug discovery. To better understand the ligand binding mechanism, the binding energies were calculated by the MM-PBSA module, and the calculated data correlated with the experimental data. The total binding energy was further decomposed into individual residues, and several key residues (R104, K131, K135) in hPD-1 were identified. Based on the MD simulations and in silico mutagenesis, we expressed a list of hPD-1 mutants in HEK293T cells and measured their binding affinities to hPD-L1, which proved the feasibility of using in silico approaches to design engineered proteins. Moreover, the mutants M70I, S87W, A132L and K135M improved hPD-L1 binding compared with WT hPD-1, and these mutants show potential to modulate the interaction of hPD-1 and hPD-L1.
HB: hydrogen bond
hPD-1: human PD-1
hPD-L1: human PD-L1
K78m: K78 in mouse PD-1
MD: molecular dynamics simulation
MM-PBSA: molecular mechanics/Poisson-Boltzmann surface area
mPD-1: mouse PD-1
mPD-L1: mouse PD-L1
PD-1: programmed cell death protein 1
PD-L1: programmed cell death protein ligand 1
Q63mPD-L1: Q63 in mouse PD-L1
R113hPD-L1: R113 in human PD-L1
Smith-Garvin JE, Koretzky GA, Jordan MS. T cell activation. Annu Rev Immunol. 2009;27:591–619.
Sharma P, Allison JP. The future of immune checkpoint therapy. Science. 2015;348(6230):56–61.
Shinohara T, et al. Structure and chromosomal localization of the human PD-1 gene (PDCD1). Genomics. 1994;23(3):704–6.
Karwacz K, et al. PD-L1 co-stimulation contributes to ligand-induced T cell receptor down-modulation on CD8+ T cells. EMBO Mol Med. 2011;3(10):581–92.
Shi L, et al. The role of PD-1 and PD-L1 in T-cell immune suppression in patients with hematological malignancies. J Hematol Oncol. 2013;6(1):74.
Gianchecchi E, Delfino DV, Fierabracci A. Recent insights into the role of the PD-1/PD-L1 pathway in immunological tolerance and autoimmunity. Autoimmun Rev. 2013;12(11):1091–100.
Fife BT, Pauken KE. The role of the PD-1 pathway in autoimmunity and peripheral tolerance. Ann N Y Acad Sci. 2011;1217:45–59.
Francisco LM, Sage PT, Sharpe AH. The PD-1 pathway in tolerance and autoimmunity. Immunol Rev. 2010;236:219–42.
Sharpe AH, et al. The function of programmed cell death 1 and its ligands in regulating autoimmunity and infection. Nat Immunol. 2007;8(3):239–45.
Wang J, et al. Establishment of NOD-Pdcd1−/− mice as an efficient animal model of type I diabetes. Proc Natl Acad Sci U S A. 2005;102(33):11823–8.
Chen MH, et al. Inverse correlation of programmed death 1 (PD-1) expression in T cells to the spinal radiologic changes in Taiwanese patients with ankylosing spondylitis. Clin Rheumatol. 2011;30(9):1181–7.
Dong H, et al. Tumor-associated B7-H1 promotes T-cell apoptosis: a potential mechanism of immune evasion. Nat Med. 2002;8(8):793–800.
Zhang X, et al. Structural and functional analysis of the costimulatory receptor programmed death-1. Immunity. 2004;20(3):337–47.
Lin DY, et al. The PD-1/PD-L1 complex resembles the antigen-binding Fv domains of antibodies and T cell receptors. Proc Natl Acad Sci U S A. 2008;105(8):3011–6.
Cheng X, et al. Structure and interactions of the human programmed cell death 1 receptor. J Biol Chem. 2013;288(17):11771–85.
Zak KM, et al. Structure of the complex of human programmed death 1, PD-1, and its ligand PD-L1. Structure. 2015;23(12):2341–8.
Horita S, et al. High-resolution crystal structure of the therapeutic antibody pembrolizumab bound to the human PD-1. Sci Rep. 2016;6:35297.
Lee JY, et al. Structural basis of checkpoint blockade by monoclonal antibodies in cancer immunotherapy. Nat Commun. 2016;7:13354.
Na Z, et al. Structural basis for blocking PD-1-mediated immune suppression by therapeutic antibody pembrolizumab. Cell Res. 2017;27(1):147–50.
Tan S, et al. An unexpected N-terminal loop in PD-1 dominates binding by nivolumab. Nat Commun. 2017;8:14369.
González MA. Force fields and molecular dynamics simulations. Collection SFN. 2011;12:169–200.
Hansson T, Oostenbrink C, van Gunsteren W. Molecular dynamics simulations. Curr Opin Struct Biol. 2002;12(2):190–6.
Karplus M, McCammon JA. Molecular dynamics simulations of biomolecules. Nat Struct Biol. 2002;9(9):646–52.
Karplus M, Kuriyan J. Molecular dynamics and protein function. Proc Natl Acad Sci U S A. 2005;102(19):6679–85.
Borhani DW, Shaw DE. The future of molecular dynamics simulations in drug discovery. J Comput Aided Mol Des. 2012;26(1):15–26.
Pronk S, et al. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit. Bioinformatics. 2013;29(7):845–54.
Felts AK, et al. Distinguishing native conformations of proteins from decoys with an effective free energy estimator based on the OPLS all-atom force field and the surface generalized born solvent model. Proteins. 2002;48(2):404–22.
Kumari R, et al. g_mmpbsa--a GROMACS tool for high-throughput MM-PBSA calculations. J Chem Inf Model. 2014;54(7):1951–62.
Chang HN, et al. Blocking of the PD-1/PD-L1 interaction by a D-peptide antagonist for Cancer immunotherapy. Angew Chem Int Ed Engl. 2015;54(40):11760–4.
Bourgeois DL, Kreeger PK. Partial least squares regression models for the analysis of kinase signaling. Methods Mol Biol. 2017;1636:523–33.
Guzik K, et al. Small-molecule inhibitors of the programmed cell Death-1/programmed death-ligand 1 (PD-1/PD-L1) interaction via transiently induced protein states and dimerization of PD-L1. J Med Chem. 2017;60(13):5857–67.
Sacan A, Ekins S, Kortagere S. Applications and limitations of in silico models in drug discovery. Methods Mol Biol. 2012;910:87–124.
Singh N, Warshel A. Absolute binding free energy calculations: on the accuracy of computational scoring of protein-ligand interactions. Proteins. 2010;78(7):1705–23.
Christ CD, van Gunsteren WF. Enveloping distribution sampling: a method to calculate free energy differences from a single simulation. J Chem Phys. 2007;126(18):184110.
Homeyer N, Gohlke H. Free energy calculations by the molecular mechanics Poisson-Boltzmann surface area method. Mol Inform. 2012;31(2):114–22.
Thangudu RR, et al. Modulating protein-protein interactions with small molecules: the importance of binding hotspots. J Mol Biol. 2012;415(2):443–53.
Keskin O, Ma B, Nussinov R. Hot regions in protein--protein interactions: the organization and contribution of structurally conserved hot spot residues. J Mol Biol. 2005;345(5):1281–94.
Moreira IS, Fernandes PA, Ramos MJ. Hot spots--a review of the protein-protein interface determinant amino-acid residues. Proteins. 2007;68(4):803–12.
We are grateful to Prof. Wei Liu for analyzing the MM-PBSA data, and we also thank Dr. YK Liu for providing the high-performance computing clusters for the MD simulation calculations.
This work was supported by National Natural Science Foundation of China (Project No. 31500620, U1604286, 31700677), and the grants from Sci-Tech Key Projects (1611003101000) and Outstanding Talent Projects (174200510022) of Henan Province.
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
School of Life Sciences, Zhengzhou University, 100 Kexue Avenue, Zhengzhou, 450001, China
Jiangfeng Du, Yaping Qin, Yahong Wu, Wenshan Zhao, Wenjie Zhai, Yuanming Qi, Chuchu Wang & Yanfeng Gao
JD and YG designed the experiments. JD, YQ, YaW, WZhao, WZhai, YuQ and CW performed the experiments and analyzed the data. JD was a major contributor in writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Yanfeng Gao.
Figure S1. Four simulation systems were constructed for conventional molecular dynamics simulations. Figure S2. Cluster analysis of 50 ns MD simulation trajectories for human PD-1 systems. Figure S3. In silico alanine scan at the sites T59, N74, P89, R104, K131. Figure S4. Binding energy changes during 50 ns MD simulations in human and mouse PD-1/PD-L1 complexes, respectively. Figure S5. The locations of the residues (E61, M70, E84, S87, K135) at the human PD-1 molecule. Figure S6. Residues (E46/R94, E46/R94/R115/E135) stabilized the integrity of the PD-1 structures. Table S1. Information of four MD simulation systems. Table S2. Summary of 15 mutants which were applied to study the correlation between experimental and prediction values. (DOCX 5284 kb)
Du, J., Qin, Y., Wu, Y. et al. The design of high affinity human PD-1 mutants by using molecular dynamics simulations (MD). Cell Commun Signal 16, 25 (2018) doi:10.1186/s12964-018-0239-9
Accepted: 28 May 2018
Molecular dynamics simulations
PD-1
Drug design
Convergence of Functions when viewed as Distributions and other Convergence Conditions
Let $\Omega$ be a nonempty open subset of $\mathbb{R}^k$, $\mathfrak{B}$ the collection of all borel sets of $\mathbb{R}^k$ contained in $\Omega$, and $\mu$ the $k$-dimensional Lebesgue measure. Consider a sequence $\{ f_n \}$ of Borel measurable complex functions on $\Omega$ such that $f_n \in L_{loc}^{1}(\Omega)$ for all $n$, which means that for every compact $K \subseteq \Omega$ and every $n$, we have \begin{equation} \int_{K} \left| f_n \right| d \mu < \infty. \end{equation} Let $C_{c}(\Omega)$ be the set of all continuous functions $\phi:\Omega \rightarrow \mathbb{C}$ whose support is a compact subset of $\Omega$, and let $C_{c}^{\infty}(\Omega)$ be the set of all infinitely differentiable functions $\phi:\Omega \rightarrow \mathbb{C}$ whose support is a compact subset of $\Omega$. Let $\mathfrak{B}_{loc}$ be the set of all $E \in \mathfrak{B}$ such that the closure $\bar{E}$ of $E$ in $\mathbb{R}^k$ is a compact subset of $\Omega$. Finally, for every function $g:\Omega \rightarrow \mathbb{C}$, and every $T \subseteq \Omega$, we denote by $g_{| T}$ the restriction of $g$ to $T$.
Let $f$ be a Borel measurable complex function on $\Omega$ such that $f \in L_{loc}^{1}(\Omega)$, and consider the following four properties.
(A) For every compact subset $K$ of $\Omega$, we have \begin{equation} \lim_{n \rightarrow \infty} f_{n | K} = f_{| K} \end{equation} in $L^1(K)$.
\begin{equation} (B) \lim_{n \rightarrow \infty} \int_{E} f_n d\mu = \int_{E} f d\mu \qquad \forall E \in \mathfrak{B}_{loc} \end{equation}
\begin{equation} (C) \lim_{n \rightarrow \infty} \int_{\Omega} f_n \phi d\mu = \int_{\Omega} f \phi d\mu \qquad \forall \phi \in C_{c}(\Omega) \end{equation}
\begin{equation} (D) \lim_{n \rightarrow \infty} \int_{\Omega} f_n \phi d\mu = \int_{\Omega} f \phi d\mu \qquad \forall \phi \in C_{c}^{\infty}(\Omega) \end{equation}
It is immediate to prove that (A) implies (B), (C) and (D). Trivially, (C) implies (D). My question is: are there other implications among these three properties? Any proof or counter-example is welcome.
PS Clearly, by considering $g_n=f_n - f$, we can assume without loss of generality that $f=0$.
PSS For those who know distribution theory, note that, given a Borel measurable complex function $g$ on $\Omega$ such that $g \in L_{loc}^{1}(\Omega)$, if we set for every $\phi \in C_{c}^{\infty}(\Omega)=\mathcal{D}(\Omega)$ \begin{equation} T_{g}(\phi) = \int_{\Omega} g \phi d \mu, \end{equation} then (D) says that $T_{f_n} \rightarrow T_{f}$ in the weak*-topology of $\mathcal{D}^{'}(\Omega)$.
real-analysis functional-analysis distribution-theory
Maurizio Barbato
I have finally settled the problem. Since the solution is quite long, I will split it into four parts.
Let us start by proving that, surprisingly enough for me, (B) implies (C).
Let $K$ be a compact subset of $\Omega$. I will denote by $\mathfrak{B}_{K}$ the collection of all Borel sets of $\mathbb{R}^k$ contained in $K$. I will assume, without loss of generality, that $f=0$. Moreover, I will consider all the functions as restricted to $K$: in particular, with abuse of notation, I will denote by $f_n$ the restriction of the original $f_n:\Omega \rightarrow \mathbb{C}$ to $K$.
First of all, some terminology. Given a sequence of complex measures $\{ \lambda_n \}$ on $\mathfrak{B}_{K}$, we shall say that $\{ \lambda_n \}$ is uniformly absolutely continuous if for every $\epsilon > 0$, there exists $\delta > 0$, such that for every $E \in \mathfrak{B}_{K}$, with $\mu(E) < \delta$, we have $| \lambda_n(E) | < \epsilon$ for all $n$. We shall denote with $|\lambda_n|$ the total variation of $\lambda_n$. We need the following preliminary result.
A sequence of complex measures $\{ \lambda_n \}$ on $\mathfrak{B}_{K}$ is uniformly absolutely continuous if and only if $\{ |\lambda_n| \}$ is uniformly absolutely continuous.
Proof. The "if" part is trivial. Let us prove the "only if" part. Assume by contradiction that for some $\epsilon > 0$ there exists a sequence $\{ E_m \}$ in $\mathfrak{B}_{K}$ such that $\mu(E_m) \rightarrow 0$ and for every $m$ there exists $n_m$ such that $|\lambda_{n_m}|(E_m) \geq \epsilon$. Fix $m$, and take a countable partition $\{ A_j \}$ of $E_m$, with $A_j \in \mathfrak{B}_{K}$ for every $j$, and such that $\sum_{j=1}^{\infty} |\lambda_{n_m} (A_j)| \geq \frac{\epsilon}{2}$. Choose $\bar{j}$ such that $\sum_{j=1}^{\bar{j}} |\lambda_{n_m} (A_j)| \geq \frac{\epsilon}{4}$. From Lemma 6.3 in [R], we deduce that there is a subset $S \subseteq \{1,\dots,\bar{j}\}$ with $\left| \sum_{j \in S} \lambda_{n_m} (A_j) \right| \geq \frac{\epsilon}{4 \pi}$. Then put \begin{equation} D_m = \bigcup_{j \in S} A_j. \end{equation} We get $\mu(D_m) \leq \mu(E_m)$, so that $\mu(D_m) \rightarrow 0$, and $\left| \lambda_{n_m}(D_m) \right| \geq \frac{\epsilon}{4 \pi}$ for all $m$, a contradiction.
Now, let us come back to our problem. Assume that (B) holds, and set for all $n$ \begin{equation} \lambda_n(E) = \int_{E} f_n d \mu \qquad (E \in \mathfrak{B}_K). \end{equation} From the converse of Vitali's Theorem (see [R], Exercise 6.10(g)), we deduce that $\{ \lambda_n \}$ is uniformly absolutely continuous, and by the previous lemma we deduce that $\{ \left| \lambda_n \right| \}$ is uniformly absolutely continuous. From [R], Theorem 6.13 we know that for any $n$: \begin{equation} \left| \lambda_n \right|(E) = \int_{E} \left| f_n \right| d \mu \qquad (E \in \mathfrak{B}_{K}). \end{equation} So, if $\eta > 0$, there exists $\delta > 0$ such that \begin{equation} \int_{E} \left| f_n \right| d \mu < \eta, \end{equation} for any $E \in \mathfrak{B}_{K}$ such that $\mu(E) < \delta$. Let $\{ E_{1},\dots, E_{m} \}$ be a finite subset of $\mathfrak{B}_{K}$, such that $\mu(E_j) < \delta$ for $j=1,\dots,m$, and \begin{equation} K = \bigcup_{j=1}^{m} E_j, \end{equation} To see that such a collection of sets exists, fix a positive integer $p$ such that $2^{kp} > \frac{1}{\delta}$, and consider the subdivision of $\mathbb{R}^{k}$ in dyadic $k$-cells \begin{equation} W = \left \{ (x_1,\dots,x_k) \in \mathbb{R}^k : \frac{j_i}{2^p} \leq x_i < \frac{j_i + 1}{2^p}, \quad i=1,\dots, k \right \}, \end{equation} where $( j_1, \dots, j_k )$ ranges in $\mathbb{Z}^{k}$, with $\mathbb{Z}$ denoting the set of all integer numbers. Take the intersections of these $k$-cells with $K$ to get the required collection.
Now, we have for any $n$ \begin{equation} \int_{K} \left| f_n \right| d \mu \leq \sum_{j=1}^{m} \int_{E_j} \left| f_n \right| d \mu \leq m \eta, \end{equation} so that $\{ f_n \}$ is bounded in $L^{1}(K)$ by $M= m \eta $.
Suppose that $\phi \in C_{c}(\Omega)$, that $\phi$ is real, with $\phi \geq 0$, and that the support of $\phi$ is contained in $K$. Let $\epsilon > 0$. Since $\phi$ is continuous, it is bounded, and from the construction in [R], Theorem 1.17, we deduce the existence of a simple function $s:K \rightarrow [0,\infty)$ such that $0 \leq \phi(x) - s(x) \leq \epsilon$ for all $x \in K$. From our hypothesis there exists $\nu > 0$ such that for $n > \nu$ we have \begin{equation} \left| \int_{K} f_n s d \mu \right| < \epsilon. \end{equation} We then have for any $n > \nu$ \begin{equation} \left| \int_{K} f_n \phi d \mu \right| \leq \left| \int_{K} f_n s d \mu \right| + \left| \int_{K} f_n (\phi - s) d \mu \right| < \epsilon + M \epsilon, \end{equation} and so \begin{equation} \lim_{n \rightarrow \infty} \int_{\Omega} f_n \phi d \mu = \lim_{n \rightarrow \infty} \int_{K} f_n \phi d \mu = 0. \end{equation} If now $\phi \in C_{c}(\Omega)$, $\phi$ is real, and the support of $\phi$ is contained in $K$, by considering the positive part $\phi^{+}$ and negative part $\phi^{-}$ of $\phi$ we get again \begin{equation} \lim_{n \rightarrow \infty} \int_{\Omega} f_n \phi d \mu = 0. \end{equation} Finally, if $\phi \in C_{c}(\Omega)$ and the support of $\phi$ is contained in $K$, by considering the real and imaginary part we get \begin{equation} \lim_{n \rightarrow \infty} \int_{\Omega} f_n \phi d \mu = 0. \end{equation}
Now, we shall show by using a counterexample that (C) does not imply (B). Take $k=1$, $\Omega = \mathbb{R}$, and define for any positive integer $n$: \begin{equation} f_n(x) = \begin{cases} 2n^2 + 4n^{5} \left(x - \frac{2j+1}{2n} \right) & \text{if } x \in \left[ \frac{2j+1}{2n} - \frac{1}{2n^3}, \frac{2j+1}{2n} \right] \quad (j=0,1,\dots,n-1), \\ 2n^2 - 4n^{5} \left(x - \frac{2j+1}{2n} \right) & \text{if } x \in \left[ \frac{2j+1}{2n}, \frac{2j+1}{2n} + \frac{1}{2n^3} \right] \quad (j=0,1,\dots,n-1), \\ 0 & \text{otherwise}. \end{cases} \end{equation} Then $f_n$ is a piecewise linear function, $f_n \geq 0$, $f_n(x)=0$ if $x \notin I = [0,1]$, and \begin{equation} \int_{I} f_n d \mu = 1. \end{equation} Moreover, if $S_n$ is the support of $f_n$, we have \begin{equation} \mu(S_n) = \frac{1}{n^2}, \end{equation} so that \begin{equation} \sum_{n=1}^{\infty} \mu(S_n) < \infty. \end{equation} For any positive integer $m$, let $n_m$ be an integer such that \begin{equation} \sum_{n \geq n_m} \mu(S_n) < \frac{1}{m}, \end{equation} and put $E_m = \bigcup_{n \geq n_m} S_n$. Then for any $A \in \mathfrak{B}$, with $A \subseteq \mathbb{R} \setminus E_m$, we have \begin{equation} \int_{A} f_n d \mu \rightarrow 0. \end{equation} Assume there exists a Borel measurable $f:\mathbb{R} \rightarrow \mathbb{C}$ such that $f \in L_{loc}^{1}(\mathbb{R})$ and such that for every $E \in \mathfrak{B}_{loc}$ we have \begin{equation} \int_{E} f_n d \mu \rightarrow \int_{E} f d \mu. \end{equation} Then, from what we have proved above and [R], Theorem 1.39 we get that for any positive integer $p$, we have $f=0$ a.e. $[ \mu ]$ on $[-p,p] \setminus E_{m}$, so that $f=0$ a.e. $[ \mu ]$ on $\mathbb{R} \setminus E_{m}$. We deduce that $f=0$ a.e. $[ \mu ]$ on $\bigcup_{m=1}^{\infty} \left( \mathbb{R} \setminus E_{m} \right)$. If we set $E = \bigcap_{m=1}^{\infty} E_m$, then we have $\mu(E)=0$, so we conclude that $f=0$ a.e. $[ \mu ]$ on $\mathbb{R}$. But we have \begin{equation} \lim_{n \rightarrow \infty} \int_{I} f_n d \mu = 1, \end{equation} so (B) cannot hold.
We shall now show that instead (C) holds, with $f$ defined by
\begin{equation} f(x) = \begin{cases} 1 & \text{if } x \in \left[0, 1 \right], \\ 0 & \text{otherwise}. \end{cases} \end{equation}
Let $\phi \in C_{c}(\mathbb{R})$. For any positive integer $n$ define the function \begin{equation} \phi_{n}(x) = \begin{cases} \phi \left( \frac{2j+1}{2n} \right) & \text{if } x \in \left[ \frac{j}{n}, \frac{j+1}{n} \right), \quad (j=0,1,\dots, n-1), \\ 0 & \text{otherwise}. \end{cases} \end{equation}
Choose $\epsilon > 0$, and let $\delta > 0$ be such that $| \phi(x) - \phi(y) | < \epsilon$ for every $x, y$ such that $|x - y | < \delta$. Let $\bar{n}$ be an integer such that $\bar{n} > \frac{2}{\delta}$. Then for every $n > \bar{n}$ we have \begin{equation} \left| \int_{\mathbb{R}} f_n \phi d \mu - \frac{1}{n} \sum_{j=0}^{n-1} \phi \left(\frac{2j+1}{2n} \right) \right| = \left| \int_{I} f_n \phi d \mu - \int_{I} f_n \phi_n d \mu \right| \leq \int_{I} f_n \left| \phi - \phi_n \right| d \mu \leq \epsilon \int_{I} f_n d \mu = \epsilon. \end{equation} From [Ru], Theorem 6.7 we have \begin{equation} \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{j=0}^{n-1} \phi \left(\frac{2j+1}{2n} \right) = \int_{0}^{1} \phi(x) dx, \end{equation} where the right-hand side integral is the Riemann integral of $\phi$ over $I$. From [Ru], Theorem 11.33 we also know that \begin{equation} \int_{0}^{1} \phi(x) dx = \int_{I} \phi d \mu. \end{equation} So we conclude that \begin{equation} \lim_{n \rightarrow \infty} \int_{\mathbb{R}} f_n \phi d \mu = \int_{I} \phi d \mu = \int_{\mathbb{R}} f \phi d \mu. \end{equation}
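To make the cancellation concrete, here is a small numerical sketch (purely illustrative and not part of the argument; the grid sizes and the test function $\cos$ are arbitrary choices). It samples each spike of $f_n$ on a fine grid and approximates the integrals by Riemann sums:

```python
import numpy as np

def f_n(x, n):
    # Triangular spikes of height 2*n**2 and half-width 1/(2*n**3),
    # centered at (2j+1)/(2n) for j = 0, ..., n-1; zero elsewhere.
    out = np.zeros_like(x)
    h = 1.0 / (2 * n**3)
    for j in range(n):
        c = (2 * j + 1) / (2 * n)
        m = np.abs(x - c) <= h
        out[m] = 2 * n**2 * (1 - np.abs(x[m] - c) / h)
    return out

def integral(n, phi, pts=4001):
    # Riemann sum of f_n * phi over [0, 1], spike by spike
    # (f_n vanishes off the spikes).
    total, h = 0.0, 1.0 / (2 * n**3)
    for j in range(n):
        c = (2 * j + 1) / (2 * n)
        x = np.linspace(c - h, c + h, pts)
        total += np.sum(f_n(x, n) * phi(x)) * (x[1] - x[0])
    return total

for n in (5, 20, 80):
    print(n, integral(n, np.ones_like), integral(n, np.cos))
# first column stays ~1, second approaches sin(1) ~ 0.8415,
# in agreement with the Riemann-sum argument above
```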
Finally, we show by means of a counterexample that (D) does not imply (C). Take $k=1$, $\Omega= \mathbb{R}$, and define for any positive integer $n$: \begin{equation} f_n(x) = \begin{cases} 2 n^{\frac{11}{4}} x & \text{if } x \in \left[ 0, \frac{1}{2n} \right], \\ n^{\frac{7}{4}}(2-2nx) & \text{if } x \in \left[ \frac{1}{2n}, \frac{3}{2n} \right], \\ n^{\frac{7}{4}}(-4+2nx) & \text{if } x \in \left[ \frac{3}{2n}, \frac{2}{n} \right], \\ 0 & \text{otherwise} \end{cases} \quad. \end{equation} If $\phi \in C_{c}^{\infty}(\mathbb{R})$, then for some $L >0$ we have $| \phi'(x) | \leq L$ for all $x \in \mathbb{R}$. So by using the Lagrange Mean Value Theorem, we get \begin{multline} \left| \int_{\mathbb{R}} f_n \phi d \mu \right| = \left| \int_{\left[ 0, \frac{1}{n} \right]} f_n \phi d \mu + \int_{\left[ \frac{1}{n}, \frac{2}{n} \right]} f_n \phi d \mu \right| = \left| \int_{0}^{ \frac{1}{n}} f_n(x) \phi(x) dx - \int_{0}^{ \frac{1}{n}} f_n(x) \phi \left(x+\frac{1}{n} \right) dx \right| \leq \\ \leq \int_{0}^{ \frac{1}{n}} f_n(x) \left| \phi(x) - \phi \left(x+\frac{1}{n} \right) \right| dx \leq \frac{L}{n} \int_{0}^{ \frac{1}{n}} f_n(x) dx = \frac{L}{n} \frac{n^{\frac{7}{4}}}{2n} = \frac{L}{2 n^{\frac{1}{4}}}, \end{multline} so that (D) holds with $f=0$. To see that (C) does not hold, define now \begin{equation} \phi(x) = \begin{cases} \sqrt{x} & \text{if } x \in \left[ 0, 1 \right], \\ 2 - x & \text{if } x \in \left[ 1, 2 \right], \\ 0 & \text{otherwise} \end{cases} \quad. \end{equation} An easy computation shows that for $n \geq 2$ we have \begin{equation} \int_{\mathbb{R}} f_n \phi d \mu = \frac{n^{\frac{1}{4}}}{15 \sqrt{2}} \left(-68 + 36 \sqrt{3} \right), \end{equation} so that \begin{equation} \lim_{n \rightarrow \infty} \int_{\mathbb{R}} f_n \phi d \mu = - \infty. \end{equation}
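Again only as an illustration (the exponents are the ones of the example; the sample sizes are arbitrary), one can check numerically that $\int_{\mathbb{R}} f_n \phi \, d\mu$ divided by $n^{1/4}$ stabilizes at the negative constant computed above:

```python
import numpy as np

def f_n(x, n):
    # The up/down double bump above: height n**(7/4), support [0, 2/n],
    # with f_n(x + 1/n) = -f_n(x) for x in [0, 1/n].
    y = np.zeros_like(x)
    m1 = (x >= 0) & (x <= 1 / (2 * n))
    y[m1] = 2 * n**2.75 * x[m1]
    m2 = (x > 1 / (2 * n)) & (x <= 3 / (2 * n))
    y[m2] = n**1.75 * (2 - 2 * n * x[m2])
    m3 = (x > 3 / (2 * n)) & (x <= 2 / n)
    y[m3] = n**1.75 * (-4 + 2 * n * x[m3])
    return y

def integral(n, phi, pts=200001):
    x = np.linspace(0.0, 2.0 / n, pts)   # f_n vanishes outside [0, 2/n]
    return np.sum(f_n(x, n) * phi(x)) * (x[1] - x[0])

for n in (16, 256, 4096):
    print(n, integral(n, np.sqrt) / n**0.25)
# the ratio settles near (-68 + 36*sqrt(3)) / (15*sqrt(2)) ~ -0.266,
# so the integrals against phi(x) = sqrt(x) diverge to -infinity
```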
[R] W. Rudin, Real and Complex Analysis, Third Edition
[Ru] W. Rudin, Principles of Mathematical Analysis, Third Edition
Now, I show by means of a counterexample that (B) does not imply (A). To see this, consider $k=1$, $\Omega = \mathbb{R}$, and define for every positive integer $n$: \begin{equation} f_n(x)= \begin{cases} 1 & \text{if } x \in \left[\frac{2j}{2^n}, \frac{2j+1}{2^n}\right), \quad (j=0,1,\dots, 2^{n-1}-1), \\ -1 & \text{if } x \in \left[\frac{2j+1}{2^n}, \frac{2j+2}{2^n}\right), \quad (j=0,1,\dots, 2^{n-1}-1), \\ 0 & \text{otherwise}. \end{cases} \end{equation} Take $f=0$. Then by outer regularity of $\mu$, for every $E \in \mathfrak{B}$ and every $\epsilon > 0$, you can find an open set $V$ of $\mathbb{R}$ such that $E \subseteq V$ and $\mu(V \backslash E) < \epsilon$. Set $W = V \cap (0,1)$. $W$ is an at most countable union of open and pairwise disjoint intervals $\{ I_m \}$. Choose $\bar{m}$ such that \begin{equation} \sum_{m > \bar{m}} \mu(I_m) < \epsilon. \end{equation}
Since $f_n$ is periodic on each of the intervals $I_m$, $m=1,\dots,\bar{m}$, and $|f_n(x)| = 1$ for all $x \in (0,1)$, you can find $\bar{n}$ such that for all $n > \bar{n}$ \begin{equation} \left| \int_{I_m} f_n d\mu \right| < \frac{\epsilon}{\bar{m}} \end{equation} for $m=1,\dots,\bar{m}$. We then have for $n > \bar{n}$: \begin{equation} \left| \int_{W} f_n d \mu \right| = \left| \sum_{m} \int_{I_m} f_n d \mu \right| \leq \sum_{m \leq \bar{m}} \left| \int_{I_m} f_n d \mu \right| + \left| \sum_{m > \bar{m}} \int_{I_m} f_n d \mu \right| \leq \epsilon + \sum_{m > \bar{m}} \int_{I_m} |f_n| d \mu \leq 2 \epsilon. \end{equation}
If $D = E \cap (0,1)$, then $\mu(W \setminus D) < \epsilon$, so we have for $n > \bar{n}$: \begin{equation} \left| \int_{E} f_n d \mu \right| = \left| \int_{D} f_n d \mu \right| \leq \left| \int_{W} f_n d \mu - \int_{D} f_n d \mu\right| + \left| \int_{W} f_n d \mu \right| \leq \int_{W \setminus D} |f_n| d \mu + 2 \epsilon < 3 \epsilon. \end{equation}
This proves (B). But (A) is not satisfied with $f=0$ because we have for any $n$ \begin{equation} \int_{\left[0,1 \right]} |f_n| d \mu = 1. \end{equation} Actually, (A) cannot be satisfied by any $f \in L_{loc}^{1}(\Omega)$. Indeed, assume it is. Then, (B) would hold with the same $f$. So from what we have proved we deduce that for any interval $[a,b]$ and any $E \in \mathfrak{B}$ contained in $[a,b]$, we would have \begin{equation} \int_{E} f d \mu = 0. \end{equation} So from [R], Theorem 1.39, we would have $f=0$ a.e. on $[a,b]$. And so $f=0$ a.e. on $\Omega$. But then $f$ cannot satisfy (A), as we have seen.
PS We know now that (B) implies (C), so our sequence $\{ f_n \}$ satisfies for sure (C). This can also be given a simple direct proof as follows. Now, if $\phi \in C_{c}(\Omega)$, then for every $\epsilon > 0$, there exists $\delta > 0$ such that $|\phi(x) -\phi(y)| < \epsilon$ for every $x, y \in \Omega$ such that $|x-y|< \delta$. Choose $\bar{n}$ such that $2^{\bar{n}} > 1/\delta$. For every $n > \bar{n}$, and every $j=0,1,\dots,2^{n-1}-1$, we have \begin{multline} \left| \int_{\left[\frac{2j}{2^n}, \frac{2j+2}{2^n}\right]} f_n \phi d \mu \right | = \left| \int_{\left[\frac{2j}{2^n}, \frac{2j+1}{2^n}\right]} \phi d \mu - \int_{\left[\frac{2j+1}{2^n}, \frac{2j+2}{2^n}\right]} \phi d \mu \right | \leq \\ \leq \int_{\left[\frac{2j}{2^n}, \frac{2j+1}{2^n}\right]} \left| \phi(x) - \phi ( x+ 1/2^{n}) \right| d \mu \leq \frac{\epsilon}{2^{n}}. \end{multline} So for $n > \bar{n}$ we have \begin{equation} \left| \int_{\Omega} f_n \phi d \mu \right | \leq \frac{\epsilon}{2}, \end{equation} which proves (C).
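One more optional numerical check (illustrative only; the interval and grid sizes are arbitrary): the integrals of $f_n$ over a fixed interval tend to $0$ while $\int_{[0,1]} |f_n| \, d\mu$ stays equal to $1$, which is exactly the gap between (B) and (A):

```python
import numpy as np

def f_n(x, n):
    # +1 on even dyadic cells [2j/2^n, (2j+1)/2^n), -1 on odd ones,
    # 0 outside [0, 1).
    y = np.zeros_like(x)
    inside = (x >= 0) & (x < 1)
    k = np.floor(x[inside] * 2**n).astype(int)
    y[inside] = np.where(k % 2 == 0, 1.0, -1.0)
    return y

def integral(a, b, n, pts=2_000_001):
    # Riemann sum of f_n over [a, b]
    x = np.linspace(a, b, pts)
    return np.sum(f_n(x, n)) * (x[1] - x[0])

x01 = np.linspace(0.0, 1.0, 10**6, endpoint=False)
for n in (3, 8, 14):
    print(n, integral(0.0, 0.3, n), np.mean(np.abs(f_n(x01, n))))
# the middle column tends to 0 while the L^1([0,1]) norm stays 1
```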
Oscillation of second-order nonlinear neutral dynamic equations with distributed deviating arguments on time scales
Shao-Yan Zhang and Qi-Ru Wang
Advances in Difference Equations, volume 2015, Article number 7 (2015)
This paper concerns second-order nonlinear neutral dynamic equations with distributed deviating arguments on time scales of the form
$$\bigl(r(t) \bigl(\bigl(y(t)+p(t)y\bigl(\tau(t)\bigr)\bigr)^{\Delta}\bigr)^{\gamma}\bigr)^{\Delta}+\int_{a}^{b}f \bigl(t,y\bigl(\delta (t,\xi)\bigr)\bigr)\Delta\xi=0, $$
where \(\gamma>0\) is a quotient of odd positive integers. By using the generalized Riccati technique and integral averaging techniques, we derive new oscillation criteria for the above equations, which generalize and improve some existing results in the literature.
In this paper, we consider second-order nonlinear neutral dynamic equations with distributed deviating arguments of the following form:
$$ \bigl(r(t) \bigl(\bigl(y(t)+p(t)y\bigl(\tau(t)\bigr) \bigr)^{\Delta}\bigr)^{\gamma}\bigr)^{\Delta}+\int _{a}^{b}f\bigl(t,y\bigl(\delta(t,\xi)\bigr)\bigr) \Delta\xi=0 $$
on a time scale \(\mathbb{T}\) satisfying \(\inf\mathbb{T}=t_{0}\) and \(\sup\mathbb{T}=\infty\). Throughout this paper, we assume the following:
(H1)
\(\gamma>0\) is a quotient of odd positive integers, \(0< a<b\), \(\tau (t)\in C_{rd}(\mathbb{T}, \mathbb{T})\) such that \(\tau(t)\le t\) and \(\lim_{t\rightarrow\infty}\tau(t)=\infty\), \(\delta(t,\xi)\in C_{rd}(\mathbb{T}\times[a,b], \mathbb{T})\) such that \(\lim_{t\rightarrow\infty}\delta(t,\xi)=\infty\);
(H2)
\(r(t)\in C_{rd}(\mathbb{T},(0, \infty))\) such that \(\int_{t_{0}}^{\infty}(\frac{1}{r(t)})^{\frac{1}{\gamma}}\Delta t=\infty\), and \(p(t)\in C_{rd}(\mathbb{T},[0, 1))\);
(H3)
\(f:\mathbb{T}\times\mathbb{R}\rightarrow\mathbb{R}\) is a continuous function such that \(uf(t,u)>0\) for all \(u\neq0\), and there exists a function \(q(t)\in C_{rd}(\mathbb{T},[0, \infty))\) such that \(|f(t,u)|\ge q(t) |u|^{\gamma}\).
Oscillation of some second-order nonlinear delay dynamic equations on time scales has been discussed; see [1–18] and the references therein. Recently, there has been much research activity concerning the oscillation of second-order nonlinear neutral delay dynamic equation
$$ \bigl(r(t) \bigl(\bigl(y(t)+p(t)y\bigl(\tau(t)\bigr)\bigr)^{\Delta}\bigr)^{\gamma}\bigr)^{\Delta }+f\bigl(t,y\bigl(\delta(t)\bigr) \bigr)=0,\quad t\in \mathbb{T}. $$
We refer the reader to [1–4].
In 2010, Thandapani and Piramanantham [5] discussed oscillation of the second-order nonlinear neutral delay dynamic equation with distributed deviating arguments
$$ \bigl(r(t) \bigl(x(t)+p(t)x(t-\tau)\bigr)^{\Delta}\bigr)^{\Delta}+ \int_{a}^{b}q(t,\xi )f\bigl(x\bigl(g(t,\xi)\bigr) \bigr)\Delta\xi=0,\quad t\in \mathbb{T}, $$
where \(g(t,\xi)\) is strictly increasing with respect to t and decreasing with respect to ξ, and \(f\in C(\mathbb{R},\mathbb{R})\) with \(uf(u)>0\) for \(u\neq0\), \(f(-u)=-f(u)\).
In 2011, Candan [6] discussed the oscillation of Eq. (1.1) for \(\delta(t,\xi)\le t\) and \(\delta(t,\xi)>t\), respectively, where \(\gamma>0\) is a quotient of odd positive integers. In [6], \(\delta(t,\xi)\) is decreasing with respect to ξ, \(0< p(t)<1\) is increasing and \(f\in C(\mathbb{T}\times\mathbb{R},\mathbb {R})\) with \(uf(t,u)>0\) for all \(u\neq0\). There exists a positive function \(q(t)\) defined on \(\mathbb{T}\) such that \(|f(t,u)|\ge q(t) |u|^{\beta}\), where \(\beta>0\) is a ratio of odd positive integers. In 2013, Candan [7] established other oscillation criteria of Eq. (1.1) for \(\delta(t,\xi)\le t\), where \(\gamma\ge1\) is a quotient of odd positive integers, β (in [6]) is equal to γ, \(r^{\Delta}(t)>0\), and \(\delta(t,\xi)\) is decreasing with respect to ξ.
The purpose of this paper is to establish new oscillation criteria of Eq. (1.1) for \(\gamma> 0\), a quotient of odd positive integers, where functions \(p(t)\) and \(r(t)\) may not be monotonic, \(\delta(t,\xi)\) may not be decreasing with respect to ξ. Hence, our results will generalize and improve those in [6, 7] and others.
By a solution of Eq. (1.1), we mean a nontrivial real-valued function \(y(t)\) such that \(y(t) + p(t)y(\tau(t))\in C_{rd}^{1}[\tau_{1}^{*}(t_{0}), \infty)\), \(r(t)((y(t)+p(t)y(\tau(t)))^{\Delta})^{\gamma}\in C_{rd}[\tau_{1}^{*}(t_{0}), \infty)\) and satisfies Eq. (1.1). Our attention is restricted to those solutions of Eq. (1.1) that satisfy \(\sup\{|y(t)|: t\ge t_{y}\}>0\) for any \(t_{y}\ge t_{0}\). A solution \(y(t)\) of Eq. (1.1) is said to be oscillatory if it is neither eventually positive nor eventually negative. Otherwise, it is called nonoscillatory. The equation itself is called oscillatory if all its solutions are oscillatory.
This paper is organized as follows. After this introduction, we introduce some basic lemmas in Section 2. In Section 3, we present the main results. In Section 4, we illustrate the versatility of our results by two examples.
Some preliminaries
In this section, we present several technical lemmas which will be used in the proofs of the main results. For convenience, we use the notation \((x(\sigma(t)) )^{\gamma}=(x^{\sigma}(t))^{\gamma}\) and set
$$ x(t):=y(t)+p(t)y\bigl(\tau(t)\bigr). $$
Then Eq. (1.1) becomes
$$\begin{aligned} \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}\bigr)^{\Delta}+ \int_{a}^{b}f\bigl(t,y\bigl(\delta(t,\xi )\bigr) \bigr)\Delta\xi=0. \end{aligned}$$
For \(t,T\in\mathbb{T}\) with \(t> T\), we define
$$\begin{aligned} &\beta(t,T)=\int_{T}^{t} \frac{1}{r^{\frac{1}{\gamma}}(s)}\Delta s, \quad\mbox{and}\quad g_{\xi}(t,T)=\left \{ \begin{array}{@{}l@{\quad}l} \frac{\beta(\delta(t,\xi),T)}{\beta(t,T)}, &\delta (t,\xi)< t,\\ 1, &\delta(t,\xi)\ge t; \end{array} \right . \\ &Q(t,T)=q(t)\int_{a}^{b}\bigl[1-p\bigl(\delta(t, \xi)\bigr)\bigr]^{\gamma}g_{\xi}^{\gamma}(t,T)\Delta \xi. \end{aligned}$$
For \(D=\{(t,s)\in\mathbb{T}^{2}: t\geq s\geq0\}\), we define
$$\begin{aligned}& \mathcal{H}=\bigl\{ H(t,s)\in C^{1}_{rd}\bigl(D, [0, \infty )\bigr): H(t,t)=0, H(t,s)>0 \mbox{ and } H^{\Delta}_{s}(t,s)\geq0 \mbox{ for } t>s\geq0\bigr\} , \\& C(t,s)=H_{s}^{\Delta}(t,s)z^{\sigma}(s)+H(t,s)z^{\Delta}(s) \quad\mbox{for }H(t,s)\in\mathcal{H}, \end{aligned}$$
where \(z\in C_{rd}^{1}(\mathbb{T}, (0,\infty))\) is to be given in Theorems 3.1 and 3.2, and \(z^{\Delta}_{+}(t)=\max\{z^{\Delta}(t),0\}\).
First of all, we give the following lemma.
Lemma 2.1
Let conditions (H1)-(H3) hold. If \(y(t)\) is an eventually positive solution of Eq. (1.1), then there exists \(T\in\mathbb{T}\) sufficiently large such that \(x(t)>0\), \(x^{\Delta}(t)\ge0\), \((r(t)(x^{\Delta}(t))^{\gamma} )^{\Delta}\le0 \), \(x(t)\ge r^{\frac{1}{\gamma}}(t)x^{\Delta}(t)\beta(t,T)\), and \(x(\delta(t,\xi))\ge g_{\xi}(t,T)x(t)\) for \(t\in \left.[T,\infty)\right._{\mathbb{T}}\).
Since \(y(t)\) is an eventually positive solution of Eq. (1.1), by (H1) there exists \(T\in \left.[t_{0},\infty)\right._{\mathbb{T}}\) such that
$$\begin{aligned} \delta(t,\xi)>T,\qquad y(t)>0,\qquad y\bigl(\tau(t)\bigr)>0 \quad\mbox{and}\quad y\bigl(\delta(t,\xi) \bigr)>0 \quad\mbox{for } t\ge T. \end{aligned}$$
From (2.1) and (H2), we see that \(x(t)\) is also positive and satisfies \(x(t)\geq y(t)\). Also by Eq. (1.1) and (H3), we have that \(x(t)\) satisfies
$$ \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma} \bigr)^{\Delta}\le -\int_{a}^{b}q(t)y^{\gamma} \bigl(\delta(t,\xi)\bigr)\Delta\xi\le0 \quad\mbox{for } t\ge T, $$
which implies that \(r(t)(x^{\Delta}(t))^{\gamma}\) is decreasing on \(\left.[T,\infty)\right._{\mathbb{T}}\). So we can get
$$\begin{aligned} x(t) =&x(T)+\int_{T}^{t}\frac{(r(s)(x^{\Delta}(s))^{\gamma})^{\frac {1}{\gamma}}}{r^{\frac{1}{\gamma}}(s)}\Delta s\\ \ge& r^{\frac{1}{\gamma}}(t)x^{\Delta}(t)\int_{T}^{t} \frac{1}{r^{\frac {1}{\gamma}}(s)}\Delta s:=r^{\frac{1}{\gamma}}(t)x^{\Delta}(t)\beta(t,T). \end{aligned}$$
We claim that \(r(t)(x^{\Delta}(t))^{\gamma}\ge0\) on \(\left.[T,\infty)\right._{\mathbb{T}}\). Assume not; then there is \(t_{1}\in \left.[T,\infty)\right._{\mathbb{T}}\) such that \(r(t_{1})(x^{\Delta}(t_{1}))^{\gamma}<0\). Since \(r(t)(x^{\Delta}(t))^{\gamma}\leq r(t_{1})(x^{\Delta}(t_{1}))^{\gamma}\) for \(t\geq t_{1}\), we have
$$\begin{aligned} x^{\Delta}(t)\leq r^{\frac{1}{\gamma}}(t_{1})x^{\Delta}(t_{1}) \bigl(1/r(t)\bigr)^{1/\gamma}. \end{aligned}$$
Integrating the inequality above from \(t_{1}\) to t (≥T), by (H2) we get
$$\begin{aligned} x(t)\leq x(t_{1})+r^{\frac{1}{\gamma}}(t_{1}) x^{\Delta}(t_{1})\int_{t_{1} }^{t} \bigl(1/r(s)\bigr)^{1/\gamma}\Delta s\rightarrow -\infty \quad(t\rightarrow\infty), \end{aligned}$$
and this contradicts the fact that \(x(t) > 0\) for all \(t\ge T\). Thus we have \(r(t)(x^{\Delta}(t))^{\gamma}\ge0\) on \(\left.[T,\infty)\right._{\mathbb{T}}\) and so \(x^{\Delta}(t)\ge0\) on \(\left.[T,\infty)\right._{\mathbb{T}}\).
Let \(t\ge T\) be fixed such that \(\delta(t,\xi)\ge T\). We consider the two cases \(\delta(t,\xi)< t\) and \(\delta(t,\xi)\ge t\), respectively.
Case I: \(\delta(t,\xi)< t\). Noting that \((r(t)(x^{\Delta }(t))^{\gamma})^{\Delta}\le0\), we have
$$\begin{aligned} x(t)-x\bigl(\delta(t,\xi)\bigr)=\int^{t}_{\delta(t,\xi)} \frac{ (r(s)(x^{\Delta }(s))^{\gamma})^{\frac{1}{\gamma}}}{r^{\frac{1}{\gamma}}(s)}\Delta s \le \bigl(r\bigl(\delta(t,\xi)\bigr) \bigl(x^{\Delta}\bigl(\delta(t,\xi)\bigr)\bigr)^{\gamma}\bigr)^{\frac{1}{\gamma}}\int^{t}_{\delta(t,\xi)}\frac{\Delta s}{r^{\frac{1}{\gamma}}(s)}. \end{aligned}$$
Hence
$$\begin{aligned} \frac{x(t)}{x(\delta(t,\xi))}\le1+\frac{ (r(\delta(t,\xi)) (x^{\Delta}(\delta(t,\xi)) )^{\gamma})^{\frac{1}{\gamma}}}{x(\delta(t,\xi))}\int^{t}_{\delta(t,\xi)} \frac{\Delta s}{r^{\frac{1}{\gamma}}(s)}. \end{aligned}$$
Since \(\delta(t,\xi)\ge T\) for \(t\in[T,\infty)\),
$$\begin{aligned} x\bigl(\delta(t,\xi)\bigr)>\int^{\delta(t,\xi)}_{T} \frac{ (r(s)(x^{\Delta }(s))^{\gamma})^{\frac{1}{\gamma}}}{r^{\frac{1}{\gamma}}(s)}\Delta s \ge \bigl(r\bigl(\delta(t,\xi)\bigr) \bigl(x^{\Delta}\bigl(\delta(t,\xi)\bigr)\bigr)^{\gamma}\bigr)^{\frac{1}{\gamma}}\int^{\delta(t,\xi)}_{T} \frac{\Delta s}{r^{\frac {1}{\gamma}}(s)}, \end{aligned}$$
which implies that
$$\begin{aligned} \dfrac{(r(\delta(t,\xi))(x^{\Delta}(\delta(t,\xi)))^{\gamma})^{\frac {1}{\gamma}}}{x(\delta(t,\xi))}<\frac{1}{\int^{\delta(t,\xi)}_{T}\frac {\Delta s}{r^{\frac{1}{\gamma}}(s)}}. \end{aligned}$$
Therefore,
$$\begin{aligned} \frac{x(t)}{x(\delta(t,\xi))}<1+\frac{\int^{t}_{\delta(t,\xi)}\frac{\Delta s}{r^{\frac{1}{\gamma}}(s)}}{\int^{\delta(t,\xi)}_{T}\frac{\Delta s}{r^{\frac{1}{\gamma}}(s)}} \leq\frac{\int^{t}_{T}\frac{\Delta s}{r^{\frac{1}{\gamma}}(s)}}{\int^{\delta(t,\xi)}_{T}\frac{\Delta s}{r^{\frac{1}{\gamma}}(s)}}, \end{aligned}$$
that is, \(x(\delta(t,\xi))\ge g_{\xi}(t,T)x(t)\) in this case.
Case II: \(\delta(t,\xi)\ge t\). Noting that \(x^{\Delta}(t)\ge0\) and recalling the definition of \(g_{\xi}(t,T)\) in (2.2), we have
$$\begin{aligned} x\bigl(\delta(t,\xi)\bigr)\ge g_{\xi}(t,T)x(t). \end{aligned}$$
Remark 2.1
By \(x(t)\geq y(t)\) on \(\left.[t_{1},\infty)\right._{\mathbb{T}}\), \(x^{\Delta}(t)>0\) and \(\tau(t)\le t\), we get
$$\begin{aligned} y(t)= x(t)-p(t)x\bigl(\tau(t)\bigr)\ge\bigl(1-p(t)\bigr)x(t). \end{aligned}$$
Then from Eq. (1.1), \(x(\delta(t,\xi))\ge g_{\xi}(t,T)x(t)\), (H2) and (H3), we conclude that
$$\begin{aligned} 0\geq{}& \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}\bigr)^{\Delta} \\ &{}+x^{\gamma}(t)q(t)\int_{a}^{b} \bigl[1-p\bigl(\delta(t,\xi)\bigr)\bigr]^{\gamma}g_{\xi}^{\gamma}(t,T) \Delta\xi, \quad t\ge t_{1}, \xi\in[a,b]. \end{aligned}$$
Lemma 2.2 ([2])
Let \(g(u)=Bu-Au^{\frac{\gamma +1}{\gamma}}\), where \(A>0\) and B are constants, γ is a positive number. Then g attains its maximum value on \([0, \infty)\) at \(u^{*}= (\frac{B\gamma}{A(\gamma+1)} )^{\gamma}\), and
$$\begin{aligned} \max_{u\in[0, \infty)}g=g\bigl(u^{*}\bigr)=\frac{\gamma^{\gamma}}{(\gamma+1)^{\gamma +1}} \frac{B^{\gamma+1}}{A^{\gamma}}. \end{aligned}$$
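The closed form in Lemma 2.2 is elementary calculus; the following short numerical sketch (with arbitrary test values of \(A\), \(B\) and \(\gamma\), not taken from the paper) compares the maximum of \(g\) on a grid with the stated expressions.

```python
import numpy as np

A, B, gamma = 0.7, 1.3, 3.0                 # arbitrary test values, A > 0
u = np.linspace(0.0, 10.0, 2_000_001)       # grid covering the maximizer
g = B * u - A * u ** ((gamma + 1) / gamma)

# closed-form maximum and maximizer from Lemma 2.2
g_max = gamma**gamma / (gamma + 1) ** (gamma + 1) * B ** (gamma + 1) / A**gamma
u_star = (B * gamma / (A * (gamma + 1))) ** gamma

print(g.max(), g_max)          # agree to grid accuracy (~0.8783)
print(u[g.argmax()], u_star)   # maximizer u* ~ 2.70
```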
Main results
In this section, we establish our main results.
Theorem 3.1
Let \(\gamma> 0\). Assume that (H1)-(H3) hold. Furthermore, for sufficiently large \(T\in\mathbb{T}\), one of the following conditions is satisfied:
(a) either \(\int_{t}^{\infty}Q(s,T)\Delta s=\infty\), or
$$\begin{aligned} \int_{t}^{\infty}Q(s,T)\Delta s<\infty \quad\textit{and}\quad \beta^{\gamma}(t,T)\int_{t}^{\infty}Q(s,T)\Delta s>1\quad\textit{for all }t>T, \end{aligned}$$
(b) there exists \(z\in C_{rd}^{1}(\mathbb{T}, (0,\infty))\) such that
$$\begin{aligned} \limsup_{t\rightarrow\infty}\int^{t}_{T} \biggl[z(s)Q(s,T)- \frac{z^{\Delta}_{+}(s)}{\beta^{\gamma}(s,T)} \biggr]\Delta s=\infty, \end{aligned}$$
(c) there exists \(z\in C_{rd}^{1}(\mathbb{T}, (0,\infty))\) such that
$$\begin{aligned} \limsup_{t\rightarrow\infty}\int^{t}_{T} \biggl[z(s)Q(s,T) -\frac{1}{(\gamma+1)^{\gamma+1}}\frac{r(s) (z^{\Delta}(s) )^{\gamma+1}}{z^{\gamma}(s)} \biggr]\Delta s=\infty, \end{aligned}$$
(d) there exist \(z\in C_{rd}^{1}(\mathbb{T}, (0,\infty))\) and \(H\in\mathcal{H}\) such that
$$\begin{aligned} \limsup_{t\rightarrow\infty}\frac{1}{H(t, T)}\int^{t}_{T} \biggl[H(t,s)z(s)Q(s,T) -\frac{C^{\gamma+1}(t,s)}{H^{\gamma}(t,s)(\gamma+1)^{\gamma+1}z^{\gamma}(s)} \biggr]\Delta s=\infty. \end{aligned}$$
Then every solution \(y(t)\) of Eq. (1.1) is oscillatory.
Suppose to the contrary that Eq. (1.1) has a nonoscillatory solution \(y(t)\). Without loss of generality, we may assume that \(y(t)\) is eventually positive. Then, by (H1)-(H3), there exists \(T\in \left.[t_{0}, \infty)\right._{\mathbb{T}}\) such that for \(t\geq T\), \(y(\tau(t))>0\), \(y(\delta(t,\xi))>0\), and Lemma 2.1 holds.
The rest of the proof is divided into four parts corresponding to conditions (a)-(d), respectively.
Part I: Assume condition (a) holds.
Let \(\phi(t):=r(t)(x^{\Delta}(t))^{\gamma}\). Then \(\phi(t)\ge 0\) and \(\phi^{\Delta}(t)\le 0\) for \(t\geq T\), and \(\lim_{t\rightarrow\infty}\phi(t)=\zeta\ge 0\). From (2.3), we have
$$\begin{aligned} \phi^{\Delta}(t)+x^{\gamma}(t)q(t)\int _{a}^{b}\bigl[1-p\bigl(\delta(t,\xi)\bigr) \bigr]^{\gamma}g_{\xi}^{\gamma}(t,T)\Delta\xi\le0. \end{aligned}$$
Integrating both sides of (3.1) from t to ∞, we obtain
$$\begin{aligned} \zeta-\phi(t)+\int_{t}^{\infty}Q(s,T)x^{\gamma}(s) \Delta s\le0. \end{aligned}$$
In view of \(x^{\Delta}(t)\ge0\), we have reached a contradiction if \(\int_{t}^{\infty} Q(s,T) \Delta s=\infty\). If \(\int_{t}^{\infty} Q(s, T) \Delta s<\infty\), then
$$\begin{aligned} \phi(t)\ge\int_{t}^{\infty}Q(s,T)x^{\gamma}(s) \Delta s\ge x^{\gamma}(t)\int_{t}^{\infty}Q(s,T) \Delta s. \end{aligned}$$
By Lemma 2.1, we obtain
$$\begin{aligned} \beta^{\gamma}(t,T)\int_{t}^{\infty}Q(s,T) \Delta s\le1, \end{aligned}$$
which is a contradiction to condition (a). Therefore, every solution \(y(t)\) of Eq. (1.1) is oscillatory.
Part II: Assume condition (b) holds. Define
$$ w(t):=\frac{z(t)r(t)(x^{\Delta}(t))^{\gamma}}{x^{\gamma}(t)} \quad \mbox{for } t\geq T. $$
Then \(w(t)>0\). From (2.3), we have
$$\begin{aligned} w^{\Delta}(t) =& \bigl(r(t) \bigl(x^{\Delta}(t) \bigr)^{\gamma}\bigr)^{\Delta}\biggl(\frac {z(t)}{x^{\gamma}(t)} \biggr) + \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}\bigr)^{\sigma}\biggl(\frac {z(t)}{x^{\gamma}(t)} \biggr)^{\Delta} \\ \le& -z(t)Q(t,T)+ \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}\bigr)^{\sigma}\biggl[\frac{z^{\Delta}(t) x^{\gamma}(t)-z(t)(x^{\gamma}(t))^{\Delta}}{x^{\gamma}(t) (x^{\sigma}(t))^{\gamma}} \biggr] \\ \le&-z(t)Q(t,T)+\frac{z^{\Delta}_{+}(t) (r(t)(x^{\Delta}(t))^{\gamma})^{\sigma}}{(x^{\sigma}(t))^{\gamma}} -\frac{ (r(t)(x^{\Delta}(t))^{\gamma})^{\sigma}z(t)(x^{\gamma}(t))^{\Delta}}{x^{\gamma}(t)x^{\gamma}(\sigma(t))}. \end{aligned}$$
When \(\gamma\geq1\), using \(x^{\Delta}(t)>0\) and Keller's chain rule, we get
$$\begin{aligned} \bigl(x^{\gamma}(t)\bigr)^{\Delta} =&\gamma \biggl[ \int_{0}^{1}\bigl(x(t)+h\mu(t)x^{\Delta}(t) \bigr)^{\gamma-1}\,dh \biggr]x^{\Delta}(t) \\ \geq&\gamma x^{\Delta}(t) \int_{0}^{1} \bigl((1-h)x(t)+hx(t)\bigr)^{\gamma -1}\,dh=\gamma x^{\gamma-1}(t)x^{\Delta}(t). \end{aligned}$$
When \(0<\gamma<1\), using \(x^{\Delta}(t)>0\) and Keller's chain rule, we obtain
$$\begin{aligned} \bigl(x^{\gamma}(t)\bigr)^{\Delta}\geq\gamma x^{\Delta}(t)\int_{0}^{1} \bigl((1-h)x^{\sigma}(t)+hx^{\sigma}(t)\bigr)^{\gamma-1}\,dh = \gamma\bigl(x^{\sigma}(t)\bigr)^{\gamma-1}x^{\Delta}(t). \end{aligned}$$
Noting that \(r(t)>0\) and from (3.4), (3.5), and Lemma 2.1, we obtain
$$\begin{aligned} \frac{ (r(t)(x^{\Delta}(t))^{\gamma})^{\sigma}z(t)(x^{\gamma}(t))^{\Delta}}{x^{\gamma}(t)x^{\gamma}(\sigma(t))}\ge0. \end{aligned}$$
Since \((r(t)(x^{\Delta}(t))^{\gamma} )^{\Delta}\le0\) and \(t\le\sigma(t)\), we have
$$\begin{aligned} r\bigl(\sigma(t)\bigr) \bigl(x^{\Delta}\bigl(\sigma(t)\bigr)\bigr)^{\gamma}\le r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}. \end{aligned}$$
Hence from (3.6) and Lemma 2.1 and noting that \(x^{\Delta}(t)\ge0\), we have
$$\begin{aligned} w^{\Delta}(t) \le& -z(t)Q(t,T)+\frac{z^{\Delta}_{+}(t)}{\beta^{\gamma}(t,T)}. \end{aligned}$$
Integrating the above inequality from T to t for \(t\ge T\), we get
$$\begin{aligned} \int^{t}_{T} \biggl[z(s)Q(s,T)- \frac{z^{\Delta}_{+}(s)}{\beta^{\gamma}(s,T)} \biggr]\Delta s\le w(T)-w(t)< w(T). \end{aligned}$$
Taking limsup on both sides as \(t\rightarrow\infty\), we obtain a contradiction to condition (b). Therefore, every solution \(y(t)\) of Eq. (1.1) is oscillatory.
Part III: Assume condition (c) holds.
When \(\gamma\ge1\), from (3.3) and (3.4) we have
$$\begin{aligned} w^{\Delta}(t) \le& -z(t)Q(t,T)+\frac{z^{\Delta}(t)}{z^{\sigma}(t)}w^{\sigma}(t)- \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}\bigr)^{\sigma}\frac{z(t)\gamma x^{\gamma-1}(t)x^{\Delta}(t)}{x^{\gamma}(t)x^{\gamma}(\sigma (t))} \\ \le&-z(t)Q(t,T)+\frac{z^{\Delta}(t)}{z^{\sigma}(t)}w^{\sigma}(t)- \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}\bigr)^{\sigma}\frac{z(t)\gamma x^{\Delta}(t) }{x^{\gamma+1}(\sigma(t))}. \end{aligned}$$
From (3.6) we get
$$\begin{aligned} - \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}\bigr)^{\sigma}\frac{z(t)\gamma x^{\Delta}(t)}{x^{\gamma+1}(\sigma(t))} \le&- \bigl(r^{\sigma}(t) \bigr)^{\frac{\gamma+1}{\gamma}} \bigl(x^{\Delta}\bigl(\sigma (t)\bigr)\bigr)^{\gamma+1} \frac{z(t)\gamma}{r^{\frac{1}{\gamma}}(t)x^{\gamma+1}(\sigma(t))}\\ =&-\frac{z(t)\gamma}{z^{\frac{\gamma+1}{\gamma}}(\sigma(t))r^{\frac {1}{\gamma}}(t)}w^{\frac{\gamma+1}{\gamma}}\bigl(\sigma(t)\bigr). \end{aligned}$$
$$\begin{aligned} w^{\Delta}(t)\le-z(t)Q(t,T)+\frac{z^{\Delta}(t)}{z^{\sigma}(t)}w^{\sigma}(t)-\frac{z(t)\gamma}{z^{\frac{\gamma+1}{\gamma}}(\sigma(t))r^{\frac {1}{\gamma}}(t)}w^{\frac{\gamma+1}{\gamma}}\bigl(\sigma(t)\bigr). \end{aligned}$$
When \(0<\gamma<1\), by (3.3) and (3.5) we have
$$\begin{aligned} w^{\Delta}(t)\le -z(t)Q(t,T)+w^{\sigma}(t)\frac{z^{\Delta}(t)}{z^{\sigma}(t)}- \bigl(r(t) \bigl(x^{\Delta}(t)\bigr)^{\gamma}\bigr)^{\sigma}\frac{z(t)\gamma (x^{\sigma}(t))^{\gamma-1}x^{\Delta}(t)}{x^{\gamma}(t) (x^{\sigma}(t))^{\gamma}}. \end{aligned}$$
By (3.6) we have
$$\begin{aligned} \frac{- (r(t)(x^{\Delta}(t))^{\gamma})^{\sigma}z(t)\gamma (x^{\sigma}(t))^{\gamma-1}x^{\Delta}(t)}{x^{\gamma}(t) (x^{\sigma}(t))^{\gamma}} &=-\frac{(r^{\sigma}(t))^{\frac{\gamma+1}{\gamma}} ((x^{\Delta }(t))^{\sigma})^{\gamma+1} z(t)\gamma x^{\Delta}(t)}{x^{\gamma}(t) x^{\sigma}(t) (r^{\sigma}(t))^{\frac{1}{\gamma}}(x^{\Delta}(t))^{\sigma}}\\ &\leq-\frac{(r^{\sigma}(t))^{\frac{\gamma+1}{\gamma}} ((x^{\Delta }(t))^{\sigma})^{\gamma+1} z(t)\gamma x^{\Delta}(t)}{x^{\gamma}(t) x^{\sigma}(t)r^{\frac{1}{\gamma}}(t)x^{\Delta}(t)} \\ &\le-\frac{z(t)\gamma}{ (z^{\sigma}(t) )^{\frac{\gamma+1}{\gamma }}r^{\frac{1}{\gamma}}(t)} \bigl(w^{\sigma}(t) \bigr)^{\frac{\gamma+1}{\gamma }}. \end{aligned}$$
$$\begin{aligned} w^{\Delta}(t)\le -z(t)Q(t,T)+w^{\sigma}(t) \frac{z^{\Delta}(t)}{z^{\sigma}(t)}-\frac{z(t)\gamma }{ (z^{\sigma}(t) )^{\frac{\gamma+1}{\gamma}}r^{\frac{1}{\gamma }}(t)} \bigl(w^{\sigma}(t) \bigr)^{\frac{\gamma+1}{\gamma}}, \end{aligned}$$
which is the same as (3.8). Let
$$\begin{aligned} B=\frac{z^{\Delta}(t)}{z^{\sigma}(t)},\qquad A=\frac{z(t)\gamma}{ (z^{\sigma}(t) )^{\frac{\gamma+1}{\gamma}}r^{\frac{1}{\gamma}}(t)},\qquad u=w^{\sigma}(t). \end{aligned}$$
Then by Lemma 2.2 and (3.9) we obtain that for all \(t\geq T\),
$$\begin{aligned} w^{\Delta}(t)\le-z(t)Q(t,T)+\frac{1}{(\gamma+1)^{\gamma+1}}\frac{r(t) (z^{\Delta}(t) )^{\gamma+1}}{z^{\gamma}(t)}. \end{aligned}$$
Integrating the above inequality from T to t for \(t\ge T\), we get
$$\begin{aligned} \int_{T}^{t} \biggl[z(s)Q(s,T)-\frac{1}{(\gamma+1)^{\gamma+1}} \frac {r(s) (z^{\Delta}(s) )^{\gamma+1}}{z^{\gamma}(s)} \biggr]\Delta s\le w(T)-w(t)< w(T). \end{aligned}$$
By taking limsup on both sides as \(t\rightarrow\infty\), we obtain a contradiction to condition (c). Therefore, every solution \(y(t)\) of Eq. (1.1) is oscillatory.
Part IV: Assume condition (d) holds.
From (3.8) and (3.9) we have that for \(H\in\mathcal{H}\) and \(t\geq T\),
$$\begin{aligned} \int_{T}^{t}H(t,s)z(s)Q(s,T)\Delta s \le&-\int _{T}^{t}H(t,s)w^{\Delta}(s)\Delta s+\int _{T}^{t}H(t,s)w^{\sigma}(s)\frac{z^{\Delta}(s)}{z^{\sigma}(s)} \Delta s\\ &{}-\int_{T}^{t}H(t,s)\frac{z(s)\gamma}{ (z^{\sigma}(s) )^{\frac {\gamma+1}{\gamma}}r^{\frac{1}{\gamma}}(s)} \bigl(w^{\sigma}(s)\bigr)^{\frac{\gamma +1}{\gamma}}\Delta s. \end{aligned}$$
By integration by parts we obtain
$$\begin{aligned} -\int_{T}^{t}H(t,s)w^{\Delta}(s)\Delta s =H(t,T)w(T)+\int_{T}^{t}H^{\Delta}_{s}(t,s)w^{\sigma}(s) \Delta s. \end{aligned}$$
$$\begin{aligned} \int_{T}^{t}H(t,s)z(s)Q(s,T)\Delta s \le& H(t,T)w(T)+\int_{T}^{t} \biggl[H_{s}^{\Delta}(t,s)+H(t,s) \frac{z^{\Delta}(s)}{z^{\sigma}(s)} \biggr]w^{\sigma}(s)\Delta s\\ &{}-\int_{T}^{t}\frac{H(t,s)z(s)\gamma}{ (z^{\sigma}(s) )^{\frac {\gamma+1}{\gamma}}r^{\frac{1}{\gamma}}(s)} \bigl(w^{\sigma}(s)\bigr)^{\frac{\gamma +1}{\gamma}}\Delta s. \end{aligned}$$
$$\begin{aligned} B=H_{s}^{\Delta}(t,s)+H(t,s)\frac{z^{\Delta}(s)}{z^{\sigma}(s)},\qquad A= \frac {H(t,s)z(s)\gamma}{ (z^{\sigma}(s) )^{\frac{\gamma+1}{\gamma }}r^{\frac{1}{\gamma}}(s)} ,\qquad u=w^{\sigma}(s), \end{aligned}$$
by Lemma 2.2 we obtain that for all \(t\geq T\),
$$\begin{aligned} \int_{T}^{t}H(t,s)z(s)Q(s,T)\Delta s \le& H(t,T)w(T)\\ &{}+\int_{T}^{t}\frac{ [H_{s}^{\Delta}(t,s)+H(t,s)\frac{z^{\Delta}(s)}{z^{\sigma}(s)} ]^{\gamma+1}r(s) (z^{\sigma}(s) )^{\gamma +1}}{H^{\gamma}(t,s)(\gamma+1)^{\gamma+1}z^{\gamma}(s)}\Delta s. \end{aligned}$$
$$\begin{aligned} \frac{1}{H(t,T)}\int_{T}^{t} \biggl[H(t,s)z(s)Q(s,T)-\frac{ [C(t,s) ]^{\gamma+1}r(s)}{H^{\gamma}(t,s)(\gamma+1)^{\gamma +1}z^{\gamma}(s)} \biggr]\Delta s\le w(T). \end{aligned}$$
By taking limsup on both sides as \(t\rightarrow\infty\), we obtain a contradiction to condition (d). Therefore, every solution \(y(t)\) of Eq. (1.1) is oscillatory.
The results in the next theorem hold only for \(\gamma\geq1\).
Theorem 3.2
Let \(\gamma\ge1\). Assume that (H1)-(H3) hold. Furthermore, for sufficiently large \(T\in\mathbb{T}\), there exists \(z\in C_{rd}^{1}(\mathbb{T}, (0,\infty))\) such that one of the following conditions is satisfied:
(a)
$$\begin{aligned} \limsup_{t\rightarrow\infty}\int^{t}_{T} \biggl[z(s)Q(s,T)-\frac{ (z^{\Delta}(s) )^{2}r^{\frac{1}{\gamma }}(s)}{4\gamma z(s)\beta^{\gamma-1}(s,T)} \biggr]\Delta s=\infty, \end{aligned}$$
(b) there exists \(H\in\mathcal{H}\) such that
$$\begin{aligned} \limsup_{t\rightarrow\infty}\frac{1}{H(t,T)}\int^{t}_{T} \biggl[H(t,s)z(s)Q(s,T)-\frac{C^{2}(t,s)r^{\frac{1}{\gamma}}(s)}{4\gamma z(s)H(t,s)\beta^{\gamma-1}(s,T)} \biggr]\Delta s=\infty. \end{aligned}$$
Suppose to the contrary that Eq. (1.1) has a nonoscillatory solution \(y(t)\). Without loss of generality, we may assume that \(y(t)\) is eventually positive. Then, by (H1)-(H3) there exists \(T\in \left.[t_{0}, \infty)\right._{\mathbb{T}}\) such that for \(t\geq T\), \(y(t)>0\), \(y(\delta(t,\xi))>0\), \(y(\tau(t))>0\), and Lemma 2.1 holds.
The rest of the proof is divided into two parts corresponding to conditions (a) and (b), respectively.
Part I: Assume condition (a) holds.
Define \(w(t)\) as in (3.2). By \(x^{\Delta}(t)\ge0\), \(\sigma(t)\ge t\), (3.3), and (3.4), we obtain
$$\begin{aligned} w^{\Delta}(t) \le& -z(t)Q(t,T)+w^{\sigma}(t)\frac{z^{\Delta}(t)}{z^{\sigma}(t)}- \bigl(r(t) \bigl(x^{\Delta} (t) \bigr)^{\gamma}\bigr)^{\sigma}\frac{z(t)\gamma x^{\gamma-1}(t)x^{\Delta}(t)}{x^{\gamma}(t)(x^{\sigma}(t))^{\gamma}}\\ \le&-z(t)Q(t,T)+w^{\sigma}(t)\frac{z^{\Delta}(t)}{z^{\sigma}(t)}-\frac {z(t)\gamma x^{\gamma-1}(t)x^{\Delta}(t)}{(z^{\sigma}(t))^{2} (r(t) (x^{\Delta} (t) )^{\gamma})^{\sigma}} \bigl(w^{\sigma}(t)\bigr)^{2}. \end{aligned}$$
From (3.6) and Lemma 2.1, we get
$$\begin{aligned} w^{\Delta}(t) \le& -z(t)Q(t,T)+w^{\sigma}(t) \frac{z^{\Delta}(t)}{z^{\sigma}(t)}-\frac{z(t)\gamma }{(z^{\sigma}(t))^{2}r(t)}\frac{x^{\gamma-1}(t)}{(x^{\Delta}(t))^{\gamma-1}} \bigl(w^{\sigma}(t) \bigr)^{2} \\ \le&-z(t)Q(t,T)+w^{\sigma}(t)\frac{z^{\Delta}(t)}{z^{\sigma}(t)}-\frac {z(t)\gamma\beta^{\gamma-1}(t,T)}{(z^{\sigma}(t))^{2}r^{\frac{1}{\gamma }}(t)} \bigl(w^{\sigma}(t)\bigr)^{2}. \end{aligned}$$
By completing the square for \(w^{\sigma}(t)\) on the right-hand side, we have
$$\begin{aligned} w^{\Delta}(t)\le-z(t)Q(t,T)+\frac{ (z^{\Delta}(t) )^{2}r^{\frac {1}{\gamma}}(t)}{4\gamma z(t)\beta^{\gamma-1}(t,T)}. \end{aligned}$$
Integrating the above inequality from T to t for \(t\ge T\), we get
$$\begin{aligned} \int^{t}_{T} \biggl[z(s)Q(s,T)- \frac{ (z^{\Delta}(s) )^{2}r^{\frac {1}{\gamma}}(s)}{4\gamma z(s)\beta^{\gamma-1}(s,T)} \biggr]\Delta s \le w(T)-w(t)< w(T). \end{aligned}$$
Taking limsup on both sides as \(t\rightarrow\infty\), we obtain a contradiction to condition (a). Therefore, every solution \(y(t)\) of Eq. (1.1) is oscillatory.
Part II: Assume condition (b) holds.
Based on (3.10), the proof is similar to those of Part IV of Theorem 3.1 and Part I of Theorem 3.2, and hence is omitted.
Examples
In this section, we give two examples to illustrate our main results.
Example 4.1
Consider the equation
$$\begin{aligned} \biggl(\frac{1}{t}\biggl(\biggl(y(t)+\frac{1}{t}y \bigl(\tau(t)\bigr)\biggr)^{\Delta}\biggr)^{\frac{1}{3}} \biggr)^{\Delta}+\int_{a}^{b} \frac{1}{(t-1)^{\frac{1}{3}}}y^{\frac{1}{3}}\bigl(\delta (t,\xi)\bigr)\Delta\xi=0,\quad t\in\mathbb{T}, \end{aligned}$$
where \(\delta(t,\xi)\ge t\), \(\tau(t)\le t\) and \(\mathbb{T}=\left.[2, \infty)\right._{\mathbb{T}}\). Here we have
\(\gamma=\frac{1}{3}\), \(r(t)=p(t)=\frac{1}{t}\), and \(q(t)=\frac {1}{(t-1)^{\frac{1}{3}}}\);
\(\int_{2}^{\infty}r^{-\frac{1}{\gamma}}(s)\Delta s=\int_{2}^{\infty}s^{3}\Delta s=\infty\), \(g_{\xi}(t,T)=1\);
\(\int_{a}^{b}[1-p(\delta(t,\xi))]^{\gamma}g_{\xi}^{\gamma}(t,T)\Delta\xi =\int_{a}^{b}[1-\frac{1}{\delta(t,\xi)}]^{\frac{1}{3}}\Delta\xi\ge\int_{a}^{b}[1-\frac{1}{t}]^{\frac{1}{3}}\Delta\xi =[1-\frac{1}{t}]^{\frac{1}{3}}(b-a)\).
Hence (H1)-(H3) hold. With \(z(t)=1\), we see that for sufficiently large \(T\in\mathbb{T}\),
$$\begin{aligned} \limsup_{t\rightarrow\infty}\int^{t}_{T}Q(s,T) \Delta s \ge&\limsup_{t\rightarrow\infty}[b-a]\int^{t}_{T} \frac{1}{(s-1)^{\frac {1}{3}}}\biggl(1-\frac{1}{s}\biggr)^{\frac{1}{3}}\Delta s\\ \ge&\limsup_{t\rightarrow\infty}[b-a]\int^{t}_{T} \frac{1}{(s-1)^{\frac {1}{3}}}(s-1)^{\frac{1}{3}}\frac{1}{s^{\frac{1}{3}}}\Delta s\\ \ge&\limsup_{t\rightarrow\infty}[b-a]\int^{t}_{T} \frac{1}{s^{\frac {1}{3}}}\Delta s=\infty. \end{aligned}$$
Hence condition (c) of Theorem 3.1 is satisfied.
By Theorem 3.1, every solution \(y(t)\) of Eq. (4.1) is oscillatory.
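As a purely illustrative check (under the simplifying assumption \(\mathbb{T}=[2,\infty)\subset\mathbb{R}\), where the Δ-integral reduces to the ordinary Riemann integral), the partial integrals of the lower bound \(s^{-1/3}\) for \(Q(s,T)/(b-a)\) computed above indeed grow without bound, as condition (c) of Theorem 3.1 with \(z(t)=1\) requires:

```python
import numpy as np

T = 2.0
for t in (1e2, 1e4, 1e6):
    s = np.linspace(T, t, 400001)
    # Riemann sum of s**(-1/3) over [T, t]
    print(t, np.sum(s ** (-1 / 3)) * (s[1] - s[0]))
# the partial integrals grow like (3/2) * t**(2/3), hence diverge
```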
Example 4.2
Consider the equation
$$\begin{aligned} \biggl(\frac{1}{(t+\sigma(t))^{3}}\bigl(\bigl(y(t)+Ay\bigl(\tau(t)\bigr) \bigr)^{\Delta}\bigr)^{3} \biggr)^{\Delta }+\int _{a}^{b}t^{2}y^{3}\bigl( \delta(t,\xi)\bigr) \Delta\xi=0,\quad t\in\mathbb {T}, \end{aligned}$$
where \(1>A\ge0\), \(\tau(t)\le t\) and \(\mathbb{T}=\left.[1,\infty)\right._{\mathbb{T}}\). Here we have
\(\gamma=3\), \(r(t)=\frac{1}{(t+\sigma(t))^{3}}\), \(p(t)=A\), \(\delta (t,\xi)=t-\xi< t\), and \(q(t)=t^{2}\);
\(\int(s+\sigma(s))\Delta s =t^{2}+c\), \(\int_{1}^{\infty}r^{-\frac {1}{\gamma}}(s)\Delta s=\infty\), \(\beta(\delta(t,\xi),T)=\int_{T}^{t-\xi}(s+\sigma(s))\Delta s>\int_{T}^{t-\xi} s\Delta s>T(t-\xi-T)\), and \(\beta(t,T)=\int_{T}^{t}(s+\sigma(s))\Delta s=t^{2}-T^{2}\);
\(\int_{a}^{b}[1-p(\delta(t,\xi))]^{\gamma}g_{\xi}^{\gamma}(t,T)\Delta\xi >\frac{ T^{3}[1-A]^{3}}{(t^{2}-T^{2})^{3}}\int_{a}^{b}(t-\xi-T)^{3}\Delta\xi >\frac{T^{3}[1-A]^{3} }{(t^{2}-T^{2})^{3}}(t-b-T)^{3}(b-a)\).
$$\begin{aligned}[b] \limsup_{t\rightarrow\infty}\int^{t}_{T}Q(s,T) \Delta s &\ge\limsup_{t\rightarrow\infty}T^{3}[1-A]^{3}(b-a) \int^{t}_{T}s^{2}\frac {1}{(s^{2}-T^{2})^{3}}(s-b-T)^{3} \Delta s\\ &\ge\limsup_{t\rightarrow\infty}T^{3}[1-A]^{3}(b-a)\int ^{t}_{T}\frac {1}{s}\biggl(1- \frac{b}{s}-\frac{T}{s}\biggr)^{3}\Delta s=\infty. \end{aligned} $$
Hence condition (a) of Theorem 3.2 is satisfied. By Theorem 3.2, every solution \(y(t)\) of Eq. (4.2) is oscillatory.
Wu, HW, Zhuang, RK, Mathsen, RM: Oscillation criteria for second-order nonlinear neutral variable delay dynamic equations. Appl. Math. Comput. 173, 321-331 (2006)
Zhang, SY, Wang, QR: Oscillation of second-order nonlinear neutral dynamic equations on time scales. Appl. Math. Comput. 216, 2837-2848 (2010)
Saker, SH: Oscillation criteria for a second-order quasilinear neutral functional dynamic equation on time scales. Nonlinear Oscil. 13, 407-428 (2011)
Saker, SH, O'Regan, D: New oscillation criteria for second-order neutral functional dynamic equations via the generalized Riccati substitution. Commun. Nonlinear Sci. Numer. Simul. 16, 423-434 (2011)
Thandapani, E, Piramanantham, V: Oscillation criteria of second order neutral delay dynamic equations with distributed deviating arguments. Electron. J. Qual. Theory Differ. Equ. 2010, 61 (2010)
Candan, T: Oscillation of second-order nonlinear neutral dynamic equations on time scales with distributed deviating arguments. Comput. Math. Appl. 62, 4118-4125 (2011)
Candan, T: Oscillation criteria for second-order nonlinear neutral dynamic equations with distributed deviating arguments on time scales. Adv. Differ. Equ. 2013, Article ID 112 (2013)
Saker, SH: Oscillation of superlinear and sublinear neutral delay dynamic equations. Commun. Appl. Anal. 12, 173-187 (2008)
Saker, SH, Agarwal, RP, O'Regan, D: Oscillation results for second-order nonlinear neutral delay dynamic equations on time scales. Appl. Anal. 16, 349-360 (2007)
Saker, SH, O'Regan, D, Agarwal, RP: Oscillation theorems for second-order nonlinear neutral delay dynamic equations on time scales. Acta Math. Sin. Engl. Ser. 24, 1409-1432 (2008)
Saker, SH: Hille and Nehari types oscillation criteria for second-order neutral delay dynamic equations. Dyn. Contin. Discrete Impuls. Syst., Ser. B, Appl. Algorithms 16, 349-360 (2009)
Agarwal, RP, Bohner, M: Basic calculus on time scales and some of its applications. Results Math. 35, 3-22 (1999)
Akin-Bohner, E, Bohner, M, Saker, SH: Oscillation criteria for a certain class of second order Emden-Fowler dynamic equations. Electron. Trans. Numer. Anal. 27, 1-12 (2007)
Bohner, M, Peterson, A: Dynamic Equations on Time Scales: An Introduction with Applications. Birkhäuser, Boston (2001)
Hilger, S: Analysis on measure chain - a unified approach to continuous and discrete calculus. Results Math. 18, 18-56 (1990)
Hilger, S: Ein Maßkettenkalkül mit Anwendung auf Zentrumsmannigfaltigkeiten. PhD thesis, Universität Würzburg (1988)
Zhang, SY, Wang, QR: Oscillation criteria for second-order nonlinear dynamic equations on time scales. Abstr. Appl. Anal. 2012, Article ID 743469 (2012)
Zhang, SY, Wang, QR: Interval oscillation criteria for second-order forced functional dynamic equations on time scales. Discrete Dyn. Nat. Soc. 2014, Article ID 684068 (2014)
This work was supported by the NNSF of P.R. China (No. 11271379), Foundation for Technology Innovation in Higher Education of Guangdong, P.R. China (No. 2013KJCX0136) and Foundation for Humanities and Social Science in Ministry of Education of P.R. China (No. 14YJC790141).
Department of Mathematics, Guangdong University of Finance, Yingfu Road 527, Guangzhou, 510520, P.R. China
Shao-Yan Zhang
School of Mathematics and Computational Science, Sun Yat-sen University, West Xingang Road 135, Guangzhou, 510275, P.R. China
Qi-Ru Wang
Correspondence to Qi-Ru Wang.
All authors completed the paper together. All authors read and approved the final manuscript.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Zhang, SY., Wang, QR. Oscillation of second-order nonlinear neutral dynamic equations with distributed deviating arguments on time scales. Adv Differ Equ 2015, 7 (2015). https://doi.org/10.1186/s13662-014-0337-y
neutral dynamic equations on time scales
distributed deviating arguments
generalized Riccati technique
Ethnobotanical investigation on medicinal plants in Algoz area (South Kordofan), Sudan
Tahani Osman Issa, Yahya Sulieman Mohamed, Sakina Yagi, Reem Hassan Ahmed, Telal Mohammed Najeeb, Abdelrafie Mohamed Makhawi and Tarig Osman Khider
The inhabitants of western Sudan use traditional medicine for the treatment of various ailments due to a lack of medical doctors and the unaffordable prices of pharmaceutical products. The present study is the first documentation of traditional knowledge on the medicinal uses of plants by healers in the Algoz area (South Kordofan), Sudan.
Ethnobotanical data were collected from March to November 2015 using semi-structured interviews with 30 healers (24 male and 6 female) living in the investigated area. Quantitative indices such as use categories, use value (UV) and informant consensus factor (ICF) were used to evaluate the importance of medicinal plant species.
A total of 94 medicinal plants, belonging to 45 families and 81 genera, were recorded in the study area. The most represented family is Leguminosae with 20 species, followed by Combretaceae (6 species), Rubiaceae (5 species) and Asteraceae (4 species). The reported species were herbs (43%), trees (28%), shrubs (22%), climbers (4%) and parasites (3%). Root and stem (21% each) were the most frequently used plant parts. The majority of remedies are administered orally (67%), with infusion (36%) and maceration (32%) being the most common methods of preparation. The highest ICF (0.9) was reported for poisonous animal bites, followed by urinary system diseases (0.89), blood system disorders (0.88) and gynaecological diseases (0.87). Anastatica hierochuntica, Ctenolepis cerasiformis, Echinops longifolius, Cleome gynandra, Maerua pseudopetalosa, Martynia annua, Oldenlandia uniflora, Opuntia ficus-indica, Solanum dubium, Sonchus cornutus, Tribulus terrestris and Drimia maritima were reported for the first time in this study.
The number of medicinal plants reported in this paper provides evidence that the Algoz area has a high diversity of medicinal plants, which will continue to play an important role in the healthcare system of the study area.
In 2011, Sudan split into two countries, with one third of the country being proclaimed a new state named the "Republic of South Sudan" and the remaining area retaining the older name "the Republic of Sudan" [1]. In its former integral state, Sudan was the largest country in Africa and the tenth in the world, with an area of 2.5 million square kilometers spanning diverse terrains and climatic zones [1]. This bears directly on the wide diversity of vegetation, from desert and semi-desert in the north, through the equatorial zone in the central part, to the extreme humid equatorial zone in the south. Such prevailing conditions favoured diverse vegetation consisting of 3137 documented species of flowering plants belonging to 170 families and 1280 genera, 15% of which are endemic [2]. A large number of these plants make a vital contribution to human health care needs throughout the country. Medicinal and aromatic plants and their derivatives represent an integral part of life in Sudan. Communities in different regions of Sudan use traditional medicine for the treatment of various ailments due to a lack of medical doctors and the unaffordable prices of pharmaceutical products, besides their faith in the medicinal value of traditional medicine [3]. It has been estimated that only 11% of the population has access to formal health care [1].
The geographical position of Sudan has made it a multicultural melting pot of diverse traditional knowledge and has facilitated the exchange of knowledge about medicinal plants with other countries, from Africa to the Middle East and Asia [4].
Despite the varied flora and socio-cultural diversity of Sudan, there is a far-reaching lack of written information on the traditional use of medicinal plants [4], so documentation of plants used as traditional medicines in Sudan is warranted. The aim of this study was to investigate the traditional knowledge on medicinal uses of plants by local healers in the Algoz area (South Kordofan), Sudan.
Study area
Algoz area is situated in the northern part of South Kordofan state; its borders are Northern Kordofan state to the north and northeast, West Kordofan state to the northwest, Dellang locality to the south and Habella locality to the southeast (Fig. 1). It is located between latitudes 12°–12° 30′ N and longitudes 29° 48′–30° E, at 622 m above sea level, with a total area of 35,000 km2. Short grass and short scattered trees prevail. The area is associated with exposed rocks crossing central Sudan and forming a surface water divide. The White Nile, the main tributary of the River Nile, bounds the hydrologic system to the east, while the highlands of the Kordofan Plateau and the Nuba Mountains bound it to the west and the south, respectively. Khor Abu Habil is a major seasonal wadi that crosses the study area and flows from west to east. The wadi disappears into the sand dunes a few kilometers before reaching the White Nile. The climate in the area is semi-arid, with long hot summers (March–September) and short mild winters (December–February). Seasonal rainfall occurs only during summer (June–September) and varies between 200 mm/year in the north and 450 mm/year in the south [5].
a Sudan map showing the South Kordofan State (red) and b Algoz locality (red)
Algoz area has a mixed population of tribes such as Dar Shungool, Gaboosh, Dar Bati, Albargo, Albarno, Flata and some Arabic nomads. They work mainly in agriculture, animal grazing and trade [6].
Data collection and plant identification
Ethnobotanical data were collected from March to November 2015. Information about the medicinal use of plants was collected by carrying out semi-structured interviews with 30 healers (24 male and 6 female) living in the investigated area. The questionnaire was designed to collect data on (i) local names of the plants, (ii) ailments treated by the plant, (iii) plant parts used, (iv) condition of the plant material (dried or fresh) and (v) modes of preparation and administration. Some social factors like the name, age, occupation and education level of the interviewed person were also recorded. Also, the geographic locality and date of the interview were recorded. Plant specimens were collected for taxonomic identification using keys of written floras such as Broun and Massey [7], Andrews [8,9,10,11], Ross [12], Hutchinson and Dalziel [13], Maydell [14] and Elamin [15]. Voucher specimens were deposited at the Herbarium of Institute of Medicinal and Aromatic Plants, National Centre for Research, Sudan (MAPTMR-H). The botanical names and plant families are given according to the standards of the plant list (www.ipni.org/).
Ethnobotanical data analysis
Data analysis was carried out by using both the classical ethnobotanical systematic investigation and a numerical quantitative approach in order to evaluate the importance of the mentioned plant species in the investigated area. The quantitative study was carried out by calculating the following ethnobotanical indices:
Use categories
The medicinal plant uses were classified into categories following the standard developed by Cook [16]. Each mention of a plant as "used" was considered one "use report". If one informant used a plant to treat more than one disease in the same category, it was considered a single use report [17].
Use value (UV)
The relative importance was calculated employing the use value [18], a quantitative measure for the relative importance of species known locally:
$$ \mathrm{UV}=\frac{\sum U_i}{n} $$
where $U_i$ is the number of use reports cited by each informant for a given species and n refers to the total number of informants.
Use values are high when there are many use reports for a plant, implying that the plant is important, and approach zero (0) when there are few reports related to its use. The use value, however, does not distinguish whether a plant is used for single or multiple purposes.
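As a quick numerical illustration of how UV behaves, the short Python sketch below computes the index from a hypothetical set of use reports; the function name and the counts are invented for illustration and are not survey data.

```python
def use_value(use_reports_per_informant):
    """UV for one species: sum of use reports over informants, divided by n."""
    n = len(use_reports_per_informant)
    return sum(use_reports_per_informant) / n

# Hypothetical survey of 30 informants: 20 cite two uses, 5 cite three, 5 cite none.
reports = [2] * 20 + [3] * 5 + [0] * 5
print(f"UV = {use_value(reports):.2f}")  # (40 + 15 + 0) / 30 = 1.83
```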
Informant consensus factor
To test homogeneity of knowledge, the informant consensus factor was used [19]:
$$ \mathrm{ICF}=\frac{N_{\mathrm{ur}}-{N}_{\mathrm{t}}}{\left({N}_{\mathrm{ur}}-1\right)} $$
where Nur refers to the number of use reports for a particular use category and Nt refers to the number of taxa used for a particular use category by all informants. Informant consensus factor (ICF) values are low (near 0) if plants are chosen randomly or if there is no exchange of information about their use among informants and approach one (1) when there is a well-defined selection criterion in the community and/or if information is exchanged between informants [20].
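The ICF arithmetic can be checked directly against figures reported later in this paper (77 use reports and 8 taxa for poisonous animal bites give 0.91). A minimal Python sketch, with an invented function name:

```python
def icf(n_ur, n_t):
    """Informant consensus factor: (N_ur - N_t) / (N_ur - 1)."""
    return (n_ur - n_t) / (n_ur - 1)

# Poisonous animal bites in this study: 77 use reports, 8 taxa.
print(f"ICF = {icf(77, 8):.2f}")  # (77 - 8) / 76 = 0.91
```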
Medicinal plant diversity
A total of 94 medicinal plants, which belong to 45 families and 81 genera, were recorded in the study area. Results provide the following information for each species: scientific name, botanical family, local common name, plant habitat, plant part used, disease treated, route of administration and use value (Table 1). The most represented family is Leguminosae with 20 species, followed by Combretaceae (6 species), Rubiaceae (5 species), Asteraceae (4 species), Lamiaceae, Poaceae, Tiliaceae and Zygophyllaceae (3 species each), and Apocynaceae, Asclepiadaceae, Brassicaceae, Burseraceae, Cleomaceae, Capparaceae, Malvaceae and Meliaceae (2 species each); the other families were represented by one species each. This dominance of Leguminosae is characteristic of the Sudan flora. The most commonly used species is Sarcocephalus latifolius with a UV of 2.07, followed by Guiera senegalensis (UV 1.87), Hydnora abyssinica (UV 1.83) and Geigeria alata (UV 1.67). Plants that treat three or more ailments (86%) represent the majority, followed by plants that treat a single ailment (8%) and those that treat two ailments (6%).
Table 1 Ethnomedicinal plants used in the Algoz region (South Kordofan)/western Sudan
Habitat of the plants
Analysis of the data based on habitat showed that the reported species comprise herbs (43%), trees (28%), shrubs (22%), climbers (4%) and parasites (3%) (Fig. 2). The majority of medicinal plants are collected from the wild; only 11% are cultivated, and 1% are purchased from the market (Table 1).
Habitat of medicinal plants in the study area
Parts of medicinal plants used
Data on the different plant parts used in traditional medicine are shown in Fig. 3. The most used parts were the root and stem (21% each), followed by the fruit (15%), whole plant (14%), seed (12%), leaf (11%), gum/latex, bulb/corm and heartwood (2% each) and flower (1%). In some cases different parts of the same plant are used for the treatment of different diseases.
Percentage of plant parts used
Method of preparation
A majority of remedies are administered orally (67%), with infusion (36%) and maceration (32%) being the most used methods. Some prescriptions can be prepared by either method: infusion or maceration represented 13%, while decoction represented 11% of preparations. Dried powder or freshly collected plant parts are also used. Other prescriptions are used externally (33%) and applied as dry powder (29%), rub (23%), smoke (23%), poultice (20%) or as a wash (6%) (Table 2). Most of these preparations use water as the extraction solvent. Some herbalists used other adjuvants like honey, sugar, salt, milk, sour milk, yoghurt, ajeen (fermented dough), nisha (light porridge), atroon (sodium bicarbonate), bee wax, wax of goat, and olive and sesame oil.
Table 2 Mode of preparations of medicinal plants in the study area
Medicinal plants used in combination
For the treatment of a particular ailment, herbalists sometimes use more than one plant. For example, Allium sativum bulb is mixed with Zingiber officinale rhizome and applied to the anus for the treatment of haemorrhoids. A potion is prepared from the seeds of Trigonella foenum-graecum, curcuma, Nigella sativa and bee honey for the treatment of uterus inflammation. Root of Tinospora bakis is mixed with Syzygium aromaticum (clove) for the treatment of malaria. Atroon is added to some preparations, like those of Ziziphus spina-christi and Acacia oerfota, for the treatment of dysentery and toothache respectively.
Quantitative analyses of ethnomedicinal data
Fifteen ailment categories were identified. The ICF was calculated for each ailment category, and the values ranged from 0.50 to 0.91 (Table 3). The highest ICF (0.91) was reported for poisonous animal bites with 8 species and 77 use reports, followed by urinary system diseases (0.89) with 17 species and 156 use reports, blood system disorders (0.88) with 14 species and 116 use reports and gynaecological diseases (0.87) with 12 species and 86 use reports. The high ICF for poisonous animal bites can probably be related to the hard and dangerous environmental conditions. The category of plants used for the treatment of eye diseases has the lowest degree of consensus (0.50), as only three informants mentioned ailments in this category.
Table 3 Diseases based on categories and informant consensus factor (ICF)
Most frequently cited plant species and medicinal uses
In this study, the most cited plants, those with 20 or more citations for a specific ailment, were Guiera senegalensis (57 citations), mainly used for the treatment of malaria (22 citations) and kidney disorders (20 citations). This is followed by Hydnora abyssinica (55 citations), used in the treatment of gastrointestinal system diseases (mainly diarrhoea and dysentery, 40 citations); Geigeria alata (50 citations), used mainly for the treatment of diabetes (20 citations) and hypertension (17 citations); Kigelia africana (32 citations), with 28 citations for the treatment of breast swellings; and Carissa spinarum (28 citations) for the envy eye.
Medicinal plants and the associated knowledge
Thirty healers (24 male and 6 female) were interviewed and divided into five age groups (20–30, 31–40, 41–50, 51–60 and > 60). Analysis of the healers' ages revealed that the dominant age group among men is 41–50, while among women, who were few in number, it is > 60 (Fig. 4).
Age group distribution of the traditional healers interviewed
In this study, the most cited plants, Guiera senegalensis, Hydnora abyssinica, Geigeria alata, Kigelia africana and Carissa spinarum, were previously reported with the same traditional uses in ethnobotanical studies from other regions of Sudan. For example, Guiera senegalensis was reported by EL-Kamali [3] and Suleiman [21] for the treatment of malaria. Hydnora abyssinica (H. johannis) for the treatment of diarrhoea and dysentery and Kigelia africana for the treatment of breast swellings were also reported by Musa et al. [22]. Geigeria alata for the treatment of diabetes was reported by EL-Kamali [3] and Suleiman [21]. Carissa spinarum (C. edulis) was reported by EL-Kamali [3] for charm and the treatment of madness. Kigelia africana was reported by Doka and Yagi [23] for swollen mastitis.
The high frequency of citation of these medicinal plants can be explained by the fact that they are the best known and have long been used by the majority of informants, representing a source of reliability. In fact, many biological activity and phytochemical evaluations have been carried out on these plants. For example, Traore-Keita et al. [24] reported that the chloroform extract of Guiera senegalensis roots exhibited pronounced antimalarial activity. They isolated two alkaloids, harman and tetrahydroharman, that displayed high antimalarial activity (IC50 (50% inhibition) lower than 4 μg/mL) and low toxicity against the human leukemia monocytic cell line (THP1). Yagi et al. [25] found that Hydnora johannis roots have no activity against the bacterial species mainly responsible for diarrhoea but are rich in phenols. They suggested that the curative potency of H. johannis roots is not mainly associated with antibacterial agent(s) acting against the bacterial species responsible for dysentery or diarrhoea, but might be attributed to the role of tannins, which denature proteins to form protein tannate, thereby making the intestinal mucosa more resistant, reducing intestinal transit and acting as a barrier against toxins exerted by bacteria. The antidiabetic potential of Geigeria alata root was evaluated, and diabetic rats dosed with 250 mg/kg of aqueous methanolic extract were found to have a significantly (p < 0.05) decreased blood glucose level, closer to that of non-diabetic rats, together with improved β-cell function and antioxidant status [26]. Kigelia africana was found to suppress proliferation of breast (MCF7) [27], human colon adenocarcinoma (Caco-2), human embryonic kidney (HEK-293) [28] and HeLa cervical cancer cells [29].
Comparative review of traditional usages of reported species with previous studies from Sudan
A comparative review with previous reports [3, 21,22,23, 30,31,32,33] from different parts of Sudan was performed to identify the new medicinal plants and new uses reported in this study (Table 4). The plants reported by Suleiman [21] as traditionally used by communities of the Northern Kordofan region included a total of 44 plant species; 22 species were reported with the same traditional uses in this study, while 2 species, Blepharis linariifolia and Catunaregam nilotica (Xeromphis nilotica, Randia nilotica), were reported with different uses. EL-Kamali [3] reported 48 plant species traditionally used in North Kordofan, of which 15 species had the same traditional uses reported in this study and 5 species, Acacia nilotica subsp. adstringens, Aristolochia bracteolata, Cissus quadrangularis, Dichrostachys cinerea and Sarcocephalus latifolius (Nauclea latifolia), had different uses. Doka and Yagi [23] reported 49 plant species traditionally used in West Kordofan, of which 16 species had the same traditional uses reported in this study, while 9 species were reported in this study with different uses: Acacia senegal, Acacia seyal, Arachis hypogaea, Balanites aegyptiaca, Cissus quadrangularis, Combretum aculeatum, Grewia flavescens, Tamarindus indica and Catunaregam nilotica. Musa et al. [22] reported 53 plant species traditionally used in the Blue Nile State, southeastern Sudan, of which 18 species had the same traditional uses reported in this study and 13 species had different uses: Acacia senegal, Acacia seyal, Anogeissus leiocarpus, Carissa spinarum (C. edulis), Cissus quadrangularis, Grewia villosa, Lannea fruticosa, Piliostigma reticulatum, Senna occidentalis, Strychnos spinosa, Tephrosia uniflora, Terminalia laxiflora and Ximenia americana. Moreover, El Ghazali et al. [30,31,32,33], in their books of Sudanese medicinal plants, documented some of these plants for the same or very similar usages. In all, there are 99 new traditional uses for previously reported medicinal plants. For example, the whole plant of Striga hermonthica was previously reported to treat diabetes, but in this study it is also used for menstrual cramps. The fruit of Senna occidentalis is reported to treat eczema beside its common use as a laxative. Plicosepalus acaciae is commonly used to enhance wound healing and as a lactagogue, but in this study the smoke fumigant of the seeds is reported to repel insects from the ear.
Table 4 Comparative review of traditional usages of reported species with previous studies from Sudan
New species and new uses for species are reported for the first time in this study. For example, Anastatica hierochuntica, Ctenolepis cerasiformis, Echinops longifolius, Cleome gynandra, Maerua pseudopetalosa, Martynia annua, Oldenlandia uniflora, Opuntia ficus-indica, Solanum dubium, Sonchus cornutus, Tribulus terrestris and Drimia maritima had not been mentioned in any previous study of traditional Sudanese medicine. Acanthorrhinum ramosissimum, Cleome viscosa and Setaria acromelaena, which were used for the evil eye, were also reported for the first time.
The majority of the healers declared that they had learned about medicinal plants from their parents or grandparents. The lack of systematic documentation of medicinal plant knowledge, which appears to occur in many parts of the world, may contribute to the loss of this knowledge, particularly for plants that are neglected or non-preferred [34,35,36].
The number of medicinal plants reported in this paper indicates that the Algoz area harbours a high diversity of medicinal plants that will continue to play an important role in the healthcare system in the study area. Evaluation of their claimed pharmacological potential, efficacy and toxicity profile is essential. Moreover, the present study could contribute to conserving such a rich heritage and provide valuable information towards writing the Sudanese pharmacopoeia.
Conservation of this traditional knowledge is very important. The progressing mass destruction of wild vegetation for various purposes may accelerate the disappearance of medicinal plants. This in turn may have profound consequences for the role of traditional medicine in human health. Furthermore, the drop in the availability of raw materials due to the depletion of natural resources affects the discovery of potential drugs [37]. Thus, raising community awareness about conservation and sustainable utilization of traditional medicinal plants is vital for the entire plant biodiversity [22]. Modern biotechnical approaches like genetic engineering, micropropagation via tissue encapsulation of propagules, tissue culture and fermentation should be applied to improve yield and modify the potency of medicinal plants [38].
ICF: informant consensus factor
UV: use value
Mohammed AMA. Research advances in Sudanese traditional medicine: opportunities, constrains and challenges. Altern Integ Med. 2013;2:10.
Khalid H, Abdalla WE, Abdelgadir H. Gems from traditional north-African medicine: medicinal and aromatic plants from Sudan. Nat Prod Bioprospect. 2012;2:92–103.
EL-Kamali HH. Ethnopharmacology of medicinal plants used in north Kordofan (western Sudan). Ethnobot Leaflets. 2009;13:89–97.
Saeed MEM, Abdelgadir H, Sugimoto Y, Khalid HE, Efferth T. Cytotoxicity of 35 medicinal plants from Sudan towards sensitive and multidrug-resistant cancer cells. J Ethnopharmacol. 2015;174:644–58.
Abdalla OAE. Aquifer systems in Kordofan, Sudan: subsurface lithological model. S Afr J Geol. 2006;109:585–98.
Anonym. South Kordofan State, Sudan Ministry of the Cabinet Affairs, 2016 (In Arabic).
Broun AF, Massey RE. Flora of the Sudan. London: Thomas Murby and Co, 1 Fleet Lane, E.C.4; 1929.
Andrews FW. The vegetation of the Sudan. In: Tothill JD, editor. Agriculture in the Sudan. UK: Oxford University Press; 1948.
Andrews FW. The flowering plants of the Anglo-Egyptian Sudan, vol. 1. Arbroath: Buncle Co. Ltd.; 1950.
Ross JH. Flora of South Africa. In: Part I. The government printer Pretoria, vol. 16; 1975.
Hutchinson J, Dalziel JM. Flora of west tropical Africa. 1st ed. Millbank: Crown Agents for Overseas Governments and Administration; 1968.
Maydell HJV. Trees and shrubs of the Sahel, their characteristics and uses. Germany: GTZ; 1990.
Elamin HM. Trees and shrubs of the Sudan. U.K: Ithaca Press Exeter; 1990.
Cook FEM. Economic botany data collection standard. Kew: Royal Botanic Gardens; 1995.
Treyvaud AV, Arnason JT, Maquin P, Cal V, Vindas PS, Poveda L. A consensus ethnobotany of the Q'eqchi' Maya of southern Belize. Econ Bot. 2005;59:29–42.
Phillips O, Gentry AH, Reynel C, Wilkin P, Galvez DBC. Quantitative ethnobotany and Amazonian conservation. Conserv Biol. 1994;8:225–48.
Trotter RT, Logan MH. Informant consensus: a new approach for identifying potentially effective medicinal plants. In: Etkin NL, editor. Plants in indigenous medicine and diet. Bedford Hills: Redgrave Publishing Company; 1986.
Gazzaneo LRS, Lucena RFP, Albuquerque UP. Knowledge and use of medicinal plants by local specialists in a region of Atlantic Forest in the state of Pernambuco (Northeastern Brazil). J Ethnobiol Ethnomed. 2005;1:9.
Suleiman MHA. An ethnobotanical survey of medicinal plants used by communities of Northern Kordofan region, Sudan. J Ethnopharmacol. 2015;176:232–42.
Musa MS, Abdelrasoo FE, Elsheikh EA, Ahmed LAMN, Mahmoud AE, Yagi SM. Ethnobotanical study of medicinal plants in the Blue Nile State, south-eastern Sudan. J Med Plant Res. 2011;5(17):4287–97.
Doka IG, Yagi SM. Ethnobotanical survey of medicinal plants in west Kordofan (western Sudan). Ethnobot Leaflets. 2009;13:1409–16.
Traore-Keita F, Gasquet M, Di Giorgio C, Ollivier E, Delmas F, Keita A, Doumbo O, Balansard G, Timon-David P. Antimalarial activity of extracts and alkaloids isolated from six plants used in traditional medicine in Mali and Sao Tome. Phytother Res. 2002;16(7):646–9.
Yagi S, Chrétien F, Duval RE, Fontanay S, Maldini M, Henry M, Chapleur Y, Laurain-Mattar D. Antibacterial activity, cytotoxicity property and chemical constituents of Hydnora johannis roots. South Afr J Bot. 2012;78:228–34.
Hafizur RM, Babiker R, Yagi S, Chishti S, Kabir N, Choudhary MI. The antidiabetic effect of Geigeria alata is mediated by enhanced insulin secretion, modulation of β-cell function, and improvement of antioxidant activity in streptozotocin-induced diabetic rats. J Endocrinol. 2012;214:329–35.
Fouche G, Cragg GM, Pillay P, Kolesnikova N, Maharaj VJ, Senabe J. In vitro anticancer screening of South African plants. J Ethnopharmacol. 2008;119(3):455–61.
Chivandi E, Cave E, Davidson BC, Eriwanger KH, Mayo D, Madziva MT. Suppression of Caco-2 and HEK-293 cell proliferation by Kigelia africana, Mimusops zeyheri and Ximenia caffra seed oils. In Vivo. 2012;26(1):99–105.
Arkhipov A, Sirdaarta J, Rayan P, McDonnell PA, Cock IE. An examination of the antibacterial, antifungal, antigiardial and anticancer properties of Kigelia africana fruit extracts. Pharmacognosy Commun. 2014;4(3):62–76.
El Ghazali GB. Medicinal plants of the Sudan. Part I. Medicinal plants of Arkawit. Sudan: Khartoum University Press; 1987.
El Ghazali GB, El Tohami MS, El Egami AB. Medicinal plants of the Sudan. Part III. Medicinal plants of the White Nile Province. Sudan: Khartoum University Press; 1994.
El Ghazali GB, El Tohami MS, El Egami AB, Abdalla WS, Mohamed MG. Medicinal plants of the Sudan. Part IV. Medicinal plants of Northern Kordofan. Khartoum: Omdurman Islamic University Press; 1997.
El Ghazali GE, Aballa WE, Khalid HE, Khalafalla MM, Hamad AD. Medicinal plants of the Sudan, Part V. Medicinal plants of Ingessana. Khartoum: Sudan Currency Printing Press; 2003.
Fekadu F. Ethiopian traditional medicine, common medicinal plants in perspective, Sioux City, IA, (2001).
Brouwer N, Liu Q, Harrington D, Kohen J, Vemulpad S, Jamie J, Randall M, Randall D. An ethnopharmacological study of medicinal plants in New South Wales. Molecules. 2005;10:1252–62.
Bussmann RW, Sharon D. Traditional medicinal plant use in Loja province, southern Ecuador. J Ethnobiol Ethnomed. 2006;2:44.
Chivian E. Biodiversity: its importance to human health center for health and the global environment. USA: Harvard Medical School; 2002.
Chen S-L, Yu H, Luo H-M, Wu Q, Li C-F, Steinmetz A. Conservation and sustainable use of medicinal plants: problems, progress, and prospects. Chin Med. 2016;11:37.
We would like to thank all the traditional healers and local people of the study area for sharing their knowledge, cooperation and hospitality. The authors are grateful to Dr. Migdad Elsir Shuaib (Department of Geology, Faculty of Science, University of Khartoum) for the geographical and geological information.
This study was financed by the University of Bahri, Sudan, Code No: U of B-1-2015.
We have already included all data in the manuscript collected during the field surveys.
College of Applied and Industrial Sciences, University of Bahri, P.O. Box 1606, Khartoum, Sudan
Tahani Osman Issa
, Reem Hassan Ahmed
, Telal Mohammed Najeeb
, Abdelrafie Mohamed Makhawi
& Tarig Osman Khider
Institute of Medicinal and Aromatic Plant National Centre for Research, Khartoum, Sudan
Yahya Sulieman Mohamed
Department of Botany, Faculty of Science, University of Khartoum, P.O. Box 11115, Khartoum, Sudan
Sakina Yagi
TOI and YS conducted the field survey and collected the data, SY did the analysis and wrote the first draft of the manuscript, RHA and TMN provided support in sampling and plant species identification, AMM provided technical support and helped in the write-up and revision and TOK designed the study and supervised the project. All authors read and approved the final manuscript.
Correspondence to Tarig Osman Khider.
The present study is purely based on field surveys rather than human or animal trials.
Ethical guidelines of the International Society of Ethnobiology (http://www.ethnobiology.net/) were strictly followed.
Issa, T.O., Mohamed, Y.S., Yagi, S. et al. Ethnobotanical investigation on medicinal plants in Algoz area (South Kordofan), Sudan. J Ethnobiology Ethnomedicine 14, 31 (2018) doi:10.1186/s13002-018-0230-y
Exergetic Evaluation and Optimization of Combined Heat and Power (CHP) Plant of 20.7 MW Capacities under Varying Load Conditions: A Case Study
Shrikant M. Bapat* | Gururaj D. Gokak
Department of Mechanical Engineering, KLS, Gogte Institute of Technology, Belagavi, Karnataka 590008, India
smbapat6791@gmail.com
The main aim of this paper is to find the effect of the power to heat ratio on the exergy performance of combined heat and power (CHP) systems used in power plants. Many energy researchers have investigated and published work in terms of energy efficiency and its parametric characteristics. But from a thermodynamic point of view it is exergy, not energy, which reveals the more meaningful performance of a CHP system. In the present work a case study of a CHP system of 20.7 MW capacity is considered and analyzed on an exergy basis under varying load conditions, using experimental data taken from the plant. For 100 % PLF the optimum value of PHR in terms of TExDR and SSC is 0.546. Exergy analysis reveals that with a decrease in the value of PLF (plant load factor) the optimum value of PHR (power to heat ratio) also reduces.
bagasse, biomass combined heat and power, cogeneration, exergy analysis, sugar
India is a developing country, and with increasing industrialization coupled with population growth, the demand for power is rising rapidly. Most energy needs are primarily met by fossil fuels, and the demand for fossil fuels is ever increasing. But relying completely on energy from fossil fuels is not sustainable because of their regional depletion and associated impact on the environment. In this context renewable energy sources play a vital role. The Indian economy is agriculture driven, and sugarcane production and usage take a prominent place. Nowadays the sugar industry is largely insulated from energy crises, as the industry generates its own fuel in the form of bagasse. Steam generated by burning bagasse is used to generate power, and the exhaust steam is used for process heat. In fact, combined heat and power (CHP) or cogeneration systems are used in the sugar industry. The principal advantage of CHP systems is their ability to improve the efficiency of fuel use in the production of electrical and thermal energy. The efficiency of energy production can be increased from the current levels of 35 % to 55 % in conventional power plants to over 90 % in CHP systems [1]. The evaluation of a CHP system is a complex and demanding task; a complete performance analysis is essential both for the power plant constructor and for the end user. Evaluating a CHP system based on exergy rather than energy is quite useful. Exergy is useful for improving the efficiency of energy resource use. It quantifies the locations, types and magnitudes of wastes and losses, and represents quantitatively the 'useful' energy, or the ability to do or receive work [2].
A review of previously conducted studies shows that a lot has been done regarding CHP systems. CHP systems are gas turbine, diesel engine or steam turbine based (i.e. depending on the prime mover). F.F. Huang [3] examined three systems using state-of-the-art industrial gas turbines based on first law and second law analysis. It was found that for all three gas turbine cogeneration systems first law analysis is inadequate. E. Bilgen [4] analyzed a gas turbine based cogeneration system; the components considered for simulation of the results are the gas turbine and the heat recovery steam generator (HRSG), and an algorithm was developed to simulate these systems. The HRSG was found to be the least efficient component from the exergy point of view. Ozgur Balli and Haydar Aras [5] conducted a performance evaluation of a combined heat and power system driven by a micro gas turbine (MGTCHP). The energetic and exergetic efficiencies of the MGTCHP system are 75.99 % and 35.80 % respectively, and the exergy consumption was highest in the combustion chamber, with a value of 129.61 kW. Yilmaz Yoru et al. [6] performed energy and exergy analyses of a gas turbine based CHP system in a ceramic factory, utilizing actual operational data taken over a one-month period. The mean energetic and exergetic efficiency values of the CHP system were found to be 82.30 % and 34.70 % respectively. The available literature also reveals that exergy analysis has been performed for diesel engine based CHP systems. Aysegul Abusoglu and Mehmet Kanoglu [7, 8] performed exergy analysis of a 25.32 MW diesel engine powered CHP system. The components considered for the analyses are the compressor, intercooler, waste heat boiler, condenser, pump and diesel engine. The exergy destructions were found to be highest in the diesel engine: 83.32 % of the total exergy destruction in the overall system took place in the diesel engine, while 45.94 % of the total fuel exergy was destroyed there. The exergy efficiency of the diesel engine was found to be 40.4 %. A comparison of first law and second law analyses of the diesel engine based CHP system was also done [9]. At full load conditions the first law and second law efficiencies of the overall plant were found to be 44.20 % and 40.70 % respectively. Exergetically, the waste heat boiler was found to be the least efficient component, i.e. 11.40 %. This comparison of first law and second law analyses of a diesel engine based CHP system [9] revealed that exergy analysis is more valuable than energy analysis.
Many scientists and researchers have also carried out exergy analysis of steam turbine driven CHP systems. Ozgur Balli and co-researchers [10] performed exergy analysis of a CHP system installed in Eskisehir city of Turkey. The exergetic efficiency of the CHP system was calculated to be 38.16 % with 49,880 kW as electrical product, and the highest exergy consumption was found in the combustion chamber. The analysis was performed assuming fixed fuel power and utilized power by the CHP system. M. Siddhartha Bhatt and N. Rajkumar [11] performed energy analysis of combined heat and power systems in the cane sugar industry. An analysis was carried out from pure back pressure to pure condensing environments based on standard steam conditions in installations and efficiencies which are currently being achieved experimentally. It was found that as the heat to power ratio increases the efficiency of the steam turbine decreases; also, as the fraction of the extracted steam increased, the overall efficiency of the plant approached the boiler efficiency. S.C. Kamate and P.B. Gangawati [12] conducted exergy analysis of back pressure steam turbine (BPST) and condensing extraction steam turbine (CEST) based CHP systems in Indian sugar industries. It was found that the boiler is the main contributor to exergy destruction: it could utilize only 37 % of the chemical exergy, and nearly 63 % is lost in combustion irreversibility. A comparison of steam turbine, gas turbine and diesel engine based CHP systems has also been conducted based on exergy [13]. The exergy efficiencies for steam turbine, gas turbine and diesel engine based CHP systems were found to be 23.10 %, 22.60 % and 47.70 % respectively. It is shown clearly that diesel engine based CHP systems should be selected when the thermal demand is small compared to the electrical demand.
Based on the literature from the past, it appears that, as far as the authors' knowledge is concerned, no exergetic study of a CHP system (with a steam turbine as prime mover) with variable values of power to heat ratio (PHR) and plant load factor (PLF) has been carried out. The technological options available for cogeneration in a sugar industry are the extraction cum back pressure route, the extraction cum condensing route and the condensing route based on a dual fuel system [14]. Usually the sugar industry prefers the extraction cum condensing route due to its longer operation period even during the off-season. Hence in the present context a case study of a CHP system using an extraction cum condensing steam turbine in an Indian sugar industry of 20.7 MW capacity is proposed. Optimization of power generation systems is one of the most important subjects of the energy engineering field. It is a well-known fact that a power plant is designed to have maximum efficiency at full load (100 % PLF) conditions. But at full load conditions the waste exergy rates emitted to the environment (through flue gases and condensates) increase. In turn, at part load conditions the exergy destruction rates at the component level, and for the system as a whole, increase, but the waste exergy rates emitted to the environment decrease. Thus, operation of the plant is a thermodynamic trade-off between exergy destruction rates and waste exergy rates emitted to the environment.
In the present scenario of global warming and the greenhouse effect it is necessary to select environmentally optimal processes by using the concept of design for environment (DFE) [15]. This paper is an attempt to optimize the case study considered. The specific objectives are:
(1) To analyze exergetically each component of CHP system using actual operational data.
(2) To optimize the operation of CHP system in terms of exergy destruction rates and waste exergy rates in terms of PHR at various values of PLF.
The actual operational data used were logged at one-hour intervals over the whole operating/harvesting season of 170 days from the plant DCS (distributed control system). The variations in the PLF and PHR values arose primarily for the following two reasons:
(1) Variation in the sugarcane crushing rate, which in turn varied the process steam consumption from the turbine as well as the fuel (bagasse) supply rate to the boiler.
(2) Variation in the demand for power/ electricity from the grid, which in turn varied the PLF.
Thus, in the present study exergy analysis is carried out for various values of PLF and PHR, and an attempt is made to find an optimum value of PHR in terms of exergy at different values of PLF. The following section deals with the process description of the plant, followed by the analysis procedure employed.
2. Process Description of the Plant
The schematic diagram of the plant is shown in Figure 1. It consists of a boiler, turbine, surface condenser, de-aerator, high pressure heater (HPH), condensate extraction pump (CEP) and boiler feed pump (BFP). The fuel used is bagasse, which is a waste product of the sugar industry. The CHP plant is of the extraction cum condensing type and is integrated with a sugar mill in order to meet internal power and steam requirements; the excess is exported to the grid. The bagasse fuel is supplied by the same sugar mill of 5500 TCD (tons of cane crushed per day) capacity and generally has a moisture content of 50 %. The boiler is of the travelling grate type. Boiler efficiency for this technology is usually quoted on an LHV (lower heating value) basis, and around 25-30 % excess air is required. The combustion air is pre-heated in the air pre-heater (APH) to a temperature of 220 ℃ using flue gases. This air is admitted into the combustion chamber at state point 16.
The steam temperature and pressure generated in the boiler depend on the steam turbine specification, which in this case is 490±5 ℃ and 64 ata. This boiler operates at a steam/bagasse (S/B) ratio of 2.52. Steam at 9 ata pressure is bled from the turbine at state point 2 and used as process heat for distillery or ethanol production. Part of the steam is taken from this line to the high pressure heater (HPH) in order to increase the temperature of the feed water. Process heat is also extracted at state point 3 for the sugar production process at 3 ata; maximum process heat is drawn at state point 3 under heavy steam demand conditions. The remaining steam goes to the surface condenser, which is of the shell and tube type, utilizing cooling water from a forced circulation cooling tower. The saturation temperature of steam entering the condenser is in the range of 42-48 ℃, and the condenser pressure is 0.075 bar, i.e. under vacuum. Above all, the amount of steam condensate collected depends on the nature of the production process: if a lot of steam is drawn for process at state points 2 and 3, the condensate collected is quite low. But under any operating conditions the condenser is designed for a minimum load of 20 t/hr (5.55 kg/s) and a maximum load of 80 t/hr. The condensate is pumped from the condenser to the de-aerator with the help of the condensate extraction pump (CEP). The de-aerator is used to degasify the boiler feed water in order to minimize corrosion problems. It is a direct contact heat exchanger in which water is preheated to nearly saturation conditions in order to attain zero gas solubility. This pressurized de-aerator operates at 1.5 bar pressure. Steam needs to be removed from the de-aerator along with insoluble and non-condensable gases; this deficit is met by makeup water at state point 9. The condensate from the de-aerator is pumped to the high pressure heater (HPH) by means of the boiler feed pump (BFP). The feed water is pre-heated to a temperature of 145-160 ℃ before entering the boiler (i.e. the economizer). The plant operates under various PHR (power to heat ratio) values depending on the fluctuations in the demand for process heat and power export. The values of thermodynamic properties at various state points are shown in Table 1 (refer Appendix).
Figure 1. Layout of Combined Heat and Power (CHP) plant system
Over the past few decades, exergy analysis has emerged as an important tool for the design and optimization of engineering systems. Exergy, unlike energy, is generally not conserved but destroyed in the system. Thus exergy analysis can be extremely beneficial for a bagasse based CHP plant in identifying the locations of deviations from ideality. The present analysis comprises mass and exergy balances, exergy destruction, improvement potential, exergy ratios and exergy efficiency.
3.1 Mass balance
The mass balance for steady state steady flow processes is as given below,
$\sum \dot{m}_{in}=\sum \dot{m}_{out}$ (1)
Suffixes 'in' and 'out' indicate inlet and outlet conditions respectively.
3.2 Exergy balance
The general exergy balance equation is given as,
$\sum \dot{Ex}_{in}-\sum \dot{Ex}_{out}=\sum \dot{Ex}_{Dest}$ (2)
Specific flow exergy is given by,
$ex=\left( h-h_{0} \right)-T_{0}\left( s-s_{0} \right)$ (3)
Specific exergy for an incompressible flow is given as [16],
$ex_{in}=C\left[ \left( T-T_{0} \right)-T_{0}\ln \frac{T}{T_{0}} \right]$ (4)
Equation 3 is used for steam flow and equation 4 is used for condensates and feed water flow conditions.
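A minimal sketch of equations 3 and 4 in Python. The dead-state values and the steam properties below are illustrative round numbers (in practice h and s come from steam tables), so the printed figures should not be read as plant data.

```python
import math

T0 = 298.0              # dead-state temperature, K
H0, S0 = 104.9, 0.367   # approx. saturated liquid water at 25 C, kJ/kg and kJ/(kg K)

def flow_exergy_steam(h, s):
    """Eq. (3): ex = (h - h0) - T0 * (s - s0), in kJ/kg."""
    return (h - H0) - T0 * (s - S0)

def flow_exergy_incompressible(c, T):
    """Eq. (4): ex = C * [(T - T0) - T0 * ln(T / T0)], in kJ/kg."""
    return c * ((T - T0) - T0 * math.log(T / T0))

# Roughly turbine-inlet steam (64 ata, ~490 C) and feed water at ~150 C:
print(f"steam: {flow_exergy_steam(3390.0, 6.86):.0f} kJ/kg")
print(f"feed water: {flow_exergy_incompressible(4.3, 423.0):.1f} kJ/kg")
```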
The physical/flow exergy for air and combustion/flue gases is given by [17],
$ex_{in}=C\left[ \left( T-T_{0} \right)-T_{0}\ln \frac{T}{T_{0}} \right]+RT_{0}\ln \frac{P}{P_{0}}$ (5)
The specific heat capacity of air as a function of absolute temperature is given as [18],
$C_{air}=1.04841-\frac{3.83719\,T}{10^{4}}+\frac{9.45378\,T^{2}}{10^{7}}-\frac{5.4903\,T^{3}}{10^{10}}+\frac{7.92981\,T^{4}}{10^{14}}$ (6)
The specific heat of flue/combustion gases liberated by burning bagasse in a boiler is given as [19],
$C_{fg}=\left( 0.27+0.00006\,T_{fg} \right)$ (7)
where Tfg is temperature of flue gases in ℃.
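The two correlations are easy to tabulate; the sketch below evaluates equations 6 and 7 at the preheated-air temperature (220 ℃, i.e. 493 K) and at an assumed flue gas temperature of 160 ℃. Note that equation 6 takes T in kelvin while equation 7 takes T in degrees Celsius, as stated above.

```python
def c_air(T_K):
    """Eq. (6): specific heat of air, kJ/(kg K); T_K in kelvin."""
    return (1.04841 - 3.83719e-4 * T_K + 9.45378e-7 * T_K**2
            - 5.4903e-10 * T_K**3 + 7.92981e-14 * T_K**4)

def c_fg(T_C):
    """Eq. (7): specific heat of bagasse flue gases; T_C in deg C."""
    return 0.27 + 0.00006 * T_C

print(f"C_air(493 K) = {c_air(493.0):.4f} kJ/(kg K)")
print(f"C_fg(160 C)  = {c_fg(160.0):.4f}")
```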
The fuel exergy rate is the sum of physical and chemical exergy which is given as,
Fuel exergy rate = [Physical exergy rate of air] + [chemical exergy rate of fuel]
$\dot{Ex}_{f}=\dot{Ex}_{ph}+\dot{Ex}_{ch}=\dot{m}_{a}\,ex_{a}^{ph}+\dot{m}_{f}\,ex_{f}^{ch}$ (8)
The specific chemical exergy of bagasse was determined experimentally [20] as 9890.70 kJ/kg.
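Equation 8 then combines the two streams. The sketch below uses assumed flow rates (roughly consistent with a plant of this size, but not taken from Table 1) together with the measured chemical exergy of bagasse quoted above.

```python
EX_CH_BAGASSE = 9890.70  # specific chemical exergy of bagasse, kJ/kg [20]

def fuel_exergy_rate(m_air, ex_air_ph, m_fuel):
    """Eq. (8): Ex_f = m_air * ex_air^ph + m_fuel * ex_f^ch, in kW."""
    return m_air * ex_air_ph + m_fuel * EX_CH_BAGASSE

# Assumed: 30 kg/s of air preheated to 220 C (ex ~ 46 kJ/kg from Eq. 5,
# pressure term neglected) and 9 kg/s of bagasse.
print(f"Ex_f = {fuel_exergy_rate(30.0, 46.0, 9.0):.0f} kW")
```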
3.3 Exergy destruction
Applying exergy balance equation 2, the exergy destruction equations are established for all the components of the CHP system as below,
Boiler:
$\dot{Ex}_{Dest,boi}=\dot{Ex}_{f}+\dot{Ex}_{fw}-\dot{Ex}_{s}-\dot{Ex}_{fg}-\dot{Ex}_{ash}$ (9a)
$\dot{Ex}_{Dest,boi}=\dot{Ex}_{16}+\dot{Ex}_{17}+\dot{Ex}_{15}-\dot{Ex}_{18}-\dot{Ex}_{19}-\dot{Ex}_{20}$ (9b)
Turbine:
$\dot{Ex}_{Dest,Tur}=\dot{Ex}_{1}-\dot{Ex}_{2}-\dot{Ex}_{3}-\dot{Ex}_{4}-W_{T}$ (10)
Condenser: The exergy destruction equation for condenser is given as below [21],
$\dot{Ex}_{Dest,Con}=X_{1}+X_{2}-X_{3}$ (11)
$X_{1}=\dot{m}_{v}\,C_{p,v}\left[ \left( T_{v1}-T_{cond} \right)-T_{0}\ln \frac{T_{v1}}{T_{cond}} \right]$ (11a)
$X_{2}=\dot{m}_{v}\left( h_{fg|T=T_{cond}}-T_{0}\,s_{fg|T=T_{cond}} \right)$ (11b)
where $X_{3}$ is the exergy gained by the cooling water (the numerator of equation 19).
Condensate extraction pump (CEP):
$\dot{Ex}_{Dest,Cep}=\dot{Ex}_{5}+W_{cep}-\dot{Ex}_{8}$ (12)
De-aerator:
$\dot{Ex}_{Dest,de}=\dot{Ex}_{8}+\dot{Ex}_{9}+\dot{Ex}_{10}+\dot{Ex}_{11}-\dot{Ex}_{12}$ (13)
Boiler feed pump (BFP):
$\dot{Ex}_{Dest,bfp}=\dot{Ex}_{12}+W_{bfp}-\dot{Ex}_{13}$ (14)
High pressure heater (HPH):
$\dot{Ex}_{Dest,hph}=\dot{Ex}_{14}+\dot{Ex}_{13}-\dot{Ex}_{15}-\dot{Ex}_{11}$ (15)
Now the total exergy destruction taking place in the CHP system is given by,
$\sum \dot{Ex}_{dest,tot}=\dot{Ex}_{dest,boi}+\dot{Ex}_{dest,tur}+\dot{Ex}_{dest,cond}+\dot{Ex}_{dest,cep}+\dot{Ex}_{dest,de}+\dot{Ex}_{dest,bfp}+\dot{Ex}_{dest,hph}$ (16)
3.4 Efficiency
The exergy efficiency of the CHP components and CHP system as a whole is as given below,
$\eta_{ex,boi}=\frac{\dot{Ex}_{19}-\dot{Ex}_{15}}{\dot{Ex}_{f}}$ (17)
$\eta_{ex,tur}=\frac{W_{T}}{\dot{Ex}_{1}-\dot{Ex}_{2}-\dot{Ex}_{3}-\dot{Ex}_{4}}$ (18)
The exergetic efficiency of condenser is as below [21],
$\eta_{ex,cond}=\frac{\dot{m}_{c}\,C_{p,c}\left[ \left( T_{c2}-T_{c1} \right)-T_{0}\ln \frac{T_{c2}}{T_{c1}} \right]}{X_{1}+X_{2}}$ (19)
X1 and X2 have been defined in equation 11a and 11 b.
The exergetic efficiency of HPH is given as
$\eta_{ex,HPH}=\frac{\dot{Ex}_{cold,out}-\dot{Ex}_{cold,in}}{\dot{Ex}_{hot,in}-\dot{Ex}_{hot,out}}=\frac{\dot{Ex}_{15}-\dot{Ex}_{13}}{\dot{Ex}_{14}-\dot{Ex}_{11}}$ (20)
The exergetic efficiency of CHP system is given by
$\eta_{ex,CHP}=\frac{W_{T}+\dot{Ex}_{P}}{\dot{Ex}_{f}}=\frac{W_{T}+\dot{Ex}_{2}+\dot{Ex}_{3}}{\dot{Ex}_{f}}$ (21)
The waste exergy rates are calculated as,
$\sum \dot{Ex}_{w}=\dot{Ex}_{18}+\dot{Ex}_{4}$ (22)
3.5 Improvement potential (IP)
Another useful parameter employed here is the concept of an exergetic improvement potential (IP), which, in rate form, is given below [22]:
$IP=\left( 1-\eta_{ex} \right)\left( \dot{Ex}_{in}-\dot{Ex}_{out} \right)$ (23)
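A one-line check of equation 23, with boiler-like magnitudes that are assumed for illustration (a 30 % exergy efficiency and a large in/out exergy gap), shows why the boiler dominates the improvement potential:

```python
def improvement_potential(eta_ex, ex_in, ex_out):
    """Eq. (23): IP = (1 - eta_ex) * (Ex_in - Ex_out), in kW."""
    return (1.0 - eta_ex) * (ex_in - ex_out)

# Assumed boiler-like figures: eta_ex = 0.30, 90 MW in, 28 MW out.
print(f"IP = {improvement_potential(0.30, 90000.0, 28000.0):.0f} kW")  # 43400 kW
```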
3.6 Total exergy destruction ratio (TExDR)
It is described as the ratio of total exergy destruction in the system to the total exergy input to the system as follows [23],
$TExDR=\frac{\dot{Ex}_{Tot,dest}}{\dot{Ex}_{Tot,in}}$ (24)
3.7 Component exergy destruction ratio (CExDR)
It is described as the ratio of exergy destruction of any component of the system to the exergy input to the system as follows [23],
$CExDR=\frac{\dot{Ex}_{i,dest}}{\dot{Ex}_{Tot,in}}$ (25)
3.8 Dimensionless exergy destruction ratio (DExDR)
It is described as the ratio of exergy destruction of any component of the system to the total exergy destruction of the system as follows [23],
$DExDR=\frac{\dot{Ex}_{i,dest}}{\dot{Ex}_{Tot,dest}}$ (26)
3.9 Exergetic performance co-efficient (EPC)
Another important parameter used in the present analysis is the EPC, which is defined as the ratio of the total exergy output from the system to the total exergy destructed in the system. In the present context, the CHP system gives process heat and power as useful exergy outputs, and total exergy destruction is the sum of exergy destructions in the individual components. Mathematically it is defined as [24],
$EPC=\frac{\dot{E}_{T}}{T_{0}s_{g}}=\frac{\dot{Ex}_{p}+W_{T}}{\sum \dot{Ex}_{Tot,dest}}=\frac{\dot{Ex}_{2}+\dot{Ex}_{3}+W_{T}}{\sum \dot{Ex}_{Tot,dest}}$ (27)
3.10 Exergetic factor (EF)
It is defined as the ratio of the waste exergy rates to the total exergy destruction rates, as given below:
$EF=\frac{\sum \dot{Ex}_{w}}{\sum \dot{Ex}_{Tot,dest}}$ (28)
3.11 Power to heat ratio (PHR)
It is defined as the ratio of the power generated by the turbine to the process steam supplied. It is given by,
$PHR=\frac{W_{T}}{Q_{P}}=\frac{W_{T}}{\dot{E}_{2}+\dot{E}_{3}}$ (29)
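To show how these indices fit together, the sketch below rolls up equations 21, 24, 27 and 29 for one operating point. The input magnitudes are assumed round numbers chosen so that the headline figures reported later (an exergy efficiency of about 20.25 %, a TExDR of about 0.766 and a PHR of about 0.685) are reproduced; they are not the plant's tabulated rates.

```python
W_T     = 20700.0   # generator output, kW
Ex_p    = 3600.0    # process-steam exergy Ex_2 + Ex_3, kW (assumed)
Q_p     = 30200.0   # process heat supplied, kW (assumed, for PHR)
Ex_in   = 120000.0  # total exergy input, kW (assumed)
Ex_dest = 91900.0   # total exergy destruction, kW (assumed)

eta_ex_chp = (W_T + Ex_p) / Ex_in      # Eq. (21)
texdr      = Ex_dest / Ex_in           # Eq. (24)
epc        = (W_T + Ex_p) / Ex_dest    # Eq. (27)
phr        = W_T / Q_p                 # Eq. (29)

print(f"eta_ex,CHP = {eta_ex_chp:.4f}  TExDR = {texdr:.4f}  "
      f"EPC = {epc:.3f}  PHR = {phr:.3f}")
```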
3.12 Assumptions
The assumptions made in the present analysis are as follows,
(1) Only physical or flow exergy is taken into account.
(2) The changes in kinetic and potential energies are neglected.
(3) Both Air and products of combustion behave as ideal gas.
(4) The combustion is complete.
(5) Heat loss from the components to the Environment is negligible.
(6) Thermo physical properties of fluids are invariant.
(7) The CHP system operates in a steady state.
(8) The changes in the ambient conditions are neglected.
(9) The reference conditions adopted are 298 K and 1.0132 bar pressure.
The exergy destroyed in each plant component is a function of the entropy generated and the ambient air temperature surrounding the component. The temperature surrounding a component in a CHP system changes substantially with location; for instance, the temperatures of the air surrounding the boiler and the condenser show large variations from the ambient conditions. Hence in the present analysis a natural-environment-subsystem model [25] is adopted for the reference condition, as stated in the assumptions above.
Table 2 (refer Appendix) shows the results of the exergy analysis of the different components and the CHP system as a whole. The highest exergy destruction takes place in the boiler, followed by the turbine and condenser. The condenser is found to be the least efficient component and the turbine the most efficient component of the plant in terms of exergy. The highest IP value exists for the boiler, followed by the turbine and condenser. The boiler has CExDR and DExDR values of 0.6656 and 0.8690 respectively, indicating that 66.56 % of the total exergy input and 86.90 % of the total exergy destruction take place in the boiler. The turbine has CExDR and DExDR values of 0.0815 and 0.1064 respectively, indicating that 8.15 % of the total exergy input and 10.64 % of the total exergy destruction take place in the turbine. The condenser has CExDR and DExDR values of 0.0188 and 0.0245 respectively, indicating that 1.88 % of the total exergy input and 2.45 % of the total exergy destruction take place in the condenser. The TExDR value is 0.7660, indicating that 76.60 % of the total exergy input is destroyed collectively in the three major components of the CHP system. The exergy efficiency of the CHP system is 20.25 %.
Table 3 (refer Appendix) shows the values of exergy rates at different state points with 100 % PLF and variable PHR. The PHR value ranges from 0.6851 to 0.4660, showing that the process steam demand increased at a constant generator output of 20700 kW. This increased the load on the boiler and in turn the steam flow rate to the turbine; hence the value of SSC increased from 5.655 to 5.855, and the steam to bagasse ratio increased from 2.33 to 2.40. Table 4 (refer Appendix) shows the results of the exergy analysis at 100 % PLF with PHR varying from 0.6851 to 0.4660. Exergy destruction and IP rates for the boiler decrease from a PHR value of 0.6851 to 0.4660, and the boiler exergy efficiency increased from 29.62 % to 30.75 %. Similar trends can be noticed for the exergy destruction and IP rates in the turbine and condenser. The exergy efficiency of the turbine increased, whereas for the condenser it remained nearly constant in the range of 20.63 % to 20.88 %. The exergy efficiency of the CHP system increased from 20.50 % to 22.96 %. The TExDR value decreased from 0.7660 to 0.7440, indicating that the CHP system is more sustainable if operated with a PHR value of 0.4660 at 100 % PLF. Figure 2 shows the variation of TExDR and SSC with PHR at 100 % PLF. As PHR is decreased below 0.546, the TExDR value decreases below 0.7533, but the value of SSC rises from 5.763 to 5.855. Hence, if the plant is operated at a PHR of 0.546 it would be quite balanced in terms of sustainability and the generation cost of electricity. After all, surplus power export increases the economic viability of the plant. Hence, to strike a balance between SSC and TExDR it would be advisable to operate the plant with a PHR value of approximately 0.546 at 100 % PLF.
Table 5 (refer Appendix) shows the values of exergy rates at various state points with a constant PLF of 89.37 % and variable PHR. The PHR values range from 0.3923 to 0.2445, the SSC ranges from 5.978 to 6.445, and the S/B ratio increased from 2.23 to 2.38. A decrease in PHR at constant PLF indicates an increase in process steam demand, which in turn increased the load on the boiler; hence the steam flow rate to the turbine increased while the generator output remained constant at 18500 kW. Similarly, Table 6 (refer Appendix) shows the values of exergy rates at various state points with a constant PLF of 75.36 % and variable PHR.
Figure 2 shows the variation of TExDR and SSC with PHR at 100 % PLF. It shows that as the value of PHR decreases from 0.6851 to 0.466, the value of SSC shows an increasing trend whereas the value of TExDR shows a decreasing trend. Hence, to strike a trade-off between SSC and TExDR, the optimum value of PHR is found to be 0.546. Similarly, in Figure 3 (at 89.37 % PLF) and Figure 4 (at 75.36 % PLF) the optimum values of PHR are found to be 0.2921 and 0.1064 respectively. This shows that as the value of PLF decreases, the optimum value of PHR also decreases.
Hence, in order to operate the CHP plant on a sustainable basis, the value of PHR needs to be decreased as the value of PLF decreases.
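One simple way to make this trade-off reading of Figures 2-4 explicit is to normalize SSC and TExDR over the operating range and pick the PHR that minimizes their sum. In the sketch below only the end points come from the text; the intermediate values are rough readings assumed from Figure 2 at 100 % PLF, so this illustrates the selection logic rather than reproducing the plant data.

```python
phr   = [0.685, 0.600, 0.546, 0.500, 0.466]
ssc   = [5.655, 5.710, 5.763, 5.810, 5.855]   # kg/kWh (approximate readings)
texdr = [0.766, 0.760, 0.753, 0.748, 0.744]   # (approximate readings)

def normalize(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

# Equal-weight penalty: low SSC and low TExDR are both desirable.
score = [a + b for a, b in zip(normalize(ssc), normalize(texdr))]
best = min(range(len(phr)), key=score.__getitem__)
print(f"balanced PHR ~ {phr[best]}")  # ~0.546, matching the value quoted above
```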
Figure 2. Variation of TExDR and SSC with PHR at 100 % PLF
Figure 3. Variation of TExDR and SSC with PHR at 89.37 % PLF
Figure 4. Variation of TExDR and SSC with PHR at 75.36 % PLF
The following conclusions are drawn from the present analysis:
(1) For all values of PLF, the value of SSC increased as the value of PHR decreased.
(2) For all values of PLF, the value of TExDR decreased as the value of PHR decreased.
(3) For 100 % PLF the optimum value of PHR in terms of TExDR and SSC is 0.546.
(4) For 89.37 % PLF the optimum value of PHR in terms of TExDR and SSC is 0.2921.
(5) For 75.36 % PLF the optimum value of PHR in terms of TExDR and SSC is 0.1064.
(6) For further study of exergy analysis in the field of CHP plants, it is recommended to investigate the effect of the inlet pressure and temperature of steam (at the inlet of the turbine) on the exergy efficiency of the plant.
The authors would like to thank the process engineers of the CHP plant for their kind support and compliance in providing the data.
Nomenclature
C  Specific heat, (kJ kg-1 K-1)
E  Energy, (kW)
ex  Specific exergy of steam/vapor, (kJ kg-1)
Ex  Total exergy content, (kW)
h  Specific enthalpy, (kJ kg-1)
hfg  Specific latent enthalpy, (kJ kg-1)
m  Mass flow rate, (kg s-1)
P  Pressure, (bar)
R  Particular gas constant, (kJ kg-1 K-1)
s  Specific entropy, (kJ kg-1 K-1)
sfg  Specific latent entropy, (kJ kg-1 K-1)
T  Temperature, (K)
W  Work output, (kW)
Subscripts
0  Reference state
1, 2, …, 20  State points of the system
a  Combustion air
c1  Cooling water inlet
c2  Cooling water outlet
fg  Flue gas
in  Incompressible fluid
p,c  Cooling water at constant pressure
p,v  Vapor at constant pressure
Practical conditions
Theoretical conditions
v1  Vapor at inlet
Abbreviations
A/F  Air fuel ratio, (kg kg-1)
APH  Air pre-heater
BFP  Boiler feed pump
BPST  Back pressure steam turbine
CEP  Condensate extraction pump
CExDR  Component exergy destruction ratio
DExDR  Dimensionless exergy destruction ratio
EF  Exergetic factor
ηex  Exergy efficiency, %
Table 1. Thermodynamic properties and exergy rates at various state points in the CHP system with 100 % PLF and PHR = 0.6851
[Columns: $\dot{m}$ (kg s-1), s (kJ kg-1 K-1), h (kJ kg-1), $\dot{Ex}$ (kW); rows include superheated steam, process steam and exhaust steam. Table body not recovered.]
Table 2. Results of exergy analysis of different components of the CHP system with 100 % PLF and PHR = 0.6851
[Columns: $\dot{Ex}_{dest}$, $\eta_{ex}$, $\dot{IP}$, TExDR, with a Total/Avg row; $\dot{Ex}_{in}$ (kW) and $\eta_{ex,CHP}$ also reported. Table body not recovered.]
Table 3. Exergy rates at different state points at 100 % PLF and various values of PHR
[Columns: PLF (%), PHR, SSC (kg kWhr-1), state point, Ex (kW). Table body not recovered.]
Table 4. Results of exergy analysis at 100 % PLF and various values of PHR
[Columns: PLF (%), generator output (kW), SSC (kg kWhr-1), $\dot{Ex}_{dest,boi}$, $\dot{Ex}_{dest,tur}$, $\dot{Ex}_{dest,cond}$, $\dot{Ex}_{dest,tot}$ (kW), $\dot{Ex}_{in}=\dot{Ex}_{15}+\dot{Ex}_{16}$ (kW), $\eta_{ex,boi}$, $\eta_{ex,tur}$, $\eta_{ex,cond}$ (%), $\dot{IP}_{boi}$, $\dot{IP}_{tur}$, $\dot{IP}_{cond}$ (kW), $\eta_{ex,CHP}$ (%). Table body not recovered.]
Table 5. Exergy rates at different state points at 89.37 % PLF and various values of PHR
Table 6. Exergy rates at different state points at 75.36 % PLF and various values of PHR.
| CommonCrawl |
Fall 2011 Midterm 1 Solutions
Midterm 1 Solutions - Q1 (Ave Score: 22.3/30)
Question 1. (30 points) A smooth block of mass 100g is sliding along the edge of a smooth cone with constant speed. The height of the cone is 20cm, and half of its apex angle is 30$^{o}$.
A. (5 points) Draw a free body diagram which represents all the forces acting on the block.
B. (5 points) What is the magnitude of the gravitational force acting on the block?
$mg=0.1\mathrm{kg}\times9.81\mathrm{ms^{-2}}=0.981\mathrm{N}$
C. (5 points) What is the magnitude of the component of the gravitational force on the block which points down the slope of the cone?
$mg\sin(60^{o})=0.85\,\mathrm{N}$
D. (5 points) What is the magnitude of the normal force acting on the block?
$F_{N}\sin(30^{o})=mg$
$F_{N}=\frac{0.981\,\mathrm{N}}{0.5}=1.962\,\mathrm{N}$
E. (10 points) What is the speed of the block?
$\frac{mv^{2}}{r}=F_{N}\cos(30^{o})=\frac{mg}{\tan(30^{o})}$
$r=0.2\tan{30^{o}}$
$v^{2}=0.2g$
$v=1.4\mathrm{ms^{-1}}$
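The numbers in parts B–E can be checked with a few lines of Python; this is an editorial verification, not part of the original solution, and the variable names are mine.

```python
import math

m, g = 0.1, 9.81                   # mass (kg), gravitational acceleration (m/s^2)
h, phi = 0.20, math.radians(30)    # cone height (m), half apex angle

Fg = m * g                                  # B: weight, 0.981 N
F_slope = Fg * math.sin(math.radians(60))   # C: component along the slope, 0.85 N
FN = Fg / math.sin(phi)                     # D: from F_N sin(30) = mg -> 1.962 N
r = h * math.tan(phi)                       # radius of the circular path at the rim
v = math.sqrt(FN * math.cos(phi) * r / m)   # E: from mv^2/r = F_N cos(30)
print(Fg, F_slope, FN, v)                   # 0.981, 0.850, 1.962, 1.40
```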
Question 2. (35 points) A plane is flying horizontally with a constant speed of 100m/s at a height $h$ above the ground, and drops a 50kg bomb with the intention of hitting a car that has just begun driving up a 10$^{o}$ incline which starts a distance $l$ in front of the plane. The speed of the car is a constant 30m/s. For the following questions use the coordinate axes defined in the figure, where the origin is taken to be the initial position of the car. (Note: The car has been stolen by a Martian trying to get hands on experience with our GPS system and our planet's survival depends on us stopping the Martian).
A. (5 points) What is the initial velocity of the bomb relative to the car? Write your answer in unit vector notation.
$v_{x}=100-30\cos(10^{o})=70.46\mathrm{ms^{-1}}$
$v_{y}=-30\sin(10^{o})=-5.21\mathrm{ms^{-1}}$
$\vec{v}=70.46\mathrm{ms^{-1}}\hat{i}-5.21\mathrm{ms^{-1}}\hat{j}$
B. (5 points) Write equations for both components ($x$ and $y$) of the car's displacement as a function of time, taking t=0s to be the time the bomb is released.
$x=30\cos(10^{o})t=29.54t\,\mathrm{m}$
$y=30\sin(10^{o})t=5.21t\,\mathrm{m}$
C. (5 points) Write equations for both components ($x$ and $y$) of the bomb's displacement as a function of time, taking t=0s to be the time the bomb is released.
$x=100t-l\,\mathrm{m}$
$y=h-\frac{1}{2}gt^{2}\,\mathrm{m}$
D. (5 points) If the bomb hits the car at time t=10s what was the height of the plane above the ground $h$ when it dropped the bomb?
$y=52.1\mathrm{m}$
$52.1=h-\frac{1}{2}g10^{2}$
$h=52.1+50\times9.81=542.6\,\mathrm{m}$
E. (5 points) What is the horizontal displacement of the plane relative to the car when the bomb hits the car at t=10s.
$0\mathrm{m}$
F. (5 points) How much work did gravity do on the bomb while it was falling?
$W=mg\times\frac{1}{2}g10^{2}=50\times9.81\times490.5=240590\,\mathrm{J}$
G. (5 points) How much kinetic energy does the bomb have when it hits the car?
$\frac{1}{2}mv_{0}^{2}+240590=\frac{1}{2}\times50\times100^{2}+240590=250000+240590=490590\mathrm{J}$
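Similarly, the Question 2 results can be verified numerically (an editorial check using the values from the solution, not part of the original page):

```python
import math

g, t, m = 9.81, 10.0, 50.0
v_plane, v_car = 100.0, 30.0
incline = math.radians(10)

vx = v_plane - v_car * math.cos(incline)   # A: 70.46 m/s
vy = -v_car * math.sin(incline)            # A: -5.21 m/s
y_car = v_car * math.sin(incline) * t      # D: car height at t = 10 s, 52.1 m
h = y_car + 0.5 * g * t ** 2               # D: release height, 542.6 m
W = m * g * 0.5 * g * t ** 2               # F: work done by gravity, ~240590 J
KE = 0.5 * m * v_plane ** 2 + W            # G: ~490590 J
print(vx, vy, h, W, KE)
```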
| CommonCrawl |
Can a statically charged object flying in an airplane float?
Let's say you are flying in a plane headed due west at 1000 km/h (278 m/s) at an altitude of 10km. According to http://www.ngdc.noaa.gov/geomag-web/#igrfwmm, at 10km altitude, Earth's magnetic field is about $2*10^{-5}$ Tesla.
Let's say an object has a mass $m$ in kilograms and has a negative charge of magnitude $q$ in Coulombs. In the airplane it would experience a Lorentz force determined by $F=qv \times B$ of about $0.00556*q$ Newtons. This would result in an acceleration (given by $F = m*a$) of $0.00556*\frac{q}{m}$ meters per second squared in an upward direction (negative charges going west in a magnetic field pointing north experience an upward force). The object would also experience a downward acceleration of about $9.8$ meters per second squared.
Therefore if the object had a charge to mass ratio of roughly $$\frac{q}{m}= 1763 \text{ Coulombs per kilogram}$$ then it would float in mid air. So my first observation is that free moving negatively charged ions in the air which have a charge to mass ratio orders of magnitude above this would experience a strong upward force and positively charged ions would experience a strong downward force. This would quickly create an electric field from a positive charge buildup on the bottom of the plane and a negative charge buildup on the top. The electric field would grow until the electric field force (pushing in the opposite direction of the Lorentz force) balanced the Lorentz force.
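As a quick check of the arithmetic above (a small Python snippet using the stated speed and field values):

```python
v = 278.0     # plane speed, m/s
B = 2e-5      # Earth's field at 10 km altitude, T
g = 9.8       # m/s^2

force_per_coulomb = v * B         # Lorentz force per coulomb: ~0.00556 N/C
q_over_m = g / force_per_coulomb  # charge-to-mass ratio needed to float
print(q_over_m)                   # ~1763 C/kg
```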
Therefore, let's say you have the following available to you:
1) An airplane that does not generate an electric field while flying by somehow being built completely from a non conductive material and being filled with a vacuum and somehow generating no static build up from friction with the air outside. This possibly magical airplane is not of central importance to the question so don't focus on it. I just want to provide a vacuum environment with no electric field.
2) A Van de Graaff generator capable of generating very large static charges on objects.
3) Any materials you wish that you think can be charged with a large charge to mass ratio. For example: Aerogel, copper, aluminum, ceramics, graphene sheets, carbon nanotubes or something else which may be able to hold a lot of charge and be very light. I am guessing that a high surface area to weight ratio is probably the best bet. NOTE: Capacitors will not work. Capacitors store a large amount of charge, but it is positive and negative, so the net Lorentz force on a capacitor would be $0$.
If for some strange reason it is easier to place a large positive charge on an object by electron depletion, you are allowed to make the plane fly east and use a Van de Graaff generator which generated positive static charge.
So my question is: with modern materials, is it possible to construct and charge an object like this so that it will float in the airplane?
electromagnetism forces electrostatics capacitance aircraft
AndrewAndrew
$\begingroup$ You've put a lot of thought into this. I don't think it's possible to put that much charge on an object. A back-of-the envelope calculation suggest that if the material were carbon you would have to add or remove an electron from every fifteenth atom. Check my work on that. I can't imagine how it would be possible to do such a thing. $\endgroup$ – garyp Aug 21 '16 at 10:48
$\begingroup$ From my rough estimation. A carbon atom weighs 2E-26 kg and the charge of one electron is 1.6E-19 C so with only one extra electron on 4000 carbon atoms it would achieve a charge to mass ratio of 2000 C/kg $\endgroup$ – Andrew Aug 21 '16 at 11:00
$\begingroup$ That's right. Now take a cube of carbon having an excess charge density of one atom per 4000. How many neutral atoms are there between one charged atom and the next? $\endgroup$ – garyp Aug 21 '16 at 11:25
$\begingroup$ Looking at en.wikipedia.org/wiki/Carbon_nanofoam it would seem each electron would be 6 nm from the closest other extra electron. It may not be possible, but I would like a starting point to figure out the limit of static charge an object can hold. I could check experimentally, but this would be difficult for me to get all the supplies. Is there a non experimental way to determine the maximum amount of static charge an object can hold? $\endgroup$ – Andrew Aug 21 '16 at 13:25
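As a quick check of the estimates traded in these comments (using the rounded values the commenters assumed, a carbon atom mass of $2\times10^{-26}$ kg and elementary charge $1.6\times10^{-19}$ C):

```python
e = 1.6e-19    # elementary charge, C
m_C = 2e-26    # approximate mass of one carbon atom, kg

atoms_per_excess_electron = e / (2000 * m_C)  # for q/m = 2000 C/kg
print(atoms_per_excess_electron)              # ~4000 atoms per extra electron
```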
| CommonCrawl |
Frequency-division multiplexer and demultiplexer for terahertz wireless links
Jianjun Ma1,
Nicholas J. Karl1,
Sara Bretin2,
Guillaume Ducournau2 &
Daniel M. Mittleman1
Nature Communications volume 8, Article number: 729 (2017)
Photonic devices
Terahertz optics
The development of components for terahertz wireless communications networks has become an active and growing research field. However, in most cases these components have been studied using a continuous or broadband-pulsed terahertz source, not using a modulated data stream. This limitation may mask important aspects of the performance of the device in a realistic system configuration. We report the characterization of one such device, a frequency multiplexer, using modulated data at rates up to 10 gigabits per second. We also demonstrate simultaneous error-free transmission of two signals at different carrier frequencies, with an aggregate data rate of 50 gigabits per second. We observe that the far-field spatial variation of the bit error rate is different from that of the emitted power, due to a small nonuniformity in the angular detection sensitivity. This is likely to be a common feature of any terahertz communication system in which signals propagate as diffracting beams, not omnidirectional broadcasts.
The volume of wireless data traffic is increasing exponentially and will surpass 24 exabytes per month by 2019 [1]. To accommodate this trend, future generations of wireless networks will require much higher capacity for data throughput. One favored solution is to operate at higher carrier frequencies, beyond 100 GHz [2–5]. Recent years have witnessed rapidly growing interest in the development of components to enable wireless communications in the terahertz (THz) range. One of the earliest examples is modulators, first discussed almost 20 years ago [6], with rapid improvements continuing to be reported [7–10]. Other examples include power splitters [11, 12], filters [13, 14], phase shifters [15], beam-steering devices [16–18], passive reflectors for engineered multipath environments [19, 20], and multiplexers and demultiplexers (mux/demux) [21, 22]. Despite these efforts, many important components of such networks remain at a very immature stage of development, including components for mux and demux. Mux and demux of non-interfering data streams is universally employed in existing communication systems and, in combination with advanced modulation schemes [23], can be an efficient method to achieve the eventual data rate target of Tb/s. In the THz range, where frequency bands may not be continuous over a broad spectral range due to atmospheric attenuation [24] or regulatory restrictions [25], frequency-division multiplexing is even more of a compelling need.
We have recently proposed an architecture for waveguide-to-free space mux/demux based on a leaky waveguide [21]. This concept exploits the highly directional nature of THz signals, which are much more like beams than omnidirectional broadcasts. A particular client in a network would be assigned a spectral band based on its location, such that only signals within that spectral band are sent to the location of the particular client. The device can accommodate mobility by tuning the carrier frequency to account for changes in the client location; this process would likely rely on beam-sounding techniques using legacy bands at lower frequencies [26]. Alternatively, multiple clients can be served simultaneously by mux/demux of multiple signals lying in distinct frequency bands.
The operating principle of the leaky-wave device is straightforward. It is based on a metal parallel-plate waveguide (PPWG), which has proven to be a versatile platform for manipulation of THz signals [27, 28]. The waveguide has a narrow slot opened in one of the metal plates, which (in the demux configuration) allows some of the guided wave to leak out into free space. Similar leaky-wave designs have been used in the RF community for many years [29], but their use in the THz range has so far been limited [21, 30, 31]. The frequency of the emitted radiation at a given angle is determined by a phase-matching constraint:
$${k_0}\cos \phi = {k_{{\rm{PPWG}}}},$$
where $k_0 = 2\pi v/c_0$ is the wave vector for free space, with $v$ as the frequency of the signal and $c_0$ as the speed of light in vacuum. $\phi$ is the propagation angle of the free-space mode relative to the waveguide propagation axis. The frequency-dependent propagation constant for the lowest-order transverse-electric (TE1) mode of a PPWG is [27]:
$${k_{{\rm{PPWG}}}} = {k_0}\sqrt {1 - {{\left( {\frac{{{c_0}}}{{2bv }}} \right)}^2}} ,$$
where b represents the plate separation. Substituting Eq. (2) into Eq. (1), the phase-matching condition results in an angle-dependent emission frequency:
$$v = \frac{{{c_0}}}{{2b\sin \phi }}.$$
For an incoming wave, the situation is simply reversed; an incident wave at a given frequency only couples into the waveguide if it arrives at the appropriate angle determined by Eq. (3). Thus, the design supports both mux and demux capabilities.
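To make Eq. (3) concrete, the following short script (an illustration added here, not from the paper) evaluates the predicted emission angle for the parameter combinations that appear later in the text:

```python
import numpy as np

c0 = 3e8  # speed of light in vacuum, m/s

def emission_angle(freq_hz, b_m):
    """Leakage angle (degrees) from sin(phi) = c0 / (2 * b * v), Eq. (3)."""
    return np.degrees(np.arcsin(c0 / (2 * b_m * freq_hz)))

print(emission_angle(300e9, 0.8e-3))    # ~38.7 deg, as used for Fig. 1b, c
print(emission_angle(312e9, 0.733e-3))  # ~41 deg, the simulation of Fig. 1a
```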
Although this initial study of a mux/demux device, and the other device demonstrations mentioned above, all represent significant advances in THz signal processing, it is important to note that these measurements have usually been performed in isolation with an unmodulated continuous-wave or pulsed time-domain source. Characterization of the performance of these devices in the context of a communication system, using data modulated at high bit rate, has for the most part not been demonstrated, and little consideration has yet been given to the enormous challenge of integration into a larger system. Meanwhile, there have also been several recent single-input single-output (SISO) THz link demonstrations [3, 23, 32–35], which have achieved impressive data rates but have so far not progressed to the integration of any of the aforementioned signal processing components.
In this article, we report an attempt to bridge this conceptual gap, with the characterization of a THz mux/demux subsystem [21] in a real THz data wireless link. We use modulated data to characterize bit error rates and power penalties for this subsystem, as a function of data rate and source power. We achieve single-channel error-free mux/demux at rates up to 10 gigabits per second (Gb/s), as well as the first report of mux/demux of two independent real-time video broadcasts, and the demux of two frequency channels with an aggregate data rate of 50 Gb/s. This work represents the first simultaneous mux/demux of real data flows in the THz range.
Characterization of bit error rate
The numerical simulation in Fig. 1a illustrates the performance of the leaky waveguide in a demux configuration, for a single-frequency (unmodulated) input wave, first propagating inside the waveguide and then radiating into free space and producing a diffracting beam in the far field at an angle determined by Eq. (3). The solid green and white lines added to this simulation show that the angular spread of first-order modulation sidebands is expected to be smaller than the size of the diffracting carrier wave, even up to 10 Gb/s. This suggests that a detector with sufficient aperture to collect most of the carrier wave will also capture the modulation information required for signal transmission. However, our experimental results, described below, reveal a surprising sensitivity of the signal quality to the angular position of the receiver, resulting from a small angular nonuniformity in the detection sensitivity.
Demultiplexing of modulated THz channels for different data rates. a A 3D numerical simulation (finite element method), of a single-frequency input wave (f = 312 GHz) propagating in the waveguide (b = 0.733 mm) and then radiating into the far field through a slot in the top plate. The horizontal plane shows the intensity in a plane centered between the metal plates (i.e., inside the waveguide). The vertical (out of plane) arc shows the radiated power as a function of angle. The solid green line indicates the angle predicted by Eq. (3) for the parameters used in this simulation. The two solid white lines on either side of the green line show the predicted angles for frequencies of 302 GHz and 322 GHz, corresponding to the ±1st-order sidebands for a modulation data rate of 10 Gb/s. The angular spread of these sidebands is smaller than the angular width of the carrier wave diffracting through the slot. b Measured angular distributions for the power (black curve) and bit error rate (BER, red symbols), for an input frequency of 300 GHz and a modulation rate of 6 Gb/s. Both are normalized to unity and plotted on a log scale (BER plotted as the negative log), to facilitate comparison of the angular widths. c Measured real-time BER performance of the THz link coupled out from the slot, as a function of the angular position of the detector, for a 300 GHz carrier wave. Here, the plate separation b is 0.8 mm and slot width is 0.7 mm. Results for several different data rates all show the same optimum angle of 38.7° independent of the data rates (indicated by the vertical dashed line), though the angular width varies slightly with data rate. d A model calculation of the effect of a non-uniform angular detection sensitivity on the BER, which qualitatively reproduces the observed results. These curves assume a specific (parabolic) form for the angular detection filter, but otherwise contain no free parameters (see Supplementary Note 1 for details). In this plot, the colors correspond to the same data rates as in (c)
We first explore the performance of the device in the demux configuration, with a single data-modulated input wave. We generate the THz signal by photomixing two infrared optical signals modulated using an optical modulator, resulting in an amplitude-modulated signal (amplitude shift keying, ASK) with a carrier frequency determined by the optical frequency difference. This signal is coupled into the waveguide with an input power of about −10 dBm. The waveguide consists of two flat steel plates, with a plate separation of b = 0.8 mm and a length of 40 mm. The input aperture of the waveguide is tapered to improve the input coupling efficiency [36]. The slot in the top waveguide plate has a length of 28 mm and a width of 0.7 mm, and begins 5 mm beyond the input face of the waveguide. The signal radiated from the slot is collected by a Teflon lens (f = 25 mm) and focused onto a Schottky diode receiver. The collection and detection system is mounted on a rotation arm, to characterize the output as a function of the angular position of the receiver. After electrical amplification, the bit error rate (BER) is determined in real time, i.e., without any off-line processing.
Figure 1 shows typical results for an input wave of 300 GHz (which, for the given value of b, corresponds to an output angle of 38.7°). Figure 1b shows a comparison of the angular distribution of the power to the angular dependence of the BER measured under identical conditions. Figure 1c displays the BER at different receiver angles, for several different data-modulation rates, all with the same carrier frequency.
This figure demonstrates several important results. First, we observe error-free data transmission through the demux device (BER < $10^{-10}$) for all data rates, proving that the propagation through the waveguide does not introduce excessive signal loss or distortion due to dispersion. This is consistent with previous work demonstrating the low-loss and low-dispersion characteristics of TE1 mode propagation in parallel-plate waveguides [27, 37]. We also note that the optimum BER and maximum power are always obtained at the same angle, regardless of the modulation rate. This is not surprising, as the angle is determined by the carrier frequency and the plate separation, according to Eq. (3).
The most surprising aspect of Fig. 1b and c involves the angular widths of the BER curves, which are all in the vicinity of just 2 or 3° (FWHM). This is considerably smaller than the measured angular width of the power distribution (as shown clearly in Fig. 1b), and also smaller than angular aperture of our collection optics. Moreover, at a given BER, the widths of the curves in Fig. 1c vary slightly with data rate, becoming somewhat narrower as the data rate increases. This strong and anomalous angular dependence suggests that the BER is significantly influenced by the angular sensitivity of the detection of modulation sidebands, which co-propagate with the carrier frequency (at slightly different angles, as shown in Fig. 1a), in a diffraction-limited beam.
Using a simple model for the angular filtering of the receiver, we can qualitatively understand both the observed angular widths and the data-rate dependence shown in Fig. 1c. We imagine that, regardless of the details of the detection system, its sensitivity (when it is located at a particular angular location) is a slowly varying function of the propagation angle of the THz signal, with a maximum sensitivity when the beam propagation angle is equal to the detector angle so that the beam hits the center of the detector. If the detector is moved so that it is not centered on the diffracting beam (i.e., at the angle determined by Eq. (3) for the carrier frequency), then positive-modulation sidebands and negative-modulation sidebands will not be detected with equal sensitivity. Even if this spectral asymmetry is small, it will lead to a decrease in the overall signal-to-noise of the detection, and thus a degrading of the BER. We note that this effect will not impact the detection of the overall signal power, which explains why the angular width of the power curve is significantly larger than that of the BER curve in Fig. 1b. Modulation at a higher data rate produces sidebands that are more widely spaced in frequency and therefore also in angle. These are more sensitive to the angular filtering as they sample the filter at larger angles away from the optimal central angle. Thus, the angular degradation of the BER is more rapid at higher modulation rates, consistent with our observations. Figure 1d shows the results of a simple model calculation, using an assumed parabolic form for the angular-filter function, which qualitatively reproduce the observed angular widths and also the trend with data rate (see Supplementary Note 1 for details). We note that the BER values estimated from this model change substantially within a small angular range, even though the assumed spectral filter is quite flat, varying by only about 1% within ± 10 GHz of the central frequency.
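A minimal sketch of this argument is given below. The parabolic sensitivity profile and its curvature are illustrative assumptions (the quantitative model appears only in Supplementary Note 1); the point is simply that the imbalance between the detected ± first-order sidebands grows with both detector misalignment and data rate:

```python
import numpy as np

c0, b = 3e8, 0.8e-3   # speed of light; plate separation, m

def angle(f):          # Eq. (3), in degrees
    return np.degrees(np.arcsin(c0 / (2 * b * f)))

def sensitivity(theta, center, a=1e-3):
    # assumed slowly varying (parabolic) detector response vs. angle
    return 1 - a * (theta - center) ** 2

carrier = 300e9
for rate_gbps in (2, 6, 10):
    f_sb = rate_gbps * 1e9                  # first-order sideband offset
    for det in (angle(carrier), angle(carrier) + 2.0):  # centered vs. 2 deg off
        lo = sensitivity(angle(carrier - f_sb), det)
        hi = sensitivity(angle(carrier + f_sb), det)
        print(rate_gbps, round(float(det), 1), f"imbalance = {abs(lo - hi):.2e}")
```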
Given the highly directional nature of THz signals, this angular sensitivity is likely to be a quite general feature of any THz wireless network in which frequency multiplexing is used and in which beam widths are diffraction-limited. This result, which would not have been observed using an unmodulated THz source, has important implications for the trade-off between receiver aperture and data rate, and also for the design of antenna configurations in optimal multiple-input/multiple-output (MIMO) architectures [3, 38].
Another important parameter is the insertion loss, which induces a power penalty for error-free operation. To explore this issue, we compare the measured BER values for demuxed signals (at the optimal receiver angular location) to those measured without demux; in the latter case the detector is placed directly at the location of the demux input port, bypassing the demux waveguide entirely. This result, shown in Fig. 2a, quantifies the power penalty induced by the demux. For example, at 10 Gb/s, the penalty is about 10 dB. These measurements were obtained for a carrier frequency of 312 GHz, and various data rates, up to 10 Gb/s (10 G Ethernet data rate) as indicated in the figure. Insets show the eye diagrams for a modulation rate of 10 Gb/s, both before and after demultiplexing. The eye opening becomes slightly narrower after demultiplexing due to the power penalty, but it is still possible to obtain error-free transmission at all data rates, reaching a BER below $10^{-10}$. This penalty is probably due almost entirely to the efficiency of the coupling into and out of the waveguide, and not to propagation losses or dispersion inside the waveguide, which are known to be small [37].
Demultiplexing of modulated THz channels as a function of detected power. a Measured real-time BER performance of the THz link as a function of the THz power at the receiver under different data rates up to 10 Gb/s. Values are recorded both before the demultiplexer (left set of curves), and also after demultiplexing (right set of curves) with the detector fixed at the optimum angular position for the carrier frequency of 312 GHz. Data rates are shown next to each curve, in Gb/s. Typical eye diagrams are shown for the input and demultiplexed links at a data rate of 10 Gb/s, both showing error-free transmission (BER < $10^{-10}$). Before demultiplexing, all the curves have about the same slope. But after the device, the slope changes for the higher data rates (8 and 10 Gb/s), due to scattering of residual radiation at the output end of the waveguide. b One frame from a two-dimensional numerical time-domain simulation movie, depicting the scattering phenomenon, which leads to inter-symbol interference at higher data rates, as discussed in the text. The inset (upper left) shows the input waveform for the simulation, which is a 300 GHz carrier wave modulated so that a pulse of radiation enters the waveguide every 100 ps. The waveguide is at the bottom left, where the red arrow indicates the propagation direction for the guided wave. Interference fringes are clearly evident due to interference between the bit emerging from the far end of the waveguide and the previous bit, which radiated through the slot
We also observe that the slope of the demuxed BER curves changes for higher data rates (above 6 Gb/s), indicating an increased noise level at these higher modulation rates. We speculate that this increased noise arises from signals emerging from the far end of the waveguide (rather than from the slot, as intended). The impedance mismatch to free space is not large [39], so most of the remaining power is emitted into air, and can then scatter from this abrupt waveguide termination to cause interference at the detector. Such scattered signals are delayed by their extra travel time inside the waveguide. If this delay exceeds the duration of a single bit, then this coherent interference can leak over into the subsequent bit, thus degrading the eye diagram. Therefore, one could expect a higher BER for signals with data-modulation rate larger than a certain threshold value determined by the inverse of the extra travel time of the scattered interference signal. The phase delay inside the waveguide, roughly 190 ps, indicates a threshold value near 5 Gb/s for this inter-symbol interference (ISI) effect, which is close to what is observed experimentally in Fig. 2a. This idea is supported by the numerical time-domain simulation shown in Fig. 2b, for a bit period of 100 ps (corresponding to a data rate of 10 Gb/s). This simulation is somewhat limited in accuracy as it is only a 2D simulation; nevertheless one can clearly see the fringes due to ISI between a bit emerging from the slot and one emerging from the end of the waveguide.
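The travel-time estimate can be reproduced as follows (an illustration assuming the bits propagate at the TE1-mode group velocity over the full 40 mm plate length; the text quotes roughly 190 ps, which presumably includes the extra free-space path back to the detector):

```python
import math

c0, b, f, L = 3e8, 0.8e-3, 300e9, 40e-3

v_group = c0 * math.sqrt(1 - (c0 / (2 * b * f)) ** 2)  # TE1 group velocity
delay = L / v_group                                    # ~171 ps in-guide delay
print(delay * 1e12, "ps; ISI threshold ~", 1e-9 / delay, "Gb/s")
```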
System demonstrations
To demonstrate the real-time mux and demux operation, we use two independent transmitters as shown in the schematic in Fig. 3. In this case, one channel is the photomixer-based THz source described above, and the other one is a frequency multiplication chain. These two signals, with carrier frequencies of 264.7 GHz (channel 1, electronic source) and 322.5 GHz (channel 2, photomixer), are both amplitude-modulated (ASK modulation, as above) with independent bit patterns, both at a data rate of 1.5 Gb/s. The input powers were adjusted to reach a similar performance on the two signals and correspond to around −10 dBm in each channel incident on the mux input. In this case, the waveguide consists of a longer pair of plates (length = 80 mm) with two slots in the top plate, on opposite ends. We use one of the slots to couple two different signals into the waveguide (mux), and the other slot to couple them out (demux). In this measurement, the effective propagation distance for the two signals inside the waveguide is 14 mm. The input angles of the two signals into the first slot are adjusted according to the criterion of Eq. (3), to optimize the efficiency of input coupling into the waveguide. At the output, the receiver is rotated through a range of angles to characterize the angular distribution of the output, as in Fig. 1. We measure both the power (Fig. 3c) and the BER (Fig. 3d) as a function of angle, for each transmitter individually and also when both signals are in the waveguide at the same time. Figure 3c shows that the optimal output angles are again consistent with the prediction of Eq. (3). Figure 3d shows that the BER is < $10^{-10}$ for both channels, whether or not the other channel is present. In other words, we achieve error-free mux and demux for each channel, whether or not the other channel is simultaneously propagating in the waveguide. The small changes in each BER curve when the other channel is present can be understood by noting the small overlap between the two demuxed beams as shown in Fig. 3c. Nevertheless, it is clear that error-free mux-demux can be achieved for both channels. We further demonstrate this remarkable result by modulating the two channels using real video data from two different television broadcasts. When the receiver is rotated from one optimum angular position to another, the received video shown on the monitor switches from one channel (Fig. 3e) to another (Fig. 3f).
Schematic diagram and multiplexing/demultiplexing of two THz channels. a Schematic showing the measurement setup, with two different transmitters at 264.7 GHz and 322.5 GHz at fixed angular positions, and with the receiver mounted on a pivoting rail to vary the measurement angle. Power pattern and BER performance for both real-time links at 264.7 GHz and 322.5 GHz are measured after mux-demux with a data rate of 1.5 Gb/s. b View of the mux-demux in the experimental setup. c Power pattern measured when channel 1 (264.7 GHz) is on while channel 2 (322.5 GHz) is off (red curve), channel 2 is on while channel 1 is off (blue curve), and both channels are on (black curve). d BER performance for channel 1 only (red), channel 2 only (blue), channel 1 when channel 2 is on (light green), and channel 2 when channel 1 is on (dark green). Error-free operation can be achieved in both channels even with both signals on. (e, f) Two real-time videos (HD-TV broadcast) transmitted by the two THz links at 264.7 GHz and 322.5 GHz, each with a data rate of 1.5 Gb/s. The video signals are taken from two different TV broadcast channels and connected to the transmitters. In the monitor connected to the detector, the channel switches when the angular position of the receiver changes. This THz mux/demux can be observed in operation in the Supplementary Movie, showing excellent stability and reproducibility
Finally, we explore the efficacy of higher-order modulation schemes, which can provide increased data rates while using less spectral bandwidth. For this measurement, the photomixer THz source is driven by an optical signal modulated using quadrature phase shift keying (QPSK) at 12.5 Gbaud. In this case, two QPSK-modulated carrier signals, each carrying 25 Gb/s of data, are generated in the photomixer at frequencies of 280 and 330 GHz. These are simultaneously injected into a waveguide in a demux configuration (plate separation = 0.7 mm, slot width = 0.8 mm), and the two outputs were measured independently as a function of angle. To preserve the phase information contained in the QPSK signal, we detect the signals using a sub-harmonic mixer. The down-converted signals are analyzed to recover the constellation diagrams and BER performance for both channels. This result, shown in Fig. 4, demonstrates demux of two signals with an aggregate data rate of 50 Gb/s, with an acceptable BER of ~$10^{-5}$ or better for both channels. Although not error-free, the BER is still well below the threshold for forward error correction (typically $2\times10^{-3}$). The degraded BER relative to the results shown in Fig. 3 is probably due to the same effect of interference with scattered light mentioned above, which would be expected to have an increasing impact with increasing data rate.
Demux of two QPSK-modulated channels. BER vs. angle for two channels at 280 GHz and 330 GHz, both modulated at 12.5 Gbaud (corresponding to 25 Gb/s in each channel), for an aggregate throughput of 50 Gb/s. To preserve the QPSK phase information, signals were detected using a Schottky-based sub-harmonic mixer with the output analyzed on a real-time high-bandwidth oscilloscope. In both cases, the optimum BER is well below the threshold for forward error correction. The insets show the constellation diagrams measured for each channel. The vertical dashed lines show the predicted positions of the BER minima for the two channels, according to Eq. (3)
In summary, we have explored the performance of a leaky-wave device for multiplexing and demultiplexing in THz wireless links, using a realistic system configuration with modulated data. We obtain error-free data transmission through the demux device for all data rates up to 10 Gb/s, which demonstrates that neither insertion loss nor waveguide dispersion is a limiting factor in the operation of this mux/demux configuration. We characterize the power penalty when the wave propagates through the waveguide. This effective insertion loss arises mainly from the coupling efficiency between free space and the waveguide mode, and can therefore be further optimized by tailoring the waveguide input and/or the slot width.
Because of the strongly directional and diffraction-limited nature of THz signals, the measured bit error rate depends on the angular location of the detector, which changes with the data-modulation rate. This new phenomenon can be understood by applying a relatively simple filtering model. As any network operating above 100 GHz is almost certain to exhibit narrow diffraction-limited beams, this may be the limiting factor in achievable data rate, for a given single-point receiver aperture. On the other hand, in a MIMO configuration different antennas in an array may receive different subsets of the total spectral information in a signal. This presents an interesting challenge in the optimal detection and demodulation of demuxed signals, which could overcome the limitation imposed by a diffracting beam with spectral sidebands.
In addition, we demonstrate the effectiveness of this mux/demux approach by operating two independent wireless links at 264.7 GHz and 322.5 GHz for real-time mux and demux, with simultaneous error-free transmission of two video signals with ASK modulation, as well as the demux of two QPSK-modulated signals with an aggregate data rate of 50 Gb/s. Our results clearly suggest that two frequency channels are not the limit; additional channels could be added for increased aggregate throughput. In our earlier work [21], we modeled a six-channel configuration with equal 20 GHz-wide channels spaced over 150 GHz of spectrum. This model configuration seems to be feasible, although an experimental realization would require an array of sources that are probably not yet available in any one laboratory. The practical limit on channel number will likely be determined by the size and positioning of coupling optics. We note that this mux/demux configuration can also accommodate mobility, with continuous tuning of the carrier frequency as a user moves and the angle between the waveguide axis and the user changes. This would obviously require a continuously tunable or very broadband THz source, which may be feasible using SiGe BiCMOS process technology [40].
It is interesting to note the contrast with free-space optical (FSO) networking, another feasible approach to achieving wireless links with Tb/s throughput. FSO links can also employ frequency multiplexing, and like a THz link, the signals propagate as directional beams, not omnidirectional broadcasts [41]. However, the wavelength-dependent diffraction effects described here would not be expected to manifest themselves in FSO systems. The relevant parameter here, to determine the significance of diffraction effects, is the spacing between adjacent frequency channels $dv$, as a fraction of the average carrier frequency $v_0$. In a typical frequency-multiplexed FSO system [41], this fractional spacing $dv/v_0$ is quite small, on the order of $10^{-4}$. In contrast, for our system demonstration (Fig. 4), this parameter is almost three orders of magnitude larger. Thus, diffractive spreading of the carrier wave (and all modulation sidebands) is a significant phenomenon in THz systems, and is irrelevant in FSO links where all of the multiplexed signals co-propagate with parallel wave vectors. Beam diffraction can be both a challenge and an advantage; for example, beam misalignment due to, e.g., atmospheric turbulence is a huge challenge for long-distance FSO links with tightly collimated beams, but has essentially no impact on THz links [42]. Of course, THz links also afford the substantial advantage that coherent phase-sensitive detection is relatively straightforward, which enables MIMO architectures that would be exceedingly challenging to implement using visible or near-infrared light sources.
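For concreteness, the two fractional spacings can be compared directly (the FSO grid value below is a representative assumption, not a number taken from ref. [41]):

```python
thz = (330e9 - 280e9) / 305e9   # channel spacing / carrier for the Fig. 4 demo: ~0.16
fso = 50e9 / 193e12             # e.g. a 50 GHz WDM grid near 1550 nm: ~2.6e-4
print(thz, fso, thz / fso)      # roughly three orders of magnitude apart
```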
Finally, by noting the differences between simple power measurements and BER data, we emphasize the fact that the study of THz signal processing devices using modulated data in realistic configurations can reveal new information about their characteristics. In many cases including this one, this information cannot be readily obtained using conventional measurements with an unmodulated continuous-wave or pulsed time-domain source. Thus, measurements using data-modulated signals will be crucial for optimizing device performance in communication networks.
Measurement setup
The THz link performance measurement setup consists of two THz sources, one based on photomixing technologies (322.5 GHz) and the other on a frequency multiplier chain (264.7 GHz) with a tunable output in the 260–330 GHz frequency band. Detection is achieved using a zero-biased Schottky diode broadband intensity detector combined with RF amplifiers (amplification bandwidth of 12 GHz, which determines the overall system bandwidth) to drive the BER tester (N4903A J-BERT from Agilent Technologies, with the option A01/C13). The average output power of the two THz sources is tunable and adjusted to reach the best driving signal for the Schottky diode and RF detection for BER measurements. We verified that the two beams contain almost the same power, by comparing the rectified voltages at the Schottky output at the two optimal angles. For the THz signal intensity detection investigated in this study, we keep the THz power low enough to avoid saturating the detector, to optimize the signal-to-noise ratio of the detected signals. Lastly, we use absorbers to prevent detection of spurious signals that could leak out of the far end of the waveguide and scatter towards the receiver, or that could couple from the source directly to the receiver without propagating through the waveguide. We found that these absorbers were necessary in order to measure error-free performance, due to the effects of scattered radiation. Indeed, our efforts to block scattered signal at the waveguide output may require further improvement, as suggested by the data of Fig. 2. This emphasizes the extreme sensitivity of the BER to interference from scattered signals, which must be addressed with some care.
For the experiments employing QPSK modulation, an optical signal is modulated using a dual-nested Mach-Zehnder modulator before the photomixing process to generate the dual THz signal at 280 and 330 GHz. Two arbitrary waveform generators are used to create two baseband non-return-to-zero (NRZ) data signals for the in-phase and quadrature data flows. For detection, the dual-frequency THz signal is down-converted in a Schottky-based sub-harmonic mixer to below 40 GHz. The output is amplified and then detected by a wide-bandwidth oscilloscope (Tektronix DPO70000SX ATI, bandwidth of 70 GHz). The two QPSK signals corresponding to the two down-converted THz channels are analyzed to recover the two 25 Gb/s modulated data streams and the corresponding constellation diagrams.
Finite element method (FEM) simulation results were performed using COMSOL Multiphysics 5.2 with the RF module. Figure 1a shows a typical simulation result. For this figure, a perfect electric conductor (PEC) was used for the waveguide boundaries, with perfectly matched layers (PML) to absorb at the waveguide output. Scattering boundaries were used on the waveguide edges and on the upper air boundaries. A port boundary was used for the waveguide incidence, exciting the TE1 mode with a spot size of 1 mm. The waveguide width and length were 25 mm, with a 0.733 mm plate separation. The waveguide slot is 0.7 mm in width and 3 mm long and is located 4 mm from the front of the waveguide. The air above the waveguide is a 60° circle section extrusion with a radius of 22 mm and a width of 3 mm. Tetrahedral elements were used to mesh the geometry with a total of 4,831,496 domain elements. This simulation was solved at 312 GHz using the GMRES iterative solver.
The result shown in Fig. 2b was obtained using COMSOL Multiphysics 5.2 with the RF module in a transient finite difference time-domain simulation. PEC was used for the waveguide boundaries, with scattering boundary conditions to absorb in free space. A scattering boundary is used for the waveguide input. For exciting the parallel-plate waveguide TE1 mode, an amplitude-modulated signal was used as the input with a carrier frequency of 300 GHz, and with a modulation corresponding to a 10 Gbps data rate. The waveguide length was 33 mm, with a 0.8 mm plate separation. The waveguide slot is 3 mm long and is located 1 mm from the front of the waveguide. The air above the waveguide is a 60° circle section with a radius of 66 mm and a sector angle of 70°. Tetrahedral elements were used to mesh the geometry with a total of 465,048 domain elements. The simulation was solved to 300 ps with 0.1 ps time resolution.
All relevant data are available from the authors.
1. Cisco VNI Mobile Forecast. http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/mobile-white-paper-c11-520862.pdf. (2015–2020)
2. Federici, J. & Moeller, L. Review of terahertz and subterahertz wireless communications. J. Appl. Phys. 107, 111101 (2010).
3. Kleine-Ostmann, T. & Nagatsuma, T. A review on terahertz communications research. J. Infrared Millim. Terahertz Waves 32, 143–171 (2011).
4. Kürner, T. & Priebe, S. Towards THz communications—status in research, standardization, and regulation. J. Infrared Millim. Terahertz Waves 35, 53–62 (2014).
5. Nagatsuma, T., Ducournau, G. & Renaud, C. C. Advances in terahertz communications accelerated by photonics. Nat. Photonics 10, 371–379 (2016).
6. Kersting, R., Strasser, G. & Unterrainer, K. Terahertz phase modulator. Electron. Lett. 36, 1156–1158 (2000).
7. Chen, H.-T. et al. A metamaterial solid-state terahertz phase modulator. Nat. Photonics 3, 141–151 (2009).
8. Sensale-Rodriguez, B. et al. Broadband graphene terahertz modulators enabled by intraband transitions. Nat. Commun. 3, 780 (2012).
9. Karl, N. J. et al. An electrically driven terahertz metamaterial diffraction modulator with over 20 dB of dynamic range. Appl. Phys. Lett. 104, 091115 (2014).
10. Meijer, A. S. et al. An ultrawide-bandwidth single-sideband modulator for terahertz frequencies. Nat. Photonics 10, 740–744 (2016).
11. Pandey, S., Kumar, G. & Nahata, A. Slot waveguide-based splitters for broadband terahertz radiation. Opt. Express 18, 23466–23471 (2010).
12. Reichel, K., Mendis, R. & Mittleman, D. M. A broadband terahertz waveguide T-junction variable power splitter. Sci. Rep. 6, 28925 (2016).
13. Libon, I. H. et al. An optically controllable terahertz filter. Appl. Phys. Lett. 76, 2821–2823 (2000).
14. Chen, C.-Y., Pan, C.-L., Hsieh, C.-F., Lin, Y.-F. & Pan, R.-P. Liquid-crystal-based terahertz tunable Lyot filter. Appl. Phys. Lett. 88, 101107 (2006).
15. Chen, C.-Y., Hsieh, C.-F., Lin, Y.-F., Pan, R.-P. & Pan, C.-L. Magnetically tunable room-temperature 2π liquid crystal terahertz phase shifter. Opt. Express 12, 2625–2630 (2004).
16. Sengupta, K. & Hajimiri, A. A 0.28 THz power-generation and beam-steering array in CMOS based on distributed active radiators. IEEE J. Solid-State Circ. 47, 3013–3031 (2012).
17. Monnai, Y. et al. Terahertz beam steering and variable focusing using programmable diffraction gratings. Opt. Express 21, 2347–2354 (2013).
18. Hashemi, M. R. M., Yang, S.-H., Wang, T., Sepulveda, N. & Jarrahi, M. Electronically-controlled beam-steering through vanadium dioxide metasurfaces. Sci. Rep. 6, 35439 (2016).
19. Krumbholz, N. et al. Omnidirectional terahertz mirrors: a key element for future terahertz communication systems. Appl. Phys. Lett. 88, 202905 (2006).
20. Ibraheem, I. A., Krumbholz, N., Mittleman, D. M. & Koch, M. Low dispersive dielectric mirrors for future terahertz wireless communication systems. IEEE Microwave Wireless Comp. Lett. 18, 67–69 (2008).
21. Karl, N. J., McKinney, R. W., Monnai, Y., Mendis, R. & Mittleman, D. M. Frequency-division multiplexing in the terahertz range using a leaky-wave antenna. Nat. Photonics 9, 717–720 (2015).
22. Yata, M., Fujita, M. & Nagatsuma, T. Photonic crystal diplexers for terahertz-wave applications. Opt. Express 24, 7835–7849 (2016).
23. Jia, S. et al. THz photonic wireless links with 16-QAM modulation in the 375–450 GHz band. Opt. Express 24, 23777–23783 (2016).
24. Yang, Y., Shutler, A. & Grischkowsky, D. Measurement of the transmission of the atmosphere from 0.2 to 2 THz. Opt. Express 19, 8830–8838 (2011).
25. Proposal for IEEE802.15.3d—THz PHY. https://mentor.ieee.org/802.15/dcn/16/15-16-0595-03-003d-proposal-for-ieee802-15-3d-thz-phy.docx.
26. Haider, M. K. & Knightly, E. W. Mobility resilience and overhead constrained adaptation in directional 60 GHz WLANs: protocol design and system implementation. In Proceedings of the 17th International Symposium on Mobile Ad Hoc Networking and Computing 61–70 (ACM, 2016).
27. Mendis, R. & Mittleman, D. M. Comparison of the lowest-order transverse electric (TE1) and transverse magnetic (TEM) modes of the parallel-plate waveguide for terahertz pulse applications. Opt. Express 17, 14839–14850 (2009).
28. Mendis, R. & Mittleman, D. M. An investigation of the lowest-order transverse electric (TE1) mode of the parallel-plate waveguide for THz pulse propagation. J. Opt. Soc. Am. B 26, 6–13 (2009).
29. Balanis, C. A. Modern Antenna Handbook (Wiley, 2011).
30. Monnai, Y. et al. Terahertz beam focusing based on plasmonic waveguide scattering. Appl. Phys. Lett. 101, 015116 (2012).
31. McKinney, R. W., Monnai, Y., Mendis, R. & Mittleman, D. M. Focused terahertz waves generated by a phase velocity gradient in a parallel-plate waveguide. Opt. Express 23, 27947–27952 (2015).
32. Koenig, S. et al. Wireless sub-THz communication system with high data rate. Nat. Photonics 7, 977–981 (2013).
33. Nagatsuma, T. et al. Terahertz wireless communications based on photonics technologies. Opt. Express 21, 23736–23747 (2013).
34. Ducournau, G. et al. THz communications using photonics and electronic devices: the race to data-rate. J. Infrared Millim. Terahertz Waves 36, 198–220 (2015).
35. Kanno, A. et al. Coherent terahertz wireless signal transmission using advanced optical fiber communication technology. J. Infrared Millim. Terahertz Waves 36, 180–197 (2015).
36. Gerhard, M., Theuer, M. & Beigang, R. Coupling into tapered metal parallel plate waveguides using a focused terahertz beam. Appl. Phys. Lett. 101, 041109 (2012).
37. Mbonye, M., Mendis, R. & Mittleman, D. M. Inhibiting the TE1-mode diffraction losses in terahertz parallel plate waveguides using concave plates. Opt. Express 20, 27800–27809 (2012).
38. Akyildiz, I. F. & Jornet, J. M. Realizing ultra-massive MIMO (1024 × 1024) communication in the (0.06–10) terahertz band. Nano Commun. Netw. 8, 46–54 (2016).
39. Mbonye, M., Mendis, R. & Mittleman, D. M. Study of the impedance mismatch at the output end of a THz parallel-plate waveguide. Appl. Phys. Lett. 100, 111120 (2012).
40. Chen, P. Y., Assefzadeh, M. M. & Babakhani, A. A nonlinear Q-switching impedance technique for picosecond pulse radiation in silicon. IEEE Trans. Microwave Theory Tech. 64, 4685–4700 (2016).
41. Lin, C.-Y. et al. A 400 Gbps/100 m free-space optical link. Laser Phys. Lett. 14, 025206 (2017).
42. Ma, J., Moeller, L. & Federici, J. F. Experimental comparison of terahertz and infrared signaling in controlled atmospheric turbulence. J. Infrared Millim. Terahertz Waves 36, 130–143 (2015).
This work was supported by the US National Science Foundation, the US Army Research Office, and the W.M. Keck Foundation. The experimental setup of the THz communication was supported by the Agence Nationale de la Recherche (ANR) for funding the COM'TONIQ "Infra" 2013 program on THz communications, through the Grant ANR-13-INFR-0011-01 and the TERALINKS Chist-era project (Grant ANR-16-CHR2-0006-01), and the support from several French research programs and institutes—Lille University, IEMN institute (RF/MEMS Characterization Center, Nanofab and Telecom platform), IRCICA institute (USR CNRS 3380), the CNRS and by the French RENATECH network. This work was also supported in part by the French Programmes d'investissement d'avenir Equipex FLUX 0017, ExCELSiOR project and the Nord-Pas de Calais Regional council, and the FEDER through the CPER Photonics for Society, and the support of Tektronix (Klaus Engenhardt and Erwan Lecomte) considering the hardware used for QPSK measurements (AWG, optical modulation, and ATI 70 GHz Oscilloscope for wide bandwidth analysis). We also acknowledge valuable conversations with Prof. Larry Larson and Prof. Christopher Rose, both of Brown University.
School of Engineering, Brown University, 184 Hope Street, Providence, RI, 02912, USA
Jianjun Ma, Nicholas J. Karl & Daniel M. Mittleman
Institut d'Electronique de Microélectronique et de Nanotechnologie (IEMN), UMR CNRS 8520, Université de Lille 1, 59652, Villeneuve d'Ascq Cedex, France
Sara Bretin & Guillaume Ducournau
All of the authors conceived of the experiments and contributed to their design. J.M., S.B., and G.D. performed the measurements. N.J.K. performed the numerical simulations. All of the authors contributed to writing the manuscript.
Correspondence to Daniel M. Mittleman.
The authors declare no competing financial interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Description of Additional Supplementary Files
Supplementary Movie 1
Ma, J., Karl, N.J., Bretin, S. et al. Frequency-division multiplexer and demultiplexer for terahertz wireless links. Nat Commun 8, 729 (2017). https://doi.org/10.1038/s41467-017-00877-x
| CommonCrawl |
Xubo Tang1 &
Yanni Sun ORCID: orcid.org/0000-0003-1373-80231
There are many different types of microRNAs (miRNAs) and elucidating their functions is still under intensive research. A fundamental step in functional annotation of a new miRNA is to classify it into characterized miRNA families, such as those in Rfam and miRBase. With the accumulation of annotated miRNAs, it becomes possible to use deep learning-based models to classify different types of miRNAs. In this work, we investigate several key issues associated with successful application of deep learning models for miRNA classification. First, as secondary structure conservation is a prominent feature for noncoding RNAs including miRNAs, we examine whether secondary structure-based encoding improves classification accuracy. Second, as there are many more non-miRNA sequences than miRNAs, instead of assigning a negative class for all non-miRNA sequences, we test whether using softmax output can distinguish in-distribution and out-of-distribution samples. Finally, we investigate whether deep learning models can correctly classify sequences from small miRNA families.
We present our trained convolutional neural network (CNN) models for classifying miRNAs using different types of feature learning and encoding methods. In the first method, we explicitly encode the predicted secondary structure in a matrix. In the second method, we use only the primary sequence information and one-hot encoding matrix. In addition, in order to reject sequences that should not be classified into targeted miRNA families, we use a threshold derived from softmax layer to exclude out-of-distribution sequences, which is an important feature to make this model useful for real transcriptomic data. The comparison with the state-of-the-art ncRNA classification tools such as Infernal shows that our method can achieve comparable sensitivity and accuracy while being significantly faster.
Automatic feature learning in CNN can lead to better classification accuracy and sensitivity for miRNA classification and annotation. The trained models and also associated codes are freely available at https://github.com/HubertTang/DeepMir.
Non-coding RNAs (ncRNAs) refer to RNAs that do not encode proteins and function directly as RNAs. Genome annotation across many different genomes shows that ncRNAs are ubiquitous and have various important functions [1]. Besides commonly seen house-keeping ncRNAs such as transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs), many small ncRNAs play important roles in gene regulation. This work is mainly concerned with one type of small ncRNA, microRNA (miRNA), which acts as a key regulator of gene expression at the post-transcriptional level in different species [2–5]. In metazoans, mature miRNAs bind to the 3'-UTR of target mRNAs and can repress translation or promote mRNA degradation. As one miRNA can bind to multiple mRNA transcripts, a large number of protein-coding genes can be regulated by miRNAs [6, 7].
Because of miRNAs' important functions and their associations with complicated diseases in humans, there is intensive research on miRNA gene annotation, target search, function identification, etc. A fundamental step in miRNA research is the identification of miRNA genes in genomes. In the canonical miRNA biogenesis pathway, miRNAs are processed from longer transcripts named primary miRNAs (pri-miRNAs) [3]. The hairpin structures of pri-miRNAs are cleaved by a member of the RNase III family of enzymes, Drosha, producing precursor miRNAs (pre-miRNAs) in the nucleus [8, 9]. Pre-miRNAs are then exported to the cytoplasm, where Dicer cleaves off the loop region of the hairpin and further processes it into mature miRNA(s) of about 21 nucleotides [10, 11]. MiRNA gene annotation usually refers to the identification of pre-miRNAs and mature miRNAs.
Existing miRNA annotation tools can be generally divided into two groups depending on whether reference miRNA genes are used. Homology-based miRNA search identifies pre-miRNAs by conducting sequence and/or secondary structural similarity search against existing miRNA genes. Like other ncRNAs, pre-miRNAs preserve strong secondary structures [2]. Thus, homology search models [12, 13] that can explicitly encode both sequence and structural similarities usually achieve high sensitivity and accuracy in classifying query sequences into their originating homologous families. However, the high sensitivity comes at the price of high computational cost. For example, structural homology search models based on context-free grammars have cubic running time complexity [14]. Even with various heuristic filtration techniques, it can still be very time-consuming to conduct large-scale sequence classification using both sequence and structural alignments. Sequence similarity-based homology search tools such as BLAST [15] can also be applied to classify pre-miRNAs into their native families, but remote homologs with high structural yet low sequence conservation tend to be missed. Another group of tools [16–18] does not use reference sequences for pre-miRNA search. These de novo miRNA search methods mainly use features such as the hairpin structures of pre-miRNAs to identify putative pre-miRNAs in genomes. As a large number of regions in a genome can form hairpin structures, features from RNA-Seq [19] data such as expression levels and read mapping patterns are often used to reduce the false positive rate of miRNA search [20–23]. Both types of tools are useful for miRNA search and annotation. De novo methods have the advantage of identifying possibly novel miRNAs, but additional processing is needed to validate the findings.
Homology search-based miRNA search methods can take advantage of accumulating characterized miRNAs. For example, MiRBase [24] is an online database for miRNA sequences and annotation. The current release 22 contains 1983 miRNA families from 271 organisms, including 38,589 pre-miRNAs and 48,860 mature miRNAs. Rfam [25] is a comprehensive ncRNA family database with over 3,000 ncRNA families. The release 14.1 contains 529 pre-miRNA families and 215,122 precursor sequences.
These classified pre-miRNA sequences can be used as training data for deep learning based models. Depending on the choice of the training sequences and the design of the model architecture, deep learning-based miRNA search can be applied to distinguish miRNAs from other types of ncRNAs and also to conduct finer scale classification for different types of miRNAs. In this work, we explore whether using convolutional neural network (CNN) has advantages in distinguishing different types of miRNAs over powerful covariance models. In particular, we investigated how the input sequence encoding and training set construction affect the performance of miRNA characterization using CNN.
We choose CNN as the deep learning model because of its recent success in other sequence classification studies [26–29]. Empirical analyses have shown that CNNs can be applied to extract "motifs" from a set of homologous sequences. Motifs are essential features for distinguishing different groups of sequence families, including miRNAs. DeepBind [26] used a single convolution layer to capture the motifs of protein binding sites. DeepFam [29] applied a CNN to protein classification and found that the frequently activated convolution filters are consistent with known motifs. As different miRNA families tend to have different conserved sequences, the convolution layers in a CNN are expected to capture distinctive features for fine-grained classification. DanQ [30], proposed by Quang et al., added long short term memory (LSTM) layers above the convolution layers to capture the dependency between separated motifs extracted by the convolution layers. But as miRNAs are relatively short, the sequential features within a filter are sufficient for classification.
In this section, we summarize related work on homology search-based miRNA identification. Some homology search tools are designed for comprehensive ncRNA search and can divide miRNAs into different types. For example, there are hundreds of different miRNA families in Rfam. The associated tool, Infernal [12], conducts homology search by incorporating both sequence and secondary structure similarities in context-free grammar based models. Input sequences can be classified into different miRNA families for functional inference. For identifying miRNAs with high sequence similarity, generic homology search tools such as BLASTn [15] can be applied as well.
Most tools designed specifically for miRNA search aim to distinguish miRNAs from other types of sequences [31–33]. The most successful ones usually employ transcriptomic data to improve the identification accuracy. When the reference genomes are available, reads from small RNA-Seq data are mapped to the reference genomes to locate possible pre-miRNA genes. Features such as the conserved hairpin structure, read mapping patterns on the mature miRNA vs. other regions, expression levels across multiple samples are utilized to screen miRNAs in those candidate regions. From the perspective of machine learning, distinguishing miRNAs from other regions can be formulated as a binary classification problem. Pre-miRNAs have the positive label and all others have the negative label. Classification models such as SVM [34, 35], Random Forest [36], and CNN [37] have been applied for miRNA search. Being different from these binary classification tools, ours focuses on classifying input sequences into different miRNA families for more detailed function annotation. Unrelated sequences including other types of ncRNAs are rejected using a threshold in the softmax value.
CNN was also employed by Aoki and Sakakibara [38] for ncRNA classification. The authors took ncRNA pairwise alignments and associated features as input to a CNN and achieved 98% accuracy for six types of ncRNAs.
Advances in feature selection and classification models in machine learning have enhanced the sensitivity and precision of miRNA search. However, highly unbalanced training sets are still a challenge for various learning models [39]. When miRNA search is formulated as a binary classification problem, there are significantly more negative samples (non-miRNAs) than positive samples (miRNAs). In addition, there are many different types of non-miRNA sequences, and it is not clear how to compose the negative training data from such a large and highly diverse pool.
In this study, we formulate miRNA search as a multi-class classification problem. Instead of using non-miRNAs as training data, we reject those irrelevant sequences using methods from the open set problem [40]. In addition, we implemented two types of encoding methods, depending on whether we explicitly encode the secondary structure information.
The deep learning model we choose is the Convolutional Neural Network (CNN), which has demonstrated success in ncRNA classification [38]. We implemented and compared two different encoding methods for CNN-based miRNA classification. In the first encoding method, we explicitly encode secondary structure information into matrices and use these matrices as training/testing data. In the second method, we use a one-hot encoding matrix to represent the input sequences and do not take predicted secondary structures into account.
Explicitly encode secondary structures into matrices
We implemented three types of matrices to encode the secondary structure information of sequences: the probability matrix, the pair matrix, and the mixed matrix. The first two are inspired by the adjacency matrix for modeling secondary structures. The structural information is derived from the sequences using RNAfold, one module in the ViennaRNA [41] package. As the optimal structure predicted based on Minimum Free Energy (MFE) is often not accurate, we use RNAfold to output both the optimal and suboptimal structures. In addition, we also use the base pairing probabilities computed by the software.
The probability matrix simply contains the base pairing probabilities output by RNAfold. For a sequence $s$, the size of the matrix is $|s|\times|s|$. $P_{i,j}$ is the predicted base pairing probability $p$ between the $i$th and $j$th bases in $s$ if $p$ is above a given threshold $T$, and 0 otherwise, as defined below.
$$P_{i,j(probability\ matrix)} = \left\{\begin{array}{ll} p, &\text{if } p \geq \mathrm{T} \\ 0, &\text{if } p < \mathrm{T} \end{array}\right. $$
Unlike the probability matrix, the pair matrix distinguishes different base pairs, including the Watson-Crick pairs and the G-U pair. If the base pairing probability is above a given threshold, we record the base pair using an ID number that distinguishes different base pairs. Depending on whether we take the order of the bases in a pair into account, base pairs are converted into 6 or 3 different values. The conversion rules are summarized in the following equations. $X_{i,j}$ refers to the element at position $(i,j)$ in a pair matrix, $s_{i}$ refers to the $i$th base in sequence $s$, and $T$ is a given threshold.
$$ X_{i,j(pair\ matrix\ with\ order)} = \left\{\begin{array}{ll} 0, &\text{if } {p} < \mathrm{T} \\ 1/6, &\text{if}\ (s_{i} s_{j}=AU)\ \text{and}\ p \geq \mathrm{T} \\ 2/6, &\text{if}\ (s_{i} s_{j}=UA)\ \text{and}\ p \geq \mathrm{T} \\ 3/6, &\text{if}\ (s_{i} s_{j}=CG)\ \text{and}\ p \geq \mathrm{T} \\ 4/6, &\text{if}\ (s_{i} s_{j}=GC)\ \text{and}\ p \geq \mathrm{T} \\ 5/6, &\text{if}\ (s_{i} s_{j}=GU)\ \text{and}\ p \geq \mathrm{T} \\ 6/6, &\text{if}\ (s_{i} s_{j}=UG)\ \text{and}\ p \geq \mathrm{T} \end{array}\right. $$
$$ X_{i,j(pair\ matrix\ without\ order)} = \left\{\begin{array}{ll} 0, &\text{if } p < \mathrm{T} \\ 1/3, &\text{if}\ (s_{i} s_{j}=AU\ \text{or}\ s_{i} s_{j}=UA)\ \text{and}\ p \geq \mathrm{T} \\ 2/3, &\text{if}\ (s_{i} s_{j}=CG\ \text{or}\ s_{i} s_{j}=GC)\ \text{and}\ p \geq \mathrm{T} \\ 3/3, &\text{if}\ (s_{i} s_{j}=GU\ \text{or}\ s_{i} s_{j}=UG)\ \text{and}\ p \geq \mathrm{T} \end{array}\right. $$
Combining these two features, the original 2D matrix becomes a 3D matrix with two layers, which we call the mixed matrix, as shown in Fig. 1c. One layer of size |s|×|s| is the probability matrix and the other layer of the same size is the pair matrix. Essentially, this matrix integrates the different base pair types with the predicted pairing intensities. A sketch of how the three matrices can be assembled is given below.
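To make the encoding concrete, the following sketch assembles the three matrices from a list of (i, j, p) base-pairing probabilities, assuming these have already been parsed from RNAfold's dot-plot output; the parsing step, the symmetric filling of the matrices, and the 0-based indexing are our assumptions rather than the authors' exact code.

import numpy as np

# ID values for the ordered pair matrix, following the conversion rules above
PAIR_ID = {('A', 'U'): 1/6, ('U', 'A'): 2/6, ('C', 'G'): 3/6,
           ('G', 'C'): 4/6, ('G', 'U'): 5/6, ('U', 'G'): 6/6}

def encode_matrices(seq, pair_probs, threshold=0.0001):
    n = len(seq)
    prob = np.zeros((n, n), dtype=np.float32)   # probability matrix
    pair = np.zeros((n, n), dtype=np.float32)   # ordered pair matrix
    for i, j, p in pair_probs:                  # bases i and j pair with probability p
        if p >= threshold:
            prob[i, j] = prob[j, i] = p
            pair[i, j] = pair[j, i] = PAIR_ID.get((seq[i], seq[j]), 0.0)
    mixed = np.stack([prob, pair], axis=-1)     # two-layer mixed matrix
    return prob, pair, mixed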
Examples of different encoding matrices. (a) Probability matrix; (b) Pair matrix; (c) Mixed matrix; (d) One-hot encoding matrix
The pair and mixed matrices can be conveniently visualized as images. We present the corresponding images for one miRNA and one tRNA in Fig. 2. The threshold T is 0.0001 in all the matrices. It is not hard to observe the stacked base pairs of the hairpin and cloverleaf structures of the miRNA and tRNA, respectively. The secondary structures are less obvious in the pair matrix because its cell values are decided by the base pair types rather than the base pairing probabilities; given a small T, cells with low pairing probabilities can still get a relatively large value because of the conversion rules.
The probability, pair and mixed matrix images of miRNA and tRNA. (a), (b), (c) correspond to probability matrix, ordered pair matrix, mixed matrix of a miRNA sequence respectively. (d), (e), (f) correspond to probability matrix, ordered pair matrix, mixed matrix of a tRNA sequence respectively. For the mixed matrices, the color green is from the layer of probability matrix while blue represents the layer of the pair matrix
CNN architecture for the matrices containing base pairing information
The CNN model contains two convolutional layers, each followed by a max pooling layer, and three fully connected layers. Figure 3 sketches this architecture. To prevent overfitting, dropout is also applied. During the training of the CNN model, several hyperparameters were tuned within the ranges shown in Table 1, and the values with the best performance were selected. Finally, the hyperparameters were set as follows: number of convolution layers = 2, kernel size for each convolution layer = 2, numbers of kernels in the two convolution layers = 64 and 128, pooling method = max pooling, numbers of units in the two fully connected layers = 256 and 128, learning algorithm = Adam, dropout rate = 0.5, learning rate = 0.001, batch size = 32. The CNN model was implemented in Keras [42].
CNN structure of the probability/pair/mixed matrix
Table 1 The list of the tuned hyperparameters
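As an illustration, a minimal Keras sketch consistent with these hyperparameters is given below; the exact placement of the pooling and dropout layers, the activation functions, and the use of tensorflow.keras are our assumptions and may differ from the authors' implementation.

from tensorflow.keras import layers, models

def build_matrix_cnn(seq_len=200, channels=1, n_classes=30):
    # Two convolution layers (64 and 128 kernels of size 2), each followed
    # by max pooling, then three fully connected layers (256, 128, softmax)
    model = models.Sequential([
        layers.Conv2D(64, kernel_size=2, activation='relu',
                      input_shape=(seq_len, seq_len, channels)),
        layers.MaxPooling2D(pool_size=2),
        layers.Conv2D(128, kernel_size=2, activation='relu'),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(128, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

With the mixed matrix the input would have channels = 2; with the probability or pair matrix alone, channels = 1.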
Encoding the sequence using one-hot matrix
A one-hot encoding matrix has been successfully used to encode genomic sequences for deep learning models. Essentially, the sequence is converted to a |s|×4 one-hot encoding matrix, where |s| is the length of an input sequence and 4 is the number of different bases. Let the matrix be $M$, where $M_{i,j}$ is 1 if the $i$th base in the input sequence is the $j$th character in the alphabet, and $M_{i,k}$ is 0 for every other column $k \neq j$. An example one-hot encoding matrix is given in Fig. 1d.
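A minimal sketch of this encoding is shown below, including the zero padding to a fixed length used in the pre-processing step described later; the base ordering (A, C, G, U) and the handling of characters outside the alphabet are assumptions.

import numpy as np

def one_hot_encode(seq, max_len=200, alphabet="ACGU"):
    index = {base: i for i, base in enumerate(alphabet)}
    matrix = np.zeros((max_len, len(alphabet)), dtype=np.float32)
    for i, base in enumerate(seq[:max_len]):
        if base in index:              # characters outside the alphabet stay all-zero
            matrix[i, index[base]] = 1.0
    return matrix                      # shorter sequences are zero-padded at the end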
The CNN architecture for one-hot encoding matrices
Inspired by Yoon Kim's work on sentence classification [43], a similar model is used in this work. Several convolution layers with different kernel sizes, each followed by a global max pooling layer, are connected directly to the input layer. The outputs of all pooling layers are concatenated together and then fed into two fully connected layers. Dropout is also employed to overcome overfitting. Tuned parameters are shown in Table 1. Finally, the hyperparameters are set as follows: the number of convolution layers = 1, the sizes of the convolution filters = [2, 4, 6, 8, 10, 12, 14, 16], the number of kernels per filter size = 512, the number of units in the first fully connected layer = 1024, dropout rate = 0.7, learning rate = 0.001, learning algorithm = Adam, batch size = 64. Figure 4 shows the architecture.
The CNN architecture of the one-hot encoding matrix encoding method
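A minimal Keras sketch of this architecture, written with the functional API to express the parallel filter sizes, is given below; the activation functions and the number of classes are assumptions (the text fixes only the listed hyperparameters).

from tensorflow.keras import layers, models

def build_onehot_cnn(seq_len=200, n_classes=165,
                     filter_sizes=(2, 4, 6, 8, 10, 12, 14, 16)):
    inputs = layers.Input(shape=(seq_len, 4))
    pooled = []
    for k in filter_sizes:
        # 512 kernels per filter size, each followed by global max pooling
        conv = layers.Conv1D(512, kernel_size=k, activation='relu')(inputs)
        pooled.append(layers.GlobalMaxPooling1D()(conv))
    x = layers.Concatenate()(pooled)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dropout(0.7)(x)
    outputs = layers.Dense(n_classes, activation='softmax')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model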
Excluding other ncRNA sequences using softmax probability threshold
As next-generation sequencing data such as small RNA-Seq data have become the major source of new miRNA discovery, a useful miRNA search tool should be able to distinguish miRNAs from other types of ncRNAs, which usually co-exist with miRNAs in RNA-Seq data. Identifying miRNAs in RNA-Seq data is an open set problem, and thus any useful system must reject unknown/unseen classes in the test set [40]. Existing binary classification tools often treat all non-miRNA sequences as negative and need to choose non-miRNAs as the negative training samples. This often creates a highly unbalanced training set because there are significantly more non-miRNAs than miRNAs. In addition, it is not clear how to sample negative training sequences from many different types of ncRNAs. Our CNN model does not use an extra label for other ncRNAs. Instead, we reject out-of-distribution samples using the probability output of the softmax layer [44].
Previous studies have shown that the softmax probabilities of out-of-distribution samples are smaller than those of targeted samples [44]. Intuitively, an out-of-distribution query tends to produce a softmax probability vector with similar (small) values, while an in-distribution query often yields a large softmax probability for one class. Thus, we will use a carefully chosen softmax probability threshold to reject out-of-distribution samples, which in our case can be other types of ncRNAs in small RNA-Seq data. In addition, not all miRNA families are used in our training data; any unseen miRNA families are also out-of-distribution samples, and the softmax probability threshold should reject them as well. We will use ROC curves to empirically choose a threshold.
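In practice the rejection rule is a thin layer on top of the trained model, as sketched below; the default threshold of 0.977 is the value chosen later in the Results, and the -1 label for rejected sequences is our convention.

import numpy as np

def classify_or_reject(model, batch, threshold=0.977):
    probs = model.predict(batch)            # shape: (n_samples, n_classes)
    best = probs.max(axis=1)                # maximum softmax probability
    labels = probs.argmax(axis=1)
    labels[best < threshold] = -1           # reject out-of-distribution samples
    return labels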
We will first compare the classification accuracy of the two types of encoding methods. In particular, we will examine whether explicitly encoding the structural information in input matrices can improve the performance of miRNA classification. As real data such as small RNA-Seq data contain different types of transcripts, we will examine whether the softmax output can be used to reject non-miRNA sequences. Then, we will compare the performance of the CNN-based miRNA classification with other ncRNA classification tools.
Experimental data and pre-processing
For most of our training process, we use pre-miRNA families from Rfam as the training and testing data because we would like to compare our method with Infernal [12], which can conveniently use trained covariance models from Rfam. The current release of Rfam contains 529 pre-miRNA families and 215,122 precursor sequences. Another popular miRNA database is miRBase [24], which currently contains 1983 miRNA families from 271 organisms, including 38,589 pre-miRNAs and 48,860 mature miRNAs. In the experiment where we only use the mature miRNAs as the training data, we use miRBase because miRBase provides easy access to collect all the mature miRNAs.
We noticed that some of the pre-miRNA families in Rfam contain repeated sequences. Thus, in our pre-processing step, we removed all the redundant sequences from the 529 pre-miRNA families in Rfam. As a result, 17.6% of the sequences were removed and 177,160 sequences were kept for downstream analysis. Each family contained a different number of sequences (from 1 to 95,247) of varying lengths. The distribution of family sizes is shown in Fig. 5.
Rfam characteristics. Percentage of families by family size
To train in mini-batches, a fixed input matrix size must be set. Although there are a few pre-miRNA families with particularly long sequences, 96.88% of miRNAs in Rfam are shorter than 200 nt. Thus, we only keep sequences of length at most 200 nt. Although commonly seen pre-miRNAs are about 70 nt, we did not exclude the long ones, such as those occurring in plant genomes, before pre-processing. The input matrix dimension is fixed at 200. All shorter sequences were padded to 200 nt by appending zeros at the end. These padded zeros produce zeros when scanned by a convolution filter and thus do not affect the downstream layers after max pooling.
Classification performance of probability and pair matrix
Following our definition of the probability and pair matrix, a threshold T is needed to decide the values of these matrices. In this experiment, we evaluate the change of T on the classification performance. At the same time, we also compare the performance of ordered and unordered pair matrices. These experiments were conducted using 30 randomly selected pre-miRNA families with at least 100 member sequences.
Considering that the probabilities may not be linearly distributed from 0 to 1, we sorted all the pairing probabilities (greater than 0.0001) of each miRNA sequence in Rfam and then used the values of different percentiles as the thresholds. The 0th, 10th, 20th, 30th and 40th percentile are selected; the corresponding values are 0.0001, 0.00487, 0.00772, 0.01307, and 0.02411.
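The percentile-derived thresholds can be reproduced along these lines; collecting the pairing probabilities over all Rfam sequences is assumed to have been done already, and is omitted here.

import numpy as np

def percentile_thresholds(all_probs, percentiles=(0, 10, 20, 30, 40)):
    probs = np.asarray(all_probs)
    probs = probs[probs > 0.0001]           # keep probabilities above 0.0001
    return [float(np.percentile(probs, q)) for q in percentiles]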
For the 30 pre-miRNA families, 100 sequences were randomly selected from all member sequences. Then we used 5-fold cross validation so that there were 80 training sequences vs. 20 test sequences. CNN models with 30 classes are trained using different types of encoding methods. As there are 10 different types of matrices using 5 thresholds combined with two types of base pairs (ordered vs. unordered), 10 CNNs are trained. Note that the test sequences are encoded using the same method as the corresponding training data. We first compared the classification accuracy of using different thresholds with boxplot in Fig. 6a. For each threshold, there are 10 classification accuracy values for 5-fold cross validation results of both ordered and unordered cases. The comparison shows that allowing small base pairing probabilities yields higher average accuracy but also a slightly larger deviation. Overall, because of the higher average accuracy, we set the default threshold T as 0.0001 in all the following experiments. Figure 6b compares the classification accuracy of ordered vs. unordered matrices. The results show that they have very similar accuracy, with median accuracy around 0.92. By default, we use ordered base pairs in the pair matrix.
Performance comparison on classification accuracy using different secondary structure encoding methods. a 5 different thresholds (T) of base pairing probabilities. b ordered vs. unordered base pairs
Performance on pre-miRNAs classification
One-hot encoding matrix has been widely adopted for converting genomic data as inputs to deep learning models. Although it does not explicitly incorporate any structure information from the sequences, it has successful applications in protein homology search [29]. Thus, we will conduct a comprehensive experiment to compare the performance of one-hot encoding matrix and probability/pair matrix using pre-miRNA families from Rfam.
As different pre-miRNA families have different numbers of sequences, which can affect the performance of classification, we built 4 different datasets based on family size. Each dataset has a different number of "classes" or "labels". Details about the four groups can be found in Table 2. Taking the Rfam-300 dataset as an example, there are 47 families in this dataset and each family contains 300 sequences (250 training sequences and 50 testing sequences). The model trained on this dataset classifies queries into one of the 47 families (or classes). We compare the classification performance of CNNs on the four groups of training data and examine how the training set size affects the accuracy.
Table 2 Four groups of pre-miRNA families with different training set sizes
In order to quantify the prediction performance, we use two metrics: accuracy and F-score \(\left(F\text{-score} = \frac{2 \times Precision \times Recall}{Precision + Recall}\right)\). Classification accuracy quantifies the percentage of correct predictions among all test sequences. For each family, we also computed the recall \(\left(Recall=\frac{TP}{TP+FN}\right)\) and precision \(\left(Precision=\frac{TP}{TP+FP}\right)\). Here, TP, TN, FP, and FN correspond to the numbers of true positives, true negatives, false positives, and false negatives, respectively. The average F-score over all families for one trained CNN is reported in Table 3. We evaluated the performance by the average accuracy of 5 independent experiments, each of which was measured with randomly selected testing sequences.
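For reference, these metrics can be computed as follows; the use of scikit-learn, and macro-averaging the F-score over families, are assumptions consistent with the reported per-family averages, not necessarily the authors' tooling.

from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    # average='macro' gives the unweighted mean F-score over all families
    f = f1_score(y_true, y_pred, average='macro')
    return acc, f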
Table 3 Prediction accuracy(%) and F-score(%) of CNNs trained on families of different sizes
The results show that using the one-hot encoding matrix led to much better performance than the other methods even though it does not integrate base pairing information. In addition, it was less susceptible to the reduction of training data size. On the other hand, matrices focusing on base pairs need bigger training data to achieve better classification accuracy. These comparisons indicate that using one-hot encoding matrices is able to distinguish different types of miRNA families. One possible reason behind the inferior performance of using base pairing information is that all these pre-miRNA families have similar secondary structures, so it is more difficult to conduct finer scale classification within the big family of miRNAs. As for why the one-hot matrix is less vulnerable to the decreased size of the training dataset, one possible reason is that the one-hot model has far fewer trainable parameters. For example, for the same input sequence of length 200 nt, the one-hot model has 4,485,255 trainable parameters while the pair matrix model has 78,748,399. Fewer parameters can help the model maintain high accuracy even if the training set is relatively small.
However, our additional experiments (next section) showed that these matrices cannot distinguish miRNAs from C/D box snoRNAs with high accuracy either, probably because of the similarity in the secondary structures, indicating that it is more difficult to train effective CNNs for matrices encoding base pairs. Larger training data are needed to improve the classification accuracy, which may not be always available for some miRNA families.
Use softmax probability threshold to reject other types of ncRNA sequences
Transcriptomic data such as small RNA-seq data can contain reads from other types of ncRNAs or from miRNA families different from those in the training data. In this experiment, we show that an appropriate softmax probability value can be chosen as the threshold to distinguish targeted miRNAs from out-of-distribution samples.
As an example, we demonstrate the softmax output using the CNN model trained on the Rfam-60 dataset (including 165 miRNA families). The positive set includes 155,392 test sequences from the Rfam-60 dataset, while the negative (i.e. out-of-distribution) set contains all sequences from untrained miRNA families and randomly selected sequences from all other types of ncRNA in Rfam. There are 186,112 sequences in the out-of-distribution set. For each test sequence, the softmax layer outputs a vector of normalized probabilities for all 165 classes. The test sequence is assigned to the class with the highest probability in the vector. We set a threshold on this value so that a test sequence with maximum softmax output below the threshold is rejected. We empirically determined the threshold by analyzing the distribution of the maximum softmax values for each input sequence.
We first plot the distribution of softmax values of the targeted miRNAs and other ncRNAs. Then we show the receiver operating characteristic (ROC) curve, which is constructed using the false positive rate \(\left(FPR=\frac{FP}{FP+TN}\right)\) and true positive rate \(\left(TPR=\frac{TP}{TP+FN}\right)\) computed under different thresholds. Figure 7a and c show the distributions of the softmax probabilities for targeted miRNAs and negative samples. The comparison of (a) and (c) shows that using the one-hot encoding matrix leads to a smaller overlap between the two distributions, which is consistent with the comparison of the ROC curves in Fig. 7b and d. Most softmax values of the targeted miRNAs are greater than 0.9, and the area under the ROC curve for the one-hot encoding matrix is very close to 1. By using the one-hot encoding matrix, we can find an appropriate probability threshold to reject a majority of the negative samples (high precision) while still keeping targeted pre-miRNAs (high sensitivity). According to Fig. 7b, we choose the threshold leading to a large F-score. The default softmax value threshold for our trained CNNs is 0.977, with an associated FPR of 0.05. Any test sequence with maximum softmax probability below 0.977 will be rejected.
Choosing appropriate softmax probability threshold to reject out-of-distribution samples.
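One way to derive such a threshold from the ROC analysis is sketched below; targeting the FPR of 0.05 quoted above is one plausible reading of how the default value of 0.977 was chosen, not a step stated explicitly in the text.

import numpy as np
from sklearn.metrics import roc_curve

def pick_threshold(max_softmax, is_target, target_fpr=0.05):
    # is_target: 1 for targeted miRNAs, 0 for out-of-distribution samples
    # max_softmax: maximum softmax value of each test sequence
    fpr, tpr, thresholds = roc_curve(is_target, max_softmax)
    idx = np.argmin(np.abs(fpr - target_fpr))
    return thresholds[idx]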
We hypothesized that the pair and probability matrices cannot distinguish different pre-miRNA families because of their similar secondary structures; these matrices should, however, be able to distinguish types of ncRNAs with different secondary structures. Thus, we constructed a smaller negative data set containing tRNAs, C/D box snoRNAs, and other unseen miRNA families, including 20,000, 60,000 and 6,500 sequences, respectively. The secondary structure of tRNA is a cloverleaf, which is very different from the miRNA hairpin structure, while the C/D box stem-box structure is somewhat similar to the miRNA's. According to Fig. 8b, the probability/pair matrix can distinguish tRNAs from miRNAs well, but still has difficulty rejecting C/D box snoRNAs. Considering that different types of ncRNAs might share globally or locally similar structures, the pair and probability matrices have limited utility in ncRNA classification.
Distribution of softmax values for unseen miRNAs, tRNAs, and C/D box snoRNAs. In both plots, the bin width is 0.01. (a) uses the one-hot encoding matrix model; (b) uses the pair matrix model
Directly classifying mature miRNAs
As many small RNA-seq datasets contain only mature miRNAs, we evaluated whether deep learning can be used to directly classify mature miRNAs. As mature miRNAs in the same family can be well conserved because of their binding preference, using either mature miRNAs or pre-miRNAs as the training data may lead to similar classification accuracy for mature miRNAs. We again conducted the comparison using the Rfam-60 set, where 50 sequences are used for training and 10 for testing. As we cannot conveniently obtain the mature miRNA annotation from the pre-miRNA families in Rfam, we downloaded the mature miRNAs from miRBase. Thus, two CNN models were trained on pre-miRNAs and mature miRNAs, respectively. All the test sequences are mature miRNAs. For all the sequences, only the one-hot matrix is used because of its superior performance. The mature miRNA classification accuracy using pre-miRNAs and mature miRNAs as training data is 65.26% and 92.43%, respectively. Thus, when there are no reference genomes and read mapping cannot be used to identify possible pre-miRNAs, mature miRNAs should be used as training data for the CNNs.
Performance on the input sequences with extra bases
Determining the exact boundary of pre-miRNAs in genomes is still challenging. For example, reads from small RNA-seq data can be mapped to reference genomes to identify possible mature miRNAs. Those regions, plus other possibly mapped miRNA regions, are then extended to identify candidate pre-miRNAs, and the extension can go beyond the true pre-miRNA boundaries. Thus, we investigate whether having extra bases affects the classification accuracy. We still use Rfam-60 as our dataset, but 5, 10, 15 or 20 random nucleotides are added around each test sequence. The results can be found in Table 4.
Table 4 Classification performance on the test sequences with added bases
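The perturbed test sequences can be generated as sketched below; whether the stated number of random nucleotides is added per side or in total is not specified in the text, so adding n bases on each side is an assumption.

import random

def add_flanks(seq, n, alphabet="ACGU"):
    left = ''.join(random.choice(alphabet) for _ in range(n))
    right = ''.join(random.choice(alphabet) for _ in range(n))
    return left + seq + right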
Comparison with other tools
In addition to the classification accuracy, the running time is also an important consideration for practical applications, especially when identifying miRNAs from next-generation sequencing data. Here, we compared the classification accuracy and running time of our trained CNNs with Infernal and miRClassify [45]. We also evaluated the performance of each method as the number of miRNA families (i.e. classes) increased. Four testing datasets were constructed by randomly selecting 1000 sequences from Rfam-300, Rfam-120, Rfam-60, and Rfam-30, respectively. Note that all these testing sequences are chosen from the set excluding training sequences and thus have no overlap with the training data for our CNN models. This experiment was repeated five times and the average performance is reported in Table 5. The variance of each experiment for the one-hot matrix method and Infernal is very small (less than 5e-3); for miRClassify, the variance is slightly larger, with the biggest being 0.02. In order to run Infernal, we directly downloaded the covariance models associated with the corresponding dataset from Rfam. Thus, it is possible that some of these test sequences were used for training the covariance models. MiRClassify uses a hierarchical random forest model to classify miRNAs into different families. The models of miRClassify were downloaded from their website and were constructed from miRBase version 16.0.
Table 5 Comparison with Infernal and miRClassify
To ensure a fair comparison of the running time, we used a single core for all three tools because miRClassify is single-threaded. For Infernal, we set the option '--cpu' to 1. All other Infernal options were left at their defaults. The command is:
>cmscan --cpu 1 rfam_60.cm rfam_60.fa
Here, 'rfam_60.cm' contains all the required covariance models and 'rfam_60.fa' is the test sequence set. For each query sequence, Infernal might generate several hits; in that case, we only kept the one with the lowest E-value. The CNN model was implemented in Keras, so we added extra commands to ensure that only one core was used. In addition, the mini-batch size used in the CNN was 64. Table 5 summarizes the results.
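Keeping only the lowest-E-value hit per query can be done with a simple pass over the parsed hits, as sketched below; parsing Infernal's tabular output itself is omitted, as its exact format is not reproduced in the text.

def best_hits(hits):
    # hits: iterable of (query_id, family, e_value) tuples
    best = {}
    for query, family, evalue in hits:
        if query not in best or evalue < best[query][1]:
            best[query] = (family, evalue)
    return best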
The results in Table 5 show that despite the possible overlaps between training and testing data for Infernal and miRClassify, our trained CNN models still have high accuracy with minimal running time. We then conducted the $\chi^2$-test between the 20 accuracy values output by the three methods. The p-value between the one-hot matrix method and Infernal was very close to 1 (0.999), indicating that their accuracies are comparable. On the other hand, the p-value between ours and miRClassify is 4.59e-275. The running time comparison also shows that Infernal took more time as the number of families increased, while the other two methods were not affected by the number of families.
Frequently activated filters represent part of mature miRNAs
To interpret why the one-hot encoding method performed well, we visualized some motifs extracted by our CNN model. Employing the method used in DeepFam [29], we utilized the most frequently activated filters in the trained Rfam-300 model to extract motifs from the RF00247 training sequences. We compared the motifs obtained by the CNN with the motifs produced by MEME on the training sequences, as shown in Fig. 9. Because the convolution layer uses filters of different sizes, this model can identify motifs of various lengths. We found that the identified motifs represent part of the mature miRNA. We tested other families and made the same observation. This is consistent with the findings of DeepFam.
Visualizing and comparing the motifs extracted by MEME [46] and CNN model in RF00247. (a) Motifs extracted by MEME and CNN and the corresponding convolution filter of length 8. (b) Motifs extracted by MEME and CNN and the corresponding convolution filter of length 16. (c) The secondary structure of RF00247 with highlighted mature miRNA
We evaluated and compared the classification performance using different encoding methods and CNN architectures. Based on the experimental results, the simple one-hot matrix performed much better than the other encoding methods that explicitly incorporate predicted secondary structures. This could be caused by similar secondary structures among different types of pre-miRNA families. As shown by Do et al. [37], it is possible that encoding secondary structures will benefit distinguishing miRNAs from other ncRNAs in the binary classification problem.
In practice, input data such as small RNA-Seq can contain sequences from other types of ncRNAs, so a useful miRNA classifier must be able to reject out-of-distribution samples. Our experiments demonstrated that using the softmax output can achieve a good trade-off between sensitivity and precision in distinguishing targeted miRNAs from other sequences. Thus, the designed classification models are practically useful for finer scale miRNA analysis. By comparing our tool with the general ncRNA classification tool Infernal and another machine learning based miRNA classification tool, we conclude that ours achieves high sensitivity and accuracy with significantly reduced running time.
In this work, we developed CNN-based classification models for identifying different types of miRNAs. By using the output of the softmax probability as a threshold, our model can reject other types of ncRNAs and out-of-distribution miRNAs with high precision. Comparing with two existing methods, our one-hot encoding method takes much less time and still has high accuracy.
Although this work only concerns miRNAs, the trained CNNs can be extended to classify other types of ncRNAs. The method holds promise to achieve comparable performance with significant speedups compared to Infernal. It is our future work to extend and optimize our model for other types of ncRNAs.
The source code and datasets used during the current study are available at https://github.com/HubertTang/DeepMir
CNN:
Convolution neural network
FPR:
False positive rate
LSTM:
Long short term memory
MFE:
Minimum free energy
pre-miRNA:
precursor microRNA
pri-miRNA:
primary microRNA
rRNA:
ribosomal RNA
ROC:
Receiver Operating characteristic
tRNA:
transfer RNA
TPR:
True positive rate
Cech TR, Steitz JA. The noncoding RNA revolution—trashing old rules to forge new ones. Cell. 2014; 157(1):77–94.
Kim VN, Nam J-W. Genomics of microRNA,. Trends Genet. 2006; 22(3):165–73.
Krol J, Loedige I, Filipowicz W. The widespread regulation of microRNA biogenesis, function and decay,. Nat Rev Genet. 2010; 11(9):597–610.
Berezikov E. Evolution of microRNA diversity and regulation in animals,. Nat Rev Genet. 2011; 12(12):846–60.
Bartel DP. MicroRNAs: genomics, biogenesis, mechanism, and function. Cell. 2004; 116(2):281–97.
Mallanna SK, Rizzino A. Emerging roles of microRNAs in the control of embryonic stem cells and the generation of induced pluripotent stem cells. Dev Biol. 2010; 344(1):16–25.
Saini HK, Griffiths-Jones S, Enright AJ. Genomic analysis of human microRNA transcripts. Proc Natl Acad Sci U S A. 2007; 104(45):17719–24.
Ruby JG, Jan CH, Bartel DP. Intronic microRNA precursors that bypass Drosha processing. Nature. 2007; 448(7149):83–6.
Lee Y, Ahn C, Han J, Choi H, Kim J, Yim J, Lee J, Provost P, Rådmark O, Kim S, et al.The nuclear RNase III Drosha initiates microRNA processing. Nature. 2003; 425(6956):415–9.
Kuehbacher A, Urbich C, Zeiher AM, Dimmeler S. Role of Dicer and Drosha for endothelial microRNA expression and angiogenesis. Circ Res. 2007; 101(1):59–68.
Xie M, Li M, Vilborg A, Lee N, Shu M-D, Yartseva V, Šestan N, Steitz Ja. Mammalian 5'-capped microRNA precursors that generate a single microRNA. Cell. 2013; 155(7):1568–80.
Nawrocki EP, Eddy SR. Infernal 1.1: 100-fold faster RNA homology searches. Bioinformatics. 2013; 29(22):2933–5.
Artzi S, Kiezun A, Shomron N. miRNAminer: a tool for homologous microRNA gene search. BMC Bioinformatics. 2008; 9(1):39.
Durbin R, Eddy SR, Krogh A, Mitchison G. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge: Cambridge University Press; 1998.
Johnson M, Zaretskaya I, Raytselis Y, Merezhuk Y, McGinnis S, Madden TL. NCBI BLAST: a better web interface. Nucleic Acids Res. 2008; 36(suppl_2):5–9.
Vitsios DM, Kentepozidou E, Quintais L, Benito-Gutiérrez E, van Dongen S, Davis MP, Enright AJ. Mirnovo: genome-free prediction of microRNAs from small RNA sequencing data and single-cells using decision forests. Nucleic Acids Res. 2017; 45(21):177.
Kadri S, Hinman V, Benos PV. HHMMiR: efficient de novo prediction of microRNAs using hierarchical hidden Markov models. BMC Bioinformatics. 2009; 10(1):35.
Teune J-H, Steger G. NOVOMIR: de novo prediction of microRNA-coding regions in a single plant-genome. J Nucleic Acids. 2010; 2010:10.
Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009; 10(1):57–63.
Lei J, Sun Y. miR-PREFeR: an accurate, fast and easy-to-use plant miRNA prediction tool using small RNA-Seq data. Bioinformatics. 2014; 30(19):2837–9.
Wang W-C, Lin F-M, Chang W-C, Lin K-Y, Huang H-D, Lin N-S. miRExpress: analyzing high-throughput sequencing data for profiling microRNA expression. BMC Bioinformatics. 2009; 10(1):328.
Yang X, Li L. miRDeep-P: a computational tool for analyzing the microRNA transcriptome in plants. Bioinformatics. 2011; 27(18):2614–5.
Conesa A, Madrigal P, Tarazona S, Gomez-Cabrero D, Cervera A, McPherson A, Szcześniak MW, Gaffney DJ, Elo LL, Zhang X, et al.A survey of best practices for RNA-seq data analysis. Genome Biol. 2016; 17(1):13.
Kozomara A, Birgaoanu M, Griffiths-Jones S. miRBase: from microRNA sequences to function. Nucleic Acids Res. 2018; 47(D1):155–62.
Kalvari I, Argasinska J, Quinones-Olvera N, Nawrocki EP, Rivas E, Eddy SR, Bateman A, Finn RD, Petrov AI. Rfam 13.0: shifting to a genome-centric resource for non-coding RNA families. Nucleic Acids Res. 2017; 46(D1):335–42.
Alipanahi B, Delong A, Weirauch MT, Frey BJ. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat Biotechnol. 2015; 33(8):831.
Zeng H, Edwards MD, Liu G, Gifford DK. Convolutional neural network architectures for predicting DNA–protein binding. Bioinformatics. 2016; 32(12):121–7.
Zhou J, Troyanskaya OG. Predicting effects of noncoding variants with deep learning–based sequence model. Nat Methods. 2015; 12(10):931.
Seo S, Oh M, Park Y, Kim S. DeepFam: deep learning based alignment-free method for protein family modeling and prediction. Bioinformatics. 2018; 34(13):254–62.
Quang D, Xie X. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences. Nucleic Acids Res. 2016; 44(11):107.
de ON Lopes I, Schliep A, de Carvalho ACdL. The discriminant power of RNA features for pre-miRNA recognition. BMC Bioinformatics. 2014; 15(1):124.
Gao D, Middleton R, Rasko JE, Ritchie W. miREval 2.0: a web tool for simple microRNA prediction in genome sequences. Bioinformatics. 2013; 29(24):3225–6.
Gudyś A, Szcześniak MW, Sikora M, Makałowska I. HuntMi: an efficient and taxon-specific approach in pre-miRNA identification. BMC Bioinformatics. 2013; 14(1):83.
Batuwita R, Palade V. microPred: effective classification of pre-miRNAs for human miRNA gene prediction. Bioinformatics. 2009; 25(8):989–95.
Liu B, Fang L, Chen J, Liu F, Wang X. miRNA-dis: microRNA precursor identification based on distance structure status pairs. Mol BioSyst. 2015; 11(4):1194–204.
Jiang P, Wu H, Wang W, Ma W, Sun X, Lu Z. MiPred: classification of real and pseudo microRNA precursors using random forest prediction model with combined features. Nucleic Acids Res. 2007; 35(suppl_2):339–44.
Do BT, Golkov V, Gürel GE, Cremers D. Precursor microRNA identification using deep convolutional neural networks. bioRxiv. 2018:414656.
Aoki G, Sakakibara Y. Convolutional neural networks for classification of alignments of non-coding rna sequences. Bioinformatics. 2018; 34(13):237–44.
Stegmayer G, Di Persia LE, Rubiolo M, Gerard M, Pividori M, Yones C, Bugnon LA, Rodriguez T, Raad J, Milone DH. Predicting novel microRNA: a comprehensive comparison of machine learning approaches. Brief Bioinform. 2018. https://doi.org/10.1093/bib/bby037.
Bendale A, Boult TE. Towards open set deep networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE: 2016. p. 1563–72.
Lorenz R, Bernhart SH, Zu Siederdissen CH, Tafer H, Flamm C, Stadler PF, Hofacker IL. Viennarna package 2.0. Algoritm Mol Biol. 2011; 6(1):26.
Chollet F, et al.Keras. 2015. https://keras.io. Accessed Oct 2018.
Kim Y. Convolutional neural networks for sentence classification. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics: 2014. p. 1746–51.
Hendrycks D, Gimpel K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: International Conference on Learning Representations (ICLR): 2017.
Zou Q, Mao Y, Hu L, Wu Y, Ji Z. miRClassify: an advanced web server for miRNA family classification and annotation. Comput Biol Med. 2014; 45:157–60.
Bailey TL, Elkan C, et al.Fitting a mixture model by expectation maximization to discover motifs in biopolymers. In: Proceedings of the Second International Conference on Intelligent Systems for Molecular Biology. AAAI Press: 1994. p. 28–36.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 20 Supplement 23, 2019: Proceedings of the Joint International GIW & ABACBS-2019 Conference: bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-23.
This work and the publication costs were supported by City University of Hong Kong (Hong Kong, China SAR) project 7200620. The funding did not play any role in design/conclusion.
Department of Electronic Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong SAR
Xubo Tang & Yanni Sun
Xubo Tang
Yanni Sun
YS initiated the project. Both YS and XT designed the methods. XT conducted the experiments. Both YS and XT contributed to the writing of this manuscript. Both YS and XT read and approved the final manuscript.
Correspondence to Yanni Sun.
Tang, X., Sun, Y. Fast and accurate microRNA search using CNN. BMC Bioinformatics 20, 646 (2019). https://doi.org/10.1186/s12859-019-3279-2
Convolution neural network (CNN)
Open set problem
The Pusey, Barrett and Rudolph (PBR) theorem and "shut up and calculate"
I was looking around for a competent, recent, persuasive presentation of the "shut up and calculate" philosophy regarding interpretations of quantum mechanics, and google led me to Fuchs and Peres, "Quantum Theory Needs No 'Interpretation,'" Physics Today, March 2000. Although the article is paywalled, you can find PDFs online. They talk about interpreting quantum mechanics simply in terms of the information available to observers, and Copenhagen-style collapse as nothing more than updating one's estimates of probability based on new information.
But there is a 2011 paper by Pusey, Barrett and Rudolph (PBR), "On the reality of the quantum state," presenting what seems to be a no-go theorem for such interpretations.
Is it accurate to think of PBR as conflicting with Fuchs and Peres' particular brand of "shut up and calculate?" Are there other expositions of this approach that avoid all the Bayesian-sounding stuff, or that are recent enough to explicitly discuss PBR? It would be particularly helpful to have something that wasn't paywalled.
quantum-mechanics quantum-interpretations
Emilio Pisanty
Ben Crowell
$\begingroup$ Much as I'd like to know what modern psi-epistemic/non-realist interpretations look like, I find the piece you linked to be of little use. Fuchs and Peres are extremely non-committal on whether there is an underlying reality for the wavefunction's information to describe (so I'd counter that the piece doesn't really describe a particular brand of anything, really). To the degree that they do, they are indeed in territory ruled out by PBR, I should think; to the degree that they don't, they just describe an operationalist, non-realist model that's wholly outside the PBR framework. $\endgroup$ – Emilio Pisanty Jan 1 '18 at 21:46
$\begingroup$ I'm interested in see a really good article on the matter, too, but ... I've never really felt the need to go beyond 'well, applying these rules leads to correct predictions and all the attempts to "interpret" them have lead either to mistakes or impenetrable philosophical gobbledygook'. As a result I am in the habit of telling students that quantum foundations is a subject to take up after you have tenure. $\endgroup$ – dmckee♦ Jan 1 '18 at 22:24
$\begingroup$ @dmckee: I think it's fine if people don't worry about foundational issues, provided that they know what it is that they don't know. But many, many people seem to absorb some kind of half-baked version of the Copenhagen interpretation without realizing that it's an interpretation. E.g., if you ask, they will insist that there really is some physical process of wavefunction collapse, and quote a textbook to the effect that this belief is "standard." It would be nice to have a "shut up and calculate" philosophy that was actually well thought out, but I just haven't seen such a thing yet. $\endgroup$ – Ben Crowell Jan 1 '18 at 23:01
$\begingroup$ Not what the OP is asking for, actually rather the opposite, a quite enjoyable demolition of the "shut up and calculate" paradigm: arxiv.org/abs/1308.5619 $\endgroup$ – Stéphane Rollandin Jan 1 '18 at 23:05
$\begingroup$ Closely related: physics.stackexchange.com/a/17186/3811 $\endgroup$ – Steve Byrnes Jan 2 '18 at 1:23
Frankly, I don't think "shut up and calculate" gets ruled out by PBR, simply because "shut up and calculate" isn't a (single) interpretation, at least in the sense of PBR.
The Fuchs and Peres piece you linked to is a good example of why I say this, but it's far from alone in that ground. The core problem is that "shut up and calculate" (SU&C) does not actually specify whether one should take some form of ontological viewpoint on the subject matter of quantum mechanics and, if so, which one. There are multiple choices there, several of which might be compatible with viewpoints that can be loosely grouped under SU&C; PBR rules out some of them, and it is entirely silent on others.
On the whole, though, I don't really think that the "shut up and calculate" school of thought is really an interpretation at all: quite on the contrary, the spirit it encapsulates is one that explicitly rejects the need for physics to even postulate an ontology for its subject matter. Is the act of actively not-even-trying-to-interpret an interpretation? That's ultimately semantics, but if you do call that an interpretation then I'll just agree to disagree (while quietly seething in a corner about the liberties some people take with language).
On somewhat more concrete grounds, though, if you try to formalize the strict dictum to leave ontology well alone, I would argue that what you get is an operationalist interpretation of quantum mechanics. The best exposition I know of is this talk by Rob Spekkens (starting at around the 51:20 mark, and continued here; Spekkens was aware of PBR at the time, and he gives a more direct response here), but the essence revolves around this mostly-uncontroversial statement about quantum mechanics:
In QM, (i) each preparation procedure $\mathsf P$ is associated with some density matrix $\hat \rho$, (ii) each outcome $m$ of some measurement procedure $\mathsf M$ is associated with a projection operator $\hat \Pi$, such that (iii) the probability of getting outcome $m$ through the measurement procedure $\mathsf M$ after the system has been prepared according to $\mathsf P$ is $P(m|\mathsf P,\mathsf M) = \mathrm{Tr}(\hat \rho\hat\Pi)$.
Most interpretations will then go on and invest the various elements with some form of ontology, but the operationalist approach will stop right at this statement, and hold that there is nothing else to say about the system, including such things as whether there is in fact some system that's produced by $\mathsf P$, and whether it has any properties at all.
If this is what you mean by SU&C, then PBR is completely silent on it. The PBR framework requires an ontological model to work, and the operationalist approach doesn't give it one.
Now, while it's a plausible philosophy to class under that banner, I don't really think that this is what's really meant by SU&C (again, even explicitly refraining from imbuing the mathematical components with any reality is already a good deal more waffling than what I associate with the SU part of SU&C), and indeed e.g. the Fuchs and Peres piece you link to does a great job at playing both sides of that boundary: they start off claiming that there is no need to provide anything beyond "an algorithm for computing probabilities", but then they go on to speak of Cathy's experiment like it actually "exists", and they do that in models that are quite $\psi$-epistemic in ways that are indeed liable to getting ruled out by PBR. However, I don't think that piece is specific enough with its models to tell what it's actually postulating, and by extension it's not specific enough to tell whether PBR impacts its conclusions.
What I think you really wanted to know, though, is not the relationship between PBR and "shut up and calculate", but its relationship to so-called $\psi$-epistemic interpretations: these are realist models that assume the existence of some form of system with some form of properties, which get described by the wavefunction in a strictly 'statistical' way.
If that's what you really wanted to ask, then I personally don't really know ─ but really, when people insist on things like "PBR doesn't rule out any statistical interpretations that are under active consideration", like Steve Byrnes and Ron Maimon (kind of) do here, I really have no idea what kinds of statistical interpretations they do think are worth considering, and I'd quite like to know what they are and how they interface with PBR.
However, if that's what you really wanted to ask, then I would definitely raise a pretty strong objection to the identification of statistical interpretations with the SU&C paradigm ─ which, again, isn't an interpretation.
Emilio Pisanty
"Shut up and calculate" (SUAC) is a philosophical doctrine and so can't be refuted by any experiment. SUAC holds that it doesn't matter what's happening in reality as long as you can predict experimental results. Advocates of SUAC would say that they can predict the results of the PBR experiment so it is irrelevant to SUAC.
SUAC is a special case of philosophical doctrine of instrumentalism applied to quantum mechanics. Instrumentalism is the idea that it doesn't matter what's happening in reality as long as you can make predictions. Instrumentalism was refuted more than 50 years ago by Popper (see his book "Conjectures and Refutations" Chapter 3) and has been criticised more recently by David Deutsch in "The Fabric of Reality" and "The Beginning of Infinity".
One problem is that you have to understand what's happening in reality to perform an experiment and understand its significance. If quantum mechanics isn't true, then it's a bit of a mystery why you would use it to make predictions.
Another problem is that if you use quantum mechanics to predict the result of experiment and then deny it represents reality, then all you have done is taken quantum mechanics and added an extra complication: a bunch of labels saying "this isn't real" about the wavefunction. This makes quantum mechanics more complicated and obscure and solves no scientific or philosophical problems.
SUAC has no scientific or philosophical value at all and should be discarded.
alanf
$\begingroup$ "Advocates of SUAC would say that they can predict the results of the PBR experiment so it is irrelevant to SUAC." There is no PBR experiment. There is a PBR paper, which presents a theorem...? $\endgroup$ – Ben Crowell Jan 3 '18 at 17:17
$\begingroup$ The theorem is about experimental predictions. This is a direct quote: "Here we present a no-go theorem: if the quantum state merely represents information about the real physical state of a system, then experimental predictions are obtained which contradict those of quantum theory." $\endgroup$ – alanf Jan 3 '18 at 17:20
Underlying structure behind relativity?
Is there a way to understand the underlying "cause" of observing (special) relativity? What is the structure/geometry that makes these observations make sense? I noticed that if all objects are required to move at the speed of light through a normal four-dimensional Euclidean space, this has essentially the opposite of the intended effect. Things in motion relative to you would appear stretched out and their clocks would be sped up. Is there a way to explain the discrepancy?
special-relativity spacetime metric-tensor inertial-frames
Qmechanic♦
Jeff Bass
$\begingroup$ The underlying structure is imbedded in the signature of the metric on Minkowski space. Take that as given and everything else follows. Or if you prefer start with invariance of the speed of light together with some symmetry assumptions; this comes down to the same thing. $\endgroup$
– WillO
$\begingroup$ If all objects are required to move at the speed of light through a normal four dimensional space-time environment, the result is the intended SR effect. youtube.com/… $\endgroup$
– Sean
Let me try an analogy. Back in the early days of America people wanted to measure the exact distance between their flats in New York. As you know, New York has a very regular map: streets run in one direction and avenues along the perpendicular direction. The distance between two flats was not easily measurable if the houses were not on the same street or avenue, so they decided to measure the distance $\Delta x$ along the avenues and the distance $\Delta y$ along the streets and compute the beeline distance $d$ with the formula $d^2=\Delta x^2+\Delta y^2$. This worked very well, so any two New Yorkers could easily determine the distance between their flats.
However, time passed and new technology was developed. Distances could now be determined directly using sound waves and precise clocks. Clocks got more and more precise, and at some point somebody noticed that there were slight discrepancies between the distances calculated along the streets and avenues and the directly measured distances. Interestingly, the discrepancies were not always the same; they seemed to depend on the heights of the houses considered.
The first skyscrapers were built. They gave rise to further experiments. It turned out that if one flat is in a skyscraper and the other is near the ground, the discrepancies are biggest. So it seemed to depend on the height difference $\Delta z$ between the flats. After many experiments somebody proposed that the formula for calculating the distance is wrong and a better ansatz is $D^2=\Delta x^2+\Delta y^2+\Delta z^2$ (or, in term of the old length $d$, $D^2=d^2+\Delta z^2$) which puts the distances along the streets and avenues and the height on an equal footing. The new formula has been experimentally verified and turned out to be correct.
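As a quick sanity check of the extended formula: two flats separated by $d=\sqrt{\Delta x^2+\Delta y^2}=300\,\mathrm{m}$ on the map, with a height difference of $\Delta z=400\,\mathrm{m}$, are a direct distance $D=\sqrt{300^2+400^2}\,\mathrm{m}=500\,\mathrm{m}$ apart, noticeably more than the map distance alone suggests.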
One interesting observation. The "old" length $d$ has a physical meaning. If you hang a rope between the two flats (the rope will have the correct length $D$), it will cast a shadow onto the ground. This shadow will have exactly the "wrong" length $d$. Actually, in reality, nobody is interested in the "correct" distance $D$ between the flats; the much more relevant quantities are the distance between the houses - which is exactly $d$ - and the heights of the houses $z_1$ and $z_2$.
This is Special Relativity. $d$ corresponds to the usual three-dimensional length, $z$ corresponds to time and $D$ corresponds to the space-time interval.
However, after even more time, the measurements became even more precise. Again discrepancies were found between the direct measurement and the calculated result. After a lot of research it was found that the curvature of the Earth's surface had been neglected, so the formula needed to be modified again to account for it. Obviously, the discrepancies grew with the distance, but with the new formula scientists could in principle calculate distances between flats around the globe. For practical purposes (within one city) the original formula was precise enough and remained a good approximation to the exact formula. This exact formula (which I didn't state) corresponds to the metric of curved space-time in General Relativity.
Photon
$\begingroup$ By the way, you get all the fancy effects of SR if you freely rotate the rope which connects the two flats. You will see that the shadow of the rope gets shorter or longer depending on the rope's slope. But it can never get longer than the actual rope. The shadow is longest if the rope lays on the earth. This is similar to the length contraction. Of course, the height difference between the ropes' ends changes as well, if you rotate the rope. This is similar to time dilation. The analogy is not full due to different signs in the Minkowski metric, unfortunately but most of the things survive. $\endgroup$
– Photon
$\begingroup$ Thank you again Photon. I guess my question can be rephrased by asking why one would observe the "shadow" of a four dimensional object, rather than a "slice" of a four dimensional object. $\endgroup$
– Jeff Bass
$\begingroup$ Hmm, a bit hard to explain without a Minkowski diagram. I will write another answer (to embed a second graphic). $\endgroup$
You are right, the formula for length contraction is not as simply explained as the analogy in my first answer implied. If you have an extended object, its 4D representation is a strip through space-time:
In the picture we have a rod whose world strip is marked green. There are two coordinate systems: The lab frame $(t_S,x_S)$ and the comoving frame $(t_L,x_L)$ of the rod. The lines $P_4P_2$ and $P_1P_3$ are parallel to the $x_L$ axis and therefore are equal time snapshots of the rod at the times $t_{L,1}$ and $t_{L,2}$ in the rod's frame $(t_L,x_L)$. If you want, they are cuts/slices through the world strip at some given times. In the stationary frame $(t_S,x_S)$ the rod is seen as $P_1P_2$ at some time $t_{S,0}$.
To get the length of the rod in the lab frame $l=x_{P2,S}-x_{P1,S}=\Delta x_{12,S}$, we have to project $P_1P_2$ to the $x_S$ axis. We can transform this into the comoving system using Lorentz transformations and get $\Delta x_{12,L}$. This is the projection of $P_1P_2$ to the $x_L$ axis. Note that it is equal to the projection of $P_4P_2$ or $P_1P_3$ to the $x_L$ axis which is exactly the length $l'$ of the rod in the comoving frame.
Try to do the math and you will get exactly the formula for Lorentz length contraction!
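For completeness, here is a minimal sketch of that calculation, using the standard Lorentz transformation with relative speed $v$ and $\gamma=1/\sqrt{1-v^2/c^2}$. The endpoints $P_1$ and $P_2$ are simultaneous in the lab frame, so $\Delta t_{12,S}=0$ and
$$l'=\Delta x_{12,L}=\gamma\,(\Delta x_{12,S}-v\,\Delta t_{12,S})=\gamma\,l \quad\Rightarrow\quad l=\frac{l'}{\gamma}=l'\sqrt{1-\frac{v^2}{c^2}},$$
which is exactly the Lorentz length contraction formula.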
$\begingroup$ The diagram makes things very intuitive. Why does the x axis in the rod's frame not rotate to keep a 90 degree angle with its time axis? Is this essentially equivalent to saying that spacetime is not euclidean but a Minkowski space? $\endgroup$
$\begingroup$ Yes, exactly. Just try transforming the basis vectors $(0;1)$ and $(1;0)$ with a Lorentz boost. You will get the basis vectors in the moving inertial frame. By the way, the Lorentz boosts are essentially pseudo-rotations which gives a nice intuition to them. $\endgroup$
The underlying structure of Special Relativity (SR) is three-dimensional real Lobachevskian (hyperbolic) geometry. Minkowski $R^4$, the so-called space-time, is just a useless and misleading 19th-century concept, which amounts to saying that you need three-dimensional Euclidean space to work with the two-dimensional surface of a sphere (a globe). When you fly e.g. from SF to Berlin, the airplane's autopilot has no idea about the three-dimensional space the Earth is embedded in. It uses only two internal coordinates on the sphere, and it guides the plane precisely to its destination. The Lobachevskian velocity space of SR is a three-dimensional, non-compact (velocities of photons are at the boundary at infinity) metric space of constant Gaussian curvature $K=-1/c^2$; distance in it is called relative velocity, and all geometric formulas of Lobachevskian geometry can be translated into physics, a considerable portion of which is SR. See e.g. "Expansion of Universe - mistake of Edwin Hubble..." in Acta Physica Polonica B, or the same paper on ResearchGate, or under the name of von Brzeski on the Harvard astrophysics site.
Georg von Brzeski
$\begingroup$ Dear Georg von Brzeski: For your information, Physics.SE has a policy that it is OK to cite oneself, but it should be stated clearly and explicitly in the answer itself, not in attached links or comments. $\endgroup$
– Qmechanic ♦
$\begingroup$ Precisely and clearly: Acta Physica Polonica B, vol. 39, no. 6 (2008). Title: "Expansion of the Universe - Mistake of Edwin Hubble? Cosmological Redshift and Related Electromagnetic Phenomena in Static Lobachevskian (Hyperbolic) Universe." Author: J. Georg von Brzeski $\endgroup$
– Georg von Brzeski
Polarization noise places severe constraints on coherence of all-normal dispersion femtosecond supercontinuum generation
Iván Bravo Gonzalo1,
Rasmus Dybbro Engelsholm1,
Mads Peter Sørensen2 &
Ole Bang1,3
Scientific Reports volume 8, Article number: 6579 (2018)
Supercontinuum generation
Supercontinuum (SC) generated with all-normal dispersion (ANDi) fibers has been of special interest in recent years due to its potentially superior coherence properties when compared to anomalous dispersion-pumped SC. However, care must be taken in the design of such sources, since overly long pump pulses and fiber lengths have been demonstrated to degrade the coherence. To assess the noise performance of ANDi fiber SC generation numerically, a scalar single-polarization model has so far been used, thereby excluding important sources of noise, such as polarization modulational instability (PMI). In this work we numerically study the influence of pump power, pulse length and fiber length on coherence and relative intensity noise (RIN), taking into account both polarization components in a standard ANDi fiber for SC generation pumped at 1064 nm. We demonstrate that the PMI introduces a power dependence not found in a scalar model, which means that even with short ~120 fs pump pulses the coherence of ANDi SC can be degraded at reasonable power levels above ~40 kW. We further demonstrate how the PMI significantly decreases the pump pulse length and fiber length at which the coherence of the ANDi SC is degraded. The numerical predictions are confirmed by RIN measurements of fs-pumped ANDi fiber SC.
Commercially available silica fiber-based and ultra-broadband supercontinuum (SC) sources are typically generated by pumping with high-power picosecond or nanosecond pulses close to the zero-dispersion wavelength (ZDW) in the anomalous dispersion regime of a photonic crystal fiber (PCF)1,2. However, the SC source is typically characterized by large intensity fluctuations2,3,4, due to modulation instability (MI) and soliton collisions, which limits their performance for applications in imaging, such as optical coherence tomography (OCT)5,6 or coherent anti-Stokes Raman scattering (CARS) spectroscopy7. Reduction of the shot-to-shot fluctuations and coherence stabilization can be achieved through various methods, for instance the use of fiber tapers2,4,8, seeding with a weak trigger signal9,10,11, or back-seeding part of the SC spectrum12. Increasing the repetition rate of the pump laser leads to a noise improvement in spectral domain OCT because of averaging in the spectrometer13. An alternative approach to eliminate the influence of noise sensitive effects is to pump a PCF in the normal dispersion regime to avoid MI and soliton collisions. It is however necessary to pump with short enough pump pulses to suppress stimulated Raman scattering (SRS), which is known to be as noisy a process as MI14,15. In this way, the bandwidth of the SC is limited not only due to the high dispersion at the pump but also by a constraint to stay away from any ZDW, since noisy SC could be generated when crossing the ZDW1,14.
The use of all-normal dispersion (ANDi) PCF fibers, which offer normal dispersion for all the wavelengths covered by the SC16, opened up the possibility of potentially coherent broadband SC sources. By pumping with femtosecond pulses, the SC is initiated by self-phase modulation (SPM) and followed by creation of new wavelengths through optical wave breaking (OWB), processes which are known to be coherent17,18. Since then, there has been an increasing trend of generating ANDi based SC. Previously, silica fibers with parabolic-like dispersion19,20,21 were pumped at the point of minimum absolute dispersion with high peak powers but the SC extension was limited to 1.5 µm20. This issue was overcome with the design of dispersion-flattened germanium doped silica fibers22,23 enabling broadening up to 2.2 µm and relaxing the condition of high peak powers23. Moreover, ANDi fibers in other materials like soft glasses24,25,26 have been fabricated to push the SC extension towards the mid-infrared, where fiber-based SC sources using chalcogenide fibers are emerging27,28,29.
The superior signal-to-noise ratio performance of ANDi fibers compared to anomalous dispersion pumping was demonstrated experimentally using 390 fs pump pulses by Klimczak et al.30. However, the SC generation was stabilized with a cladding mode to avoid detrimental effects of SRS due to the long pulse length of the pump25,30. A recent numerical study demonstrated the limitations in the coherence of ANDi based SC, pointing out the role of SRS in the process of incoherent dynamics15. In general, coherence degradation was found to depend on the relative importance of the mixed parametric-Raman length and the OWB length, which results in a limit that is inversely proportional to the pulse length. For example, for 100 kW peak power and fibers shorter than 1 m, complete coherence was found for pump pulses shorter than 700 fs, and for pulses shorter than 2 ps complete coherence was found for fibers shorter than 40 cm15.
However, in that work15 the scalar generalized nonlinear Schrödinger equation (GNLSE) was used in the modelling, just as in other studies of the coherence of ANDi SC generation19,20,23,26. In this way, propagation is assumed in only one axis of the fiber fundamental mode. This means that for example polarization modulational instability (PMI) is neglected, which can be an important source of noise when pumping in the normal dispersion regime31,32,33. In fact, polarization instabilities were experimentally demonstrated in weakly birefringent ANDi fibers34,35,36, which decreases the usability of the ANDi SC source.
It has nevertheless been experimentally demonstrated that PMI, and therefore the noise it introduces, can be suppressed by pumping along a principal axis of a polarization maintaining (PM)-ANDi fiber35,36. However, when the polarization of the pump is not along the principal axis of the fiber, PMI-induced noise will be generated, with maximum gain when pumping at 45 degrees18. As most of the ANDi SC sources reported are made with non-PM fibers20,21,22,23,24,25,26,30 an investigation of the polarization properties in non-PM ANDi SC is needed. Therefore, a complete understanding of the decoherence mechanism of the weakly birefringent ANDi based SC, including both SRS and PMI, i.e., both polarizations, is crucial in order to analyze the noise properties and limitations of ANDi SC sources.
In this work we therefore numerically study the coherence and RIN in a standard ANDi fiber for SC generation pumped at 1064 nm, taking into account both polarization components in our model. We demonstrate that the PMI introduces a power dependence not found in a scalar model, which means that even with short ~120 fs pump pulses the coherence of ANDi SC can be degraded at reasonable power levels above ~40 kW. We further demonstrate how the PMI significantly decreases the pump pulse length and fiber length at which the coherence of the ANDi SC is degraded. For example, the scalar model predicts coherence in a 1 m ANDi fiber for pump pulses shorter than about 700 fs, whereas this limit is reduced to only ~120 fs when the second polarization and thus PMI is taken into account, which drastically limits the available pump sources. We further present experimental noise measurements of ANDi based SC pumped with a femtosecond laser, which confirms the numerical predictions.
We want to point out with this study that care must be taken when evaluating the coherence properties of ANDi SC sources using numerical modelling, and that both polarizations should preferably be considered in order to obtain accurate predictions.
Discussion of polarization noise
In our modelling we use the generalized coupled nonlinear Schrödinger equations (CGNLSEs), given in the Methods section. We first reduce our vector model to the single-polarization scalar GNLSE1,18 in order to compare to the study done by Heidt et al.15. In contrast to that study, mode profile dispersion, loss and the experimental Raman gain are included in the model to better match the experimental conditions. Note that the fiber parameters (pitch and hole-to-pitch ratio) differ slightly.
For a given fiber length and pump pulse length we calculate the spectrally averaged coherence \(\langle |{{g}}_{12}^{(1)}|\rangle \) and spectrally averaged RIN, \(\langle {\rm{RIN}}\rangle \) (see the Methods section) to get a single number characterizing the noise and coherence performance. The results are displayed in Fig. 1 for a peak power of 44 kW and pulse lengths from 50 fs to 3 ps along 1 m of fiber every 0.05 m. In evaluating the noise, the important length scales are the OWB length L WB and the coherence length L C , which for a sech input pulse are given by15,17
$${L}_{WB}=\sqrt{\frac{3}{2}}\frac{{L}_{D}}{\sqrt{1+{N}^{2}}};\,{L}_{C}\propto \frac{{L}_{R}^{\ast }}{{L}_{WB}}\Rightarrow {L}_{C}=\xi ({\beta }_{2},\gamma ,\,{{\rm{P}}}_{0})\frac{1}{{f}_{R}{{\rm{\Omega }}}_{R}{T}_{0}},$$
where \({L}_{R}^{\ast }\) is the mixed parametric-Raman length evaluated at the angular frequency of the peak Raman gain \({{\rm{\Omega }}}_{R}=2\pi \cdot 13.2\,{\rm{THz}}\). In the limit of high pump powers and low absolute dispersion, FWM will tend to suppress the Raman gain and one can obtain the simple expression \({L}_{R}^{\ast }\approx {(1.38{f}_{R}{{\rm{\Omega }}}_{R}\sqrt{\gamma {P}_{0}{\beta }_{2}})}^{-1}\), when taking into account only \({\beta }_{2}\)15. The fractional contribution of Raman is \({f}_{R}=0.18\). As defined in equation (1), the coherence length corresponds to the fiber length at which the incoherent mixed parametric-Raman process is comparable to the coherent OWB. According to this definition, the coherence length is inversely proportional to the pump pulse length, which follows the trend observed in the simulations15. In addition, the function \(\xi ({\beta }_{2},\gamma ,{P}_{0})\) has units of length and is introduced to fit the coherence length to the edge of the region where \(\langle |{{g}}_{12}^{(1)}|\rangle =0.9\) in the plot of the average coherence as a function of pulse length15. Therefore, the coherence length is an analytical estimate of the propagation distance at which the coherence degradation starts in the simulations for a given pulse length, and it is introduced here to facilitate the discussion of the ANDi SC coherence. The soliton number $N$, the dispersion length $L_D$, and the nonlinear length $L_N$ are given by1
$$N=\sqrt{\frac{{L}_{D}}{{L}_{N}}};\,{L}_{D}=\frac{{T}_{0}^{2}}{|{\beta }_{2}|};\,{L}_{N}=\frac{1}{\gamma {P}_{0}}.$$
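To make these length scales concrete, here is a minimal numerical sketch of Eqs. (1)-(2); the dispersion and nonlinearity values are illustrative assumptions of the right order of magnitude, not the exact parameters of the fiber modelled below.

```python
import numpy as np

# Characteristic length scales of Eqs. (1)-(2) for a sech pump pulse.
# beta2 and gamma are illustrative assumptions, not the exact fiber values.
beta2 = 1.0e-26          # s^2/m, normal group-velocity dispersion at the pump
gamma = 0.035            # 1/(W m), nonlinear parameter
P0 = 44e3                # W, pump peak power
T_FWHM = 170e-15         # s, pump pulse length (FWHM)
T0 = T_FWHM / (2 * np.log(1 + np.sqrt(2)))     # sech: T0 from the FWHM

L_D = T0**2 / abs(beta2)                       # dispersion length, Eq. (2)
L_N = 1 / (gamma * P0)                         # nonlinear length, Eq. (2)
N = np.sqrt(L_D / L_N)                         # soliton number, Eq. (2)
L_WB = np.sqrt(1.5) * L_D / np.sqrt(1 + N**2)  # wave-breaking length, Eq. (1)

print(f"L_D = {L_D:.2f} m, L_N = {L_N*1e3:.2f} mm, "
      f"N = {N:.1f}, L_WB = {L_WB*1e3:.1f} mm")
```

With these assumed values the wave-breaking length comes out at a few centimeters, consistent with the regimes discussed below.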
One polarization (scalar model): Spectrally averaged coherence and RIN. (a) Average coherence and (b) RIN versus pulse length along 1 m of fiber. The peak power is 44 kW, which is the maximum value in the experiments with 0.67 W average power, 80 MHz repetition rate and 170 fs pulse length. The OWB length L WB and coherence length L C are given by dashed lines.
The results presented in Fig. 1 recover the same qualitative behavior obtained by Heidt et al.15, and show that the coherence is maintained and RIN is low for the first 1 m of the fiber for pulse lengths shorter than around 1 ps. Due to the simpler model and slightly different fiber this limit was found to be around 0.7 ps in the previous study by Heidt et al.15.
In general, when fiber lengths longer than the coherence length are used, the generated SC is affected by Raman noise and will have reduced coherence. When the fiber length is longer than the OWB length but shorter than the coherence length, the SC is completely developed before Raman lines are generated and thus the broadest possible SC is generated coherently. This occurs for pulse lengths shorter than around 2.2 ps, as seen in Fig. 1a. Using pump pulses longer than 2.2 ps Raman noise will be generated before OWB and thus only a narrow coherent SC can be generated. The dynamical evolution of the noise is studied in more detail by Heidt et al.15. In addition to the coherence, we also consider the spectrally averaged RIN shown in Fig. 1b, which is seen to follow a similar trend as the coherence, not surprisingly, considering the definition used. Therefore, we can verify qualitatively the results obtained by Heidt et al.
After analyzing and verifying the numerical implementation obtained with the scalar approach, we now study the effect of including the second polarization of the fundamental mode in the model by solving the CGNLSEs. Fiber birefringence is also included as the unintentional birefringence measured in the ANDi fiber (see the Methods section). All the parameters are the same as for the scalar model except for the pulse lengths (from 50 fs to 500 fs), which are chosen within the coherent regime in Fig. 1, where the noise from SRS has a minor contribution. Figure 2 shows the spectrally averaged coherence and RIN for two different input polarizations, one along the slow-axis (Fig. 2a,b), and the other at a 20 degrees angle with respect to the slow-axis of the fiber (Fig. 2c,d).
Two polarizations (vector model): Spectrally averaged coherence and RIN. Average coherence and RIN calculated with input polarization along the slow-axis (a,b) and with a 20 degrees angle with respect to the slow-axis (c,d) versus pulse length along 1 m of fiber. The peak power is 44 kW as in Fig. 1. \({L}_{C}^{PMI}\)(green line) indicates the maximum propagation distance for which the average coherence is higher than 0.9.
In contrast to the scalar model, the vector simulations show that degradation of the phase and intensity stability starts already for much shorter pump pulses and much shorter fiber lengths. The PMI influenced coherence length, \({L}_{C}^{PMI}\), shown in Fig. 2a and c, indicates the edge of the region where the coherence of the SC is maintained, which we define as where the coherence is higher than 0.9. In the limit of long pulse lengths, it approaches the analytical PMI length, which for slow-axis pumping is given by \({L}_{PMI}=3/(2\gamma {P}_{0})=1\,{\rm{mm}}\) for the parameters used in Fig. 2a.
Figure 2 shows that the good noise properties up to 1 ps after 1 m of propagation predicted by the scalar model in Fig. 1 are lost already at about 120 fs with the vector model, for both slow-axis and 20 degrees pumping. The results further show that coherent SC with pulse lengths longer than 150 fs can only be achieved with very short fiber lengths, shorter than about 5 cm, again for both pump configurations. Let us consider the dependence of PMI on the input polarization in our case: since we are pumping in the normal dispersion regime, there is a critical power for PMI when pumping along the fast-axis18, which is \({P}_{cr}^{fast}=3\pi \,{\rm{\Delta }}n/({\lambda }_{p}\gamma )=3.2\,{\rm{kW}}\). So when pumping with a peak power of 44 kW and increasing the angle relative to the slow-axis, the power in the slow-axis is reduced, resulting in a reduced PMI gain, and the power in the fast-axis is increased, reaching this threshold at an angle of 15.6 degrees. Thus, for 20 degrees pumping there is PMI from the fast-axis in addition to the PMI from the slow-axis with reduced gain. This means that we see only a minor difference in the coherence and RIN between the two configurations shown in Fig. 2a,b and Fig. 2c,d, respectively.
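The quoted PMI numbers can be reproduced from the stated parameters with a few lines; the nonlinear parameter below is an assumed value chosen to be consistent with the quoted \({L}_{PMI}\) and \({P}_{cr}^{fast}\).

```python
import numpy as np

# PMI scales quoted in the text, reproduced from the stated parameters.
gamma = 0.0358        # 1/(W m), nonlinear parameter (assumed)
P0 = 44e3             # W, pump peak power
dn = 1.3e-5           # measured fiber birefringence
lam_p = 1064e-9       # m, pump wavelength

L_PMI = 3 / (2 * gamma * P0)                   # slow-axis PMI length, ~1 mm
P_cr = 3 * np.pi * dn / (lam_p * gamma)        # fast-axis critical power, ~3.2 kW
# input angle at which the fast-axis power P0*sin(theta)^2 reaches P_cr
theta_cr = np.degrees(np.arcsin(np.sqrt(P_cr / P0)))  # ~15.6 degrees

print(f"L_PMI = {L_PMI*1e3:.2f} mm, P_cr = {P_cr/1e3:.1f} kW, "
      f"theta_cr = {theta_cr:.1f} deg")
```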
In Fig. 3 we consider in more detail the 4 cases marked in Fig. 2a with blue dots for slow-axis pumping, to see the spectral distribution of the power in the two polarizations. Pumping with the shortest pulses, 50 fs and 100 fs, almost no transfer of energy between the polarizations occurs (the x-polarization power is identical to the total power), resulting in good coherence over 1 m. However, more energy is transferred between polarizations already at 120 fs and coherence degradation takes place. Furthermore, while not shown here, we observe that power transfer between the polarization states saturates for long enough propagation distances. This behavior agrees well with previous experiments carried out with the same ANDi fiber34, where the power in the slow- and fast-polarization was measured for pumping along the slow-axis.
Simulated SC for both polarizations. Mean total spectrum (red), mean x-polarization (black), mean y-polarization (blue). The spectral fluctuations are shown in grey for the x and y-polarization. (a) 50 fs, (b) 100 fs, and (c) 120 fs pump pulse at 1 m and (d) 400 fs at 25 mm, corresponding to the blue dots in Fig. 2. The fractional power in the x-polarized field at 1 m is given by Px.
From Fig. 3, we can identify PMI as the mechanism that leads to coherence degradation. The interaction between the polarizations due to PMI results not only in transfer of power but also noise between them, which seems to be the main effect responsible for coherence degradation. For 100 fs, amplification of the noise floor through PMI in the y-polarization takes place, similarly to how scalar MI amplifies noise in anomalous dispersion pumped SC1,37. This results in a very noisy but low power SC in that polarization (Fig. 3b). However, as most of the power is still in the x-polarization, whose noise properties remain good, the total relative noise is low. Once the PMI gain is sufficiently high to lead to stronger interaction between polarizations, such as for the 120 fs case, the power in the noisy y-polarization is higher and contributes more to the total relative noise. This together with the transfer of noise to the x-polarization will result in a noisier total SC, as shown in Fig. 3c. Therefore, stronger interaction between the polarizations leads to a noisier total SC.
To confirm the presence of PMI we plot in Fig. 3d (red curve) the theoretical gain profile, which shows the well-known two sidebands around the pump for slow-axis pumping, with a maximum gain and wavelength detuning being determined only by the input power when the fiber parameters are fixed (dispersion, nonlinearity and birefringence)18,31,32,33. The simulation in Fig. 3d for a 400 fs pump pulse shows agreement with the sidebands calculated theoretically. Higher order dispersion terms and Raman are not included in the simple analytical small-signal gain PMI model18, resulting in the observed deviation between simulation and theory. As we shall see in the experimental section, the characteristic PMI sidebands are also observed experimentally.
In order to gain more insight into the polarization noise and with the aim of comparing the modelling to the experiments, simulations with different input powers were performed for a fixed fiber length of 0.5 m. We also need to consider the effect of a chirp on the input pulse, because our 80 MHz pump laser emits chirped pulses with a FWHM bandwidth of the intensity profile of \({\rm{\Delta }}{\lambda }_{FWHM}=16.6\,{\rm{nm}}\). Assuming a quadratic chirp of an input 170 fs (\({T}_{0}=96.4\,{\rm{fs}}\)) sech pulse \(U(0,{\rm{T}})\) = sech \((T/{T}_{0})\exp (\,-\,iC({T}^{2}/{{T}_{0}}^{2}))\), we estimate the chirp parameter to be \(C=\sqrt{{({\rm{\Delta }}{\lambda }_{FWHM}/{\rm{\Delta }}{\lambda }_{FWHM}^{TL})}^{2}-\,1}=2.15\), where \({\rm{\Delta }}{\lambda }_{FWHM}^{TL}\) is the transform limited FWHM bandwidth of the intensity profile. Calculations are done with both chirped and un-chirped input pulses.
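As a check of the chirp estimate, the following sketch evaluates $C$ from the quoted numbers, assuming a sech pulse with a time-bandwidth product of 0.315.

```python
import numpy as np

# Chirp parameter C from the measured bandwidth, assuming a sech pulse
# (time-bandwidth product 0.315); values are the ones quoted in the text.
c = 299792458.0          # m/s
lam_p = 1064e-9          # m, pump wavelength
T_FWHM = 170e-15         # s, pulse length (FWHM)
dlam_meas = 16.6e-9      # m, measured FWHM bandwidth

dnu_TL = 0.315 / T_FWHM                  # transform-limited bandwidth in Hz
dlam_TL = lam_p**2 * dnu_TL / c          # ... converted to wavelength
C = np.sqrt((dlam_meas / dlam_TL)**2 - 1)
print(f"transform-limited FWHM = {dlam_TL*1e9:.1f} nm, C = {C:.2f}")  # C = 2.15
```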
Figure 4 shows the evolution of the spectrum and the spectral profiles of the RIN and coherence as a function of the input power for a 170 fs pump with input polarization at 20 degrees. As expected, at a low input power of 23 kW (Fig. 4a), a SC with low RIN (\(\langle {\rm{RIN}}\rangle =2.3\, \% \)) and high coherence (\(\langle |{{g}}_{12}^{(1)}|\rangle =0.997\)) across the whole profile is generated. This is because the PMI gain is not sufficiently high to amplify noise from the other polarization within the chosen fiber length. Increasing the input power (Fig. 4b,c,d) to achieve a broader spectrum the PMI gain is increased, resulting in lower coherence and higher RIN. Already at 23 kW a weak and narrow noise band appears below the pump wavelength (Fig. 4a). At higher pump powers the strength and bandwidth of the noise increases (Fig. 4b,c) and at 44 kW coherence is lost across the whole bandwidth of the spectrum, except for in a very narrow region around the edges (Fig. 4d).
Dependence of RIN and coherence on input peak power for 170 fs pumping. Calculated mean SC spectra (red), RIN (blue) and \(|{{g}}_{12}^{(1)}|\) (black) vs wavelength pumping with 170 fs and 20 degrees respect to slow-axis at 0.5 m using input peak powers (and corresponding output average powers in the experiments) (a) 23 kW (0.36 W), (b) 29 kW (0.45 W), (c) 34 kW (0.53 W) and (d) 44 kW (0.67 W). Results for chirped (un-chirped) pump pulses are shown as full (lighter dashed) curves.
The results for an un-chirped input pulse are shown as lighter dashed lines and they show the same general behavior, but interestingly demonstrate that the coherence and RIN performance is improved when pumping with a chirped pulse, while the spectrum remains unchanged. The average RIN and coherence for 29 kW is for example, \(\langle {\rm{RIN}}\rangle =22.8\, \% \) and \(\langle |{{g}}_{12}^{(1)}|\rangle =0.779\) for the un-chirped pump, and improved to 7.6% and 0.941 for a chirped pump, respectively (Fig. 4b).
Supercontinuum and relative intensity noise measurements
The numerical study presented above shows that the coherence is degraded and noise increases when increasing the input power for a given fiber length due to the presence of PMI. These results are confirmed in this section, where the noise of a femtosecond pumped ANDi SC is experimentally measured for two pump pulse length configurations, 170 fs (Fig. 5) and 235 fs (Fig. 6). The details of the experimental setup and the method used to measure the RIN as a function of wavelength are given in the Methods section.
Measured supercontinuum and RIN vs power for a 170 fs pump and 0.5 m long ANDi fiber. SC spectra (a) and corresponding RIN spectra (b) for different output average powers. The dotted horizontal line in (b) is the noise of the pump laser, 0.71%, and the numbers indicate the power level in dBm/nm. Histograms of the pulse energy and corresponding Gamma (full red) and Gaussian (dotted light red) fit of the 1064 nm pump (c) and a 10 nm band of the SC at 1100 nm (grey bar) for output average powers of (d) 0.19 W, (e) 0.53 W and (f) 0.67 W.
Measured supercontinuum and RIN vs power for a 235 fs pump and 0.5 m long ANDi fiber. SC spectra (a) and corresponding RIN spectra (b) for different output average powers; the numbers in (b) indicate the power levels in dBm/nm and the dotted horizontal line is the noise of the pump laser, 0.67%. (c) SC spectrum with 0.4 W output power for a 540 fs pump pulse (black) and corresponding theoretical PMI gain for slow-axis (red) and fast-axis (green) pumping.
Figures 5a and 6a show the SC generated with 170 fs and 235 fs, respectively, for different power levels ranging from 0.19 W to 0.67 W using 0.5 m of the ANDi fiber. For the SC generated at each power level, the corresponding RIN, shown in Figs 5b and 6b, is measured every 50 nm with 10 nm bandwidth filters across the SC spectrum. Let us focus on the general trends:
As expected from the numerical modelling, the SC generated at the lowest pump powers has the lowest noise for both pump pulse configurations, at the same level as the 0.71% RIN of the pump laser (black dotted line), indicating that negligible noise is added by the nonlinear SC processes. Increasing the pump power is seen to generate a broader SC and to increase the RIN, as also found numerically. In terms of the spectral RIN profile, the noise is seen to start at wavelengths below the pump for both pump pulse lengths, again as found numerically in Fig. 4. For example, when pumping with 170 fs pulses, a low RIN of less than 10% can be obtained for wavelengths above the pump for output powers up to around 0.45 W, but the RIN is already high at the shorter wavelengths below the pump for an output power of 0.26 W.
Another common feature for both pulse lengths is that at higher pump powers the RIN around the pump wavelength drops to very low values, which was not found in the simulations in Fig. 4. This could be explained by pump light propagating in the cladding, which has been demonstrated to lower the noise around that wavelength30.
Examples of the filtered pulse energy histograms from which the RIN value is calculated are shown for the un-filtered 1064 nm pump in Fig. 5c and for a 10 nm band around 1100 nm at three different power levels in Fig. 5d–f. The fitted gamma distributions, from which the RIN in Fig. 5b was calculated (see the Methods section), are shown as solid red curves, whereas a fitted Gaussian is shown as a red dotted curve. The histograms show a gradual increase of the FWHM of the gamma distribution (of the noise) and a transition from a Gaussian distribution at low powers (0.19 W) to a more skewed distribution at higher powers (0.67 W). To the best of our knowledge this observation is new and in fact non-trivial, because it shows that the noise statistics of a fs ANDi SC follow the same trends as those reported for MI-driven SC pumped in the anomalous dispersion regime37,38, further indicating a connection between MI and PMI, even though soliton dynamics are absent in ANDi SC generation.
To demonstrate experimentally that the ANDi SC is initiated by PMI we increase the pulse length to 540 fs and decrease the power to give an output average power of 0.4 W, corresponding to 8.5 kW input peak power if one neglects loss. The spectrum shown in Fig. 6c clearly reveals the separated PMI gain bands obtained for pumping along the slow-axis, coinciding with the analytically predicted PMI gain bands (red curve). The presence of the single central gain band obtained when pumping along the fast axis (green curve) cannot be determined and thus we cannot say whether we are pumping along an optical axis. Most probably we are not and both types of PMI gain bands are present. In any case the experiments verify the presence of PMI in the ANDi SC generation.
Noise comparison between simulations and measurements
We performed an ensemble of 20 simulations using the experimental conditions, with certain approximations: a hyperbolic sech pulse was assumed as input pulse, and positive chirp was added according to the measured bandwidth. The fiber dispersion and nonlinearity used in the simulation were calculated with COMSOL and might thus differ slightly from the actual fiber dispersion and nonlinearity. The loss was included in the simulations (see the Methods section) although its influence is negligible since the fiber length was only 0.5 m. Coupled-in power was estimated based on measured output power, taking into account the fiber output facet transmission, but neglecting loss in the fiber.
Figure 7 shows the measured and calculated spectral RIN profiles for the two different pump pulse lengths used in the experiment, for 3 different average output power levels. The calculated RIN was filtered every 50 nm over 10 nm bandwidth to match simulations to the experiments, since noise is bandwidth dependent39. Because of filtering (averaging) over 10 nm, the noise is lower than the one calculated in Fig. 4, where the RIN was not wavelength filtered.
Comparison between measured and calculated RIN. Measured (circle + solid line) and calculated (star + dashed line) RIN vs wavelength for (a–c) 170 fs and (d–f) 235 fs pump pulses for three output average power levels. Insets in (c) and (f) show a close-up for better comparison of the measured and calculated RIN.
The general behavior of the measured RIN is reproduced by the simulations for both pump pulse lengths, showing the same general power and wavelength dependence as also discussed in connection with Figs 5 and 6 above, i.e., that the noise increases with power (for instance, Fig. 7a,b,c) and starts at wavelengths below the pump (Fig. 7b,e). Taking into account the uncertainty in the pump power level and pump profile, the quantitative level is quite well reproduced, except at the pump at high power and in some cases close to the edges. The edge points are highly influenced by the fact that there is very little power (generally below −50 dBm/nm as seen in Figs 5 and 6) and should probably not be considered. The RIN close to the pump in Fig. 7a,b could be highly influenced by pump light in the cladding as discussed above30.
One observation is that the calculated RIN is usually lower than in the experiments. For the 170 fs case with the lowest power (0.19 W), the RIN is almost zero in the simulations, in contrast to the experiment, as shown in Fig. 7c. This can be explained by the intensity fluctuations of the pump laser (0.71%), which set the lower limit for the experiment. While not shown here, simulations performed with 1% variation in the pump intensity reproduce this lower-limit behavior very well.
Design of broadband low-noise SC sources using ANDi fibers requires a good understanding of the phenomena involved in the process in order to avoid any possible noise sources. The coherence is usually calculated with the GNLSE following a scalar approach, which ignores polarization effects, such as PMI. With the use of the vector GNLSE, the two polarizations of the fundamental mode are included, and thus the noise arising from PMI and other polarization processes can be studied in order to optimize the noise performance of the SC source.
We have investigated numerically the coherence and noise of SC in a standard ANDi fiber with the CGNLSE, explaining the mechanism of polarization noise in this weakly birefringent fiber pumped with femtosecond pulses. Vector propagation simulations unveil PMI-driven noise in ANDi SC, which is hidden when doing scalar simulations. Polarization modulational instability redistributes the power between the two polarizations, leading to intensity and phase fluctuations. The polarization noise depends not only on the pump parameters, such as pulse length, power and polarization orientation, but also on the fiber length. Compared to the single-polarization case, the fiber length at which the ANDi SC is fully coherent (the coherence length $L_C$ defined by Heidt et al.15) is drastically reduced to the PMI influenced coherence length \({L}_{C}^{PMI}\); fully coherent SC in a 1 m ANDi fiber can, for example, only be achieved with pulses shorter than around 120 fs, in contrast to the 1 ps obtained with the scalar model.
The optimum conditions for designing a low-noise ANDi SC source are thus not trivial and depend strongly on both the nonlinear fiber properties and the pump configuration, in particular also on their polarization properties. From the results presented here, we can conclude that there are several regions of operation for a given fiber type and pump. According to the obtained numerical results, broadband fully coherent ANDi SC can in general easily be achieved with short fiber lengths and short femtosecond pump pulses. The crossing point of $L_{WB}$ and \({L}_{C}^{PMI}\) in Fig. 2 sets the maximum pump pulse length to about 180 fs for low-noise SC operation in which optical wave breaking is fully used to generate the broadest possible spectrum. Above 180 fs, PMI will generate noise before the broadest ANDi SC is achieved. Pump pulses shorter than 180 fs can be used to pump longer fibers without coherence degradation; for instance, when pumping with 50 fs pulses the decoherence will start only after more than 1 m. In this case, the upper limit for the fiber length is set by the point at which SRS starts to generate noise. Another option to achieve broadband coherent ANDi SC would be to use PM-ANDi fibers as previously proposed35,36, but PMI-induced noise will be suppressed only when pumping along one of the principal axes of the fiber18. Therefore, good control of the input polarization would be required for coherent ANDi SC with PM fibers.
The numerical results obtained with the CGNLSE were confirmed experimentally by pumping a 0.5 m long commercially available ANDi fiber with a femtosecond laser. The results of the measured RIN of the ANDi SC for two pump pulse lengths verified the power and wavelength dependence of the SC noise. Polarization modulation instability gain sidebands were also observed using 540 fs pump pulses, verifying that PMI is the decoherence mechanism involved in the SC generation in the weakly birefringent ANDi fiber under study.
Numerical model
In our study we use the well-known CGNLSEs, written in terms of circular polarization components, in which the two orthogonal polarizations of the fundamental mode are included18,40,41,42
$$\begin{array}{rcl}\frac{\partial {\tilde{C}}_{1}({\rm{\Omega }},z)}{\partial z} & = & i\frac{{\rm{\Delta }}{\beta }_{0}}{2}{\tilde{C}}_{2}({\rm{\Omega }},z)+i[\beta (\omega )-[\beta ({\omega }_{0})+{\beta }_{1}({\omega }_{p}){\rm{\Omega }}]]{\tilde{C}}_{1}({\rm{\Omega }},z)-\frac{\alpha (\omega )}{2}{\tilde{C}}_{1}({\rm{\Omega }},z)\\ & & +i\bar{\gamma }(\omega )(1+\frac{{\rm{\Omega }}}{{\omega }_{0}})\cdot {\rm{F}}\{(1-{f}_{R}){C}_{1}(t,z)[\frac{2}{3}|{C}_{1}(t,z){|}^{2}+\frac{4}{3}|{C}_{2}(t,z){|}^{2}]\\ & & +{f}_{R}{C}_{1}(t,z)\cdot {{\rm{F}}}^{-1}\{{\tilde{h}}_{R}({\rm{\Omega }})\cdot {\rm{F}}\{|{C}_{1}(t,z){|}^{2}+|{C}_{2}(t,z){|}^{2}\}\}\},\end{array}$$
$$\begin{array}{rcl}\frac{\partial {\tilde{C}}_{2}({\rm{\Omega }},z)}{\partial z} & = & i\frac{\Delta {\beta }_{0}}{2}{\tilde{C}}_{1}({\rm{\Omega }},z)+i[\beta (\omega )-[\beta ({\omega }_{0})+{\beta }_{1}({\omega }_{p}){\rm{\Omega }}]]{\tilde{C}}_{2}({\rm{\Omega }},z)-\frac{\alpha (\omega )}{2}{\tilde{C}}_{2}({\rm{\Omega }},z)\\ & & +i\bar{\gamma }(\omega )(1+\frac{{\rm{\Omega }}}{{\omega }_{0}})\cdot {\rm{F}}\{(1-{f}_{R}){C}_{2}(t,z)[\frac{2}{3}{|{C}_{2}(t,z)|}^{2}+\frac{4}{3}{|{C}_{1}(t,z)|}^{2}]\,\\ & & +{f}_{R}{C}_{2}(t,z){{\rm{F}}}^{-1}\{{\tilde{h}}_{R}({\rm{\Omega }})\cdot {\rm{F}}\{|{C}_{2}(t,z){|}^{2}+|{C}_{1}(t,z){|}^{2}\}\}\},\end{array}$$
where \({{\rm{C}}}_{1,2}(t,z)\) are the pseudo-field envelopes of the two circular polarization components in the time domain, and their Fourier transforms are given by
$${\tilde{C}}_{1,2}({\rm{\Omega }},z)={[\frac{{A}_{eff}(\omega )}{{A}_{eff}({\omega }_{0})}]}^{-\frac{1}{4}}{\tilde{A}}_{1,2}({\rm{\Omega }},z),$$
where \({\tilde{A}}_{1,2}({\rm{\Omega }},z)\) are the field envelopes in the frequency domain of the two circular polarization components, Ω = ω − ω0, and \({A}_{eff}(\omega )\) is the effective area. The transformation to the pseudo-field envelopes in equation (5) is done to include mode profile dispersion43. In this way the photon number is conserved when the loss is set to zero. In this version of CGNLSEs the nonlinear parameter is modified and given by
$$\bar{\gamma }(\omega )=\frac{{\omega }_{0}{n}_{2}{n}_{eff}({\omega }_{0})}{c{n}_{eff}(\omega )\sqrt{{A}_{eff}(\omega ){A}_{eff}({\omega }_{0})}},$$
where \({n}_{2}=2.66\times {10}^{-20}{m}^{2}/W\) is the value of the nonlinear refractive index of silica, and \({n}_{eff}(\omega )\) is the effective refractive index of the fundamental mode calculated with COMSOL Multiphysics. Equations (3) and (4) include the full dispersion, where ω0 is the central frequency of the numerical frequency domain and ω p is the pump frequency. The total loss \(\alpha (\omega )={\alpha }_{m}(\omega )+\,{\alpha }_{c}(\omega )\) is also included as the contribution of the material loss, \({\alpha }_{m}(\omega )\), of silica44 and the confinement loss, \({\alpha }_{c}(\omega )\), which is calculated from the imaginary part of the effective refractive index obtained with COMSOL Multiphysics. Finally, the measured Raman response function of a silica fiber in the frequency domain is included as \({\tilde{h}}_{R}({\rm{\Omega }})\), and \({f}_{R}=0.18\) is the fraction of the Raman contribution to the nonlinear polarization.
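As a consistency check of Eq. (6), note that at the pump frequency it reduces to the familiar $\gamma(\omega_0)=\omega_0 n_2/(c\,A_{eff}(\omega_0))$; the sketch below evaluates this with an effective area that is an assumed read-off from Fig. 8b, not the authors' exact number.

```python
import numpy as np

# Eq. (6) evaluated at the pump frequency, where it reduces to
# gamma = w0*n2/(c*A_eff(w0)).
c = 299792458.0
n2 = 2.66e-20            # m^2/W, nonlinear refractive index of silica
lam0 = 1064e-9           # m, pump wavelength
A_eff = 4.5e-12          # m^2, assumed effective area at the pump

omega0 = 2 * np.pi * c / lam0
gamma0 = omega0 * n2 / (c * A_eff)
print(f"gamma(omega0) = {gamma0:.4f} 1/(W m)")   # ~0.035 1/(W m)
```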
For the case under study, in which a weakly birefringent fiber is considered, the nonlinear parameter and dispersion are assumed to be the same for the two polarizations18,34,41,42. The birefringence of the fiber is added through the phase mismatch between the two polarization modes given by \({\rm{\Delta }}{\beta }_{0}={\beta }_{0,x}-{\beta }_{0,y}=[{n}_{eff,x}({{\rm{\omega }}}_{0})-{n}_{eff,y}({{\rm{\omega }}}_{0})]\,{{\rm{\omega }}}_{0}/c={\rm{\Delta }}n\,{{\rm{\omega }}}_{0}/c\), where \({\rm{\Delta }}n=1.3\times {10}^{-5}\) was measured previously34 and specified by the supplier, NKT Photonics. Note that the group velocity mismatch between the axes \(({\rm{\Delta }}{\beta }_{1}=0)\) can be ignored in the CGNLSEs for relatively low-birefringent fibers18,34,41,42, and it was neglected in our model.
The input conditions to equations (3) and (4) are circularly polarized field envelopes in the time domain given by \({A}_{1,2}(T,0)={U}_{z=0}\exp (\pm i\theta )/\sqrt{2}\), where the input field envelope is a chirped sech pulse \({U}_{z=0}\) = sech \((T/{T}_{0})\exp (\,-\,iC({T}^{2}/{{T}_{0}}^{2}))\), \(\theta \) is the angle with respect to the x-axis, C is the chirp parameter, and \({T}_{0}={T}_{FWHM}/[2\,\mathrm{ln}(1+\sqrt{2})]\), where \({T}_{FWHM}\) is the full width at half-maximum pulse length. The pump wavelength of the input pulse is 1064 nm. Finally, the circularly polarized components of the field envelope in the frequency domain are related to the linear components as \({\tilde{A}}_{x,y}({\rm{\Omega }},z)=[{\tilde{A}}_{1}({\rm{\Omega }},z)\,\pm \,{\tilde{A}}_{2}({\rm{\Omega }},z)]\exp (\,\mp \,i{\rm{\Delta }}{\beta }_{0}/2)/\sqrt{2}\). Equations (3) and (4) were implemented in Matlab and solved in the frequency domain with the interaction picture method, using a Runge-Kutta (RK4(3)) scheme for integration of the nonlinear operator45. The step size is fixed to 25 µm and the number of points used in the simulation is \({N}_{p}={2}^{16}\).
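For reference, here is a generic single step of the interaction-picture Runge-Kutta method for a GNLSE-type equation; this is a sketch of the standard RK4-IP algorithm (without the embedded third-order error estimate of RK4(3)), and the operator names are assumptions, not the authors' Matlab code.

```python
import numpy as np

# One step of the interaction-picture RK4 method (RK4-IP) for an equation of
# the form dA/dz = D*A + N(A) in the frequency domain. D is the linear
# operator (array), N the nonlinear operator (function); for the coupled
# Eqs. (3)-(4), A would hold both circular polarization components and N
# would include the cross-polarization terms.
def rk4ip_step(A, h, D, N):
    E = np.exp(h / 2 * D)            # half-step linear propagator
    AI = E * A                       # field in the interaction picture
    k1 = E * (h * N(A))
    k2 = h * N(AI + k1 / 2)
    k3 = h * N(AI + k2 / 2)
    k4 = h * N(E * (AI + k3))
    return E * (AI + k1 / 6 + k2 / 3 + k3 / 3) + k4 / 6
```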
Simulation parameters were taken from a commercially available ANDi fiber NL-1050-NEG-146 for comparison with the experiments. The cross section of the fiber used is displayed in Fig. 8a. Fiber parameters such as group velocity dispersion, confinement loss and effective area were calculated with COMSOL Multiphysics and are shown in Fig. 8a,b. The frequency dependence of the nonlinear parameter was calculated with equation (6).
Fiber properties used in the simulations. (a) Dispersion profile (black) and total loss (red), including material and confinement loss. (b) Effective area (black) and nonlinear parameter (red) for the ANDi fiber NL-1050-NEG-1 (\({\rm{\Lambda }}=1.44\,\mu m,\) \(d/{\rm{\Lambda }}=0.39\)). A cross section of the ANDi fiber is shown in the inset in (a).
Coherence and noise are calculated for all pairs of an ensemble of 20 independent simulations with the same input parameters but different initial quantum noise. Quantum noise is added in both polarizations as one-photon-per-mode, in which a photon with random phase is added to each frequency bin1,3,47. The noise is in the frequency domain given by \({\tilde{a}}_{oppm}({{\rm{\Omega }}}_{m})=\sqrt{\hslash ({N}_{p}-1){\mathrm{dT}{\rm{\Omega }}}_{m}}\exp [2\pi {\rm{i}}{\rm{\Phi }}({{\rm{\Omega }}}_{m})]\), where \({\rm{\Phi }}({{\rm{\Omega }}}_{m})\) is the random phase corresponding to a white noise uniformly distributed in the interval [0, 1] in each frequency bin Ω m . The noise is transformed back to the time domain and added to \({A}_{1,2}(T,0).\) Different noise seeds are used for each polarization42,48, and we are assuming no correlation between them. The coherence of the SC ensemble is quantified with the first-order spectral coherence function defined by1,37
$$|g_{12}^{(1)}(\omega)| = \left|\frac{\langle \tilde{A}_{i}^{\ast}(\omega)\tilde{A}_{j}(\omega)\rangle_{i\neq j}}{\sqrt{\langle |\tilde{A}_{i}(\omega)|^{2}\rangle\langle |\tilde{A}_{j}(\omega)|^{2}\rangle}}\right| = \left|\frac{2N}{N^{2}-N}\,\frac{\sum_{i<j}^{N}\tilde{A}_{i}^{\ast}(\omega)\tilde{A}_{j}(\omega)}{\sum_{i}^{N}|\tilde{A}_{i}(\omega)|^{2}}\right|,$$
where \([{\tilde{A}}_{i}(\omega ),\,{\tilde{A}}_{j}(\omega )]\) are independent pairs of SC, being 190 pairs for \(N=20\) realizations. The first-order spectral coherence function returns a value between 0 (low coherence) and 1 (high coherence) for each frequency. The spectrally averaged coherence is also calculated as1
$$\langle |{g}_{12}^{(1)}|\rangle =\frac{{\int }_{0}^{\infty }|{g}_{12}^{(1)}(\omega )|\langle {|\tilde{A}(\omega )|}^{2}\rangle d\omega }{{\int }_{0}^{\infty }\langle {|\tilde{A}(\omega )|}^{2}\rangle d\omega },$$
which gives \(0\le \langle |{{g}}_{12}^{(1)}|\rangle \le 1\) for an ensemble of SC. Furthermore, the noise is calculated with the RIN, defined as the ratio of the standard deviation to the mean49,
$${\rm{RIN}}(\omega )=\frac{\sigma (\omega )}{\mu (\omega )}=\frac{{\langle {({|{\tilde{A}}_{i}(\omega )|}^{2}-\mu (\omega ))}^{2}\rangle }^{1/2}}{\langle {|{\tilde{A}}_{i}(\omega )|}^{2}\rangle }=\frac{\sqrt{\frac{1}{N-1}{\sum }_{i}^{N}{({|{\tilde{A}}_{i}(\omega )|}^{2}-\mu (\omega ))}^{2}}}{\frac{1}{N}{\sum }_{i}^{N}{|{\tilde{A}}_{i}(\omega )|}^{2}}.$$
In the same way as with the first-order coherence, the RIN can also be spectrally averaged to yield a single number for a SC ensemble,
$$\langle {\rm{RIN}}\rangle =\frac{{\int }_{0}^{\infty }{\rm{RIN}}(\omega )\langle {|\tilde{A}(\omega )|}^{2}\rangle d\omega }{{\int }_{0}^{\infty }\langle {|\tilde{A}(\omega )|}^{2}\rangle d\omega }.$$
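A minimal numerical sketch of Eqs. (7)-(10), assuming the ensemble is stored as a complex array of shape (N, number of frequency bins); the function name and data layout are illustrative, not the authors' implementation.

```python
import numpy as np

# Ensemble statistics of Eqs. (7)-(10) from N independent SC spectra A[i, w].
def coherence_and_rin(A):
    N = A.shape[0]
    P = np.abs(A)**2                     # spectral power of each realization
    mu = P.mean(axis=0)                  # mean spectrum
    # sum over all N(N-1) ordered pairs via the identity
    # sum_{i!=j} Ai* Aj = |sum_i Ai|^2 - sum_i |Ai|^2 (avoids a double loop)
    s = A.sum(axis=0)
    cross = (np.abs(s)**2 - P.sum(axis=0)) / (N * (N - 1))
    g12 = np.abs(cross) / mu                  # Eq. (7)
    rin = P.std(axis=0, ddof=1) / mu          # Eq. (9)
    g12_avg = np.sum(g12 * mu) / np.sum(mu)   # Eq. (8)
    rin_avg = np.sum(rin * mu) / np.sum(mu)   # Eq. (10)
    return g12, rin, g12_avg, rin_avg
```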
The experimental setup is shown in Fig. 9a. A collimated beam from a mode-locked laser with center wavelength at 1064 nm and 80 MHz repetition rate was focused into 0.5 m of ANDi fiber with an aspheric lens. The pump is a passively mode-locked laser borrowed from Fianium (FP-1060-5-fs), in which picosecond pulses emitted from the laser are compressed in an external stage with bulk compression. The laser emits a collimated and linearly polarized beam, and its output was characterized as shown in Fig. 9b,c. First, the pulse length was measured with an intensity autocorrelator (Femtochrome FR-103HP) and an oscilloscope for two different configurations of the laser (Fig. 9b). The FWHM pulse length was found to be 170 fs and 235 fs, assuming a hyperbolic-secant-squared power profile. Furthermore, the spectra for these two configurations (Fig. 9c) were also measured with an optical spectrum analyzer (ANDO AQ6317B) in order to estimate the linear chirp in the simulations. The measured FWHM bandwidth was 16.6 nm and 15.5 nm for 170 fs and 235 fs, respectively. Nonlinear chirp due to self-phase modulation in the output spectra was also observed, caused by the amplification stage in the laser.
Experimental setup. (a) Schematic of the experimental setup for supercontinuum generation and noise measurements; BPF – Bandpass filter. (b) Intensity autocorrelation measurement (FWHM) and (c) spectrum of the pump laser for two different settings.
The SC generated after 0.5 m of ANDi fiber was collimated and measured with the optical spectrum analyzer (OSA) used before. An integrating sphere (~20 dB wavelength independent loss) was used to reduce the power going to the OSA, to avoid damaging the instrument as well as to collect all divergent light. Each spectrum is an average over thousands of pulses due to the slow response of the detector in the OSA. The input power to the ANDi fiber was controlled with neutral density filters from Thorlabs, in order to measure the power dependence of the generated SC and power dependence of the RIN. In these measurements, the input polarization was not adjusted to the slow-axis as in previous studies34,35,36 where they tried to minimize the noise. The noise investigation is here carried out aiming at a practical source where low birefringence fibers are spliced on directly to the pump laser and no extra components are used to match the input polarization to the slow-axis.
To measure the wavelength dependence of the RIN37,39, the SC is filtered every 50 nm with bandpass filters of 10 nm bandwidth from Thorlabs. The filtered SC is then detected with two fiber-coupled photodiodes, an InGaAs for longer wavelengths 950–1350 nm (Thorlabs - DET08CFC - 800 to 1700 nm, 5 GHz BW), and a Si for shorter wavelengths 750–900 nm (Thorlabs - DET025AFC - 400 to 1100 nm, 2 GHz BW). The voltage time series are then recorded with a fast oscilloscope (Teledyne LeCroy - HDO9404 - 10 bits resolution, 40 Gs/s, and 4 GHz BW). Around ~16000 pulses are recorded for each 10 nm filtered SC. The power incident on the detector was purposely kept low to operate in the linear regime, far from saturation. The described technique has already been used to measure the noise in SC sources50. To calculate the RIN, we extract the maxima of the pulses from the voltage time series recorded for each 10 nm filtered SC and subtract the noise floor. The histograms obtained for the extracted ~16000 peak values are fitted to a gamma distribution, and the RIN is calculated as the ratio of the standard deviation to the mean of the distribution. Besides the SC noise, the noise of the pump laser was also characterized by measuring the pulse-to-pulse fluctuations with the photodiode without a filter. No filter was used since the spectral width of the pump is close to that of the filters. The values obtained were 0.71% (170 fs) and 0.67% (235 fs).
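To illustrate the last step, here is a sketch of the gamma-distribution fit and RIN extraction; the peaks array below is a synthetic stand-in for the ~16000 baseline-corrected pulse peaks of one filtered band, used purely for illustration.

```python
import numpy as np
from scipy import stats

# Gamma-distribution fit of the extracted pulse peaks and RIN = sigma/mu.
rng = np.random.default_rng(0)
peaks = rng.gamma(shape=50.0, scale=0.02, size=16000)   # stand-in data

a, loc, scale = stats.gamma.fit(peaks, floc=0)  # shape, location, scale
mu = a * scale                                  # mean of fitted gamma (loc=0)
sigma = np.sqrt(a) * scale                      # standard deviation
print(f"RIN = {100 * sigma / mu:.2f} %")        # for a gamma: RIN = 1/sqrt(a)
```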
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
1. Dudley, J. M., Genty, G. & Coen, S. Supercontinuum generation in photonic crystal fiber. Rev. Mod. Phys. 78, 1135–1184 (2006).
2. Møller, U. et al. Power dependence of supercontinuum noise in uniform and tapered PCFs. Opt. Express 20, 2851–2857 (2012).
3. Dudley, J. M. & Coen, S. Coherence properties of supercontinuum spectra generated in photonic crystal and tapered optical fibers. Opt. Lett. 27, 1180–1182 (2002).
4. Kudlinski, A. et al. Control of pulse-to-pulse fluctuations in visible supercontinuum. Opt. Express 18, 27445–27454 (2010).
5. Brown, W. J., Kim, S. & Wax, A. Noise characterization of supercontinuum sources for low coherence interferometry applications. J. Opt. Soc. Am. A 31, 2703–2710 (2014).
6. Yuan, W. et al. Optimal operational conditions for supercontinuum-based ultrahigh-resolution endoscopic OCT imaging. Opt. Lett. 41, 250–253 (2016).
7. Tu, H. & Boppart, S. A. Coherent anti-Stokes Raman scattering microscopy: overcoming technical barriers for clinical translation. J. Biophoton. 7, 9–22 (2014).
8. Falk, J. P., Frosz, M. H. & Bang, O. Supercontinuum generation in a photonic crystal fiber with two zero-dispersion wavelengths tapered to normal dispersion at all wavelengths. Opt. Express 13, 7535–7540 (2005).
9. Sørensen, S. T. et al. Influence of pump power and modulation instability gain spectrum on seeded supercontinuum and rogue wave generation. J. Opt. Soc. Am. B 29, 2875–2885 (2012).
10. Solli, D. R., Ropers, C. & Jalali, B. Active control of rogue waves for stimulated supercontinuum generation. Phys. Rev. Lett. 101, 233902 (2008).
11. Cheung, K. K. Y., Zhang, C., Zhou, Y., Wong, K. K. Y. & Tsia, K. K. Manipulating supercontinuum generation by minute continuous wave. Opt. Lett. 36, 160–162 (2011).
12. Moselund, P. M., Frosz, M., Thomsen, C. & Bang, O. Backseeding modulational instability and supercontinuum generation. Opt. Express 16, 11954–11968 (2008).
13. Maria, M. et al. Q-switch-pumped supercontinuum for ultra-high resolution optical coherence tomography. Opt. Lett. 42, 4744–4747 (2017).
14. Møller, U. & Bang, O. Intensity noise in normal-pumped picosecond supercontinuum generation, where higher-order Raman lines cross into anomalous dispersion regime. Electron. Lett. 49, 63–65 (2013).
15. Heidt, A. M., Feehan, J. S., Price, J. H. V. & Feurer, T. Limits of coherent supercontinuum generation in normal dispersion fibers. J. Opt. Soc. Am. B 34, 764–775 (2017).
16. Hartung, A., Heidt, A. M. & Bartelt, H. Design of all-normal dispersion microstructured optical fibers for pulse-preserving supercontinuum generation. Opt. Express 19, 7742–7749 (2011).
17. Finot, C., Kibler, B., Provost, L. & Wabnitz, S. Beneficial impact of wave-breaking for coherent continuum formation in normally dispersive nonlinear fibers. J. Opt. Soc. Am. B 25, 1938–1948 (2008).
18. Agrawal, G. P. Nonlinear Fiber Optics, 4th ed. (Elsevier, 2007).
19. Heidt, A. M. Pulse preserving flat-top supercontinuum generation in all-normal dispersion photonic crystal fibers. J. Opt. Soc. Am. B 27, 550–559 (2010).
20. Heidt, A. M. et al. Coherent octave spanning near-infrared and visible supercontinuum generation in all-normal dispersion photonic crystal fibers. Opt. Express 19, 3775–3787 (2011).
21. Hooper, L. E., Mosley, P. J., Muir, A. C. W., Wadsworth, J. & Knight, J. C. Coherent supercontinuum generation in photonic crystal fiber with all-normal group velocity dispersion. Opt. Express 19, 4902–4907 (2011).
22. Nishizawa, N. & Takayanagi, J. Octave spanning high-quality supercontinuum generation in all-fiber system. J. Opt. Soc. Am. B 24, 1786–1792 (2007).
23. Tarnowski, K. et al. Coherent supercontinuum generation up to 2.2 μm in an all-normal dispersion microstructured silica fiber. Opt. Express 24, 30523–30536 (2016).
24. Klimczak, M. et al. Coherent supercontinuum generation up to 2.3 μm in all-solid soft-glass photonic crystal fibers with flat all-normal dispersion. Opt. Express 22, 18824–18832 (2014).
25. Klimczak, M., Soboń, G., Abramski, K. & Buczyński, R. Spectral coherence in all-normal dispersion supercontinuum in presence of Raman scattering and direct seeding from sub-picosecond pump. Opt. Express 22, 31635–31645 (2014).
26. Liu, L. et al. Coherent mid-infrared supercontinuum generation in all-solid chalcogenide microstructured fibers with all-normal dispersion. Opt. Lett. 41, 392–395 (2016).
27. Petersen, C. R. et al. Mid-infrared supercontinuum covering the 1.4–13.3 μm molecular fingerprint region using ultra-high NA chalcogenide step-index fibre. Nat. Photon. 8, 830–834 (2014).
28. Cheng, T. et al. Mid-infrared supercontinuum generation spanning 2.0 to 15.1 μm in a chalcogenide step-index fiber. Opt. Lett. 41, 2117–2120 (2016).
29. Petersen, C. R. et al. Increased mid-infrared supercontinuum bandwidth and average power by tapering large-mode-area chalcogenide photonic crystal fibers. Opt. Express 25, 15336–15347 (2017).
30. Klimczak, M., Soboń, G., Kasztelanic, R., Abramski, K. & Buczyński, R. Direct comparison of shot-to-shot noise performance of all normal dispersion and anomalous dispersion supercontinuum pumped with sub-picosecond pulse fiber-based laser. Sci. Rep. 6, 19284 (2016).
31. Murdoch, S. G., Leonhardt, R. & Harvey, J. D. Polarization modulation instability in weakly birefringent fibers. Opt. Lett. 20, 866–868 (1995).
32. Millot, G., Seve, E., Wabnitz, S. & Haelterman, M. Observation of induced modulational polarization instabilities and pulse-train generation in the normal-dispersion regime of a birefringent optical fiber. J. Opt. Soc. Am. B 15, 1266–1277 (1998).
33. Kruhlak, R. J. et al. Polarization modulation instability in photonic crystal fibers. Opt. Lett. 31, 1379–1381 (2006).
34. Tu, H. et al. Nonlinear polarization dynamics in a weakly birefringent all-normal dispersion photonic crystal fiber: toward a practical coherent fiber supercontinuum laser. Opt. Express 20, 1113–1128 (2012).
35. Domingue, S. R. & Bartels, R. A. Overcoming temporal polarization instabilities from the latent birefringence in all-normal dispersion, wave-breaking-extended nonlinear fiber supercontinuum generation. Opt. Express 21, 13305–13321 (2013).
36. Liu, Y. et al. Suppressing short-term polarization noise and related spectral decoherence in all-normal dispersion fiber supercontinuum generation. J. Lightwave Technol. 33, 1814–1820 (2015).
37. Sørensen, S. T., Bang, O., Wetzel, B. & Dudley, J. M. Describing supercontinuum noise and rogue wave statistics using higher-order moments. Opt. Commun. 285, 2451–2455 (2012).
38. Wetzel, B. et al. Real-time full bandwidth measurement of spectral noise in supercontinuum generation. Sci. Rep. 2, 882 (2012).
39. Corwin, K. L. et al. Fundamental noise limitations to supercontinuum generation in microstructure fiber. Phys. Rev. Lett. 90, 1–4 (2003).
40. Poletti, F. & Horak, P. Description of ultrashort pulse propagation in multimode optical fibers. J. Opt. Soc. Am. B 25, 1645–1654 (2008).
41. Coen, S. et al. Supercontinuum generation by stimulated Raman scattering and parametric four-wave mixing in photonic crystal fibers. J. Opt. Soc. Am. B 19, 753–764 (2002).
42. Zhu, Z. & Brown, T. G. Polarization properties of supercontinuum spectra generated in birefringent photonic crystal fibers. J. Opt. Soc. Am. B 21, 249–257 (2004).
43. Lægsgaard, J. Mode profile dispersion in the generalized nonlinear Schrödinger equation. Opt. Express 15, 16110–16123 (2007).
44. Zhou, J. et al. Progress on low loss photonic crystal fibers. Opt. Fiber Technol. 11, 101–110 (2005).
45. Balac, S. & Mahé, F. Embedded Runge–Kutta scheme for step-size control in the interaction picture method. Comput. Phys. Commun. 184, 1211–1219 (2013).
46. NKT Photonics A/S, http://www.nktphotonics.com.
47. Smith, R. Optical power handling capacity of low loss optical fibers as determined by stimulated Raman and Brillouin scattering. Appl. Opt. 11, 2489–2494 (1972).
48. Facão, M., Carvalho, M. I., Fernandes, G. M., Rocha, A. M. & Pinto, A. N. Continuous wave supercontinuum generation pumped in the normal group velocity dispersion regime on a highly nonlinear fiber. J. Opt. Soc. Am. B 30, 959–966 (2013).
49. Sørensen, S. T. et al. The role of phase coherence in seeded supercontinuum generation. Opt. Express 20, 22886–22894 (2012).
50. Lafargue, C. et al. Direct detection of optical rogue wave energy statistics in supercontinuum generation. Electron. Lett. 45, 217–219 (2009).
The authors acknowledge financial support from Innovationsfonden (SHAPEOCT—4107-00011B); GALAHAD Horizon 2020 Framework Programme (H2020) (732613) and Det Frie Forskningsråd (DFF) (LOISE—4184-00532B).
Department of Photonics Engineering, Technical University of Denmark, DK-2800, Kgs. Lyngby, Denmark
Iván Bravo Gonzalo, Rasmus Dybbro Engelsholm & Ole Bang
Department of Applied Mathematics and Computer Science, Technical University of Denmark, DK-2800, Kgs. Lyngby, Denmark
Mads Peter Sørensen
NKT Photonics A/S, Blokken 84, DK-3460, Birkerød, Denmark
Ole Bang
I.B.G. performed the supercontinuum and noise experiments, analyzed and presented the data, and wrote the manuscript. R.D.E. provided the code for the experimental noise data analysis and helped with the interpretation of the numerical and experimental part, as well as thoroughly reviewing the paper. M.P.S. reviewed the paper. O.B. initiated the project, supervised the work and reviewed the paper.
Correspondence to Iván Bravo Gonzalo.
Bravo Gonzalo, I., Engelsholm, R.D., Sørensen, M.P. et al. Polarization noise places severe constraints on coherence of all-normal dispersion femtosecond supercontinuum generation. Sci Rep 8, 6579 (2018). https://doi.org/10.1038/s41598-018-24691-7
This article is cited by:
Yuxi Fang, Changjing Bao, Zhi Wang, Bo Liu, Lin Zhang, Xu Han, Yuxuan He, Hao Huang, Yongxiong Ren, Zhongqi Pan & Yang Yue. Three-Octave Supercontinuum Generation Using SiO2 Cladded Si3N4 Slot Waveguide With All-Normal Dispersion. Journal of Lightwave Technology (2020).
Lanh Chu Van, Van Thuy Hoang, Van Cao Long, Krzysztof Borzycki, Khoa Dinh Xuan, Vu Tran Quoc, Marek Trippenbach, Ryszard Buczyński & Jacek Pniewski. Supercontinuum generation in photonic crystal fibers infiltrated with nitrobenzene. Laser Physics (2020).
James S. Feehan, Enrico Brunetti, Samuel Yoffe, Wentao Li, Samuel M. Wiggins, Dino A. Jaroszynski & Jonathan H. V. Price. Noise-related polarization dynamics for femto and picosecond pulses in normal dispersion fibers. Optics Express (2020).
Abubakar I. Adamu, Md. Selim Habib, Callum R. Smith, J. Enrique Antonio Lopez, Peter Uhd Jepsen, Rodrigo Amezcua-Correa, Ole Bang & Christos Markos. Noise and spectral stability of deep-UV gas-filled fiber-based supercontinuum sources driven by ultrafast mid-IR pulses. Scientific Reports (2020).
Satya Pratap Singh, Jasleen Kaur, Keshav Samrat Modi, Umesh Tiwari & Ravindra Kumar Sinha. Temperature-assisted broadly tunable supercontinuum generation in chalcogenide-glass-based capillary optical fiber. Journal of the Optical Society of America B (2020).
Chemical Composition of RR Lyn - an Eclipsing Binary System with Am and λ Boo Type Components
Jeong, Yeuncheol;Yushchenko, Alexander V.;Doikov, Dmytry N.;Gopka, Vira F.;Yushchenko, Volodymyr O.
https://doi.org/10.5140/JASS.2017.34.2.75
High-resolution spectroscopic observations of the eclipsing binary system RR Lyn were made using the 1.8 m telescope at the Bohyunsan Optical Astronomy Observatory in Korea. The spectral resolving power was R = 82,000, with a signal-to-noise ratio of S/N > 150. We found the effective temperatures and surface gravities of the primary and secondary components to be $T_{\mathrm{eff}}$ = 7,920 and 7,210 K and log(g) = 3.80 and 4.16, respectively. The abundances of 34 and 17 different chemical elements were determined in the atmospheres of the two components. Correlations of the derived abundances with condensation temperatures and with the second ionization potentials of these elements are discussed. The primary component is a typical metallic-line star with abundances of light and iron-group elements close to solar values, while elements with atomic numbers Z > 30 are overabundant by 0.5-1.5 dex with respect to solar values. The secondary component is a $\lambda$ Boo type star. In stars of this type, CNO abundances are close to solar values, while the abundance pattern shows a negative correlation with condensation temperatures.
Spatial Configuration of Stars Around Three Metal-poor Globular Clusters in the Galactic Bulge, NGC 6266, NGC 6273, and NGC 6681: Surface Density Map and Radial Density Profile
Han, Mihwa;Chun, Sang-Hyun;Choudhury, Samyaday;Chiang, Howoo;Lee, Sowon;Sohn, Young-Jong
We present extra-tidal features of the spatial configuration of stars around three metal-poor globular clusters (NGC 6266, NGC 6273, NGC 6681) located in the Galactic bulge. The wide-field photometric data were obtained in BVI bands with the MOSAIC II camera at the CTIO 4 m Blanco telescope. The derived color-magnitude diagrams (CMDs) contain stars in a total $71' \times 71'$ area including each cluster and its surrounding field outside of the tidal radius of the cluster. Applying a statistical filtering technique, we minimized the field-star contamination in the obtained cluster CMDs and extracted the cluster members. In the spatial stellar density maps around the target clusters, we found overdensity features beyond the tidal radii of the clusters. We also found that the radial density profiles of the clusters show departures from the best-fit King model in their outer regions, which supports the overdensity patterns.
Variation in Solar Limb Darkening Coefficient Estimated from Solar Images Taken by SOHO and SDO
Moon, Byeongha;Jeong, Dong-Gwon;Oh, Suyeon;Sohn, Jongdae
The sun is not equally bright over the whole disk, but rather is darkened toward the limb. This effect is well known as limb darkening. The limb darkening coefficient is defined by the ratio of the center intensity to the limb intensity. In this study, we calculate the limb darkening coefficient using the photospheric intensity estimated from solar images taken by the Solar and Heliospheric Observatory (SOHO) and the Solar Dynamics Observatory (SDO). The photospheric intensity data cover almost two solar cycles, from May 1996 to December 2016. The limb darkening coefficient for a size of 0.9 of the solar diameter is about 0.69, and this value is consistent with solar limb darkening. The limb darkening coefficient estimated from SOHO shows a temporal increase at solar maximum and a gradual increase since the solar minimum of 2008. The limb darkening coefficient estimated from SDO shows a constant value of about 0.65 and a decreasing trend since 2014. The increase in the coefficient reflects the effect of weakened solar activity. However, the decrease since 2014 is caused by the aging effect.
Characteristics of Solar Wind Density Depletions During Solar Cycles 23 and 24
Park, Keunchan;Lee, Jeongwoo;Yi, Yu;Lee, Jaejin;Sohn, Jongdae
Solar wind density depletions are events in which the solar wind density decreases rapidly and then remains low. They are generally believed to be caused by interplanetary (IP) shocks. However, there are other cases that are hardly associated with IP shocks. We set up a hypothesis for this phenomenon and analyze it in this study. We have collected solar wind parameters such as density, speed, and interplanetary magnetic field (IMF) data related to solar wind density depletion events during the period from 1996 to 2013, obtained with the Advanced Composition Explorer (ACE) and the Wind satellite. We also calculate two pressures (magnetic and dynamic) and analyze their relation to the density depletions. As a result, we found 53 events in total, and most of the IP shocks causing these events were driven by interplanetary coronal mass ejections (ICMEs). We also found that solar wind density depletions are scarcely related to the parameters of IP shocks. The solar wind density is correlated with the solar wind dynamic pressure within a depletion. However, the solar wind density has a slight anti-correlation with IMF strength during all solar wind density depletion events, regardless of the presence of IP shocks. Additionally, among the 47 events with IP shocks, we find 6 events that show the features of a blast wave. These shocks are weaker than blast waves from the Sun, and they decline in a short time after increasing rapidly. We thus argue that IMF strength and dynamic pressure are important factors in understanding the nature of solar wind density depletions. Since IMF strength and solar wind speed vary with the solar cycle, we will also investigate the characteristics of solar wind density depletion events in different phases of the solar cycle as an additional clue to their physical nature.
Pi2 Pulsations During Extremely Quiet Geomagnetic Condition: Van Allen Probe Observations
Ghamry, Essam
An ultra-low-frequency (ULF) wave, Pi2, has been reported to occur during periods of extremely quiet magnetospheric and solar wind conditions. To the author's knowledge, however, no statistical study of Pi2 pulsations during extremely quiet conditions has been performed using satellite observations. Pi2 pulsations seen by spaceborne fluxgate magnetometers near perigee have also attracted little attention previously. In this paper, Pi2 pulsations detected by the Van Allen Probe satellites (VAP-A and VAP-B) are investigated statistically. During the period from October 2012 to December 2014, ninety-six Pi2 events were identified with VAP when Kp = 0, using Kakioka (KAK, L = 1.23) as a reference ground station. Seventy-five events had high coherence between the VAP Bz component and the H component at the KAK station. As a result, it was found that 77% of the events had power spectra between 5 and 12 mHz, which differs from the regular Pi2 band of 6.7 to 25 mHz. In addition, it was shown that it is possible to observe Pi2 pulsations with spaceborne fluxgate magnetometers near perigee. Twenty-two clean Pi2 pulsations were found where L < 4, and four examples of Pi2 oscillations at different L shells are presented in this paper.
Mesospheric Temperatures over Apache Point Observatory (32°N, 105°W) Derived from Sloan Digital Sky Survey Spectra
Kim, Gawon;Kim, Yong Ha;Lee, Young Sun
We retrieved rotational temperatures from emission lines of the OH airglow (8-3) band in the sky spectra of the Sloan Digital Sky Survey (SDSS) for the period 2000-2014, as part of the astronomical observation project conducted at the Apache Point Observatory ($32^{\circ}$N, $105^{\circ}$W). The SDSS temperatures show a typical seasonal variation of mesospheric temperature: low in summer and high in winter. We find that the temperatures respond to solar activity by as much as $1.2 \pm 0.8$ K per 100 solar flux units, which is consistent with other studies in mid-latitude regions. After the seasonal variation and solar response were subtracted, the SDSS temperature is fairly constant over the 15 year period, unlike the cooling trends suggested by some studies. This temperature analysis using SDSS spectra is a unique contribution to the global monitoring of climate change because the SDSS project was established for astronomical purposes and is independent of climate studies. The SDSS temperatures are also compared with mesospheric temperatures measured by the Microwave Limb Sounder (MLS) instrument on board the Aura satellite, and the differences are discussed.
Mission Orbit Design of CubeSat Impactor Measuring Lunar Local Magnetic Field
Lee, Jeong-Ah;Park, Sang-Young;Kim, Youngkwang;Bae, Jonghee;Lee, Donghun;Ju, Gwanghyeok
The current study designs the mission orbit of a lunar CubeSat spacecraft to measure the lunar local magnetic anomaly. To perform this mission, the CubeSat will impact the lunar surface over the Reiner Gamma swirl on the Moon. Orbit analyses are conducted, comprising $\Delta V$ and error propagation analyses for the CubeSat mission orbit. First, three possible orbit scenarios are presented in terms of the CubeSat's impacting trajectories. For each scenario, it is important to achieve the mission objectives with a minimum $\Delta V$ since the CubeSat is limited in size and cost. Therefore, the $\Delta V$ needed for the CubeSat to maneuver from the initial orbit onto the impacting trajectory is analyzed for each orbit scenario. In addition, error propagation analysis is performed for each scenario to evaluate how initial errors, such as position error, velocity error, and maneuver error, that occur when the CubeSat is separated from the lunar orbiter eventually affect the final impact position. As a result, the current study adopts a CubeSat release from the circular orbit at 100 km altitude and an impact slope of $15^{\circ}$, among the possible impacting scenarios. For this scenario, the required $\Delta V$ is calculated from the $\Delta V$ analysis; it can be used to make a practical estimate of this specific mission's fuel budget. In addition, the current study suggests error constraints on $\Delta V$ for the mission.
A Deep Space Orbit Determination Software: Overview and Event Prediction Capability
Kim, Youngkwang;Park, Sang-Young;Lee, Eunji;Kim, Minsik
This paper presents an overview of the deep space orbit determination software (DSODS), as well as validation and verification results on its event prediction capabilities. DSODS was developed in the MATLAB object-oriented programming environment to support the Korea Pathfinder Lunar Orbiter (KPLO) mission. DSODS has three major capabilities: celestial event prediction for spacecraft, orbit determination with deep space network (DSN) tracking data, and DSN tracking data simulation. To achieve its functionality requirements, DSODS consists of four modules: orbit propagation (OP), event prediction (EP), data simulation (DS), and orbit determination (OD) modules. This paper explains the highest-level data flows between modules in the event prediction, orbit determination, and tracking data simulation processes. Furthermore, to address the event prediction capability of DSODS, this paper introduces the OP and EP modules. The role of the OP module is to handle time and coordinate system conversions, to propagate spacecraft trajectories, and to handle the ephemerides of spacecraft and celestial bodies. Currently, the OP module utilizes the General Mission Analysis Tool (GMAT) as a third-party software component for high-fidelity deep space propagation, as well as time and coordinate system conversions. The role of the EP module is to predict celestial events, including eclipses and ground station visibilities, and this paper presents the functionality requirements of the EP module. The validation and verification results show that, for most cases, event prediction errors were less than 10 ms when compared with flight-proven mission analysis tools such as GMAT and Systems Tool Kit (STK). Thus, we conclude that DSODS is capable of predicting events for the KPLO in real mission applications.
Computational Science-based Research on Dark Matter at KISTI
Cho, Kihyeon
The Standard Model of particle physics was established after the discovery of the Higgs boson. However, little is known about dark matter, which has mass and is roughly five times more abundant than Standard Model matter in the universe. The cross-section of dark matter is much smaller than those of existing Standard Model processes, and the range of the predicted mass is wide, from a few eV to several PeV. Therefore, massive amounts of astronomical, accelerator, and simulation data are required to study dark matter, and efficient processing of these data is vital. Computational science, which can combine experiment, theory, and simulation, is thus necessary for dark matter research. A computational science and deep learning-based dark matter research platform is suggested for enhanced coverage and sharing of data. Such an approach can efficiently add to our existing knowledge of the mystery of dark matter.
Estimation of the Latitude, the Gnomon's Length and Position About Sinbeop-Jipyeong-Ilgu in the Late of Joseon Dynasty
Mihn, Byeong-Hee;Lee, Yong Sam;Kim, Sang Hyuk;Choi, Won-Ho;Ham, Seon Young
In this study, the characteristics of a horizontal sundial from the Joseon Dynasty were investigated. Korea's Treasure No. 840 (T840) is a Western-style horizontal sundial on which hour lines and solar-term lines are engraved. The inscription on this sundial indicates that the latitude (altitude of the north celestial pole) is $37^{\circ}39'$, but the gnomon is lost. In the present study, the latitude of the sundial and the length of the gnomon were estimated based only on the hour lines and solar-term lines of the horizontal sundial. When statistically calculated from the convergent point obtained by extending the hour lines, the latitude of this sundial was $37^{\circ}15' \pm 26'$, which differs by 24' from the record of the inscription. When it was also assumed that the convergent point is changeable, the estimate of the sundial's latitude was found to be sensitive to the variation of this point. This study found that T840 used a vertical gnomon, that is, one perpendicular to the horizontal plane, rather than an inclined triangular gnomon, and a horn-shaped mark like a vertical gnomon is cut on its surface. The length of the gnomon engraved on the artifact is 43.1 mm, which in the present study was statistically calculated as $43.7 \pm 0.7$ mm. In addition, the position of the gnomon according to the original inscription and our calculation showed an error of 0.3 mm.
A Study on an Analysis and Design of the Internal Structure of Heumgyeonggak-nu
Kim, Sang Hyuk;Yun, Yong-Hyun;Ham, Seon Young;Mihn, Byeong-Hee;Ki, Ho-Chul;Yoon, Myung-Kyoon
In this study, the internal structure of the Heumgyeonggak-nu (欽敬閣漏) was designed, and its power transmission mechanism was analyzed. Heumgyeonggak-nu is an automated water clock from the Joseon Dynasty that was installed within Heumgyeonggak (欽敬閣); it was manufactured in the 20th year of the reign of King Sejong (1438). As descriptions of Heumgyeonggak-nu in the ancient literature have mostly focused on its external shape, the study of its internal mechanism has been difficult. A detailed analysis of the literature record on Heumgyeonggak-nu (e.g., The Annals of the Joseon Dynasty) indicates that Heumgyeonggak-nu had a three-stage water clock, included a waterfall or tilting vessel (欹器) using the overflowing water, and displayed the time using a ball. In this study, the Cheonhyeong apparatus, water wheel, scoops, and various mechanism wheels were designed so that 16 fixed-type scoops could drive the 100 cm diameter water wheel at a constant speed. As each scoop can contain 1.25 l of water and the water wheel rotates 61 times a day, a total of 1,220 l of water is required. Also, the power gear wheel was designed as a 366-tooth gear, which supported the operation of the time signal gear wheel. To implement the movement of stars on the celestial sphere, the rotation ratio of the celestial gear wheel to the diurnal motion gear ring was set to 366:365. In addition, to operate the sun movement apparatus on the ecliptic, a gear device was installed on the South Pole axis. It is expected that the results of this study can be used for the manufacture and restoration of an operating model of Heumgyeonggak-nu.
Iterated integrals on elliptic and modular curves
Oxford Mathematician Ma Luo talks about his work on constructing iterated integrals, which generalize usual integrals, to study elliptic and modular curves.
Usual integrals
Given a path $\gamma$ and a differential 1-form $\omega$ on a space $M$, we can parametrize the path $$\gamma:[0,1]\to M, \qquad t\mapsto\gamma(t)$$ and write $\omega$ as $f(t)dt$, then define the usual integral $$\int_\gamma \omega=\int_0^1 f(t)dt.$$ If we have two loops $\alpha$ and $\beta$ based at the same point $x$ on $M$, then $$\int_{\alpha\beta}\omega=\int_\alpha \omega+\int_\beta \omega=\int_{\beta\alpha} \omega.$$ The order of the loops from which we integrate does not affect the result. Therefore, the usual integral can only detect commutative, i.e. abelian, information in the fundamental group $\pi_1(M,x)$.
Iterated integrals
Kuo-Tsai Chen has discovered a generalization of the usual integral as follows: Given a path $\gamma$ and differential 1-forms $\omega_1,\cdots,\omega_r$ on $M$. Write each $\omega_j$ as $f_j(t)dt$ on the parametrized path $\gamma(t)$. Define an iterated integral by \begin{equation}\label{def} \int_\gamma \omega_1\cdots\omega_r=\idotsint\limits_{0\le t_1\le \cdots \le t_r\le 1} f_1(t_1)f_2(t_2)\cdots f_r(t_r) dt_1\cdots dt_r. \end{equation} It is a time ordered integral. Now for the two loops $\alpha$ and $\beta$, we have $$\int_{\alpha\beta}\omega_1\omega_2-\int_{\beta\alpha}\omega_1\omega_2= \begin{vmatrix} \int_\alpha\omega_1 & \int_\beta\omega_1\\ \int_\alpha\omega_2 & \int_\beta\omega_2 \end{vmatrix},$$ which is often nonzero. Therefore, iterated integrals are sensitive to the order, and they must capture some non-abelian information. But what kind of non-abelian information?
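To make the time-ordered nature of the definition concrete, here is a minimal numerical sketch in Python: a two-fold iterated integral $\int_\gamma \omega_1\omega_2$ is computed as a double integral over the simplex $0\le t_1\le t_2\le 1$. The pullbacks $f_1$, $f_2$ are illustrative choices, not taken from any particular path or forms.

from scipy import integrate

f1 = lambda t: t        # illustrative pullback f1(t) dt of omega_1 along the path
f2 = lambda t: t**2     # illustrative pullback f2(t) dt of omega_2

val, err = integrate.dblquad(
    lambda t1, t2: f1(t1) * f2(t2),   # integrand f1(t1) f2(t2)
    0.0, 1.0,                         # outer variable t2 ranges over [0, 1]
    lambda t2: 0.0, lambda t2: t2)    # inner variable t1 ranges over [0, t2]
print(val)  # 0.1 here: the exact value is 1/10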
Differential equations and nilpotence
We can reformulate the definition of iterated integral as the element $y_r$ in the solution of a system of differential equations: \begin{align*} dy_0/dt &=0\\ dy_1/dt &=f_1\cdot y_0\\ dy_2/dt &=f_2\cdot y_1\\ \cdots & \\ dy_r/dt &=f_r\cdot y_{r-1} \end{align*} where we insist $y_0(t)\equiv 1$ so that $y_r$ agrees with our previous definition. The auxiliary functions $\{y_0,y_1,\cdots,y_{r-1}\}$ allow us to rewrite the system in the following way: $$ \frac{d}{dt}(y_0,y_1,\cdots,y_r)=(y_0,y_1,\cdots,y_r) \begin{pmatrix} 0 & f_1 & 0 & \cdots & 0 \\ 0 & 0 & f_2 & \ddots & \vdots \\ \vdots & \vdots & 0 & \ddots & 0 \\ \vdots & \vdots & \vdots & \ddots & f_r \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix} $$ where the matrix on the right is nilpotent (powers to 0). In general, the solutions to a system exist on a small scale for a short time locally. As the time progresses, the local information is being transferred globally. The global behaviour of the solutions is dictated by the system. In our case, iterated integrals are limited by the nilpotence property. Perhaps surprisingly, even with this limited non-abelian information, one finds they have interesting applications to number theory, most notably in Minhyong Kim's work, which uses $p$-adic iterated integrals (local) to help find rational points on curves (global).
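The ODE reformulation can be checked against the direct definition: solving the triangular system numerically, $y_2(1)$ reproduces the two-fold iterated integral computed above (same illustrative $f_1$, $f_2$).

from scipy.integrate import solve_ivp

f1 = lambda t: t
f2 = lambda t: t**2

def rhs(t, y):
    y0, y1, y2 = y
    return [0.0, f1(t) * y0, f2(t) * y1]   # y0' = 0, y1' = f1*y0, y2' = f2*y1

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[2, -1])  # ~0.1, matching the simplex integral above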
Algebraic iterated integrals and beyond nilpotence
Elliptic curves and modular curves both feature prominently in the proof of Fermat's Last Theorem by Andrew Wiles and are extensively studied objects in number theory. My recent work (PhD thesis) constructs algebraic iterated integrals on elliptic curves and the modular curve (of level one). The construction proceeds in a similar fashion as iteratively solving the system of differential equations above. In the case of elliptic curves, my work is based on previous work of Levin--Racinet. The algebraic iterated integrals on elliptic curves lead naturally to elliptic polylogarithms, which generalize the classical polylogarithms \begin{align*} \mathrm{Li}_k(x):&=\sum_{n=1}^\infty\frac{x^n}{n^k},\qquad k\ge 1 \\ &=\int_0^x \frac{dz}{1-z}\underbrace{\frac{dz}{z}\cdots\frac{dz}{z}}_{(k-1)\text{ times}} \end{align*}
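As a sanity check of the two expressions for the dilogarithm, the sketch below compares the series $\mathrm{Li}_2(x)=\sum_{n\ge1}x^n/n^2$ with the two-fold iterated integral of $\frac{dz}{1-z}\frac{dz}{z}$ taken over $0\le z_1\le z_2\le x$, again in Python.

from scipy import integrate

x = 0.5
series = sum(x**n / n**2 for n in range(1, 200))   # Li_2(1/2) from the series

integral, _ = integrate.dblquad(
    lambda z1, z2: 1.0 / ((1.0 - z1) * z2),        # dz1/(1-z1) * dz2/z2
    0.0, x,                                        # outer z2 in [0, x]
    lambda z2: 0.0, lambda z2: z2)                 # inner z1 in [0, z2]
print(series, integral)  # both ~0.58224, i.e. pi^2/12 - ln(2)^2/2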
In the case of the modular curve, one needs to go beyond nilpotence, by adding some prescribed reductive (more complicated non-abelian) data, thereby constructing iterated integrals with coefficients. Specifically, algebraic iterated integrals of modular forms are constructed. They provide multiple modular values, which belong to a special class of numbers called periods. These periods appear not only in number theory, but also in quantum field theory, and in the study of motives. Francis Brown has proposed a framework where Galois theory of periods can be studied. Just as symmetries of algebraic numbers can be deduced from their defining equations, many relations between these periods result from structural properties of their defining iterated integrals. Our goal is to understand these structures and then connect them back to relations between periods.
Solving $y' = \sqrt{|y|}$
I would like to solve the differential equation given by $$ y' = \sqrt{|y|},\qquad y(0) = 0 $$ This is equivalent, if we suppose that $y > 0$, to $$ \frac{dy}{dt} = y^{1/2} \text{ if and only if } y^{-1/2} dy = dt $$ so it should be: $$ 2 y^{1/2} = t + c \implies y = \frac{(t+c)^2}{4} $$ As a test I have checked that $$ y' = \frac{t+c}{2} = \sqrt{|y|} = \sqrt{y} $$ However, I would like to know how to obtain other solutions, such as: $$ y_{\alpha,\beta}(t) = \begin{cases} (t-\alpha)^2 / 4 & t < \alpha,\\ 0 & \alpha \leq t \leq \beta,\\ (t-\beta)^2/4 & t > \beta \end{cases} $$ for any $\alpha < 0 < \beta$, real numbers. That is, I don't know how one would find every solution to this differential equation. Thanks in advance.
ordinary-differential-equations
$\begingroup$ What are $\alpha$ and $\beta$? Do you have boundary conditions for this problem? $\endgroup$ – Vlad Jun 9 '15 at 21:21
$\begingroup$ @Vlad I've edited the question $\endgroup$ – user55268 Jun 9 '15 at 21:26
$\begingroup$ @AlbertT. What Vlad asked you is the role of $\alpha$ and $\beta$ in your problem. The fact that $\alpha < \beta$ is just a hypothesis you had, not the "meaning" of $\alpha$ and $\beta$. $\endgroup$ – the_candyman Jun 9 '15 at 21:28
$\begingroup$ @the_candyman For every two $\alpha,\beta \in \mathbb R$, such that $\alpha < \beta$, $y$ defined as above satisfies the equation, so there are infinite solutions $\endgroup$ – user55268 Jun 9 '15 at 21:29
$\newcommand{\Strut}{\vphantom{(}}$When performing separation of variables, you can't re-write your ODE as $y^{-1/2}\, dy = dt$ in a neighborhood of $t_{0}$ if $y(t_{0}) = 0$.
What you might do instead is:
Observe that $y(t) = 0$ is a solution in an arbitrary interval.
If $y(t_{0}) = y_{0} > 0$, separate variables in a neighborhood of $t_{0}$ on which $y$ is positive: $$ t - t_{0} = \int_{t_{0}}^{t} y^{-1/2}\, dy = 2\left(\sqrt{y(t)} - \sqrt{y_{0}\Strut}\right), $$ so $y(t) = \frac{1}{4}\bigl(t - t_{0} + 2\sqrt{y_{0}\Strut}\bigr)^{2}$.
If $y(t_{0}) = y_{0} < 0$, separate variables in a neighborhood of $t_{0}$ on which $y$ is negative: $$ t - t_{0} = \int_{t_{0}}^{t} (-y)^{-1/2}\, dy = -2\left(\sqrt{-y(t)} - \sqrt{-y_{0}\Strut}\right), $$ so $y(t) = -\frac{1}{4}\bigl(t - t_{0} + 2\sqrt{-y_{0}\Strut}\bigr)^{2}$.
Observe that all three solutions have $y' = 0$ when $y = 0$ (as required by the ODE), so piecing together formulas over abutting intervals gives continuously-differentiable solutions.
Andrew D. Hwang
$\begingroup$ Incidentally, every solution $y$ is non-decreasing (obvious from the ODE, or can be read off the integrated equations in 2 and 3). The quadratics as given come with implicit "fine print": The solutions are the "right half" ($t > t_{0} - 2\sqrt{y_{0}\Strut}$) of the quadratic in 2, or the "left half" ($t < t_{0} - 2\sqrt{-y_{0}\Strut}$) in 3. Particularly, the function $y_{\alpha,\beta}$ in the question appears not to be a solution; you'd need a minus sign in the portion where $t < \alpha$. $\endgroup$ – Andrew D. Hwang Jun 10 '15 at 1:53
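A quick numerical check of the non-uniqueness, incorporating the sign correction from the preceding comment; a minimal Python sketch with the illustrative choice $\alpha=-1$, $\beta=2$:

import numpy as np

alpha, beta = -1.0, 2.0   # any alpha < 0 < beta works; this choice is illustrative

def y(t):
    t = np.asarray(t, dtype=float)
    left = -(t - alpha)**2 / 4.0            # t < alpha (note the minus sign)
    right = (t - beta)**2 / 4.0             # t > beta
    return np.where(t < alpha, left, np.where(t > beta, right, 0.0))

t = np.linspace(-3.0, 4.0, 2001)
residual = np.gradient(y(t), t) - np.sqrt(np.abs(y(t)))
print(np.max(np.abs(residual)))   # small: only O(h) error near the kinks at alpha, beta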
Nucleic Acids Res. 2018 Feb 28;46(4):1674-1683. doi: 10.1093/nar/gkx1269.
A nucleobase-centered coarse-grained representation for structure prediction of RNA motifs.
Poblete S1, Bottaro S1, Bussi G1.
Scuola Internazionale Superiore di Studi Avanzati, 265, Via Bonomea I-34136 Trieste, Italy.
We introduce the SPlit-and-conQueR (SPQR) model, a coarse-grained (CG) representation of RNA designed for structure prediction and refinement. In our approach, the representation of a nucleotide consists of a point particle for the phosphate group and an anisotropic particle for the nucleoside. The interactions are, in principle, knowledge-based potentials inspired by the $\mathcal {E}$SCORE function, a base-centered scoring function. However, a special treatment is given to base-pairing interactions and certain geometrical conformations which are lost in a raw knowledge-based model. This results in a representation able to describe planar canonical and non-canonical base pairs and base-phosphate interactions and to distinguish sugar puckers and glycosidic torsion conformations. The model is applied to the folding of several structures, including duplexes with internal loops of non-canonical base pairs, tetraloops, junctions and a pseudoknot. For the majority of these systems, experimental structures are correctly predicted at the level of individual contacts. We also propose a method for efficiently reintroducing atomistic detail from the CG representation.
(A) Schematic representation of the mapping of four nucleotides with the nucleoside in blue and the phosphate in red. (B) The definition of the coordinate system from the oriented base, for two nucleotides. Dashed lines represent the projection of the vector that joins both bases in the x-y plane of each nucleoside. The base–base interactions, the base-phosphate interactions, and the interactions along the backbone are defined with respect to this reference frame.
Depiction of selected clouds of points found in structural database: (A) CG and atomistic representations of Adenine as found in a typical duplex. The own phosphate group and the one of the following nucleotide are colored in blue and red in the CG representation, respectively. (B) Clouds of the nucleotide's own phosphate group; all clouds represent C3′-endo cloud under different χ conformations: anti (green), high-anti (purple) and syn(cyan). (C) Clouds of the neighboring nucleotide's phosphate group position, χ in anti conformation with sugar in C3′-endo (gold) and in C2′-endo (orange). Also, the syn conformation is shown in red. (D) Cloud of stacking points, (E) cloud of positions of paired bases through sugar (purple), Watson–Crick (orange) and Hoogsteen (cyan) faces. (F) represents the positions of the phosphate groups for base-phosphate interactions, with the same color nomenclature of (E).
(A) UUCG tetraloop predicted by SPQR. (B) UUCG tetraloop predicted by χpc annealing. (C) CUUG tetraloop, closest structure to native and (D) CUUG tetraloop predicted by χpc simulations.
(A) Native pseudoknot 1L2X, (B) folded after refinement. To facilitate the comparison between the two structures, the three initial nucleotides and two bulges are colored in red.
(A) RMSD and phosphate RMSD as a function of time for backmapping of 1ZIH. (B) Comparison of the GCAA tetraloop (1ZIH) at full-atom resolution for native (blue) and annealed (red).
Models, Molecular*
Nucleotide Motifs
Nucleotides/chemistry
RNA/chemistry*
RNA, Double-Stranded/chemistry
RNA, Double-Stranded
Forces other than the fundamental interactions, e.g. friction
Forgive me for the silly question, but I just don't get it.
I just completed an elementary course in mechanics, and I am curious to know what I am about to ask.
We have, all year, dealt with many forces like gravity, friction, normal forces, tensions etc.
But only one of them is listed as a fundamental force, that is, gravity.
I know that the only forces that exist in nature are the four fundamental force, and all of these are, apparently, non-contact forces.
But then how do you account for, for example, friction? We know that $F_\text{frictional}=\mu N$, but how do we arrive at that? Is this experimental?
I cannot see how contact forces like friction can exist, when none of the fundamental force is a contact force.
Again, forgive me for my ignorance.
newtonian-mechanics forces friction
pkwssis
$\begingroup$ It is not a silly question at all. Feynman also thought about exactly the same question: how to get the above formula for friction from basic principles. The formula for friction is experimental. $\endgroup$ – Asphir Dom Aug 2 '14 at 16:19
$\begingroup$ Comment to the question (v2): Consider restricting the question to only the friction force to avoid getting too broad. $\endgroup$ – Qmechanic♦ Aug 2 '14 at 16:25
AlanZ2223 has given a nice summary of what's going on. I'll just make a couple of points that are orthogonal to his and that wouldn't fit in comments.
The electrical force is a non-contact force; it falls off with distance like $1/r^2$. But most of the objects we deal with in everyday life are electrically neutral, i.e., they contain equal amounts of positive and negative charge. You would think that this would mean the attractions and repulsions would exactly cancel out, but that's not quite true. When two electrically neutral objects are close together, they can influence each other to rearrange their charges somewhat, so that the cancellation isn't perfect due to the different distances and angles involved in all the force vectors that are being added. This is called a residual interaction. The residual electrical interaction falls off much more quickly than $1/r^2$ at large distances -- more like $1/r^6$. This is the basic reason why bulk-matter forces, which are electrical, appear to be zero-range contact forces.
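A toy numerical illustration of this imperfect cancellation (made-up units; the arrangement is an assumption for illustration): two neutral, rigid "molecules", each a $+q/-q$ pair, interact through the exact sum of four Coulomb terms. The net energy dies off like $1/r^3$ for these static, aligned dipoles, already much faster than $1/r^2$; the full $1/r^6$ van der Waals law additionally requires the induced (fluctuating) dipoles mentioned above, which this rigid toy omits.

import numpy as np

q, d = 1.0, 0.1   # unit charges, small internal separation (arbitrary units)

def pair_energy(r):
    # molecule A: +q at 0, -q at d; molecule B: +q at r, -q at r + d
    charges_a = [(0.0, q), (d, -q)]
    charges_b = [(r, q), (r + d, -q)]
    return sum(qa * qb / abs(xa - xb)
               for xa, qa in charges_a for xb, qb in charges_b)

r = np.array([5.0, 10.0, 20.0, 40.0])
E = np.array([pair_energy(ri) for ri in r])
slopes = np.log(np.abs(E[1:] / E[:-1])) / np.log(r[1:] / r[:-1])
print(slopes)   # close to -3: the residual interaction falls off much faster than 1/r^2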
The other thing to realize is that it is not possible to explain forces such as the frictional and normal forces purely by using classical mechanics and an electrical interaction. If you try to do that, you'll find that bulk matter isn't stable, and that one piece of bulk matter won't prevent another from penetrating into it. In fact, you need two ingredients to explain these forces: (1) electrical interactions, and (2) the Pauli exclusion principle. If you try to explain it using only one of these ingredients without the other, it doesn't work.
Ben Crowell
$\begingroup$ Nice answer. Any suggestions on where I could learn more (at a fairly basic level) about the $r^{-6}$ interaction? (E.g., why is the next term not $r^{-4}$?) $\endgroup$ – Charles Aug 3 '14 at 5:27
$\begingroup$ @Charles: they are called van der Waals forces. $\endgroup$ – gatsu Aug 3 '14 at 9:19
$\begingroup$ Does this imply that any surface, however smooth (If we consider a hypothetical 100% smooth surface) would exhibit a frictional force if an object were to be pushed atop it? Regardless of the nature of the surface these interactions would remain, would they not? $\endgroup$ – SNB May 24 '18 at 6:05
These "non-contact" forces that are ubiquitous in everyday life are mainly attributed to electromagnetism. Basically the four fundamental forces which are the strong force,weak, electromagnetic, and gravitational all have a sort of realm within which they influence most.
The strong/nuclear force reigns within the subatomic domain, as does the weak force, though it is not nearly as prominent. Then comes the electromagnetic force, whose influence falls within our human-sized domain, and gravity, which does have effects in our everyday life but is most prominent for larger bodies such as our solar system or galaxies.
Now all of these forces are able to manifest by the exchange of particles. Trillions and trillions of them are created and annihilated every second. For the electromagnetic force the exchange particle is the photon.
Now, for example, whenever you touch an object, your hand does not just go through it; the electric forces act to repel it. The electrons that compose your hand repel the electrons of the material, keeping your hand from going through, not by directly touching the other electrons but by exchanging the carriers of the electromagnetic force, the photons. This is what lies behind the "forces" of friction, the normal force, and so forth.
Let's examine friction. Whenever you push an object, let's say a book, across a desk, friction opposes the direction of motion: you push left, friction pushes right. This is because, atomically, the molecules of the book are "pushing", or more technically repelling and attracting, the atoms of the desk through the electromagnetic force, which manifests via the exchange of photons; this is the big picture of what composes these non-fundamental "forces".
AlanZ2223
$\begingroup$ Does this imply that any surface, however smooth (If we consider a hypothetical 100% smooth surface) would exhibit a frictional force if an object were to be pushed atop it? Regardless of the surface these interactions would remain, would they not? $\endgroup$ – SNB May 24 '18 at 6:00
I think it's best to just quote Feynman here:
Friction: the force of friction against a dry surface is $-\mu N$, and again you have to know what the symbols mean: when an object is pushed against another surface with a force whose component perpendicular to the surface is $N$, then in order to keep it sliding along the surface, the force required is $\mu$ (friction coefficient) times $N$. You can easily figure out which direction the force is; it's opposite to the direction you slide it.
Does friction force result from a potential energy like gravitational forces do?
The answer is No: friction does not conserve energy, and therefore we have no formula for the potential energy of friction. If you push an object along a surface one way, you do work; then, when you drag it back, you do work again. So after you've gone through a complete cycle, you haven't come out with zero energy change; you've done work, and so friction has no potential energy.
Finally, in the example of the gravitational force (the same can be shown for the Coulomb force and the electric field), the force results from a potential energy: the gravitational field $\mathbf{g}$, which can be shown to be conservative (through a complete cycle, zero work is done and energy is conserved), equals minus the gradient of the gravitational potential $\Phi$: $$\mathbf{g} = -\nabla \Phi $$
Phonon
Friction is indeed a phenomenon that is difficult to treat theoretically. The formula you mention is not derived from first principles, but is justified only by experimental evidence, i.e. it drops from heaven. Moreover, it gives a false impression of simplicity. The friction coefficient is anything but constant, as it may depend on temperature, pressure, etc. Just think of a ski on snow. The friction there depends on the snow temperature, the consistency of the snow, how well you prepared the ski, ...
Next, there is not just one single friction phenomenon. The friction between ski and snow, for example, is explained by a thin film of liquid water between the ski and the snow, whereas the friction of a shoe on the street is something else entirely. Related to friction are diffusion and dissipation; e.g., the air resistance when you ride a bicycle is essentially explained by the viscosity of air and the turbulent cascade towards small scales.
However, as the previous answers explain, all of these phenomena are fundamentally explained by electromagnetic forces between molecules, which never "touch" each other; their electron shells prevent that from happening.
maze-cooperation
The four fundamental interactions (gravity, weak, strong, and electromagnetic) are useful for understanding how the basic ingredients of our world (the fundamental particles of the Standard Model) interact with one another. This does not imply that all the forces witnessed in our world can be reduced to these forces only. Most physical objects comprise very, very many of these fundamental particles, and the point is to realize that "the properties of the sum" are not the same as "the sum of the properties"; in fact, "the properties of the sum" are much richer than "the sum of the properties". This is what I meant when I said that physical forces cannot be reduced to the four fundamental interactions only.
Any physical system that displays properties which are not captured by the properties of the ingredients is said to have emergent properties. And basically, all the forces you are having questions about are emergent forces that do not have any equivalent when you just look at the most fundamental ingredients alone.
I preach for my chapel here (and we'll see if people disagree), but the most general strategy one can use to understand how these effective forces at the macroscopic/mesoscopic scale emerge from the microscopic world (that of the basic ingredients) is that of statistical thermodynamics (quantum or classical), which is a huge field that one cannot cover in a few lines, I am afraid.
gatsu
Journal of Shipping and Trade
Impact of the Panama Canal expansion on Latin American and Caribbean ports: difference in difference (DID) method
Kahuina Miller (ORCID: 0000-0002-0623-230X) & Tetsuro Hyodo
Journal of Shipping and Trade volume 6, Article number: 8 (2021)
The expanded Panama Canal opened on June 26, 2016. The expansion added a third set of locks, which enabled the canal to double its capacity through the addition of new traffic lanes and allowed neo-Panamax and some post-Panamax vessels to transit the canal. The widening of the canal has increased maritime traffic within Latin America and the Caribbean (LAC). Major ports in the regions have made huge investments in port expansion and infrastructural development to accommodate neo-Panamax vessels. In this study, we investigated the impact of the Panama Canal expansion (PCE) on Latin American and Caribbean (LAC) ports using the Difference in Difference (DID) method. This impact was evaluated for 100 major and regular ports within the three sub-regions of LAC, namely the Caribbean, Central America, and South America, before and after the treatment effect, that is, the PCE. The findings from the model revealed that the average container port throughput (TEUs) for the treated ports (DTrp) exceeded that of the control ports (CONTp), with the transshipment hubs, Central America, and South America showing 20%, 12%, and 34% growth, respectively, since the PCE (the treatment); the exception was the Caribbean ports (DTrp), which experienced losses of 8% within the LAC region from 2010 to 2019.
The Panama Canal (PC) is one of the two most strategic artificial waterways critical to global maritime trade, the other being the Suez Canal. The Panama Canal crosses a narrow isthmus, running approximately 65 km between the Caribbean Sea and the Pacific Ocean. The canal was completed on August 15, 1914, becoming an essential route connecting vessels sailing between the West and East Coasts of the United States and the LAC regions (Cho et al. 2019). Before the canal's existence, Cape Horn was the only trading route for ships connecting the East and West Coasts of the Americas, and vessels sailing from Europe to the West Coast had to sail around the Cape Horn of South America (Gro 2016). The Panama Canal is the shortest operative route connecting maritime trade between the Atlantic and Pacific oceans. It is also the shortest passage for gas cargoes from the Gulf of Mexico to Northern Asia (Rodrigue 2015). For instance, for LNG carriers, the distance from the Gulf of Mexico to Japan is approximately 17,064 km (9,214 nm), compared to 27,317 km (14,750 nm) through the Suez Canal (Thomas 2015). The Panama Canal is essential to global trade: an estimated $270 billion worth of cargo crosses the canal each year, serving over 140 maritime routes to over 80 countries (Panama Canal Authority 2019). The expansion was completed on June 26, 2016, allowing neo-Panamax and some post-Panamax vessels to transit, thus increasing port competition, trade, cargo tonnage, and shipping activities within the regions and for the US East and Gulf Coasts (Rodrigue 2020).
Mega-ships have increased the economies of scale in maritime transport, boosting regional ports' transshipment activities (ACS 2017). For example, in the Caribbean, global hub port terminals such as Kingston (Jamaica), Freeport (Bahamas), Caucedo (Dominican Republic), and San Juan (Puerto Rico, a US territory) seek to capitalize on the anticipated increase in transshipment activities. However, investments in deepening harbors and expanding handling capacity may not be sustainable or profitable due to increased competition among regional ports (Gooley 2018). The widening of the canal and the increase in container volume have promised growth in cargo and transportation for United States East and Gulf Coast ports such as New York and New Jersey, the Port of Houston, South Carolina Ports, the Port of Miami, et cetera. However, to what extent has this expansion impacted container port throughput (TEU) growth within the LAC region? It is essential to quantify the impact that the PCE contributes to the LAC region to determine whether ports benefit from this expansion (intervention). Major ports in the LAC region have made substantial investments towards improving port services and infrastructure. However, are these investments reaping success in container throughput growth (containers handled at ports, including ports of origin, destination, and transshipment)? An impact evaluation of the PCE among LAC ports is vital for improving strategies to mitigate endogenous and exogenous factors that may contribute to unsatisfactory outcomes (Hawkins et al. 2015). These factors may include port development, international trade, economics, and policies that directly impact TEU growth (Notteboom et al. 2021).
This paper seeks to analyze the impact of the PCE among 100 ports within the Caribbean, Central America, and South America sub-regions. The Difference in Difference (DID) method, one of the standard impact evaluation methods, is used to determine the overall and sub-regional causal effects. This evaluation aims to establish DID as an alternative method for assessing the causal effect of a policy or intervention in the maritime sector.
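A minimal sketch of the DID estimator is shown below (Python with pandas/statsmodels); the port panel is made up for illustration, and the variable names are assumptions rather than the study's actual dataset. In the two-period, two-group case the estimate reduces to $(\bar{T}_{post}-\bar{T}_{pre})-(\bar{C}_{post}-\bar{C}_{pre})$, which the regression recovers as the coefficient on the treated-by-post interaction.

import pandas as pd
import statsmodels.formula.api as smf

# Made-up log throughput for two treated and two control ports, pre/post PCE.
df = pd.DataFrame({
    "log_teu": [6.2, 6.3, 6.9, 7.3, 6.0, 6.1, 6.2, 6.3],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = port exposed to the expansion
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],   # 1 = observation after June 2016
})

model = smf.ols("log_teu ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])   # DID estimate: (7.10-6.25)-(6.25-6.05) = 0.65 log points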
Panama Canal impact on the regions
Undoubtedly, the expansion of the Panama Canal has impacted both the North American and LAC regions. It has allowed the transit of mega-ships such as neo- and post-Panamax vessels, increasing container throughput (TEUs) and cargo tonnage at ports within the region. The PCE has increased competition among important transshipment ports in Panama, Brazil, Jamaica, Mexico, the Bahamas, and the Dominican Republic (Rodrigue and Ashar 2016). Most of these countries have made considerable investments in port expansion, dredging, and logistics centers to accommodate and attract mega-ships to their shores.
Using an impact evaluation method was necessary to assess the impact of the expansion within this region. Hawkins et al. (2015, pp. 26) define impact as a longer-term result generated by policy decisions, often through an intervention, project, or program. The PCE project has influenced subprojects across the Americas, including the LAC region, in dredging and port infrastructural improvements (Link 2015). Rodrigue and Ashar (2016), UNCTAD (2014), and Singh et al. (2015) stated that the advent of mega-ships through the now expanded canal would drive greater transshipment yield and container traffic among transshipment ports. On the other hand, Marle (2016) suggested that the PCE raised fears that LAC container terminals were over capacity relative to port infrastructure and usage. Gooley (2018) also stated that the Panama Canal Authority (ACP) indicated that some carriers are shifting away from mega-ships due to high operating costs per container. He further stated that, with the International Maritime Organization (IMO) mandate on the use of low-sulfur fuel effective January 1, 2020, more ships could slow steam to reduce fuel consumption by using the longer Suez route instead of shorter transits via Panama.
The expansion of the Panama Canal has impacted ports on the East and Gulf coasts of the USA. According to Bhadury (2016) and Park et al. (2020), the PCE has increased cargo traffic flow from the West Coast to the East Coast, decreasing transportation costs while increasing transit time. This impact will enable more cargo traffic to transit the Panama Canal and increase transshipment activities within the Caribbean region. Nicholson and Boxill (2017) strongly believe that if most US East Coast ports become "ship ready" by improving port infrastructure, such as longer quays, bigger cranes able to span 18 to 22 container rows, more storage space for containers, deeper channels and berths, and higher bridges, then most Caribbean ports could see a reduction in transshipment activities. For example, ports such as Baltimore, Charleston, Miami, Philadelphia, and Virginia have recorded official increases in container throughput (TEUs) due to ships transiting the expanded Panama Canal.
Panama Canal impact on liner shipping and trading routes
The World Shipping Council (WSC) (2019) defines liner shipping as the service of transporting goods utilizing high-capacity, ocean-going ships that transit regular routes on fixed schedules. The WSC (2019) further stated that there were 400 liner services in operation providing weekly sailings from their ports of call. Several authors have studied the impact of the PCE on liner shipping concerning route selection, intermodal options, container vessel sizing, economic growth, and trading routes. The PCE had substantial impacts on the structure of liner services in terms of capacity deployment. Rodrigue (2020) stated that the first notable impact was the rapid transition from Panamax ships towards Neo-Panamax ships for deep-sea services between major ports.
Pham et al. (2018) studied the PCE and its effects on East–West liner shipping route choice. An empirical study was conducted for ocean-borne trade between New York and Hong Kong. They examined route selection decisions for the post-PCE era by combining qualitative and quantitative studies, using a two-stage methodological framework to assess the competitiveness of the Panama Canal, the Suez Canal, and the US intermodal system as alternative routes. The findings indicated that transportation cost was an essential element for route selection, followed by the duration of transportation, dependability, and route characteristics. The Panama Canal was the preferred route over the Suez and US intermodal options.
Fan and Gu (2019) studied the PCE's impact on container shipping route networks. They used a dual-target route distribution model to evaluate the PCE. The results revealed that in the post-PCE era, 15,000 TEU and 6,500 TEU container vessels were mainly deployed through the expanded canal, while 8,500 TEU, 10,500 TEU, and 12,500 TEU vessels used the Suez Canal.
Wang (2017) studied the impact of the Panama Canal on global shipping. The research was based on empirical studies using annual reports and publications from the Panama Canal Authority (ACP). Findings revealed that the expansion had generated more revenue since the Neo-Panamax vessel deployment, which has resulted in further economic growth for Panama.
Liu et al. (2016) analyzed the potential impacts of the PCE on competitive and collaborative relations and the allocation of market dominance among supply chain (SC) players in US container markets. Cooperative game theory was used to assess this impact. The results revealed that mega-ships transiting the canal would increase East Coast markets by 32% while negatively impacting West Coast markets by 22%. Findings also revealed that the ocean-carrier sub-coalition with West Coast SC companies would shift to a preferred sub-coalition between ocean carriers and East Coast SC companies after the PCE.
Carral et al. (2018) studied the impact of the PCE on vessel size and seaborne transport. Statistical analysis was used to assess this effect on the type and size of ships transiting the canal. The findings revealed significant growth in the size and traffic of container, LNG, and LPG vessels since the PCE.
Zupanovic et al. (2019) analyzed the impact of the PCE on cost savings in the shipping industry. The paper examined operational cost savings for three types of Post-Panamax vessels, bulk, container, and tanker, on three different routes. The results revealed savings ranging from 33 to 76%, equivalent to US$227,562 to US$1,042,324. Hence, the PCE will result in significant savings for specific categories of ships.
Shibasaki et al. (2018) studied the anticipated impact of the PCE and the Northern Sea Route on LNG imports of Asian countries from macroeconomic and exporter-diversification perspectives. The findings revealed that the diversification of exporting countries for LNG imports was not affected by the change in Japan's import pattern, while some degree of impact was observed on these countries' national economies.
Achurra-Gonzalez et al. (2016) studied the impact of liner shipping network perturbations, such as natural disasters or infrastructure developments, on container trade routes. They used a cost-based network model for the Southeast Asia to Europe liner shipping trade. The results suggested that network interconnectivity was susceptible to disruptions.
Van Hassel et al. (2020) analyzed the PCE's influence on shifts of cargo flow between US and European ports. They designed a model to calculate container transportation costs via the Panama Canal. Studies were conducted before and after the PCE for shipments from the US to Europe. The study concluded that the expansion had impacted port selection mainly for the United States and, to a lesser extent, Europe.
Martinez et al. (2016) studied the PCE's effect on the shipping routes of Asian imports into the United States. They investigated factors affecting routing decisions using a coast choice model. The simulation results showed that the PCE would generate significant time savings on shipments from Asia and was projected to shift significant traffic flow from West Coast to East Coast ports, with vital policy repercussions for port operators on both coasts.
Reyes et al. (2019) studied the impact of the PCE on Caribbean ports. They examined how ports can adapt to the opportunities available from the expansion. An Adaptive Port Planning (APP) framework was used to assess long-term planning for Caribbean ports. The study revealed that, in the short term, Caribbean ports will experience decreases in transshipment container volume due to direct services deploying Neo-Panamax vessels calling at East Coast and Gulf of Mexico ports.
All authors agree that the PCE has impacted liner shipping and trading routes, resulting in comparative cost savings for some vessel classes. Undoubtedly, this shift in liner shipping routes will affect both Caribbean and US West Coast ports.
The advent of mega-ships to LAC (economy of scale)
The Panama Canal is one of the main passages connecting the Pacific and Atlantic oceans, accounting for approximately 6% of global trade (FreightWaves 2020). According to the Panama Canal Authority (2019), in 2018 the United States, China, Japan, Mexico, and Colombia were the primary canal users, with the United States accounting for 68.3% of the total cargo transiting the canal. The expansion has opened the doors to Neo-Panamax and Post-Panamax vessels, impacting cargo throughput volumes for intraregional ports and US Gulf and East Coast ports.
Figure 1 shows that after the expansion in 2016, there was a surge in cargo tonnage through the expanded canal, while no significant change was observed in the number of transits (Rodrigue 2020). Several authors supported the positive effects of mega-ships on international and regional ports.
Panama Canal Traffic and Traffic vs. Net Tonnage comparison for period 2010–2018. Source: Panama Canal Authority (2019)
Merk (2018) stated that doubling the maximum container ship size has reduced the total vessel cost per transported container by roughly a third over the last decade. OECD (2015) supported this view, stating that containerization has contributed to decreased transportation costs. On the other hand, Lim (2011) studied the economies of scale in container shipping. The findings revealed that although huge container ships produce economies of scale and significantly reduce the slot cost in the container trades to which they are assigned, the industry may never make an adequate return because of excess capacity; the benefits of economies of scale will therefore diminish over time. Kapoor (2016) studied the economies of scale for mega container vessels. The report revealed four (4) significant findings: (1) economies of scale diminish for vessel sizes beyond 18,000 TEUs; (2) terminals will incur significant capital expenditure to handle larger vessels, and terminal yards must grow by a third to avoid congestion; (3) terminals will have to increase productivity to keep pace with the increase in vessel size; and (4) vessel upsizing risks yielding no significant cost benefit while contributing to higher supply-chain risk, as volumes are concentrated on fewer ships, and compounding the environmental issues of dredging deeper channels. An overview of these authors reveals that vessels were getting larger because of economies of scale. However, the diminishing economies of scale of mega-ships may not necessarily benefit some regional ports.
The impact of the PCE on US ports and the LAC region has been studied by several authors, such as Rodrigue and Ashar (2016), Singh et al. (2015), Bhadury (2016), and Park et al. (2020). They strongly agreed that the PCE had driven port infrastructure improvements within both regions. Pham et al. (2018), Rodrigue (2020), and Fan and Gu (2019) agreed that the PCE has influenced liner shipping, trading routes, and cost savings for LAC and US ports on both the East and West coasts. Several authors, such as Merk (2018), Lim (2011), Kapoor (2016), and Rodrigue (2020), strongly agreed that the advent of mega-ships had impacted container throughput and cargo tonnage. On the other hand, few authors address the causal effects of the PCE on LAC regional ports before and after the expansion to determine its overall impact. Several methodologies, such as port choice, route planning, adaptive port planning, and cost-based analysis models, were used to assess the effect of this expansion on global and US ports. However, few authors have used impact evaluation methods to determine the causal impact of the PCE as an intervention within the LAC. This research gap will be addressed using an impact evaluation method, Difference in Difference (DID), to assess the PCE's implications for all three LAC sub-regions and transshipment ports.
Method and data
Competition among ports on the US East Coast for cargo
The PCE has influenced the development of ports within the US, especially on the East Coast, regarding traffic patterns, infrastructural upgrades, and intermodal connectivity (Kendrick 2020). The anticipated improvement in the shipping industry among East Coast and Gulf of Mexico ports has increased container throughput (TEU) growth, because more container ships from Asia can directly access East Coast markets (Morley and Ashe 2019).
Table 1 shows the container throughput for five major ports on the East and Gulf coasts. In 2019, TEU growth percentages were: Port of New York and New Jersey (4%), Port of Houston (11%), Port of Miami (3%), Port of Charleston (5%), and Port of Savannah (6%). These improvements in East Coast and Gulf of Mexico ports attract more container traffic at the expense of ports within the Caribbean and some parts of the Americas (Morley and Ashe 2019).
Table 1 Top 5 US East and Gulf of Mexico Ports, TEU annual Percentage Growth (%)
The top 5 major ports within the LAC region have experienced small increases in TEUs, except for the Port of Balboa. The improvement of US East and Gulf ports has increased competition for cargo, impacting transshipment volumes within the LAC region.
Table 2 shows the top five ports within the LAC region in terms of annual percentage growth in TEUs from 2010 to 2019. Together with five other transshipment hubs, these ports represent approximately 84.1% of the region's total cargo movement (CEPAL 2020). The TEU growth (%) was: Port of Colon (1%), Port of Santos (2%), Manzanillo (0%), Cartagena (2%), and Balboa (15%).
Table 2 Top 5 LAC ports percentage growth (%) in container throughput
The comparison of container throughput (TEU) growth in Fig. 2, showing the percentage growth of the top 5 US East and Gulf ports versus the top 5 LAC ports, reveals that in 2019 the top 5 East Coast ports recorded higher percentage growth than the LAC ports.
The top five regional ports for both East/Gulf and LAC TEUs growth (%). Source: Own Elaboration
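The annual growth figures in Tables 1 and 2 are simple year-over-year percentage changes. As a minimal illustration (the port column and throughput numbers below are hypothetical, not the actual data), such growth rates can be computed as follows:

```python
import pandas as pd

# Hypothetical throughput table: one row per year, one column per port (TEUs)
teus = pd.DataFrame(
    {"year": [2017, 2018, 2019], "port_a": [3_891_000, 4_324_000, 4_379_000]}
).set_index("year")

growth = teus.pct_change() * 100  # year-over-year percentage growth
print(growth.round(1))            # first row is NaN (no prior year)
```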
The difference in difference (DID)
An impact evaluation provides evidence about the impacts that have been produced or that are expected to be produced (Hawkins et al. 2015). Choosing methods and designs for evaluating policies, projects, and programs can be difficult and may come with unique challenges (Hawkins et al. 2015). White and Sabarwal (2014) stated that a quasi-experimental approach is an empirical intervention study used to estimate an intervention's causal impact or test causal hypotheses. The most frequently used quasi-experimental approach is Difference in Difference (DID), based on a combination of before–after and treatment–control group comparisons (Fredriksson and Oliveira 2019; World Bank 2021). Several authors have used the DID approach to assess the impact and effectiveness of government policies and programs.
Card and Krueger (1994) studied the impact of an increase in the minimum wage on employment in fast-food restaurants in New Jersey and Eastern Pennsylvania, US, before and after the increase. Using DID, the findings revealed no indication that the increase in the minimum wage reduced employment. Qiu and He (2017) researched the impact of the Green Traffic Policy on air quality in China. They concluded that the pilot program was effective in reducing the annual concentration of pollutants.
However, although the DID method is popular across various research fields, it is not without limitations. Bertrand et al. (2003) mention that the great appeal of DID estimation comes from its simplicity and its potential to circumvent many of the endogeneity problems that arise when comparing heterogeneous groups. Wing et al. (2018) supported Bertrand et al.'s (2003) view, stating that the DID design is not an ideal alternative to randomized experiments, but it often serves as a viable way to learn about causal relationships. They further concluded that multiple quasi-experimental techniques may be essential supports for the DID approach.
Parallel trend assumption (PTA)
All the assumptions of the Ordinary Least Squares model apply equally to Difference in Difference (DID). Several assumptions, such as the Parallel Trend Assumption (PTA), exchangeability, and the Stable Unit Treatment Value Assumption (SUTVA), must hold to ensure the model's internal validity (Columbia Public Health 2020; Mckenzie 2021). Two of the most prominent are the Parallel Trend Assumption (PTA) and the Stable Unit Treatment Value Assumption (SUTVA).
According to Lechner (2011), SUTVA requires that there be no spill-over effects between the treatment and control groups, as the treatment effect would otherwise not be identified. The Parallel Trend Assumption (PTA) is the most critical of these assumptions for the DID model's internal validity and may be difficult to satisfy because it requires that the difference between the treatment and control groups be constant over time in the absence of treatment (Lechner 2011). The assumption is fundamentally untestable because the treatment group is only observed as treated (Fredriksson and Oliveira 2019). "One can lend support to the assumption, however, using several periods of pre-reform data, showing that the treatment and control groups exhibit a similar pattern in pre-reform periods" (Fredriksson and Oliveira 2019, p. 523).
Previous studies have focused on using the DID approach to assess treatment effects of policies and programs in education, finance, public-sector economics, healthcare, sales, and marketing. This research instead applies the DID model in the maritime industry to assess the PCE's impact on TEU growth among ports in the Latin America and Caribbean (LAC) region.
Following Albouy (2015), consider the evaluation of an intervention, program, or treatment on an outcome Y over a population of individuals. Two groups are indexed by treatment status T = 0, 1, where 0 denotes individuals who were not offered treatment, classified as the control group, and 1 denotes the group that received treatment, classified as the treatment group (Heckman et al. 1997). Two time periods are assumed for each observed individual, t = 0, 1, where 0 indicates the time before treatment (pre-treatment) and 1 the time after treatment (post-treatment) (Athey and Imbens 2006). Observations are indexed by i = 1 … N, with two observations per individual, pre-treatment and post-treatment. The average sample outcomes are denoted \( \hat{Y}_0^T \) and \( \hat{Y}_1^T \) for the treatment group and \( \hat{Y}_0^C \) and \( \hat{Y}_1^C \) for the control group.
The outcome \( Y_i \) was modeled by Albouy (2004) and Abadie (2005) in the following equation.
$$ {Y}_i=\alpha +\beta {T}_i+\gamma {t}_i+\delta \left({T}_i\cdot {t}_i\right)+{\varepsilon}_i $$
Where α = constant term
β = treatment group-specific effect (accounting for average permanent differences between treatment and control)
γ = time trend common to control and treatment groups
δ = true effect of the treatment (the parameter of interest)
ε = idiosyncratic error term
Simple pre versus post estimator
According to Albouy (2015), consider first a simple pre-versus-post estimator based on comparing the average difference in the outcome \( Y_i \) before and after the treatment for the treatment group.
$$ {\hat{\delta}}_1={\hat{Y}}_1^T-{\hat{Y}}_0^T $$
The expectation of the estimator is as follows.
$$ E\left[{\hat{\delta}}_1\right]=E\left[{\hat{Y}}_1^T\right]-E\left[{\hat{Y}}_0^T\right] $$
$$ =\left[\alpha +\beta +\gamma +\delta \right]-\left[\alpha +\beta \right] $$
$$ =\gamma +\delta $$
According to Albouy (2015), this estimator is biased whenever γ ≠ 0, i.e., whenever there is a common time trend in outcomes \( Y_i \) between the pre- and post-treatment periods, since the estimator recovers γ + δ rather than δ.
Simple treatment versus control estimator
Now, consider an estimator based on comparing the average outcome \( Y_i \), post-treatment, between the treatment and control groups,
$$ {\hat{\delta}}_1={\hat{Y}}_1^T-{\hat{Y}}_1^C $$
$$ E\left[{\hat{\delta}}_1\right]=E\left[{\hat{Y}}_1^T\right]-E\left[{\hat{Y}}_1^C\right] $$
$$ =\left[\alpha +\beta +\gamma +\delta \right]-\left[\alpha +\gamma \right] $$
$$ =\beta +\delta $$
According to Albouy (2015), this estimator is biased so long as β ≠ 0, i.e., whenever there are constant average differences in outcomes \( Y_i \) between the treatment and control groups, since the estimator recovers β + δ rather than δ.
The difference in difference (DID) estimator
The DID estimator is defined as the before–after difference in the treatment group's average outcome minus the before–after difference in the control group's average outcome (Albouy 2015; Abadie 2005).
$$ {\hat{\delta}}_{DD}={\hat{Y}}_1^T-{\hat{Y}}_0^T-\left({\hat{Y}}_1^C-{\hat{Y}}_0^C\right) $$
According to Albouy (2015), the expectation of this estimator is unbiased for δ.
$$ E\left[{\hat{\delta}}_{DD}\right]=E\left[{\hat{Y}}_1^T\right]-E\left[{\hat{Y}}_0^T\right]-\left(E\left[{\hat{Y}}_1^C\right]-E\left[{\hat{Y}}_0^C\right]\right) $$
$$ =\left(\alpha +\beta +\gamma +\delta \right)-\left(\alpha +\beta \right)-\left[\left(\alpha +\gamma \right)-\alpha \right] $$
$$ =\left(\gamma +\delta \right)-\gamma $$
$$ =\delta $$
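To make the algebra above concrete, the sketch below plugs arbitrary, purely illustrative parameter values into the model, builds the four group means, and checks that the pre-versus-post estimator recovers γ + δ, the treatment-versus-control estimator recovers β + δ, and only the DID estimator recovers δ:

```python
# Illustrative parameter values (not estimates from the paper)
alpha, beta, gamma, delta = 100.0, 20.0, 10.0, 5.0

y_t0 = alpha + beta                    # treatment group, pre-treatment
y_t1 = alpha + beta + gamma + delta    # treatment group, post-treatment
y_c0 = alpha                           # control group, pre-treatment
y_c1 = alpha + gamma                   # control group, post-treatment

pre_post = y_t1 - y_t0                 # gamma + delta = 15 (biased)
treat_control = y_t1 - y_c1            # beta + delta = 25 (biased)
did = (y_t1 - y_t0) - (y_c1 - y_c0)    # delta = 5 (unbiased)
print(pre_post, treat_control, did)
```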
The difference in difference (DID) model for LAC ports
The following equation shows the DID model formulation for the TEU outcome of LAC ports.
$$ TEUs=\alpha +\beta\ TreatmentPort+\gamma\ PostTreatment+\delta\ \left( TreatmentPort\cdot PostTreatment\right)+{\varepsilon}_i \quad (\mathrm{Outcome}) $$
TEUs: the average container throughput for Latin American and Caribbean ports over the period 2010 to 2019.
Treatment Port (DTrp): treatment dummy variable T, where T = 1 represents ports with container throughput above 1 million TEUs. Treatment ports (DTrp) include transshipment hubs, both global and intra-regional, that invested in port development (hinterland expansion, dredging, and ship-to-shore (STS) gantry cranes for Neo-Panamax compatibility) before the Panama Canal expansion on July 26, 2016. T = 0 represents ports with container throughput below 1 million TEUs; these control ports (CONTp) are regular (non-transshipment) ports that cannot accommodate Neo-Panamax and Post-Panamax container vessels. Post-Treatment (Postt) is the time dummy variable reflecting the periods 'before' the intervention (T = 0) and 'after' the intervention (T = 1).
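As a hedged illustration of how this specification can be estimated, the sketch below fits the DID regression in Python with statsmodels; the paper itself used STATA and R (see the Data analysis software section), so the file name, column names, log transformation of TEUs, and port-level clustered standard errors here are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed panel: one row per port-year for the 100 LAC ports, 2010-2019, with
# hypothetical columns: port (id), teu (throughput), treat (DTrp=1, CONTp=0),
# post (1 after July 26, 2016, else 0).
df = pd.read_csv("lac_ports_panel.csv")  # hypothetical file name
df["log_teu"] = np.log(df["teu"])        # log outcome is an assumption here

# delta, the coefficient on treat:post, is the DID estimate of the PCE effect
model = smf.ols("log_teu ~ treat + post + treat:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["port"]}
)
print(model.summary())
```

Clustering standard errors by port is a common safeguard in DID panels (Bertrand et al. 2003), but it is a choice made for this sketch rather than something reported in the paper.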
Table 3 further explains the descriptive classification of ports within the LAC region used to measure the impact of the Panama Canal expansion. A sample of 100 ports was selected from 118 LAC ports across thirty-one (31) countries. Ports were selected based on a throughput volume greater than 20,000 TEUs; ports with less throughput were removed from the observation. The excluded ports were mostly in the Eastern Caribbean and some parts of Central America.
Table 3 Classification of Treatment and Control Groups (100 Ports) within LAC
Data analysis software
STATA and R packages were used to analyze the impact of the PCE on the top 100 ports within the LAC region using the DID method.
Data sample
The data sample comprises 100 ports within the LAC region divided into three (3) sub-regions: South America, Central America, and the Caribbean. The container throughput (TEU) data for these regional ports were retrieved from CEPAL and the World Bank. Port profile and characteristics data were retrieved from the following websites: Logistics Capacity Assessment, Marine Traffic, Ports.com, and regional port websites. The LAC regional ports within the research are listed by sub-regional category, South America, Central America, and the Caribbean, as shown in Table 4.
Table 4 LAC Ports and Rankings 2020
Table 4 shows the sample data of 100 ports within the LAC region that give this research conclusive results on the PCE's impact on regional and sub-regional ports. Table 5 shows the profile and characteristics of the top 25 ports within the region, detailing each port's infrastructure: area, mobile cranes, STS gantries, depth, and number of berths, which can be used as variables that influence container throughput volume (output) for each port (Sarriera et al. 2015; Logistics Capacity Assessments (LCAs) 2021; Marine Traffic 2021; World Port Source 2021).
Table 5 Characteristics of 25 LAC Ports
Quality of port infrastructure in the LAC region
Quality of Port Infrastructure (QPI) evaluates business executives' perception of a country's port facilities (World Economic Forum 2018). Improving port infrastructure quality contributes to higher logistics performance, seaborne trade, and economic growth (Munim et al. 2018). The WEF scores QPI from 1 (extremely underdeveloped) to 7 (well developed and efficient by international standards).
As shown in Fig. 3, the Quality of Port Infrastructure in the LAC region improved from 3.6 in 2007 to 3.96 in 2017. The highest score, 4.1, was recorded in 2010; scores gradually declined from 2011 to 2015, then rebounded to 3.96 in 2016, the year the PCE was completed.
Quality of Port Infrastructure (QPI) scores for LAC. The QPI score for the LAC region from the periods 2007 to 2018. Source: World Economic Forum 2018
Foreign direct investment (FDI)
FDI is a key component of international economic integration (OECD 2020). It is also a major source of investment funding; therefore, developing countries offer incentives to encourage FDI (United Nations 2005). FDI has a positive effect on trade because companies expand their production operations with larger capital and borrow from international markets, thus benefiting from economies of scale and increasing trade for the host country (OECD 2020). FDI within the LAC region has increased since the inception of the expansion. For example, Panama's FDI has grown since the canal expansion (Lloyd 2017). Figure 4 shows FDI (US$) in LAC for the period 2010 to 2019: the highest recorded FDI, 3.812 billion in 2013, declined to 2.589 billion in 2019. From 2017 to 2019, there was a gradual increase from 2.226 billion (US$) to 2.589 billion (US$), representing 16% FDI growth in the region.
FDI (Billion US$) trend in the LAC region. Source: World Bank 2021
Trade freedom (TRFR)
TRFR is a composite measure of the absence of tariff and non-tariff barriers that affect the trade of goods and services. Trade freedom (TRFR) is based on two inputs: the trade-weighted average tariff rate and non-tariff barriers (Index of Economic Freedom (IEF) 2020). As shown in Fig. 5, trade freedom declined from 74.8 in 2007 to 74.6 in 2014, then rebounded to 74.7 in 2018, indicating a recent improvement in Trade Freedom (TRFR) within the region.
LAC region Trade Freedom (TRFR) from 2007 to 2018. Source: World Bank 2021
Port liner shipping connectivity index (PLSCI) in LAC and Transhipment ports
The PLSCI assesses how well a country links to global shipping networks (UNCTAD 2021). The LSCI is measured by five components of the maritime transport sector: the number of ships, container-carrying capacity, maximum vessel size, number of services, and number of companies that deploy container ships in a country's ports (World Economic Forum 2018). Port infrastructure and the PLSCI impact freight rates in the LAC region (Wilmsmeier et al. 2006). Port liner connectivity is an important factor determining trade activity in the maritime industry for regional ports within the LAC and the US East and Gulf coasts, and the PCE has largely impacted the LSCI. Fig. 6 shows the growth of the average Liner Shipping Connectivity Index (LSCI) for ports within the LAC region.
Port Liner Shipping Connectivity Index (PLSCI). Index (maximum Q1 2006 = 100). Source: UNCTAD (2021)
The average Port Liner Shipping Connectivity Index (PLSCI) for the three (3) sub-regions showed consistent growth. As shown in Fig. 6, the PLSCI score increased from 8.50 to 12.40 for South America (SA), from 8.63 to 13.82 for Central America (CA), and from 8.63 to 12.41 for the Caribbean. In 2019, the top three transshipment ports within the region were located in Central America: Colon, Panama (33.2); Balboa, Panama (35.2); and Manzanillo, Mexico (37.8). Regional transshipment ports within the LAC, such as Colon and Balboa (Panama), Cartagena and Buenaventura (Colombia), Santos (Brazil), Kingston (Jamaica), Freeport (Bahamas), Caucedo (Dominican Republic), San Juan (Puerto Rico), and San Antonio (Chile), had PLSCI scores well above the regional average.
The results on the impact of the Panama Canal expansion (PCE) on LAC regional ports were obtained using the traditional Difference in Difference (DID) equation, i.e., exactly the specification described above.
$$ \mathrm{TEUs}=\alpha +\beta\ \mathrm{TreatmentPort}+\gamma\ \mathrm{PostTreatment}+\delta\ \left(\mathrm{TreatmentPort}\cdot \mathrm{PostTreatment}\right)+{\upvarepsilon}_{\mathrm{i}} $$
The intercept (α), TreatmentPort (β), PostTreatment (γ), and Diff-in-Diff (δ) coefficients were statistically significant at the 1%, 5%, or 10% level, as shown in Table 7. The regression r values for transshipment, Caribbean, Central America, and South America ports were 0.41, 0.87, 0.83, and 0.31, respectively. Table 6 gives the statistical description of the three (3) sub-regions and transshipment hubs across the 100 ports over the period 2010 to 2019; the coefficients β for the treatment (DTrp) and control (CONTp) ports were all statistically significant at the 1% level.
Table 6 Statistical Descriptive
Table 7 Differences in Differences (DID) Regressions (2010 to 2019)
For transshipment hub ports, the estimated coefficient δ = 0.077 (statistically significant at the 10% level) indicates that the average container port throughput (TEU) of treatment ports (DTrp) increased by 20% (170,000 TEUs) more than that of non-transshipment ports within the LAC region since the PCE. For the Caribbean region, the estimated coefficient δ = 0.026 (statistically significant at the 5% level) indicates that the average container throughput (TEU) for treatment ports (DTrp) decreased by 8% (140,000 TEUs) relative to control ports (CONTp). For the Central American region, the estimated coefficient δ = 0.087 (statistically significant at the 10% level) indicates an average container throughput (TEU) increase of 12% (280,000 TEUs) over control ports (CONTp) since the PCE. For ports in the South American region, δ = 0.095 (statistically significant at the 10% level) indicates an increase of 34.4% (260,000 TEUs) over control ports (CONTp) since the PCE.
Parallel trend assumption test
The Parallel Trend Assumption (PTA) was used to test the model's validity and ensure unbiased estimation of causal effects (Fredriksson and Oliveira 2019). A validity check compares the treatment group's changes to the comparison group's changes before and after the program (Columbia Public Health 2020; Mckenzie 2021). Table 3 was used to classify the LAC ports into treatment (DTrp) and control (CONTp) groups from 2010 to 2019, covering the periods before and after the PCE. Fig. 7 shows that from 2017 to 2019, after the 2016 expansion, there were increases in total container port throughput (TEUs) for the treatment ports (DTrp), while the control ports (CONTp) showed a constant trend over those periods. The parallel trend assumption therefore holds: container throughput for the treated (DTrp) and control (CONTp) ports moved in tandem until 2016, after which rapid growth in container throughput (TEUs) was observed for the treated ports (DTrp) through 2019. A minimal code sketch of this visual inspection follows the figure note below.
Note: The period 2010–2015 is classified as the era "before" the PCE and 2016–2019 as the era "after". The DTrp series shows that TEU volumes increased after the completion. This visual inspection satisfies the Parallel Trend Assumption (PTA). Source: Own Elaboration
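A minimal sketch of this visual pre-trend inspection, reusing the same hypothetical panel layout as in the regression sketch above:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("lac_ports_panel.csv")  # hypothetical panel, as above

# Average throughput per year for control (CONTp) vs treatment (DTrp) groups
trends = df.groupby(["year", "treat"])["teu"].mean().unstack("treat")
trends.columns = ["CONTp", "DTrp"]       # columns 0 and 1, in sorted order

ax = trends.plot(marker="o")
ax.axvline(2016, linestyle="--", color="grey")  # PCE completion year
ax.set_ylabel("Average container throughput (TEU)")
plt.show()
```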
The DID model results revealed that the PCE (intervention) positively impacted container port throughput (TEUs) within the LAC region. All estimated coefficients δ in the model were statistically significant at the 1%, 5%, or 10% level. The findings revealed that the average container port throughput for treated ports (DTrp) exceeded that of control ports (CONTp) for the transshipment hubs, Central America, and South America, with 20%, 12%, and 34% growth since the canal expansion, except for the Caribbean treated ports (DTrp), which experienced losses of 8%. These DID results were expected and are supported by several authors and data sources, such as Rodrigue and Ashar (2016), CEPAL (2020), World Bank (2021), UNCTAD (2014), Martinez et al. (2016), and Singh et al. (2015). On the positive impact of the PCE (intervention), Martinez et al. (2016) revealed that the PCE would generate significant transit time savings and shift container traffic from West Coast to East Coast ports. Rodrigue and Ashar (2016) forecast increases in both transshipment activity and container throughput through the PCE. However, the Caribbean treated (DTrp) ports experienced decreases in container port throughput according to the DID model's findings. This decline may be largely influenced by port infrastructure development and improvement on the US East and Gulf coasts, which increases competition between US and regional ports (Van Hassel et al. 2020; Martinez et al. 2016). Ports that lack or delay port modernization investments will experience losses in container throughput (TEUs) and changes in liner shipping routes (Talley 2006; Sarriera et al. 2015; Kendrick 2020). The DID results revealed that major Caribbean ports (DTrp) such as Kingston, Freeport, San Juan, and Caucedo had experienced losses in container throughput (TEUs) since the PCE. Reyes et al. (2019) and Park et al. (2020) support this finding; Reyes et al. (2019) revealed that, in the short term, Caribbean ports would see decreased transshipment volume because port modernization investment among US ports will impact liner shipping routes.
The canal expansion has reshaped US and LAC ports' economic and environmental geography beyond this research's scope. However, other factors were considered, such as Quality of Port Infrastructure (QPI), Foreign Direct Investment (FDI), and Trade Freedom (TRFR) (Bhadury 2016; Prozzi and Overmyer 2018; Ashley and Dettoni 2016; United Nations 2005; Carral et al. 2018). These data were not included in the model but serve as supporting graphs to illustrate the expansion's pre- and post-era impact. Fig. 3 shows that the overall QPI score improved from 3.6 to 3.96. Fig. 4 shows FDI rebounding from US$2.22 billion in 2017 to US$2.59 billion in 2019. Fig. 5 reveals that TRFR improved from 74.6 in 2014 to 74.74 in 2018. Taken together, these variables may have influenced container port throughput (TEU) growth. Moreover, the PCE impacted liner shipping routes, cargo tonnage growth, and port investment within LAC and US East and Gulf ports, which resulted in water channel investments and policy improvements to foster economic growth in anticipation of the PCE (Prozzi and Overmyer 2018; Bhadury 2016; Carral et al. 2018; Sarriera et al. 2015; Kendrick 2020; Rodrigue 2020).
The dynamics of trade globalization, the development of transport technology, the application of cargo-handling technology, and cargo unitisation are key attributes that will determine regional ports' competitiveness (ICS 2020; Park et al. 2020; Nicholson and Boxill 2017). The finding for the Caribbean (DTrp) ports was unexpected, given these ports' transshipment history and strategic location as part of the "transshipment triangle" of the LAC region (Notteboom et al. 2021). These results also revealed that the introduction of mega-ships to the Caribbean region does not necessarily benefit transshipment ports, due to the inability to accommodate Neo-Panamax vessels, a lack of proactiveness towards global changes, poor port infrastructure, and competition from regional ports, especially the US East and Gulf Coast ports (Merk 2018; Kapoor 2016; Bhadury 2016; Park et al. 2020).
The twenty-first century shows that radical changes in maritime trade will impact the dynamics of port operations and ports' capability to compete for container traffic. Impact evaluation methods such as DID enable ports to assess an intervention's impact and efficiently adjust trade policy reforms and port infrastructure and, most importantly, prepare them to be more resilient towards sustainable development amid present and future dynamism in global trade.
The sample was taken from ECLAC and the World Bank, covering 31 countries and 118 ports and port zones from 2010 to 2019. The nine (9) time periods may not fully justify the Parallel Trend Assumption (PTA) of the DID model. However, the container throughput (TEUs) of 100 LAC ports gives a clearer perspective on the PCE's causal effect. Some regional ports, mostly Caribbean, were excluded from the model because of limited and missing data. The profiles and characteristics of each of the 100 regional ports were difficult to obtain because of limited data; however, the Digital Logistics Assessment, Marine Traffic database, and World Port Source (WPS) websites helped retrieve data such as the number of terminals, berth length, port area, number of gantries, and draught for major ports within the region, though such data remained limited for small ports. The 100 ports were classified into treated (DTrp) and control (CONTp) groups according to CEPAL and UNCTAD container throughput data. Port ratings were divided into transshipment hubs and ports that made infrastructural improvements for mega-ships; therefore, some deep-water ports (mainly on the Pacific coast) that accommodated Post-Panamax vessels before the PCE were not classified as treatment (DTrp) ports.
Limited research has been published on DID applications within the maritime field. The main limitation of this technique is the non-verifiability of its assumptions (Schiozer et al. 2020). Applying this model to assess the causal effects of endogenous and exogenous variables associated with the maritime industry may prove challenging and may require additional methods to evaluate an intervention's impact. The maritime industry's volatility, triggered by exogenous factors such as oil prices, freight rates, natural and economic disasters, and wars, can create limitations for DID applications. The parallel trend assumption (PTA), one of the most popular methods for establishing the DID model's internal validity, was the method used to validate the model in this research, as shown in Fig. 7. However, Kahn-Lang and Lang (2018) believe that the PTA alone is insufficient to establish the DID's validity. Therefore, other procedures, such as robustness tests and reformulating the model to allow non-parallel pre-period trends, can be applied to test the model's validity (Bilinski and Hatfield 2018; Rambachan and Roth 2019).
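One simple robustness check along these lines is a placebo DID restricted to pre-expansion years: if a fake treatment date produces an interaction coefficient near zero, the parallel pre-trend is supported. The sketch below assumes the same hypothetical panel as earlier; the 2013 placebo cutoff is arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lac_ports_panel.csv")  # hypothetical panel, as above
df["log_teu"] = np.log(df["teu"])

pre = df[df["year"] <= 2015].copy()                   # pre-expansion years only
pre["fake_post"] = (pre["year"] >= 2013).astype(int)  # arbitrary placebo cutoff

placebo = smf.ols("log_teu ~ treat + fake_post + treat:fake_post", data=pre).fit()
# An insignificant treat:fake_post coefficient supports the parallel trend assumption
print(placebo.params["treat:fake_post"], placebo.pvalues["treat:fake_post"])
```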
Economic and environmental variables such as Gross Domestic Product (GDP), Foreign Direct Investment (FDI), maritime pollution, and carbon and greenhouse gas emissions from Neo-Panamax and Post-Panamax vessels were not covered within the scope of this research. Excluding these variables may limit the full justification of the PCE's impact on the region from an economic and environmental perspective.
This study examined the impact of the PCE on 100 ports within the LAC region from 2010 to 2019. The DID model was used to assess the causal effect of the PCE on container throughput (TEUs) among LAC ports, covering the three (3) sub-regions and major transshipment ports. This method was significant for analyzing the pre- and post-PCE eras' impact on regional ports since Neo-Panamax and Post-Panamax vessels (mega-ships) began transiting the expanded canal in 2016. The DID model's findings revealed that the PCE positively impacted container throughput volumes among LAC regional ports, except for the Caribbean regional transshipment ports (DTrp), which experienced TEU losses since the PCE (intervention). The findings are important for evaluating the PCE's causal effect on container throughput volume among LAC ports and determining endogenous factors that may affect regional port competitiveness.
Despite its limitations, the DID model is an alternative approach to impact evaluation that can be used to assess the effectiveness of governmental policies, environmental policies, and socio-economic programs (Hawkins et al. 2015). The DID model can also guide policymakers in improving or adjusting an intervention's outcome for regional ports. Limited studies have applied the DID approach in the maritime sector; therefore, it is recommended that future studies use the DID approach with other variables, such as GDP, FDI, and environmental policies (MARPOL Annex VI), to determine the holistic impact of the PCE on ports within the LAC region.
In general, the maritime sector is volatile and sensitive to dynamic changes in global trade. Therefore, ports that are proactive in assessing the effectiveness of a policy or intervention will have a competitive edge in adjusting or improving endogenous factors (e.g., policies, infrastructure, and trade) to remain sustainable in the maritime industry.
Abadie A (2005) Semiparametric difference-in-differences estimators. Review of Economic Studies 72(1):1–19. https://economics.mit.edu/files/11869. https://doi.org/10.1111/0034-6527.00321
Achurra-Gonzalez P, Novati M, Foulser-Piggott R, Graham D, Bowman G, Bell M, Angeloudis P (2016) Modelling the impact of liner shipping network perturbations on container cargo routing: Southeast Asia to Europe application. Accid Anal Prev 123:399–410. https://doi.org/10.1016/j.aap.2016.04.030
ACS. (2017). The future of the informal shipping sector in the Caribbean | ACS-AEC. ACS-AEC. http://www.acs-aec.org/index.php?q=transport/the-future-of-the-informal-shipping-sector-in-the-caribbean
Albouy D (2004) The Colonial Origins of Comparative Development: A Reinvestigation of the Data. University of California, Berkeley July
Albouy D (2015) Program evaluation and the difference in difference estimator. Economics 131 https://eml.berkeley.edu/~webfac/saez/e131_s04/diff.pdf
Ashley, Z., & Dettoni, J. (2016). From the depths: canal expansion gives post-Panama papers boost. The Financial Times Ltd. https://www.fdiintelligence.com/article/66572
Athey S, Imbens GW (2006) Identification and inference in nonlinear difference-in-differences models. Econometrica 74(2):431–497. http://www.jstor.org/stable/3598807. https://doi.org/10.1111/j.1468-0262.2006.00668.x
Bertrand M, Duflo E, Mullainathan S (2003) How much should we trust differences-in-differences estimates? Q J Econ 119(1):249–275 https://economics.mit.edu/files/750
Bhadury J (2016) Panama Canal expansion and its impact on east and Gulf Coast ports of USA. Maritime Policy Manag 43(8):928–944. https://doi.org/10.1080/03088839.2016.1213439
Bilinski, A., & Hatfield, L. A. (2018). Seeking evidence of absence: Reconsidering tests of model assumptions. http://arxiv.org/abs/1805.03273
Card D, Krueger A (1994). Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania. American Economic Association, 84(4):772-793. https://davidcard.berkeley.edu/papers/njmin-aer.pdf.
Carral L, Tarrio-Saavedra J, Castro-Santos L, Lamas-Galdo I, Sabonge RL (2018) Effects of the expanded Panama Canal on vessel size and seaborne transport. PROMET – Traffic Transportation 30(2):241–251. https://doi.org/10.7307/ptt.v30i2.2442
CEPAL (2019). Port activity report of Latin America and the Caribbean 2018 | Briefing note | Economic Commission for Latin America and the Caribbean. Economic Commission for Latin America and the Caribbean. https://www.cepal.org/en/notes/port-activity-report-latin-america-and-caribbean-2018.
CEPAL (2020). Economic Commission for Latin America and the Caribbean. Ports. http://perfil.cepal.org/l/en/portmovements_classic.html
Cho, A., Gordon, BL., Bray, WD., Padelford, WE. (2019). Panama Canal. Encyclopedia Britannica. https://www.britannica.com/topic/Panama-Canal
Columbia Public Health. (2020). Difference-in-difference estimation. https://www.publichealth.columbia.edu/research/population-health-methods/difference-difference-estimation
Fan H, Gu W (2019) Study on the impact of the Panama Canal expansion on the distribution of container liner routes. J Transport Technol 9(2):204–214. https://doi.org/10.4236/jtts.2019.92013
Fredriksson A, Oliveira GMD (2019) Impact evaluation using difference-in-differences. RAUSP Manag J 54(4):519–532. https://doi.org/10.1108/RAUSP-05-2019-0112
FreightWaves (2020). American Shipper—Global trade and shipping news. https://www.freightwaves.com/news/american-shipper
Gooley, T. (2018). Has the Panama Canal expansion changed anything? Transportation report. https://www.dcvelocity.com/articles/30335-has-the-panama-canal-expansion-changed-anything
Gro. (2016). Panama Canal expansion: a case of bad timing. Gro Intelligence. https://gro-intelligence.com/insights/articles/panama-canal-expansion-a-case-of-bad-timing
Hawkins, A., McDonald, B., Rogers, P., Macfarlan, A., & Milne, C. (2015). Choosing appropriate designs and methods for impact evaluation. Department of Industry, Innovation and Science. https://www.heritage.org/index/trade-freedom#:%7E:text=Trade%20freedom%20is%20a%20composite,%2Dtariff%20barriers%20(NTBs)
Heckman JJ, Ichimura H, Todd PE (1997) Matching as an econometric evaluation estimator: evidence from evaluating a job training Programme. Rev Econ Stud 64(4):605–654. https://doi.org/10.2307/2971733
ICS. (2020). Port and Terminal Management (British Ports Association ed., Vol. 268). Institute of Chartered Shipbrokers
Index of Economic Freedom (IEF). (2020). Trade Freedom: Tariffs, Imports, Exports, and Economic Freedom. World Bank. https://www.heritage.org/index/trade-freedom#:%7E:text=Trade%20freedom%20is%20a%20composite,%2Dtariff%20barriers%20(NTBs).
Kahn-Lang A, Lang K (2018) The promise and pitfalls of differences-in-differences: Reflections on "16 and Pregnant" and other applications, National Bureau of Economic Research. https://doi.org/10.3386/w24857
Kapoor, R. (2016). Diminishing economies of scale from megaships? Drewry. https://www.marinemoney.com/system/files/media/mm/pdf/2016/1150%20Rahul%20Kapoor.pdf
Kendrick, R. (2020). The Panama Canal Expansion and the rise of containerized cargo at east coast ports. Xebec Realty. https://xebecrealty.com/blog/panama-canal-expansion-rise-containerized-cargo-east-coast-ports/
Lechner M (2011) The estimation of causal effects by difference-in-difference methods. Foundations Trends Econ 4(3):165–224
Lim S (2011) Economies of scale in container shipping. Maritime Policy Manag 25(4):361–373. https://doi.org/10.1080/03088839800000059
Link, J. (2015). The Panama Canal expansion's massive ripple effect on US ports and shipping. Autodesk.
Liu Q, Wilson WW, Luo M (2016) The impact of Panama Canal expansion on the container-shipping market: A cooperative game theory approach. Maritime Policy Manag 43(2):209–221. https://doi.org/10.1080/03088839.2015.1131863
Lloyd R (2017) The Panama Canal as a determinant of FDI inflows in Panama. Rev Integrative Bus Econ Res 7:87–102
Logistics Capacity Assessments (LCAs) (2021) Brazil: Limited Port Assessment – Logistics Capacity Assessment – Digital Logistics Capacity Assessments. LCA https://dlca.logcluster.org/display/public/DLCA/Brazil+-+Limited+Port+Assessment
Marine Traffic (2021). Global ship tracking intelligence | AIS Marine traffic. Marine Traffic. https://www.marinetraffic.com/en/ais/home/centerx:-15.3/centery:28.0/zoom:2
Marle, G. (2016). Overcapacity may hit Caribbean transhipment ports following Panama Canal expansion. The Loadstar. https://theloadstar.com/overcapacity-may-hit-caribbean-transhipment-ports-following-panama-canal-expansion/
Martinez C, Steven AB, Dresner M (2016) East Coast vs. West Coast: The impact of the Panama Canal's expansion on the routing of Asian imports into the United States. Transport Res Part E 91:274–289. https://doi.org/10.1016/j.tre.2016.04.012
Mckenzie, D. (2021). Revisiting the difference-in-differences parallel trends assumption: Part I Pre-trend testing. World Bank Blogs. https://blogs.worldbank.org/impactevaluations/revisiting-difference-differences-parallel-trends-assumption-part-i-pre-trend
Merk O (2018) Container ship size and port relocation, International Transport Forum Discussion Paper, No. 2018–10. Organisation for Economic Co-Operation and Development (OECD), International Transport Forum, Paris. https://doi.org/10.1787/d790ae41-en
Morley H, Ashe A (2019). ARO 2020: US East Coast ports vie for rising cargo volumes. ARO 2020. https://www.joc.com/port-news/us-ports/aro-2020-us-east-coast-ports-vie-rising-cargo-volumes_20191231.html.
Munim ZH, Schramm HJ (2018). The impacts of port infrastructure and logistics performance on economic growth: the mediating role of seaborne trade. Journal of Shipping and Trade, 3(1). https://doi.org/10.1186/s41072-018-0027-0.
Nicholson G, Boxill K (2017). The Caribbean and the widening of the Panama Canal: panacea or problems? Association of Caribbean States (ACS AEC). http://www.acs-aec.org/index.php?q=fr/node/4325.
Notteboom, T., Pallis, A., & Rodrigue, J. P. (2021). Port Economics, Management and Policy. Port Economics, Management and Policy | A Comprehensive Analysis of the Port Industry. https://porteconomicsmanagement.org
OECD (2015). Organization for Economic Co-Operation and Development. Competition issues in liner shipping. http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DAF/COMP/WP2(2015)3&docLanguage=En
OECD (2020). Organization for Economic Co-Operation and Development iLibrary | Foreign direct investment (FDI). https://www.oecd-ilibrary.org/finance-and-investment/foreign-direct-investment-fdi/indicator-group/english_9a523b18-en
Panama Canal Authority. (2019). Maritime Services - PanCanal.com. Panama Canal Traffic Along Principal Trade Routes. https://www.pancanal.com/eng/op/transit-stats/index.html
Park C, Richardson HW, Park J (2020) Widening the Panama Canal and US ports: Historical and economic impact analyses. Maritime Policy Manag 47(3):419–433. https://doi.org/10.1080/03088839.2020.1721583
Pham T, Kim K, Yeo G (2018) The Panama Canal expansion and its impact on East–West liner shipping route selection. Sustainability 10(12):4353. https://doi.org/10.3390/su10124353
Prozzi J, Overmyer S (2018) The Potential Impacts of the Panama Canal Expansion on Texas Ports. Texas A&M Transport Institute PRC 17–78:11–18 https://rosap.ntl.bts.gov/view/dot/34899
Qiu LY, He LY (2017) Can green traffic policies affect air quality? Evidence from a difference-in-difference estimation in China. Sustainability 9(6):1067. https://doi.org/10.3390/su9061067
Rambachan, A., & Roth, J. (2019). An honest approach to parallel trends [Working paper]. https://scholar.harvard.edu/files/jroth/files/roth_jmp_honestparalleltrends_main.pdf
Reyes OS, Taneja P, Pielage BA, van Schuylenburg M (2019) Dynamic Planning for Flexible Port Infrastructure after Panama Canal Expansion: A Real Case Study. Ports 2019:3–10 https://doi.org/10.1061/9780784482629.028
Rodrigue, J. P. (2015). How serious are the alternatives to the Panama Canal? | Logistics Regional Observatory. Inter-American Development Bank. http://logisticsportal.iadb.org/node/4212?language=en#:%7E:text=At%20the%20macro%20level%2C%20due,the%20Atlantic%20and%20Pacific%20oceans.
Rodrigue, J.P. (2020). Geography of transport systems (5th ed). The Geography of Transport Systems. https://transportgeography.org/geography-of-transport-systems-5th-edition/
Rodrigue JP, Ashar A (2016) Transshipment hubs in the New Panamax Era: The role of the Caribbean. J Transport Geography 51(C):270–279 https://www.researchgate.net/publication/284017736_Transshipment_hubs_in_the_New_Panamax_Era_The_role_of_the_Caribbean
Sarriera, J., Suárez-Alemán, A., Serebrisky, T., & Trujillo, L. (2015). When it comes to container port efficiency, are all developing regions equal? (IDB Working Paper Series no. IDB-WP-568). https://publications.iadb.org/publications/english/document/When-It-Comes-to-Container-Port-Efficiency-Are-All-Developing-Regions-Equal.pdf
Schiozer R, Mourad AF, Martins CT (2020) A Tutorial on the Use of Differences-in-Differences in Management, Finance, and Accounting. Rac: Revista De Administração Contemporânea 25:1
Shibasaki R, Usami T, Furuichi M, Teranishi H, Kato H (2018) How do the new shipping routes affect Asian liquefied natural gas markets and economy? Case of the Northern Sea Route and Panama Canal expansion. Maritime Policy Manag 45(4):543–566. https://doi.org/10.1080/03088839.2018.1445309
Singh, A., Asmath, H., Leung Chee, C., & Darsan, J. (2015). Marine Pollution Bulletin. Potential Oil Spill Risk from Shipping and the Implications for Management in the Caribbean Sea 93(1–2), 217–277. doi: https://doi.org/10.1016/j.marpolbul.2015.01.013.
Talley WK (2006) Chapter 22 Port performance: An economics perspective. Res Transport Econ 17:499–516. https://doi.org/10.1016/S0739-8859(06)17022-5
Thomas, A. R. (2015). Suez and Panama: A Healthy Competition. IndustryWeek. https://www.industryweek.com/ideaxchange/article/21965825/suez-and-panama-a-healthy-competition
UNCTAD (2014). Review Of Maritime Transport: 2013. United Nations. https://unctad.org/system/files/official-document/rmt2013_en.pdf.
UNCTAD. (2021). Port liner shipping connectivity index, quarterly. UNCTAD STAT. https://unctadstat.unctad.org/wds/?aspxerrorpath=/wds/TableViewer/tableView.aspx
United Nations (2005) World Investment Report. United Nations publication https://unctad.org/system/files/official-document/wir2005_en.pdf.
Van Hassel, E., Meersman, H., Voorde, E., & Vanelslander, T. (2020). The impact of the expanded Panama Canal on port range choice for cargo flows from the U.S. to Europe. Maritime Policy Manag. 1–19. doi: https://doi.org/10.1080/03088839.2020.1718230.
Wang M (2017) The role of Panama Canal in global shipping. Maritime Bus Rev 2(3):247–260. https://doi.org/10.1108/MABR-07-2017-0014
White H, Sabarwal S (2014) Quasi-Experimental Design and Methods. Methodological Briefs: Impact Evaluation No. 8. UNICEF, September 2014
Wilmsmeier G, Hoffmann J, Sanchez RJ (2006) The impact of port characteristics on international maritime transport costs. Res Transport Econ 16:117–140. https://doi.org/10.1016/S0739-8859(06)16006-0
Wing C, Simon K, Bello-Gomez RA (2018) Designing difference in difference studies: Best practices for public health policy research. Ann Rev Public Health 39(1):453–469. https://doi.org/10.1146/annurev-publhealth-040617-013507
World Bank (2021). Container port traffic (TEU: 20 ft equivalent units) | data. WorldBank. https://data.worldbank.org/indicator/IS.SHP.GOOD.TU
World Economic Forum (2018). The global competitiveness index 4.0 methodology and technical notes. The Global Competitiveness Report. http://www3.weforum.org/docs/GCR2018/04Backmatter/3.%20Appendix%20C.pdf
World Port Source. (2021). WPS - Home Page. http://www.worldportsource.com/
World Shipping Council (WSC) (2019). Trade routes. World Shipping Council. https://www.worldshipping.org/about-the-industry/global-trade/trade-routes
Zupanovic D, Grbić L, Barić M (2019) The impact of the new Panama Canal on cost-savings in the shipping industry. TransNav Int J Marine Navigation Safety SEA Transport 13(3):537–541. https://doi.org/10.12716/1001.13.03.07
I would like to acknowledge the Japan International Cooperation Agency (JICA) for selecting me as a scholarship recipient. I am thankful for this opportunity in my research endeavors.
CEPAL (2019). Port Activity report of Latin America and Caribbean. https://www.cepal.org/en/notes/port-activity-report-latin-america-and-caribbean-2018
United Nations Conference on Trade and Development. (UNCTAD). https://unctad.org/en/Pages/statistics.aspx
Clarkson Research data 2020 https://www.clarksons.net/portal
Container port traffic (TEU: 20 foot equivalent units): https://data.worldbank.org/indicator/IS.SHP.GOOD.TU
Latin America and Caribbean Ports
http://perfil.cepal.org/l/en/portmovements_classic.html
All funding for this research was sponsored by the Japan International Cooperation Agency (JICA). JICA provides an annual academic budget for research. This budget is managed by Professor Tetsuro Hyodo, Tokyo University of Marine Science and Technology (TUMSAT), Department of Logistics and Information Engineering.
Department of Logistics and Information Engineering, Tokyo University of Marine Science and Technology, 2-1-6, Etchujima Koto-ku, Tokyo, 135-8533, Japan
Kahuina Miller & Tetsuro Hyodo
Kahuina Miller
Tetsuro Hyodo
Professor Tetsuro Hyodo is the advisor for this research and was instrumental in recommending the appropriate methodology for this article. The author(s) read and approved the final manuscript.
Professor Tetsuro Hyodo is head of the Department of Logistics and Information Engineering. He graduated in Civil Engineering from the Tokyo Institute of Technology in 1984, completed the master's course in Civil Engineering at the Graduate School in 1986, and completed the Doctoral Course (Doctor of Engineering) in 1989. In 1998, he was a Visiting Researcher at the Institute of Transportation Research, University of California, Berkeley. He is the author and co-author of several research journal articles; please see https://tumsatdb.kaiyodai.ac.jp/html/100000623_ronbn_1_en.html.
Kahuina Hassan Miller is a second-year doctoral student at the Tokyo University of Marine Science and Technology (TUMSAT), in the Course of Applied Marine Environmental Studies, specializing in logistics and information engineering. He is a graduate of the World Maritime University (2014), where he obtained an MSc in Maritime Affairs, specializing in Ship Management and Logistics. Email: kahuinam@gmail.com.
Correspondence to Kahuina Miller.
The authors declare no competing interests.
Miller, K., Hyodo, T. Impact of the Panama Canal expansion on Latin American and Caribbean ports: difference in difference (DID) method. J. shipp. trd. 6, 8 (2021). https://doi.org/10.1186/s41072-021-00091-5
Maritime traffic
Difference in differences (DID)
Neo-Panamax vessels | CommonCrawl |
2011 Interior Controllability of the $nD$ Semilinear Heat Equation
H. Leiva, N. Merentes, J. L. Sanchez
Afr. Diaspora J. Math. (N.S.) 12(2): 1-12 (2011).
In this paper we prove the interior approximate controllability of the following semilinear heat equation $$ \left\{ \begin{array}{lr} z_{t}(t,x) = \Delta z(t,x) + 1_{\omega}u(t,x)+f(t,z,u(t,x)) & \mbox{in} \quad (0, \tau] \times \Omega,\\ z = 0, & \quad \mbox{on} \quad (0, \tau) \times \partial \Omega, \\ z(0,x) = z_{0}(x), & x \in\Omega, \end{array} \right. $$ where $\Omega$ is a bounded domain in $\mathbb{R}^{N}$ ($N\geq1$), $z_0 \in L^{2}(\Omega)$, $\omega$ is an open nonempty subset of $\Omega$, $1_{\omega}$ denotes the characteristic function of the set $\omega$, the distributed control $u$ belongs to $L^{2}([0,\tau]; L^{2}(\Omega))$, and the nonlinear function $f:[0, \tau] \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is smooth enough and there are $a,b,c \in \mathbb{R}$, with $c \neq -1$, such that $$ \sup_{(t,z,u) \in Q_{\tau}} |f(t,z,u) -az-cu-b | < \infty, $$ where $Q_{\tau}= [0, \tau] \times \mathbb{R} \times \mathbb{R}$. Under this condition we prove the following statement: for every open nonempty subset $\omega$ of $\Omega$ the system is approximately controllable on $[0, \tau]$. Moreover, we exhibit a sequence of controls steering the nonlinear system (1.1) from an initial state $z_0$ to an $\epsilon$-neighborhood of the final state $z_1$ at time $\tau > 0$, which is very important from a practical and numerical point of view.
H. Leiva. N. Merentes. J. L. Sanchez. "Interior Controllability of the $nD$ Semilinear Heat Equation." Afr. Diaspora J. Math. (N.S.) 12 (2) 1 - 12, 2011.
Primary: 93B05
Secondary: 93C25
Keywords: interior controllability , semilinear heat equation , strongly continuous semigroups
Rights: Copyright © 2011 Mathematical Research Publishers
Afr. Diaspora J. Math. (N.S.)
Vol.12 • No. 2 • 2011
Mathematical Research Publishers
H. Leiva, N. Merentes, J. L. Sanchez "Interior Controllability of the $nD$ Semilinear Heat Equation," African Diaspora Journal of Mathematics. New Series, Afr. Diaspora J. Math. (N.S.) 12(2), 1-12, (2011) | CommonCrawl |
In Appendix_4 we derived the optimum diameter of a pinhole for a given focal length and a given wavelength of light. How much resolving power, then, is attained by such an optimized "pinhole"? In other words, what is the smallest diameter of a sunspot that can be recognized when we observe the sun using the optimized "pinhole lens"?
Limit of the resolving power
Resolving power is obtained by calculating the distribution of the light intensity of the image of a point source at infinity formed by a pinhole with a radius of \(a\). Though the image of a point source should be a point without area from the mathematical viewpoint, the projected image of the point becomes a circle with a finite radius \(b\), as shown below, due to the diffraction of the light wave. $$b \cong \frac{3.832}{2\pi} \frac{\lambda f}{a}\cong 0.6098 \frac{\lambda f}{a}$$ where \(\lambda\) and \(f\) are the wavelength of the light and the distance between the pinhole and the screen, respectively.
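As a quick numeric check, here is a minimal Python sketch of the radius formula above; the wavelength, pinhole-to-screen distance, and pinhole radius are illustrative values, not ones taken from the text.

```python
# Diffraction-limited image radius b ~ 0.6098 * lambda * f / a (formula above).
lam = 550e-9    # wavelength in metres (green light; illustrative)
f = 0.1         # pinhole-to-screen distance in metres (illustrative)
a = 0.18e-3     # pinhole radius in metres (illustrative)

b = 0.6098 * lam * f / a
print(f"image radius b = {b * 1e6:.0f} micrometres")
```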
An image of a point source by a pinhole
The left sub-figure shows a contour map of an image of a point source on the focal plane (image screen). The image becomes a circle with a radius \(b\) and concentric zones of weak light intensity are seen around the circle. The right sub-figure shows a 3D schematic figure of the light intensity distribution for the same case.
If the images of two point sources are separated by \(d\) and the distance \(d\) is smaller than the above-mentioned radius \(b\) (\(d<b\)), it is impossible to distinguish these two images (the left sub-figure). Conversely, for \(d>b\) the two images are distinguishable (the right sub-figure). Therefore, it is appropriate to consider the distance \(d\) as the limit of the resolving power. In the case of a telescope, \(b/f\) is the definition of the resolving power.
Images of two source points and the resolution
The left sub-figure: images of two point sources separated by \(d (<b)\) are not distinguishable. The right sub-figure: images of two point sources separated by \(d (>b)\) are distinguishable.
Relative Resolving Power
When one takes a photograph of sunspots so that the image of the sun fills the whole imaging screen, the relation between the side length \(S\) of the imaging screen (the sensor) and the resolving power \(b\) becomes important, and we define the relative resolving power \(G(=S/b)\). Written as a function of the wavelength of the light \(\lambda\), the focal length \(f\), and the size of the sensor \(S\), it is expressed as $$G \cong \frac{1.28S}{\sqrt{\lambda f}}$$ Though in the case of a photograph or a computer display the resolving power is usually expressed as "a number of pixels per unit length", the relative resolving power defined here is "a number of pixels per side of a square sensor". As astronomical objects such as the sun are located at infinity, the size of an object is expressed not by a length but by the angle subtended by the object. Since the relation between the viewing angle \(\theta\) and the size of the object is \(S=\theta f\), the relative resolving power is derived as $$G \cong 1.28 \theta \sqrt{\frac{f}{\lambda}}$$ This means that for a fixed viewing angle the relative resolving power is increased by increasing the focal length, i.e. the magnification factor; note that this requires the imaging screen to grow with the focal length. If the apparent diameter of the sun, about 32' (=0.00931 radian), is adopted as the viewing angle, the relative resolving power for the sun at a wavelength of \(550\ nm\) is $$G_{sun}\cong 0.51 \sqrt{f}$$ where the unit of \(f\) is millimeters. When one instead takes a photograph by loading the "pinhole lens" onto a ready-made camera, such as an SLR, the sensor size \(S\) is fixed, and the relative resolving power must be computed from $$G \cong \frac{1.28S}{\sqrt{\lambda f}}$$ In this fixed-sensor case the relative resolving power decreases as \(\sqrt{f}\) increases.
For reference we show some typical graphs: a pinhole diameter as a function of the focal length, \(D \cong 0.0366 \sqrt{f}\), the relative resolving power for observation of the sun, \(G_{sun} \cong 0.51 \sqrt{f}\), and the resolving power as a function of the size of the imaging plane \(S\), \(G \cong 55S/\sqrt{f}\), where the wavelength of the incident light is \(550 nm\).
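The sketch below, a small Python tabulation of the three formulas quoted above, may help; the 24 mm sensor side is an illustrative assumption.

```python
import math

LAM = 550e-6  # wavelength in mm (550 nm), as assumed in the text

def pinhole_diameter(f_mm):
    """Optimal pinhole diameter D ~ 0.0366*sqrt(f), f in mm (from the text)."""
    return 0.0366 * math.sqrt(f_mm)

def g_sun(f_mm):
    """Relative resolving power for the sun, G_sun ~ 0.51*sqrt(f)."""
    return 0.51 * math.sqrt(f_mm)

def g_sensor(s_mm, f_mm):
    """Relative resolving power G ~ 1.28*S/sqrt(lambda*f) for a fixed sensor side S."""
    return 1.28 * s_mm / math.sqrt(LAM * f_mm)

for f in (100, 400, 1600):  # focal lengths in mm
    print(f, round(pinhole_diameter(f), 3), round(g_sun(f), 1), round(g_sensor(24, f), 1))
```

For example, \(S=24\) mm and \(f=100\) mm give \(G \approx 131\), consistent with the \(G \cong 55S/\sqrt{f}\) curve quoted above.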
Pinhole diameter versus focal length
Relative resolving power versus focal length for a fixed viewing angle (the Sun)
Resolving power versus focal length for a fixed size of a picture screen | CommonCrawl |
4 edition of Stochastic programming with multiple objective functions found in the catalog.
by I. M. Stancu-Minasian
Published 1984 by Editura Academiei, D. Reidel, Distributors for the U.S.A. and Canada, Kluwer Academic in București, România, Dordrecht, Boston, Hingham, Ma., U.S.A .
Stochastic programming.
Other titles Stochastic programming.
Statement I.M. Stancu-Minasian ; translated from the Romanian by Victor Giurgiuțiu.
Series Mathematics and its applications. East European series, Mathematics and its applications (D. Reidel Publishing Company).
LC Classifications T57.79 .S7213 1984
Introductory Lectures on Stochastic Optimization: by inspection, a function is convex if and only if its epigraph is a convex set. A convex function f is closed if its epigraph is a closed set; continuous convex functions are always closed. We will assume throughout that any convex function we deal with is closed. Stochastic Programming: this example illustrates AIMMS capabilities for stochastic programming support. Starting from an existing deterministic LP or MIP model, AIMMS can create a stochastic model automatically, without the need to reformulate constraint definitions.
Multiple Objective and Goal Programming: Recent Developments includes "The Design of the Physical Distribution System with the Application of the Multiple Objective Mathematical Programming. Case Study." Conference notes on the information/model measures EVPI and VSS:
• both are always >= 0 (WS >= RP >= EMV);
• the two are often different (WS = RP is possible while RP > EMV, and vice versa);
• they fit different circumstances: the cost to gather information versus the cost to build the model and solve the problem;
• mean-value problems: MV is optimistic (e.g. MV = 4 but EMV = 3, RP =); this is always true in the convex case.
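To make the WS >= RP >= EMV ordering concrete, here is a minimal pure-Python sketch on a made-up two-scenario newsvendor-style problem; the price, cost, and probabilities are illustrative assumptions, and EMV is computed here as the expected result of using the mean-value solution (often written EEV).

```python
# Toy two-stage problem: choose an order quantity x before demand d is known.
# Sell min(x, d) at price 10 per unit; each ordered unit costs 6 (illustrative).
scenarios = [(0.5, 40), (0.5, 100)]   # (probability, demand)
candidates = range(0, 121)            # feasible order quantities

def profit(x, d):
    return 10 * min(x, d) - 6 * x

# WS ("wait and see"): optimise separately in each scenario, then average.
ws = sum(p * max(profit(x, d) for x in candidates) for p, d in scenarios)

# RP (here-and-now recourse problem): one x maximising expected profit.
rp = max(sum(p * profit(x, d) for p, d in scenarios) for x in candidates)

# EMV/EEV: fix x at the optimum of the mean-value problem, then evaluate it.
d_mean = sum(p * d for p, d in scenarios)
x_mv = max(candidates, key=lambda x: profit(x, d_mean))
emv = sum(p * profit(x_mv, d) for p, d in scenarios)

print("WS =", ws, "RP =", rp, "EMV =", emv)   # 280.0, 160.0, 130.0 here
print("EVPI =", ws - rp, "VSS =", rp - emv)   # both nonnegative, as claimed
```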
In many optimization problems the objective function may depend on a random set of coefficients that have some known distribution. For example, a profit function may depend on certain market conditions. Overview of Different Approaches for Solving Stochastic Programming Problems with Multiple Objective Functions; A Goal Programming Application for Waste Treatment Quality Control, International Journal of Quality & Reliability Management, Vol. 5, No. 4.
Stochastic programming with multiple objective functions by I. M. Stancu-Minasian
By writing the first book of its kind, I.M. Stancu-Minasian has made a significant contribution to the field of MCDM. Stochastic programming has been applied in several domains: production planning, energy investment, water management and finance (Shahinidis, ). As in the single objective case, two main approaches are used to solve stochastic programs, namely, the recourse approach and the chance-constrained approach.
Run fmincon on a Smooth Objective Function
The objective function is smooth (twice continuously differentiable). Solve the optimization problem using the Optimization Toolbox function fmincon, which finds a constrained minimum of a function of several variables. This function has a unique minimum at the point x* = [-5,-5] where it has a value f(x*) =
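fmincon itself is MATLAB; as a hedged Python sketch of the same idea, scipy.optimize.minimize with the SLSQP method plays a similar role. The objective below is a made-up smooth function whose minimum is placed at [-5, -5] for illustration; it is not the function from the excerpt.

```python
import numpy as np
from scipy.optimize import minimize

# A smooth (twice continuously differentiable) objective, chosen for illustration.
def fun(x):
    return (x[0] + 5.0) ** 2 + (x[1] + 5.0) ** 2

# One inequality constraint g(x) >= 0, in the form SLSQP expects: stay in a disc.
cons = [{"type": "ineq", "fun": lambda x: 100.0 - x[0] ** 2 - x[1] ** 2}]

res = minimize(fun, x0=np.array([1.0, 1.0]), method="SLSQP", constraints=cons)
print(res.x, res.fun)  # converges near [-5, -5], where the objective is 0
```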
This function has a unique minimum at the point x* = [-5,-5] where it has a value f(x*) = The main topic of this book is optimization problems involving uncertain parameters, for which stochastic models are available. Although many ways have been proposed to.
A multiple objective stochastic programming for working capital: the main objective of the start-up retailer is to maximize its profitability and liquidity. Eljelly () and Rehman et al. () document the opposing relationship between profitability and liquidity in Saudi Arabia.
Stancu-Minasian, I.M. () 'Recent results in stochastic programming with multiple objective functions', in M. Grauer, A. Lewandowski, A.P. Wierzbicki(eds.),Multiobjective and Stochastic Optimization. IIASA Collaborative Proceedings Series CP-S12, 79– Google ScholarCited by: How is the above objective function different from the following objective function- \begin{equation} E\left(\min_{x} \parallel Ax-b \parallel_2^2 \right) \end{equation} Certainly, for the basic objective function, first objective has a closed form compared the second objective.
But, why aren't we dealing with the second objective in general. (version J ) This list of books on Stochastic Programming was compiled by J. Dupacová (Charles University, Prague), and first appeared in the state-of-the-art volume Annals of OR 85 (), edited by R. J-B. Wets and W. Ziemba. Books and collections of papers on Stochastic Programming, primary classification 90C15 A.
The known ones ~ in English, including translations. Introduction. A multi-objective optimization problem is an optimization problem that involves multiple objective functions. In mathematical terms, a multi-objective optimization problem can be formulated as ((), (),()) ∈,where the integer ≥ is the number of objectives and the set is the feasible set of decision vectors.
The feasible set is typically defined by some constraint functions. The aim of this paper is to investigate the objective functions corresponding to the individual problems belonging to the one multistage stochastic programming problem. A special attention is paid. deterministic programming.
We have stochastic and deterministic linear programming, deterministic and stochastic network flow problems, and so on. Although this book mostly covers stochastic linear programming (since that is the best developed topic), we also discuss stochastic nonlinear programming, integer programming and network flows.
This is the second of a two-part series on stochastic optimization, defined in Wikipedia as "optimization methods that generate and use random variables". For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints, for example.
Multi-objective stochastic programming for portfolio selection. Article in European Journal of Operational Research.
Stochastic programming - the science that provides us with tools to design and control stochastic systems with the aid of mathematical programming techniques - lies at the intersection of statistics and mathematical programming. The book Stochastic Programming is a comprehensive introduction to the field and its basic mathematical tools.
While Cited by: Stochastic programming. Stochastic programming, as the name implies, is mathematical (i.e. linear, integer, mixed-integer, nonlinear) programming but with a stochastic element present in the data.
By this we mean that: in deterministic mathematical programming the data (coefficients) are known numbers. Currently, stochastic optimization on the one hand and multi-objective optimization on the other hand are rich and well-established special fields of Operations Research.
Much less developed, however, is their intersection: the analysis of decision problems involving multiple objectives and stochastically represented uncertainty simultaneously. This is amazing, since in economic and
This is amazing, since in economic and Cited by: the maximum proflt $of the stochastic decision program (). The difierence $ 1, is called the Value of the Stochastic Solu-tion (VSS) re°ecting the possible gain by solving the full stochastic model. Two-stage stochastic program with recourse For a stochastic decision program, we denote by x 2 lRn1;x ' 0; theFile Size: KB.
Another complication in this setting is the choice of objective function: maximizing expected return becomes less justifiable when the decision is to be made once only, and the decision-maker's attitude to risk then becomes important. The most widely applied and studied stochastic programming models are two-stage (linear) programs.
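For reference, the two-stage stochastic linear program with recourse alluded to here is usually written in the following standard textbook form (the notation varies by author): $$ \min_{x \ge 0}\; c^{\top}x + \mathbb{E}_{\xi}\big[Q(x,\xi)\big], \qquad Q(x,\xi) = \min_{y \ge 0}\big\{\, q^{\top}y \;:\; Wy = h(\xi) - T(\xi)\,x \,\big\}, $$ where $x$ is the first-stage (here-and-now) decision and $y$ is the second-stage (recourse) decision taken after the random data $\xi$ is observed.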
The most widely applied and studied stochastic programming models are two-stage (lin-ear) programs. The aim of stochastic programming is to find optimal decisions in problems which involve uncertain data. This field is currently developing rapidly with contributions from many disciplines including operations research, mathematics, and probability.
At the same time, it is now being applied in a wide variety of subjects ranging from agriculture Cited by: This Week About You { Quiz #1! 1 Name 2 Nationality 3 Education Background. 4 Research Interests/Thesis Topic? 5 (Optimization) Modeling Languages you know: (AMPL, GAMS, Mosel, CVX 6 Programming Languages you know: (C, Python, Matlab, Julia, FORTRAN, Java) 7 Anything speci c you hope to accomplish/learn this week?
8 One interesting fact about yourself you think we should Size: 1MB.parts are skipped, stochastic programming will come forward as merely an algorithmic and mathematical subject, which will serve to limit the usefulness of the field. In addition to the algorithmic and mathematical facets of the field, stochastic programming also involves model creation and specification of solution Size: 2MB.
Iodine test for Starch
The starch-iodine test is used to detect amylose, a polysaccharide. We add an iodine solution (iodine dissolved in aqueous potassium iodide) to a sample and observe a color change when starch is present: it turns dark blue.
While researching information online, I understood that $\ce{I2}$ is not very soluble in water, hence it needs to react with the iodide ions present to form at least $\ce{I3^-}$, which is soluble in water, and it also forms polyiodide complexes, as research shows. I also understood that this linear polyiodide species migrates to the center of the helical polysaccharide structure. Then, through some electronic interactions, there is a modification of the electron configuration which produces the dark blue color.
My questions are as follows. What are the electron movements? Which atoms are implicated in this transfer of electrons? What is the change in electron configuration that results in the color change? In other words, I don't understand the electronic interactions between iodide and amylose that create the color change.
polymers halides carbohydrates
Matt
The amylose present in starch is responsible for the deep blue colouration. The structure of amylose consists of long polymer chains of glucose units connected by alpha acetal linkages. All of the monomer units are $\alpha$-D-glucose, and all the alpha acetal links connect $C1$ of one glucose to $C4$ of the next glucose.
As a result of the bond angles in the acetal linkage, amylose actually forms a spiral structure.
X-ray diffraction studies$^{[1]}$, demonstrate that the amylose is organized as left-handed helices, which have outer diameters of 13 Å and a pitch of 8 Å, with each turn of the helix corresponding to six 1,4-anhydroglucose units. The iodine components are linearly arranged in the 5 Å wide inner cavity of the helices with an $\ce{I–I}$ distance of approximately 3.1 Å.
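A quick arithmetic sketch, using only the numbers quoted above, shows how the helix pitch and the I-I spacing fix the packing of the polyiodide chain:

```python
# Geometry from the X-ray data quoted above.
pitch = 8.0   # helix pitch in angstroms (one turn = six anhydroglucose units)
d_ii = 3.1    # I-I spacing along the polyiodide chain, in angstroms

iodines_per_turn = pitch / d_ii
print(f"about {iodines_per_turn:.1f} iodine atoms per helix turn")
print(f"about {6 / iodines_per_turn:.1f} glucose units per iodine atom")
```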
Starch-iodine complexes show dichroism of flow$^{[2]}$, i.e., light with its electric vector parallel to the flow lines is more strongly absorbed than light with its electric vector normal to the flow lines. The dichroism of flow is shown to require that the long axes of the iodine molecules in the complex be parallel to the long axis of the starch-iodine complex.
The exact structure of the polyiodide chain remains contentious, however, the following species are commonly implicated as key elements of the substructure:
$$\ce{I2 + I^- -> I3^-}$$
So, ignoring the structural specifics of the polyiodide chain, one notes that if an iodine molecule enters the internal channel of the amylose helix, it may be surrounded by about six glucose residues. The following image is from reference (3):
Bear this image in mind, and note that the hydroxyl oxygen is projected into the helix cavity.
Also, when iodine is dissolved in ether or alcohol, a new strong absorption band appears in the near ultraviolet region. In relation to the study of the benzene-iodine complex, Mulliken postulated the complex formation between iodine and ether or alcohol, and explained this new band by the concept of intermolecular charge transfer spectra$^{[3]}$.
The "simple" idea behind a charge transfer complex is given below:
$$\ce{I2 + D -> I^{\delta -}...D^{\delta +}}$$ Reference 4 gives more details. (It also likens this bonding to a "hydrogen bond", if that helps you understand.) In this case, D is the donor atom (an electronegative atom such as nitrogen or oxygen).
The iodine-donor distance is usually in the 2-3 Å range. Thus, based on all of this, one can conclude that charge transfer interactions between the oxygen atoms projecting in towards the helical cavity and the iodine molecules in the polyiodide chain are the likely cause of the colouration. You can view how the oxygen atoms are pointed towards the chain of iodine here. (I recommend rotating the diagram so that the axis of the cavity is towards you, so that you are essentially looking down the tube.)
1. https://www.researchgate.net/publication/244335699_The_complex_of_amylose_and_iodine
2. http://pubs.acs.org/doi/abs/10.1021/ja01244a017
3. http://scitation.aip.org/content/aip/journal/jcp/22/3/10.1063/1.1740076 (I recommend reading this if you are not satisfied with the brief, qualitative discussion I have provided here)
4. Inorganic Chemistry (Nils Wiberg) (google books, relevant portion is part of the preview)
getafix
$\begingroup$ I do not have access to the article but it sounds interesting :"Also, when iodine is dissolved in ether or alcohol, a new strong absorption band appears in the near ultraviolet region." Do you know what could cause the slight color change between collagen and starch? $\endgroup$ – Matt Oct 11 '16 at 20:28
$\begingroup$ I can edit my answer to include some more details from the paper, and/or if you give me your personal contact in chat, I can download a copy and forward it to you. $\endgroup$ – getafix Oct 11 '16 at 23:42
Title: Evidence That Hydra I is a Tidally Disrupting Milky Way Dwarf Galaxy
Authors: Jonathan R. Hargis (1), B. Kimmig (1), B. Willman (1 and 2), N. Caldwell (3), M. G. Walker (4), J. Strader (5), D. J. Sand (6), C. J. Grillmair (7), J. H. Yoon (8) ((1) Haverford College, (2) LSST and Steward Observatory, (3) Harvard CfA, (4) McWilliams Center for Cosmology, Carnegie Mellon University, (5) Michigan State University, (6) Texas Tech University, (7) Spitzer Science Center, (8) University of California Santa Barbara)
Abstract: The Eastern Banded Structure (EBS) and Hydra~I halo overdensity are very nearby (d $\sim$ 10 kpc) objects discovered in SDSS data. Previous studies of the region have shown that EBS and Hydra I are spatially coincident, cold structures at the same distance, suggesting that Hydra I may be the EBS's progenitor. We combine new wide-field DECam imaging and MMT/Hectochelle spectroscopic observations of Hydra I with SDSS archival spectroscopic observations to quantify Hydra I's present-day chemodynamical properties, and to infer whether it originated as a star cluster or dwarf galaxy. While previous work using shallow SDSS imaging assumed a standard old, metal-poor stellar population, our deeper DECam imaging reveals that Hydra~I has a thin, well-defined main sequence turnoff of intermediate age ($\sim 5-6$ Gyr) and metallicity ([Fe/H] = $-0.9$ dex). We measure statistically significant spreads in both the iron and alpha-element abundances of $\sigma_{[Fe/H]} = 0.13 \pm 0.02$ dex and $\sigma_{[\alpha/{\rm Fe}]} = 0.09 \pm 0.03$ dex, respectively, and place upper limits on both its rotation and its proper motion. Hydra~I's intermediate age and [Fe/H] -- as well as its low [$\alpha$/Fe], apparent [Fe/H] spread, and present-day low luminosity -- suggest that its progenitor was a dwarf galaxy, which subsequently lost more than $99.99\%$ of its stellar mass.
Comments: 17 pages, 14 figures, submitted to ApJ
Subjects: Astrophysics of Galaxies (astro-ph.GA)
DOI: 10.3847/0004-637X/818/1/39
Cite as: arXiv:1509.06391 [astro-ph.GA]
(or arXiv:1509.06391v1 [astro-ph.GA] for this version)
From: Jonathan Hargis
[v1] Mon, 21 Sep 2015 20:30:07 UTC (1,893 KB) | CommonCrawl |
Short report | Open | Published: 18 February 2015
A (1;19) translocation involving TCF3-PBX1 fusion within the context of a hyperdiploid karyotype in adult B-ALL: a case report and review of the literature
Carlos A Tirado1,
David Shabsovich1,
Lei Yeh1,
Sheeja T Pullarkat1,
Lynn Yang1,
Michael Kallen1 &
Nagesh Rao1
The t(1;19)(q23;p13), which can result in the TCF3-PBX1 chimeric gene, is one of the most frequent translocations in B-acute lymphoblastic leukemia (B-ALL) and is observed in both adult and pediatric populations at an overall frequency of 6%. It can occur in a balanced or unbalanced form and as a sole abnormality is associated with an intermediate prognosis. Additionally, this translocation is observed in the context of hyperdiploid B-ALL, in which case it is associated with a poor prognosis. However, due to different translocation partner genes at chromosomes 1 and 19, distinct subtypes of hyperdiploid B-ALL with t(1;19)/der(19)t(1;19) are recognized based on the presence or absence of the TCF3-PBX1 fusion gene, but the cytogenetic and etiologic differences between the two remain understudied.
We report a case of an adult with a history of relapsed precursor B-ALL whose conventional cytogenetics showed an abnormal female karyotype with both hyperdiploidy and a t(1;19)(q23;p13). Fluorescence in situ hybridization (FISH) on previously G-banded metaphases using the LSI TCF3/PBX1 Dual Color, Dual Fusion Translocation Probe confirmed the presence of the TCF3-PBX1 gene fusion.
This particular pattern with a TCF3-PBX1 fusion within the context of a hyperdiploid karyotype is seen in B-ALL and is usually associated with a poor outcome. This case is one of only a few cases with both hyperdiploidy and a confirmed TCF3-PBX1 fusion, demonstrating the importance of using FISH for proper molecular classification of these cases in order to distinguish them from those with hyperdiploidy but no TCF3-PBX1 fusion gene. Such molecular studies may provide insight into the precise differences between TCF3-PBX1 positive and negative hyperdiploid B-ALL bearing the t(1;19)(q23;p13).
The t(1;19)(q23;p13) is one of the most frequent translocations in B-acute lymphoblastic leukemia (B-ALL), and is observed in both adult and pediatric populations at an overall frequency of 6%. This translocation can occur in a balanced – t(1;19)(q23;p13) – or unbalanced – der(19)t(1;19)(q23;p13) – form and can result in the fusion of TCF3 (transcription factor 3) found at 19p13 and PBX1 (pre-B cell leukemia homeobox 1) found at 1q23 to form a chimeric gene whose protein product alters cell differentiation arrest, among other cellular processes [1]. Specifically, the fusion gene encodes a transcription factor bearing the transactivation domain of TCF3 and the DNA-binding domain of PBX1, which facilitates constitutive activation of genes bound by the protein encoded by PBX1 and other PBX proteins [2]. As a sole abnormality, t(1;19)/der(19)t(1;19) is associated with an intermediate prognosis in B-ALL, and hyperdiploidy is associated with a favorable prognosis [1]. However, more rarely, cases of t(1;19)/der(19)t(1;19) within the context of a hyperdiploid karyotype have been observed, only some of which express the TCF3-PBX1 fusion gene and are associated with a poor prognosis [3]. In addition to PBX1, other partner genes involved in rearrangements of TCF3, although at much lower frequencies, include ZNF384 (12p13; prognosis unknown), NOL1 (12p13; prognosis unknown), an unknown partner gene at 13q14 (prognosis unknown), HLF (17q22; extremely poor prognosis), and FB1/TFPT (19q13.4; prognosis unknown) [4-6].
The patient was a forty-four year old woman with a history of relapsed precursor B-ALL, who was initially diagnosed in March 2013 with leukemic cells showing an immunophenotype positive for CD10, CD19, icCD22, CD38, icCD79a, CD138, TdT, HLA-DR and icIgM as well as a normal karyotype. Initial diagnosis was established at another institution at which point FISH analysis was not performed. After UK ALL 14 protocol consolidation therapy, she was considered to be in remission. In December 2013, a bone marrow biopsy showed evidence of relapse, and was comprised of approximately 85% blasts with a pre-B immunophenotype and a hyperdiploid, complex, poor-risk karyotype, further described in the results section. In January 2014, the patient underwent therapy with FLAG-Ida, resulting in a hypoplastic marrow with no significant residual blast population. Later in April 2014 she enrolled in a clinical trial with blinatumomab, which was eventually discontinued because the patient experienced multiple seizure episodes. A bone marrow biopsy showed extensive tumor necrosis with involvement by B-lymphoblasts representing over 90% of viable cells and comprising 5% of the total surface area. The immunophenotype was positive for CD10, CD19, PAX-5, CD79a and TdT (weak, rare), and negative for CD34 and CD20. The patient expired in May 2014 of relapsed B-lymphoblastic leukemia. Autopsy included a bone marrow biopsy, which revealed a hypercellular marrow of greater than 95% cellularity with sheets of lymphoblasts and extensive tumor necrosis.
Chromosome analysis was performed using standard cytogenetic techniques on the bone marrow of this patient. The karyotypes were prepared using the Applied Imaging CytoVision software (Applied Imaging, Genetix, Santa Clara, CA) and described according to the ISCN 2013 nomenclature [7].
Fluorescence in situ hybridization (FISH) was performed on interphase nuclei using the Vysis MYC-IGH Dual Color, Dual Fusion Probe, Vysis LSI BCR,ABL ES Dual Color Translocation Probe, and Vysis LSI MLL Dual Color, Break Apart Rearrangement Probe from Abbott Molecular (Des Plaines, Illinois 60018). Additionally, FISH was performed with the LSI TCF3/PBX1 Dual Color, Dual Fusion Translocation Probe on previously G-banded metaphases.
Only three metaphase cells were available for chromosome analysis due to a poor mitotic index. These cells revealed an abnormal female karyotype with numerical and structural abnormalities including extra copies of chromosomes 1, 8, 11, 20, 22, a (1;19) translocation, an unbalanced rearrangement of the long arm of chromosome 13 leading to 13q-, and a marker chromosome of unknown origin. This karyotype was described as (Figure 1):
53-54,XX,+1,t(1;19)(q23;p13),+8,+8,+8,+11,add(13)(q34),+20,+22,+mar[cp3]
Karyotype of female patient revealing t(1;19) in a hyperdiploid context.
FISH on interphase nuclei confirmed the additional copies of chromosome 8 in 73.8% of nuclei (79/107), chromosome 22 in 80% of nuclei (44/55), as well as chromosome 11 in 4.7% of nuclei (4/85) examined. The FISH results (Figure 2) were described as:
nuc ish(MYCx5,IGH)x2)[79/107]
nuc ish(BCRx3,ABL1x2)[44/55]
nuc ish(MLLx3)[4/85]
FISH analysis was used to confirm the additional copies of chromosomes 8, 22 and 11.
To further characterize and confirm the previous conventional cytogenetics findings [t(1;19) which fuses TCF3 (green signal) on 19p13 with PBX1 (red signal) at 1q23], FISH studies on previously G-banded metaphases were performed, and detected two fusion [t(1;19)] signals and an additional copy of red signal (+1q) with the TCF3-PBX1 probe indicative of translocation between TCF3 and PBX1, as well as an additional copy of the 1q23 locus, which is consistent with the karyotype results found previously. Gain of chromosome 1q is often seen in association with disease progression or advanced disease. Based on these studies the karyotype was described as (Figure 3):
53-54,XX,+1,t(1;19)(q23;p13),+8,+8,+8,+11,add(13)(q34),+20,+22,+mar[cp3].ish(PBX1x4)(TCF3x3)(PBX1 con TCF3x2)
FISH on a previously G-banded metaphase confirmed t(1;19)(q23;p13) involving the TCF3 and PBX1 genes, as well as an additional copy of chromosome 1.
The t(1;19)(q23;p13)/der(19)t(1;19)(q23;p13) is one of the most common translocations seen in B-ALL cases and is typically found as a sole abnormality. It creates a fusion of TCF3 on 19p13 with PBX1 at 1q23, can be present in balanced or unbalanced form, and is usually associated with an intermediate prognosis [1]. Hunger et al. noted in an early study that 95% of t(1;19)/der(19)t(1;19)-positive cases of B-ALL with <50 chromosomes expressed the TCF3-PBX1 fusion transcript, whereas only 25% of cases with >50 chromosomes did. Furthermore, immunophenotypic differences between TCF3-PBX1 positive and TCF3-PBX1 negative cases were observed, which suggested etiologic differences between the two subtypes [3].
In a recent study conducted by Paulsson et al., 42 cases with both t(1;19)/der(19)t(1;19) and high hyperdiploidy (HeH; 51–67 chromosomes) from both published literature and the LRCG database were analyzed, revealing similar numerical chromosomal gains in both translocation-HeH (t-HeH) and classic-HeH (c-HeH) cases, most commonly involving chromosomes 21, 4, 6, 10, 18, 14, X, and 17, in decreasing frequency [8]. Furthermore, none of these cases were found to have a stemline balanced or unbalanced t(1;19), whereas 11% had hyperdiploid stemlines, suggesting that numerical chromosomal gains resulting in HeH are primary cytogenetic aberrations and occur prior to t(1;19)/der(19)t(1;19) [8]. This may result in clinical similarities between t(1;19)/der(19)t(1;19)-HeH ALL and c-HeH ALL, as the two may share a similar cytogenetic progression and etiology. It was also found that the majority of t(1;19)/der(19)t(1;19)-HeH cases tested by molecular methods were negative for the presence of the TCF3-PBX1 fusion gene [8]. Additionally, previous studies have found that greater than 90% of t(1;19) positive, TCF3-PBX1 fusion negative cases have an unbalanced form of the rearrangement [4]. In Paulsson et al's study, only 18% of the cases had a balanced rearrangement [8], while 40% of TCF3-PBX1 positive cases overall have been found to have a balanced rearrangement [1], ultimately suggesting etiologically distinct subtypes of B-ALL with both hyperdiploidy and t(1;19)/der(19)t(1;19) based on the presence of the TCF3-PBX1 fusion gene by FISH and/or polymerase chain reaction (PCR) [8].
In the present study, we report a case of hyperdiploid B-ALL with a balanced t(1;19) bearing the TCF3-PBX1 fusion gene confirmed by metaphase FISH, which has only been previously reported in a small number of cases and represents a distinct subtype of B-ALL based on the presence of the confirmed fusion gene in conjunction with hyperdiploidy and t(1;19)/der(19)t(1;19). Interestingly, the numerical gains present in our case, of chromosomes 1, 8, 11, 20, and 22, are not consistent with the most common gains found in t-HeH/t(1;19)-positive B-ALL by Paulsson et al. [8]. In that study, the majority of cases were found to have unbalanced rearrangements and out of those that had molecular evidence, most did not bear the TCF3-PBX1 fusion [8]. The presence of different numerical gains between our case and those of Paulsson et al. further supports the fact that the TCF3-PBX1 positive and negative variants of t(1;19)/hyperdiploid B-ALL represent distinct subtypes of the disease. Furthermore, we queried the Mitelman Database of Chromosome Aberrations in Cancer for reported cases of ALL with t(1;19)/der(19)t(1;19), hyperdiploidy, and molecular evidence (FISH and/or PCR) of TCF3-PBX1 fusion, and only identified 5 cases that were positive for the fusion (Table 1). When compiling the karyotypes of these cases and the present case, we noted that four out of six cases had additional copies of chromosome 8, which was interestingly not found to be one of the most common numerical gains in t-HeH/t(1;19)-positive B-ALL [8].
Table 1 Cases of adult B-ALL with hyperdiploidy, t(1;19)/der(19)t(1;19), and TCF3-PBX1 fusion confirmed by FISH and/or polymerase chain reaction (PCR)
Recent molecular insights into the TCF3-PBX1 fusion protein have revealed its involvement in complex signaling pathways. In particular, deregulation of JunD and NFX1-regulated transcriptional processes has been noted to be a significant effect of the fusion protein [9]. Additionally, PAX5 (19p13.2) haploinsufficiency, detectable both by conventional and molecular cytogenetics, is associated with TCF3-PBX1 in B-ALL. Specifically, FISH using both TCF3 split signal probes in conjunction with PAX5 locus-specific deletion probes suggests that PAX5 is a secondary event in the oncogenesis of TCF3-PBX1-positive B-ALL, and may be associated with clonal evolution of the malignancy [10]. Furthermore, studies have revealed that vascular endothelial growth factor-C (VEGF-C), encoded by VEGFC (4q34.3), is involved and perhaps essential to proliferation of TCF3-PBX1 positive leukemic B cells [11]. Finally, treatment with hyperfractionated cyclophosphamide, vincristine, doxorubicin, and dexamethasone alternating with methotrexate and high-dose cytarabine (hyper-CVAD) has shown a favorable outcome in adults with t(1;19)-positive ALL [12].
Hyperdiploidy in B-ALL normally conveys a favorable prognosis, but in the present study, the particular pattern of a t(1;19)(q23;p13.3) with TCF3-PBX1 fusion within the context of a complex karyotype (>3 abnormalities) and hyperdiploidy due to extra copies of chromosomes 8, 11 and 22 (confirmed by FISH) plus the presence of a marker chromosome of unknown origin is associated with an unfavorable prognosis in B-ALL [3]. It is one of only a few published cases with hyperdiploidy, t(1;19)/der(19)t(1;19), and a confirmed TCF3-PBX1 fusion in B-ALL, demonstrating the importance of using FISH and PCR for proper cytogenetic and molecular classification in order to distinguish the present scenario from hyperdiploid B-ALL with t(1;19)/der(19)t(1;19), but lacking the TCF3-PBX1 fusion. The latter represents a different subtype of B-ALL that may be primarily driven by chromosomal gains or other fusion genes rather than the t(1;19)/der(19)t(1;19) resulting in the TCF3-PBX1 fusion and should not be confused with the entity presented in this report. Further investigation of the cytogenetic and molecular etiologies of these subtypes of B-ALL is warranted to determine their implications in the diagnosis and prognosis of the malignancy.
Heim S, Mitelman F. Cancer Cytogenetics. 3rd ed. Hoboken, New Jersey: Wiley-Blackwell Publishers; 2009.
Troussard X, Rimokh R, Valensi F, Leboeuf D, Fenneteau O, Guitard AM, et al. Heterogeneity of t(1;19)(q23;p13) acute leukaemias. French Haematological Cytology Group. Br J Haematol. 1995;89(3):516–26.
Hunger SP, Sun T, Boswell AF, Carroll AJ, McGavran L. Hyperdiploidy and E2A-PBX1 fusion in an adult with t(1;19) + acute lymphoblastic leukemia: case report and review of the literature. Genes Chromosomes Cancer. 1997;20(4):392–8.
Barber KE, Harrison CJ, Broadfield ZJ, Stewart AR, Wright SL, Martineau M, et al. Molecular cytogenetic characterization of TCF3 (E2A)/19p13.3 rearrangements in B-cell precursor acute lymphoblastic leukemia. Genes Chromosomes Cancer. 2007;46(5):478–86.
Boomer T, Varella-Garcia M, McGavran L, Meltesen L, Olsen AS, Hunger SP. Detection of E2A translocations in leukemias via fluorescence in situ hybridization. Leukemia. 2001;15(1):95–102.
Brambillasca F, Mosna G, Colombo M, Rivolta A, Caslini C, Minuzzo M, et al. Identification of a novel molecular partner of the E2A gene in childhood leukemia. Leukemia. 1999;13(3):369–75.
Shaffer LG, McGowan-Joran J, Schmid MS. ISCN 2013: An International System of Human Cytogenetic Nomenclature. Unionville, CT,USA: S. Karger Publications, Inc; 2013.
Paulsson K, Harrison CJ, Andersen MK, Chilton L, Nordgren A, Moorman AV, et al. Distinct patterns of gained chromosomes in high hyperdiploid acute lymphoblastic leukemia with t(1;19)(q23;p13), t(9;22)(q34;q22) or MLL rearrangements. Leukemia. 2013;27(4):974–7.
Hajingabo LJ, Daakour S, Martin M, Grausenburger R, Panzer-Grümayer R, Dequiedt F, et al. Predicting interactome network perturbations in human cancer: application to gene fusions in acute lymphoblastic leukemia. Mol Biol Cell. 2014;25(24):3973–85. doi:10.1091/mbc.E14-06-1038. Epub 2014 Oct 1.
Familiades J, Bousquet M, Lafage-Pochitaloff M, Béné MC, Beldjord K, De Vos J, et al. PAX5 mutations occur frequently in adult B-cell progenitor acute lymphoblastic leukemia and PAX5 haploinsufficiency is associated with BCR-ABL1 and TCF3-PBX1 fusion genes: a GRAALL study. Leukemia. 2009;23(11):1989–98. doi:10.1038/leu.2009.135. Epub 2009 Jul 9.
Shirasaki R, Tashiro H, Oka Y, Sugao T, Yamamoto T, Yoshimi M, et al. Vascular endothelial growth factor-C and its receptor type-3 expressed in acute lymphocytic leukemia cases with t(1;19). Int J Hematol. 2011;94(2):203–8. doi:10.1007/s12185-011-0889-5. Epub 2011 Jul 6.
Garg R, Kantarjian H, Thomas D, Faderl S, Ravandi F, Lovshe D, et al. Adults with acute lymphoblastic leukemia and translocation (1;19) abnormality have a favorable outcome with hyperfractionated cyclophosphamide, vincristine, doxorubicin, and dexamethasone alternating with methotrexate and high-dose cytarabine chemotherapy. Cancer. 2009;115(10):2147–54. doi:10.1002/cncr.24266.
Rowe D, Devaraj PE, Irving JA, Hogarth L, Hall AG, Turner GE. A case of mature B-cell ALL with coexistence of t(1;19) and t(14;18) and expression of the E2A/PBX1 fusion gene. Br J Haematol. 1996;94(1):133–5.
Foa R, Vitale A, Mancini M, Cuneo A, Mecucci C, Elia L, et al. E2A-PBX1 fusion in adult acute lymphoblastic leukaemia: biological andclinical features. Br J Haematol. 2003;120(3):484–7.
To UCLA Clinical Cytogenetics Laboratory.
Department of Pathology and Laboratory Medicine, David Geffen UCLA School of Medicine, Los Angeles, CA, 90024, USA
Carlos A Tirado, David Shabsovich, Lei Yeh, Sheeja T Pullarkat, Lynn Yang, Michael Kallen & Nagesh Rao
Correspondence to Carlos A Tirado.
CAT and DS contributed equally to this manuscript: they led the drafting, conducted the survey of relevant literature, and edited and revised all drafts. LY wrote the initial draft. SP revised the manuscript and added various comments. LY conducted the bench work analysis. MK provided the clinical presentation of the patient. NR edited the manuscript. All authors read and approved the final manuscript.
Carlos A Tirado and David Shabsovich contributed equally to this work.
TCF3-PBX1
hyperdiploidy
B-ALL | CommonCrawl |
South China Sea Warm Pool in Boreal Spring
Peter C. Chu, C.-P. Chang
大气科学进展 (Advances in Atmospheric Sciences), 1997,
Abstract: During the boreal spring of 1966, a warm-core eddy is identified in the upper South China Sea (SCS) west of the Philippines through an analysis of the U.S. Navy's Master Oceanographic Observation Data Set. This eddy occurred before the development of the northern summer monsoon and disappeared afterward. We propose that this eddy is a result of the radiative warming during spring and the downwelling due to the anticyclonic forcing at the surface. Our hypothesis suggests an air-sea feedback scenario that may explain the development and withdrawal of the summer monsoon over the SCS. The development phase of the warm-core eddy in this hypothesis is tested by using the Princeton Ocean Model. The authors are grateful to Yongfu Qian and Shihua Lu for discussion. This work was supported by the Office of Naval Research NOMP and NAMP Programs, and by the Naval Oceanographic Office.
Endoscopic Mucosal Dissection in GI Malignancies
C.-P. Swain, I. Lu
Annals of Gastroenterology , 2010,
Abstract: In this paper we review the endoscopic mucosal dissection technique for gastrointestinal malignancies from the perspective of the evolution of flexible endoscopic tools to assist in cutting tissue. The available methods as well as the future technology are analyzed in detail. It is concluded that the tools for cutting at flexible endoscopy have evolved substantially in recent years.
A New Dual-Frequency Liquid Crystal Lens with Ring-and-Pie Electrodes and a Driving Scheme to Prevent Disclination Lines and Improve Recovery Time
Yung-Yuan Kao, Paul C.-P. Chao
Sensors , 2011, DOI: 10.3390/s110505402
Abstract: A new liquid crystal lens design is proposed to improve the recovery time with a ring-and-pie electrode pattern, a suitable driving scheme, and dual-frequency liquid crystals (DFLC) MLC-2048. Compared with the conventional single hole-type liquid crystal lens, this new DFLC lens structure is composed of only two ITO glasses, one of which is patterned with the ring-and-pie design. For this device, one can control the orientation of the liquid crystal directors via a three-stage switching procedure on the particularly-designed ring-and-pie electrode pattern. This aims to eliminate the disclination lines and, by using different drive frequencies, to reduce the recovery time to less than 5 seconds. The proposed DFLC lens is shown to be effective in reducing recovery time, and thus serves well as a potential replacement for conventional fixed-focus lenses and the conventional LC lens with a single circular-hole electrode pattern.
Four-point high time resolution information on electron densities by the electric field experiments (EFW) on Cluster
A. Pedersen, P. Décréau, C.-P. Escoubet, G. Gustafsson
Annales Geophysicae (ANGEO) , 2003,
Abstract: For accurate measurements of electric fields, spherical double probes are electronically controlled to be at a positive potential of approximately 1 V relative to the ambient magnetospheric plasma. The spacecraft will acquire a potential which balances the photoelectrons escaping to the plasma and the electron flux collected from the plasma. The probe-to-plasma potential difference can be measured with a time resolution of a fraction of a second, and provides information on the electron density over a wide range of electron densities from the lobes (~ 0.01 cm⁻³) to the magnetosheath (>10 cm⁻³) and the plasmasphere (>100 cm⁻³). This technique has been perfected and calibrated against other density measurements on GEOS, ISEE-1, CRRES, GEOTAIL and POLAR. The Cluster spacecraft potential measurements open the way for new approaches, particularly near boundaries and gradients where four-point measurements will provide information never obtained before. Another interesting point is that onboard data storage of this simple parameter can be done for complete orbits and thereby will provide background information for the shorter full data collection periods on Cluster. Preliminary calibrations against other density measurements on Cluster will be reported. Key words: Magnetospheric physics (magnetopause, cusp, and boundary layers); Space plasma physics (spacecraft sheaths, wakes, charging; instruments and techniques)
Quantifying flow and remediation zone uncertainties for partially opened wells in heterogeneous aquifers
C.-F. Ni, C.-P. Lin, S.-G. Li, J.-S. Chen
Hydrology and Earth System Sciences (HESS) & Discussions (HESSD) , 2011,
Abstract: This study presents a numerical first-order spectral model to quantify transient flow and remediation zone uncertainties for partially opened wells in heterogeneous aquifers. Taking advantage of spectral theories in solving unmodeled small-scale variability in hydraulic conductivity (K), the presented nonstationary spectral method (NSM) can efficiently estimate flow uncertainties, including hydraulic heads and Darcy velocities in r- and z-directions in a cylindrical coordinate system. The velocity uncertainties associated with the particle backward tracking algorithm are then used to estimate stochastic remediation zones for scenarios with partially opened well screens. In this study the flow and remediation zone uncertainties obtained by NSM were first compared with those obtained by Monte Carlo simulations (MCS). A layered aquifer with different geometric means of K and screen locations was then illustrated with the developed NSM. To compare NSM flow and remediation zone uncertainties with those of MCS, three different small-scale K variances and correlation lengths were considered for illustration purposes. The MCS remediation zones for different degrees of heterogeneity were presented with the uncertainty clouds obtained by 200 equally likely MCS realizations. Results of the simulations reveal that the first-order NSM solutions agree well with those of MCS for partially opened wells. The flow uncertainties obtained by using NSM and MCS are essentially identical for aquifers with small ln K variances and correlation lengths. Based on the test examples, the remediation zone uncertainties (bandwidths) are not sensitive to changes in the small-scale ln K correlation lengths. However, the increases in remediation zone uncertainties (i.e. the uncertainty bandwidths) are significant with increasing small-scale ln K variances. The largest displacement uncertainties may differ by several meters when the ln K variances increase from 0.1 to 1.0. Such conclusions are also valid for the estimation of remediation zones in layered aquifers.
Temporal variation of nitrate and phosphate transport in headwater catchments: the hydrological controls and land use alteration
T.-Y. Lee, J.-C. Huang, S.-J. Kao, C.-P. Tung
Biogeosciences (BG) & Discussions (BGD) , 2013, DOI: 10.5194/bg-10-2617-2013
Abstract: Oceania rivers are hotspots of DIN (dissolved inorganic nitrogen) and DIP (dissolved inorganic phosphorus) transport due to a humid/warm climate, typhoon-induced episodic rainfall, and high tectonic activity that create an environment favorable for high/rapid runoff and soil erosion. In spite of their uniqueness, the effects of hydrologic controls and land use on the transport behaviors of DIN and DIP are rarely documented. A 2 yr monitoring study for DIN and DIP from three headwater catchments with different cultivation gradients (0 to 8.9%) was implemented at a ~3-day sampling interval, with an additional monitoring campaign at a 3-h interval during typhoon periods. Results showed the DIN yields in the pristine, moderately cultivated (2.7%), and intensively cultivated (8.9%) watersheds were 8.3, 26, and 37 kg N ha⁻¹ yr⁻¹, respectively. The DIP yields were 0.36, 0.35, and 0.56 kg P ha⁻¹ yr⁻¹, respectively. Higher year-round DIN concentrations and five times larger DIN yields in the intensively cultivated watershed indicate that DIN is more sensitive to land use changes. The high background DIN yield from the relatively pristine watershed was likely due to high atmospheric nitrogen deposition and a large subterranean N pool. The correlations between runoff and concentration reveal that typhoon floods purge out more DIN from the subterranean reservoir, i.e., the soil; by contrast, runoff washes off surface soil, resulting in higher suspended sediment with higher DIP. Collectively, typhoon runoff contributes 20–70% and 47–80%, respectively, to the annual DIN and DIP exports. The DIN yield to DIP yield ratio varied from 97 to 410, which is higher than the global mean of ~18. Such a high ratio indicates a P-limiting condition in the stream and the downstream aquatic environment. Based on our field observations, we constructed a conceptual model illustrating different remobilization mechanisms for DIN and DIP from headwaters in a mountainous river, which is analogous to typical Oceania rivers and the headwaters of large rivers in similar climate zones. Our study advances our understanding of the role of cyclones, which exert hydrological control, and of land use on nutrient export in the Oceania region, benefiting watershed management in the context of climate change.
Temporal variation of nitrate and phosphate transport in headwater catchments: the hydrological controls and landuse alteration
T.-Y. Lee, J.-C. Huang, S.-J. Kao, C.-P. Tung
Biogeosciences Discussions , 2012, DOI: 10.5194/bgd-9-13211-2012
Abstract: Oceania Rivers are hotspots of high DIN (dissolved inorganic nitrogen) and DIP (dissolved inorganic phosphorus) transport. However, the effects of hydrologic controls and land use alteration on the temporal variations of DIN and DIP are rarely documented. In this study, we monitored the nitrate and phosphate concentrations from three headwater catchments with different cultivation gradients at a 3-day interval. This sampling scheme was supplemented with 3-h interval monitoring during typhoon periods. The results showed that the DIN and DIP yields in the pristine, moderately cultivated, and intensively cultivated watersheds were 7.52/0.31, 31.17/0.30, and 40.96/0.52 kg ha⁻¹ yr⁻¹, respectively. The high DIN yields are comparable to those of intensively and extensively disturbed large rivers around the world. These N yields may be due to a high level of nitrogen deposition, rainfall-runoff, and fertilizer application. The importance of event sampling was indicated by the contribution of the three typhoons to the annual DIN and DIP fluxes, which were 30% and 60%, respectively. Both DIN and DIP fluxes significantly increased as the cultivation gradient increased. The DIN to DIP ratio varied from 54 to 230 with decreasing cultivation gradient. This value is higher than the global mean of ~18. Thus, we speculate that nitrogen saturation occurs in the headwater catchments of Oceania Rivers. The results obtained provide fundamental clues about the DIN and DIP yields of Oceania Rivers, which are helpful in understanding the impact of human disturbance on headwater watersheds.
C.-F. Ni, C.-P. Lin, S.-G. Li, J.-S. Chen
Hydrology and Earth System Sciences Discussions , 2011, DOI: 10.5194/hessd-8-3133-2011
Abstract: This study presents a numerical first-order spectral model to quantify flow and remediation zone uncertainties for partially opened wells in heterogeneous aquifers. Taking advantage of spectral theories in solving unmodeled small-scale variability in hydraulic conductivity (K), the presented nonstationary spectral method (NSM) can efficiently estimate flow uncertainties, including hydraulic heads and Darcy velocities in r- and z-directions in a cylindrical coordinate system. The velocity uncertainties associated with the particle backward tracking algorithm are then used to estimate stochastic remediation zones for scenarios with partially opened well screens. In this study the flow and remediation zone uncertainties obtained by NSM were first compared with those obtained by Monte Carlo simulations (MCS). A layered aquifer with different geometric means of K and screen locations was then illustrated with the developed NSM. To compare NSM flow and remediation zone uncertainties with those of MCS, three different small-scale K variances and correlation lengths were considered for illustration purposes. The MCS remediation zones for different degrees of heterogeneity were presented with the uncertainty clouds obtained by 200 equally likely MCS realizations. Results of the simulations reveal that the first-order NSM solutions agree well with those of MCS for partially opened wells. The flow uncertainties obtained by using NSM and MCS are essentially identical for aquifers with small ln K variances and correlation lengths. Based on the test examples, the remediation zone uncertainties are not sensitive to changes in the small-scale ln K correlation lengths. However, the increases in remediation zone uncertainties are significant with increasing small-scale ln K variances. The largest displacement uncertainties may differ by several meters when the ln K variances increase from 0.1 to 1.0. Such results are also valid for the estimation of remediation zones in layered aquifers.
60 GHz Indoor Propagation Studies for Wireless Communications Based on a Ray-Tracing Method
C.-P. Lim,M. Lee,R. J. Burkholder,J. L. Volakis
EURASIP Journal on Wireless Communications and Networking , 2007, DOI: 10.1155/2007/73928
Abstract: This paper demonstrates a ray-tracing method for modeling indoor propagation channels at 60 GHz. A validation of the ray-tracing model against our in-house measurements is also presented. Based on the validated model, multipath channel parameters such as the root mean square (RMS) delay spread and the fading statistics at millimeter-wave frequencies are easily extracted. As such, the proposed ray-tracing method can provide vital information pertaining to the fading conditions in a site-specific indoor environment.
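The RMS delay spread mentioned above is a standard power-weighted second moment of the multipath profile; a minimal sketch of its computation from ray-traced path delays and powers (not the authors' code - the path values below are illustrative):

```python
import math

def rms_delay_spread(delays_ns, powers_linear):
    """Power-weighted RMS delay spread of a multipath profile."""
    total = sum(powers_linear)
    mean = sum(p * t for p, t in zip(powers_linear, delays_ns)) / total
    second = sum(p * t * t for p, t in zip(powers_linear, delays_ns)) / total
    return math.sqrt(second - mean * mean)

# Three hypothetical rays at 0, 5 and 12 ns with relative powers 1.0, 0.4, 0.1:
print(rms_delay_spread([0.0, 5.0, 12.0], [1.0, 0.4, 0.1]))  # ~3.4 ns
```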
Search for B_{s}^{0}->hh Decays at the $Υ(5S)$ Resonance
C. -C. Peng,P. Chang,the Belle Collaboration
Physics , 2010, DOI: 10.1103/PhysRevD.82.072007
Abstract: We have searched for B_{s}^{0}->hh decays, where h stands for a charged or neutral kaon, or a charged pion. These results are based on a 23.6 fb^{-1} data sample collected with the Belle detector on the \Upsilon(5S) resonance at the KEKB asymmetric-energy e^{+}e^{-} collider, containing 1.25x10^6 B_{s}^{(*)}\bar{B}_{s}^{(*)} events. We observe the decay B_{s}^{0}->K^{+}K^{-} and measure its branching fraction, \mathcal{B}(B_{s}^{0}->K^{+}K^{-}) = [3.8_{-0.9}^{+1.0}(\mathrm{stat})\pm 0.5(\mathrm{syst})\pm 0.5(f_s)] \times 10^{-5}. The first error is statistical, the second is systematic, and the third error is due to the uncertainty in the B^0_s production fraction in $e^+e^-\to b\bar{b}$ events. No significant signals are seen in other decay modes, and we set upper limits at 90% confidence level: \mathcal{B}(B_{s}^{0}->K^-\pi^{+})< 2.6 \times 10^{-5}, \mathcal{B}(B_{s}^{0}->\pi^{+}\pi^{-})< 1.2 \times 10^{-5} and \mathcal{B}(B_{s}^{0}->K^0\bar{K}^0) < 6.6\times 10^{-5}. | CommonCrawl |
January 2019, 24(1): 183-195. doi: 10.3934/dcdsb.2018093
Global existence and large time behavior of a 2D Keller-Segel system in logarithmic Lebesgue spaces
Chao Deng 1, and Tong Li 2,,
School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou 221116, China
Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA
* Corresponding author: Tong Li
Received December 2016 Revised September 2017 Published March 2018
Fund Project: The first author is supported by NSFC (No. 10931001)
This paper is devoted to the global analysis of the two-dimensional parabolic-parabolic Keller-Segel system in the whole space. By well-balanced arguments in the $L^1$ and $L^∞$ spaces, we first prove global well-posedness of the system in $L^1× L^∞$, which partially answers the question posed by Kozono et al in [19]. For the case $μ_0>0$, we make full use of the linear parts of the system to obtain an improved long-time decay property. Moreover, by using a new formulation involving all linear parts, introducing a logarithmic weight in time to modify the other endpoint space $L^∞× L^∞$, and carefully decomposing time into several pieces, we establish the global well-posedness and large time behavior of the system in $L^∞_{ln}× L^∞$.
Keywords: The Keller-Segel model of chemotaxis, 2D parabolic system, global well-posedness, large time behavior, logarithmic Lebesgue spaces.
Mathematics Subject Classification: Primary: 35B40, 35K15, 42B30, 42B25, 42B35; Secondary: 92C15.
Citation: Chao Deng, Tong Li. Global existence and large time behavior of a 2D Keller-Segel system in logarithmic Lebesgue spaces. Discrete & Continuous Dynamical Systems - B, 2019, 24 (1) : 183-195. doi: 10.3934/dcdsb.2018093
A. Blanchet, J. Dolbeault and B. Perthame, Two-dimensional Keller-Segel model: Optimal critical mass and qualitative properties of the solutions, Electron. J. Diff. Eqns., 44 (2006), 1-32.
V. Calvez and L. Corrias, The parabolic-parabolic Keller-Segel model in $R^2$, Commun. Math. Sci., 6 (2008), 417-447. doi: 10.4310/CMS.2008.v6.n2.a8.
S. Childress and J. K. Percus, Nonlinear aspects of chemotaxis, Math. Biosci., 56 (1981), 217-237. doi: 10.1016/0025-5564(81)90055-9.
S. Childress and J. K. Percus, Chemotactic collapse in two dimensions, Lecture Notes in Biomathematics, 55, Springer, Berlin-Heidelberg-New York, 61-66, 1984.
L. Corrias, B. Perthame and H. Zaag, A chemotaxis model motivated by angiogenesis, C. R. Acad. Sci. Paris, Ser. Ⅰ., 336 (2003), 141-146. doi: 10.1016/S1631-073X(02)00008-0.
L. Corrias, B. Perthame and H. Zaag, Global solutions of some chemotaxis and angiogenesis systems in high space dimensions, Milan J. Math., 72 (2004), 1-28. doi: 10.1007/s00032-003-0026-x.
J. I. Diaz, T. Nagai and J. M. Rakotoson, Symmetrization techniques on unbounded domains: Application to a chemotaxis system on $\mathbb{R}^n$, J. Differential Equations, 145 (1998), 156-183. doi: 10.1006/jdeq.1997.3389.
C. Deng and T. Li, Well-posedness of the 3D parabolic-hyperbolic Keller-Segel system in the Sobolev space framework, J. Differential Equations, 257 (2014), 1311-1332. doi: 10.1016/j.jde.2014.05.014.
M. Eisenbach, Chemotaxis, Imperial College Press, London, 2004.
Y. Guo and H. J. Hwang, Pattern formation (Ⅰ): The Keller-Segel model, J. Differential Equations, 249 (2010), 1519-1530. doi: 10.1016/j.jde.2010.07.025.
D. Horstmann, From 1970 until present: The Keller-Segel model in chemotaxis and its consequences Ⅰ., Jahresber. Dtsch. Math.-Ver., 105 (2003), 103-165.
Y. Kagei and Y. Maekawa, On asymptotic behaviors of solutions to parabolic systems modelling chemotaxis, J. Differential Equations, 253 (2012), 2951-2992. doi: 10.1016/j.jde.2012.08.028.
T. Kato, Strong ${L}^p$-solutions of the Navier-Stokes equation in $\mathbb{R}^m$, with applications to weak solutions, Math. Z., 187 (1984), 471-480. doi: 10.1007/BF01174182.
E. F. Keller and L. A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theor. Biol., 26 (1970), 399-415. doi: 10.1016/0022-5193(70)90092-5.
E. F. Keller and L. A. Segel, Model for chemotaxis, J. Theor. Biol., 30 (1971), 225-234. doi: 10.1016/0022-5193(71)90050-6.
E. F. Keller and L. A. Segel, Traveling bands of chemotactic bacteria: A theoretical analysis, J. Theor. Biol., 30 (1971), 235-248. doi: 10.1016/0022-5193(71)90051-8.
H. Kozono and Y. Sugiyama, Global strong solution to the semi-linear Keller-Segel system of parabolic-parabolic type with small data in scale invariant spaces, J. Differential Equations, 247 (2009), 1-32. doi: 10.1016/j.jde.2009.03.027.
H. Kozono and Y. Sugiyama, Keller-Segel system of parabolic-parabolic type with initial data in weak $L^{n/2}$ and its application to self-similar solutions, Indiana Univ. Math. J., 57 (2008), 1467-1500. doi: 10.1512/iumj.2008.57.3316.
H. Kozono, Y. Sugiyama and T. Wachi, Existence and uniqueness theorem on mild solutions to the Keller-Segel system in the scaling invariant space, J. Differential Equations, 252 (2012), 1213-1228. doi: 10.1016/j.jde.2011.08.025.
P. G. Lemarié-Rieusset, Recent Developments in the Navier-Stokes Problem, Research Notes in Mathematics, Chapman & Hall/CRC, 2002.
H. A. Levine and B. D. Sleeman, A system of reaction diffusion equations arising in the theory of reinforced random walks, SIAM J. Appl. Math., 57 (1997), 683-730. doi: 10.1137/S0036139995291106.
D. Li, T. Li and K. Zhao, On a hyperbolic-parabolic system modeling chemotaxis, Math. Model. Meth. Appl. Sci., 21 (2011), 1631-1650. doi: 10.1142/S0218202511005519.
T. Li, R. H. Pan and K. Zhao, Global dynamics of a chemotaxis model on bounded domains with large data, SIAM J. Appl. Math., 72 (2012), 417-443. doi: 10.1137/110829453.
T. Li and Z. A. Wang, Nonlinear stability of traveling waves to a hyperbolic-parabolic system modeling chemotaxis, SIAM J. Appl. Math., 70 (2009), 1522-1541.
T. Li and Z. A. Wang, Asymptotic nonlinear stability of traveling waves to conservation laws arising from chemotaxis, J. Differential Equations, 250 (2011), 1310-1333. doi: 10.1016/j.jde.2010.09.020.
C. S. Lin, W. M. Ni and I. Takagi, Large amplitude stationary solutions to a chemotaxis system, J. Differential Equations, 72 (1988), 1-27. doi: 10.1016/0022-0396(88)90147-7.
T. Nagai and T. Ikeda, Traveling waves in a chemotaxis model, J. Math. Biol., 30 (1991), 169-184. doi: 10.1007/BF00160334.
H. Othmer and A. Stevens, Aggregation, blowup and collapse: The ABCs of taxis in reinforced random walks, SIAM J. Appl. Math., 57 (1997), 1044-1081. doi: 10.1137/S0036139995288976.
C. S. Patlak, Random walk with persistence and external bias, Bull. Math. Biophys., 15 (1953), 311-338. doi: 10.1007/BF02476407.
B. D. Sleeman, M. Ward and J. Wei, The existence and stability of spike patterns in a chemotaxis model, SIAM J. Appl. Math., 65 (2005), 790-817. doi: 10.1137/S0036139902415117.
Z. A. Wang and T. Hillen, Shock formation in a chemotaxis model, Math. Meth. Appl. Sci., 31 (2008), 45-70. doi: 10.1002/mma.898.
M. Winkler, Aggregation vs. global diffusive behavior in the higher-dimensional Keller-Segel model, J. Differential Equations, 248 (2010), 2889-2905. doi: 10.1016/j.jde.2010.02.008.
Y. Yang, H. Chen, W. Liu and B. D. Sleeman, The solvability of some chemotaxis systems, J. Differential Equations, 212 (2005), 432-451. doi: 10.1016/j.jde.2005.01.002.
| CommonCrawl |
The Dictionary of Mathematical Eponymy: The MacWilliams Identity
Written by Colin+ in dome.
After the Second World War, there was a boom in the study of transmitting encoded data. In all likelihood the boom started earlier, and it was more about the declassified publication of papers on the topic than about a sudden increase in productivity.
This month's mathematical hero, Jessie MacWilliams, played a relatively late part in this boom - the identity that bears her name was in her early-1960s thesis - and relates to linear codes. It needs a bit of setting up, though.
First up, we need to say what we mean by a code: in this context, it's a collection of codewords. A codeword is a list of 1s and 0s ("bits") of a given length, $n$.
A linear code is one where any combination of two codewords gives a codeword - and "combination" here means taking the XOR of the two codewords[1]. That means, if the bits in position $i$ of the two original codewords are the same, the bit in position $i$ of the result is 0; if the two bits differ, the $i$th bit of the result is 1. For example, in a code with $n=4$, combining 0101 and 1100 gives a result of 1001.
Such a code also has a dual code – all of the possible words that are orthogonal to every codeword in the original code. Two words are orthogonal if, when you multiply their bits together, one at a time, and add up the results modulo 2, you get zero. For example, 0101 and 1111 are orthogonal: you get $0\times1 = 0$ for the first bit, $1\times 1 = 1$ for the second, then 0 and 1 again for the third and fourth bits. Adding up $0+1+0+1$ modulo 2 gives you 0.
For another example, the four-bit code above, with codewords 0000, 0101, 1001 and 1100, has a dual code with the codewords 0000, 0010, 1101 and 1111[2]. If the original code is called $C$, its dual code is called $C^\perp$.
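If you'd like to check that dual by brute force (as footnote 2 suggests), a quick Python sketch - not part of the original post - does the job:

```python
from itertools import product

C = ["0000", "0101", "1001", "1100"]

def dot(u, v):
    """Mod-2 inner product of two bit strings."""
    return sum(int(a) * int(b) for a, b in zip(u, v)) % 2

dual = ["".join(w) for w in product("01", repeat=4)
        if all(dot("".join(w), c) == 0 for c in C)]
print(dual)  # ['0000', '0010', '1101', '1111']
```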
We also need to define the weight of a codeword, which is simply "how many 1s it has in it"[3] - the weight of 0101 is 2. The number of codewords with a particular weight of $w$ is denoted $A_w$.
And lastly, for now, we're going to define the weight enumerator function:
$$W(C; x, y) = \sum_{w=0}^n A_w x^w y^{n-w}$$
The code we called $C$ earlier has a weight enumerator of $y^4 + 3x^2y^2$; its dual code's weight enumerator is $y^4 + xy^3 + x^3y + x^4$.
The weight enumerator can be used to find the probability of incorrectly decoding a codeword due to errors - in particular, for a binary linear code where each bit is flipped independently with probability $p$, the quantity $W(C; p, 1-p) - (1-p)^n$ gives the probability that the errors turn a transmitted codeword into a different valid codeword.
What is the MacWilliams Identity?
The MacWilliams identity relates the weight enumerators of a code and its dual: it states $W(C^{\perp}; x,y) = \frac{1}{|C|}W(C; y-x, y+x)$
Let's check it with our example: $|C|$ is the number of codewords in $C$, namely 4, so we get $\frac{1}{4} \left( (y+x)^4 + 3(y-x)^2(y+x)^2\right)$.
If we expand that out, we get $\frac{1}{4}\left(\left(y^4 + 4y^3x + 6y^2x^2 + 4yx^3 + x^4\right) + 3\left(y^4 - 2x^2y^2 + x^4\right)\right)$.
Keep going: $\frac{1}{4}\left( 4y^4 + 4y^3x + 4yx^3 + 4x^4\right) = y^4 + xy^3 + x^3y + x^4$ - it works! Magic!
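The same check can be automated; a small sympy sketch of the identity on this example (again, just a sketch, not from the original post):

```python
from sympy import symbols, expand, Rational

x, y = symbols("x y")

def weight_enumerator(code, n):
    # W(C; x, y) = sum over codewords of x^weight * y^(n - weight)
    return sum(x**c.count("1") * y**(n - c.count("1")) for c in code)

C, Cperp = ["0000", "0101", "1001", "1100"], ["0000", "0010", "1101", "1111"]
lhs = expand(weight_enumerator(Cperp, 4))
rhs = expand(Rational(1, len(C)) *
             weight_enumerator(C, 4).subs({x: y - x, y: y + x}, simultaneous=True))
print(lhs == rhs)  # True
```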
The MacWilliams identity isn't restricted to binary linear codes (it works on codes over any field, which could be significantly more complicated).
It allows us, among other things, to determine the number of codewords in the dual code without necessarily knowing what any of them are (except for the one that's all zeroes), and to work out the probability of incorrectly decoding a message sent in the dual code.
Who was Jessie MacWilliams?
Florence Jessie Collinson was born in Stoke-on-Trent, England, in 1917. She studied at Cambridge, receiving her BA and MA in the late 1930s. She worked with Oscar Zariski at Johns Hopkins and at Harvard, before marrying Walter MacWilliams in 1941. She raised a family before joining Bell Labs in 1958; then in 1960, she took leave for postgraduate studies, and completed her PhD in one year under Andrew Gleason. She spent the next decades working on algebraic constructions and combinatorial properties of codes, publishing The Theory of Error-Correcting Codes with Neil Sloane[4] in 1977.
In 1980, MacWilliams gave the first Emmy Noether Lecture for the Association for Women in Mathematics. She retired from Bell Labs in 1983, and died in 1990 in New Jersey.
* Updated 2020-01-06 to clarify the more general case, and 2020-01-07 to fix an error. Thanks to Adam Atkinson both for gently putting me right and for guiding me towards the identity in the first place.
[1] Adam points out: in general, for alphabets of prime size $k$, the operation is "sum modulo $k$", which reduces to XOR when $k=2$. It is possible, but ill-advised, to extend this to alphabets whose sizes are other powers of primes.
[2] You may want to check this.
[3] In general, how many of its digits are non-zero.
[4] Yes, the OEIS one. | CommonCrawl |
Problems in Mathematics
by Yu · Published 11/02/2016 · Last modified 08/11/2017
Vector Space of Polynomials and a Basis of Its Subspace
Let $P_2$ be the vector space of all polynomials of degree two or less.
Consider the subset in $P_2$
\[Q=\{ p_1(x), p_2(x), p_3(x), p_4(x)\},\] where
\begin{align*}
&p_1(x)=1, &p_2(x)=x^2+x+1, \\
&p_3(x)=2x^2, &p_4(x)=x^2-x+1.
\end{align*}
(a) Use the basis $B=\{1, x, x^2\}$ of $P_2$, give the coordinate vectors of the vectors in $Q$.
(b) Find a basis of the span $\Span(Q)$ consisting of vectors in $Q$.
(c) For each vector in $Q$ which is not a basis vector you obtained in (b), express the vector as a linear combination of basis vectors.
(The Ohio State University Linear Algebra Exam Problem)
A Matrix Representation of a Linear Transformation and Related Subspaces
Let $T:\R^4 \to \R^3$ be a linear transformation defined by
\[ T\left (\, \begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
x_4
\end{bmatrix} \,\right) = \begin{bmatrix}
x_1+2x_2+3x_3-x_4 \\
3x_1+5x_2+8x_3-2x_4 \\
x_1+x_2+2x_3
\end{bmatrix}.\]
(a) Find a matrix $A$ such that $T(\mathbf{x})=A\mathbf{x}$.
(b) Find a basis for the null space of $T$.
(c) Find the rank of the linear transformation $T$.
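A quick sympy sanity check of parts (a)-(c), assuming the formula reconstructed above (a sketch, not the site's posted solution):

```python
from sympy import Matrix

# Matrix of T read off from the defining formula.
A = Matrix([[1, 2, 3, -1],
            [3, 5, 8, -2],
            [1, 1, 2,  0]])
print(A.rank())       # 2, so the nullity is 4 - 2 = 2
print(A.nullspace())  # two basis vectors for the null space of T
```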
A Homomorphism from the Additive Group of Integers to Itself
Let $\Z$ be the additive group of integers. Let $f: \Z \to \Z$ be a group homomorphism.
Then show that there exists an integer $a$ such that
\[f(n)=an\] for any integer $n$.
Inner Product, Norm, and Orthogonal Vectors
Let $\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3$ be vectors in $\R^n$. Suppose that the vectors $\mathbf{u}_1$, $\mathbf{u}_2$ are orthogonal, the norm of $\mathbf{u}_2$ is $4$, and $\mathbf{u}_2^{\trans}\mathbf{u}_3=7$. Find the value of the real number $a$ in $\mathbf{u}_1=\mathbf{u}_2+a\mathbf{u}_3$.
(The Ohio State University, Linear Algebra Exam Problem)
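A quick check (not the site's posted solution): taking the inner product of $\mathbf{u}_1=\mathbf{u}_2+a\mathbf{u}_3$ with $\mathbf{u}_2$ and using orthogonality gives
\[0=\mathbf{u}_2^{\trans}\mathbf{u}_1=\mathbf{u}_2^{\trans}\mathbf{u}_2+a\,\mathbf{u}_2^{\trans}\mathbf{u}_3=16+7a,\] so $a=-\frac{16}{7}$.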
Image of a Normal Subgroup Under a Surjective Homomorphism is a Normal Subgroup
Let $f: H \to G$ be a surjective group homomorphism from a group $H$ to a group $G$.
Let $N$ be a normal subgroup of $H$. Show that the image $f(N)$ is normal in $G$.
Finite Group and Subgroup Criteria
Let $G$ be a finite group and let $H$ be a nonempty subset of $G$ such that for any $a,b \in H$, $ab\in H$.
Then show that $H$ is a subgroup of $G$.
Give a Formula for a Linear Transformation if the Values on Basis Vectors are Known
Let $T: \R^2 \to \R^2$ be a linear transformation.
\mathbf{u}=\begin{bmatrix}
1 \\
\end{bmatrix}, \mathbf{v}=\begin{bmatrix}
\end{bmatrix}\] be 2-dimensional vectors.
Suppose that
T(\mathbf{u})&=T\left( \begin{bmatrix}
\end{bmatrix} \right)=\begin{bmatrix}
-3 \\
\end{bmatrix},\\
T(\mathbf{v})&=T\left(\begin{bmatrix}
\end{bmatrix}\right)=\begin{bmatrix}
\end{bmatrix}.
Let $\mathbf{w}=\begin{bmatrix}
x \\
\end{bmatrix}\in \R^2$.
Find the formula for $T(\mathbf{w})$ in terms of $x$ and $y$.
Linear Independent Continuous Functions
Let $C[3, 10]$ be the vector space consisting of all continuous functions defined on the interval $[3, 10]$. Consider the set
\[S=\{ \sqrt{x}, x^2 \}\] in $C[3,10]$.
Show that the set $S$ is linearly independent in $C[3,10]$.
Vector Space of Polynomials and Coordinate Vectors
&p_1(x)=x^2+2x+1, &p_2(x)=2x^2+3x+1, \\
&p_3(x)=2x^2, &p_4(x)=2x^2+x+1.
Give the Formula for a Linear Transformation from $\R^3$ to $\R^2$
Let $T: \R^3 \to \R^2$ be a linear transformation such that
\[T(\mathbf{e}_1)=\begin{bmatrix}
\end{bmatrix}, T(\mathbf{e}_2)=\begin{bmatrix}
\end{bmatrix},\] where
\[\mathbf{e}_1=\begin{bmatrix}
\end{bmatrix}, \mathbf{e}_2=\begin{bmatrix}
\end{bmatrix}\] are the standard unit basis vectors of $\R^3$.
For any vector $\mathbf{x}=\begin{bmatrix}
\end{bmatrix}\in \R^3$, find a formula for $T(\mathbf{x})$.
Linear Properties of Matrix Multiplication and the Null Space of a Matrix
Let $A$ be an $m \times n$ matrix.
Let $\calN(A)$ be the null space of $A$. Suppose that $\mathbf{u} \in \calN(A)$ and $\mathbf{v} \in \calN(A)$.
Let $\mathbf{w}=3\mathbf{u}-5\mathbf{v}$.
Then find $A\mathbf{w}$.
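A one-line check (not the site's posted solution): since $A\mathbf{u}=A\mathbf{v}=\mathbf{0}$, linearity gives
\[A\mathbf{w}=A(3\mathbf{u}-5\mathbf{v})=3A\mathbf{u}-5A\mathbf{v}=\mathbf{0}.\]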
Range, Null Space, Rank, and Nullity of a Linear Transformation from $\R^2$ to $\R^3$
Define the map $T:\R^2 \to \R^3$ by $T \left ( \begin{bmatrix}
\end{bmatrix}\right )=\begin{bmatrix}
x_1-x_2 \\
x_1+x_2 \\
\end{bmatrix}$.
(a) Show that $T$ is a linear transformation.
(b) Find a matrix $A$ such that $T(\mathbf{x})=A\mathbf{x}$ for each $\mathbf{x} \in \R^2$.
(c) Describe the null space (kernel) and the range of $T$ and give the rank and the nullity of $T$.
Show the Subset of the Vector Space of Polynomials is a Subspace and Find its Basis
Let $P_3$ be the vector space over $\R$ of all polynomials of degree three or less with real coefficients.
Let $W$ be the following subset of $P_3$.
\[W=\{p(x) \in P_3 \mid p'(-1)=0 \text{ and } p^{\prime\prime}(1)=0\}.\] Here $p'(x)$ is the first derivative of $p(x)$ and $p^{\prime\prime}(x)$ is the second derivative of $p(x)$.
Show that $W$ is a subspace of $P_3$ and find a basis for $W$.
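A sympy sketch of the computation (not the site's posted solution); it imposes the two conditions on a general cubic and reads off a basis:

```python
from sympy import symbols, diff, linsolve

x, a, b, c, d = symbols("x a b c d")
p = a + b*x + c*x**2 + d*x**3
# Impose p'(-1) = 0 and p''(1) = 0 on the coefficients.
eqs = [diff(p, x).subs(x, -1), diff(p, x, 2).subs(x, 1)]
print(linsolve(eqs, [a, b, c, d]))
# {(a, -9*d, -3*d, d)}: so W = span{1, x^3 - 3x^2 - 9x} and dim W = 2
```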
Find a Basis for a Subspace of the Vector Space of $2\times 2$ Matrices
Let $V$ be the vector space of all $2\times 2$ matrices, and let the subset $S$ of $V$ be defined by $S=\{A_1, A_2, A_3, A_4\}$, where
A_1=\begin{bmatrix}
1 & 2 \\
-1 & 3
\end{bmatrix}, \quad
0 & -1 \\
-1 & 0 \\
1 & -10
Find a basis of the span $\Span(S)$ consisting of vectors in $S$ and find the dimension of $\Span(S)$.
Any Vector is a Linear Combination of Basis Vectors Uniquely
Let $B=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ be a basis for a vector space $V$ over a scalar field $K$. Then show that any vector $\mathbf{v}\in V$ can be written uniquely as
\[\mathbf{v}=c_1\mathbf{v}_1+c_2\mathbf{v}_2+c_3\mathbf{v}_3,\] where $c_1, c_2, c_3$ are scalars.
A Basis for the Vector Space of Polynomials of Degree Two or Less and Coordinate Vectors
Show that the set
\[S=\{1, 1-x, 3+4x+x^2\}\] is a basis of the vector space $P_2$ of all polynomials of degree $2$ or less.
Non-Abelian Simple Group is Equal to its Commutator Subgroup
Let $G$ be a non-abelian simple group. Let $D(G)=[G,G]$ be the commutator subgroup of $G$. Show that $G=D(G)$.
Two Quotients Groups are Abelian then Intersection Quotient is Abelian
Let $K, N$ be normal subgroups of a group $G$. Suppose that the quotient groups $G/K$ and $G/N$ are both abelian groups.
Then show that the group
\[G/(K \cap N)\] is also an abelian group.
Commutator Subgroup and Abelian Quotient Group
Let $G$ be a group and let $D(G)=[G,G]$ be the commutator subgroup of $G$.
Let $N$ be a subgroup of $G$.
Prove that the subgroup $N$ is normal in $G$ and $G/N$ is an abelian group if and only if $N \supset D(G)$.
Nilpotent Matrices and Non-Singularity of Such Matrices
Let $A$ be an $n \times n$ nilpotent matrix, that is, $A^m=O$ for some positive integer $m$, where $O$ is the $n \times n$ zero matrix.
Prove that $A$ is a singular matrix and also prove that $I-A, I+A$ are both nonsingular matrices, where $I$ is the $n\times n$ identity matrix.
| CommonCrawl |
Surfside An Example Of A Constant
What is constant-cost industry? definition and meaning
c++ Example of constant r-value - Stack Overflow
What is constant variable? Definition and meaning. Avogadro's number and Planck's constant are examples of constants. Constant – continual – continuous: you can use constant, continual, ... Algebra - Basic Definitions: a number on its own is called a constant. Example of a polynomial: 3x^2 + x - 2.
Constant dictionary definition constant defined
Email Marketing Templates & Designs - Constant Contact
I'm a tad confused about what is and is not a constant expression in C, even after much Googling. Could you provide an example of something which is, and which isn't?
Can someone give me an example of a constant r-value? Because apparently even literals are r-values, not const r-values.
«type» is the type of the value stored in the constant. So far all the examples have been integers, but any type (including a class name) is possible.
A numeric constant has no type until it's given one, such as by an explicit cast.
The 5 in this piece of code was a literal constant. Literal constants can be classified into several kinds; for example, several different literal constants can all be equivalent to the same value.
Constants/Literals: a constant is a value (or an identifier) whose value cannot be altered in a program, for example 1, 2.5 and 'c'. Here, 1, 2.5 and 'c' are literal constants.
In this article we will discuss variables and constants in C, with a real-life example.
C tutorial for beginners with examples - learn the C programming language covering basic C: literals, data types, constants with examples, functions, etc.
What is a constant? This article will explain everything you need to know about what a constant is in the Java programming language, with an example.
To declare a constant, write a declaration that includes an access specifier, the Const keyword, and an expression.
VBScript Constants - learn VBScript in simple and easy steps, starting from basic to advanced concepts with examples, including an overview and environment setup.
Constants and Variables - Basics of MQL4: these two terms, constant and variable, also appear outside programming, for example in physics.
Use a constant in a formula: now that you're familiar with array constants, here's a working example - in any blank cell, enter (or copy and paste) the formula.
Constant Contact offers dozens of reusable, mobile-responsive email templates for your business.
In mathematics, a constant function is a function whose output value is the same for every input value. For example, the function y(x) = 4 is a constant function, because its value is 4 regardless of the input. A constant may be used to define a constant function that ignores its arguments and always gives the same value.
In this lesson, we will learn what a constant function is and what it looks like on a graph.
In this lesson you will learn the definition of constant velocity, its important properties, and the equation that represents it.
Velocity is a vector, thus it has a direction; therefore, you can change the velocity by changing direction alone. A great example of this is a ball on a string spinning at constant speed.
Constant speed = constant distance traveled / total time taken. The best example of a body moving with constant speed is an object in space, such as a satellite.
Imagine you are driving your car on a straight, flat highway (neglect the curvature of the earth). You set the cruise control and stay in your lane.
A constant rate of change is anything that increases or decreases by the same amount for every trial; an example is driving down the highway at a constant speed.
The motion of any object is described with physical quantities like velocity, distance, displacement and acceleration.
An example of a concentric contraction is the raising of a weight during a bicep curl, performed as a series of constant-velocity shortening contractions.
What's a constant in a science experiment? The constant is the part of the experiment that never changes.
What is a constant in science? In an experiment following the scientific method, a constant is a variable that cannot be changed or is held constant.
Learn what an experimental constant is and get examples of the two main types of constants you may encounter in experiments.
What are some good examples of constant variables in science? One good example is the speed of light in vacuum.
Definition of constant variable: a variable whose value cannot be changed once it has been assigned a value. See also dependent variable. Some people refer to controlled variables as "constant variables"; controlled variables are often overlooked by researchers.
The term constant simply refers to something that is not variable - in statistics, and survey research in particular.
Examples of variables: question, independent variable (what I change), dependent variables.
Looking for some independent and dependent variable examples? The pressure of a gas is inversely proportional to its volume as long as the temperature remains constant.
An example of a constant comparison analysis: below is a brief excerpt of data and how one researcher analyzed it using constant comparison analysis.
The general equilibrium constant for such processes can be written in the usual way. Example: estimate the solubility of barium sulfate in a 0.020 M sodium sulfate solution.
This page explains what is meant by an equilibrium constant, introducing equilibrium constants expressed in terms of concentrations, K_c.
Example: calculate the value of the equilibrium constant, K_c, for the system shown, if 0.1908 moles of CO2, 0.0908 moles of H2 and 0.0092 moles of CO are present at equilibrium.
A look at the Arrhenius equation shows how rate constants vary with temperature and activation energy - if the rate constant doubles, for example.
Here is an example of an ideal gas problem where the volume of the gas is held constant.
What is exponential? Isn't this very close to constant exponential growth? Here's an example with a given starting amount.
Generalized Anxiety Disorder (GAD): worries and fears so constant that they interfere with your ability to function and relax.
Recent examples of "constant" on the web (adjective): "Lost in the legalistic view is any sense of the ethical consequences of going through life under constant surveillance."
Historical examples of "constant": "The surgeon was in constant attendance, but the malady baffled all his skill." (Brave and Bold, Horatio Alger)
Definition of constant in English: constant (adjective), occurring continuously over a period of time: "the constant background noise of the city".
Synonyms for constant are available at Thesaurus.com, with a free online thesaurus and antonyms.
| CommonCrawl |
Theoretical Computer Science Stack Exchange is a question and answer site for theoretical computer scientists and researchers in related fields.
Winning strategy in the game of triplets
The game of triplets is defined by a finite set of elements $X$, and a finite multi-set $T$ containing triplets of elements. Two players take turns picking elements from $X$ until all elements are taken. Then, the score of each player is the number of triplets from $T$ in which he has at least 2 elements.
A standard strategy-stealing argument shows that the first player can always score at least $|T|/2$. Suppose by contradiction that it is false. Then the second player can score more than $|T|/2$. But then the first player, copying the second player's winning strategy, can score more than $|T|/2$ too. This is a contradiction since the sum of scores is $|T|$.
QUESTION: what is an explicit strategy for the first player to get a score of at least $|T|/2$?
EDIT: Here is an explicit strategy for the first player to get at least $3|T|/8$. To each triplet in $T$, assign a potential $P(a,b)$ based on the number of its elements taken by the (first,second) player:
$$\begin{matrix} \bf a \downarrow b \rightarrow & \bf 0 & \bf 1 & \bf 2 & \bf 3 \\ \bf 0 & 3/8 & 0 & 0 & 0 \\ \bf 1 & 3/4 & 1/2 & 0 & \\ \bf 2 & 1 & 1 & & \\ \bf 3 & 1 & & & \\ \end{matrix}$$
Initially, every triplet has potential $3/8$, so the potential-sum is $3|T|/8$.
Player 1's strategy is: pick an element that maximizes the potential-sum. Suppose that element is $x$ and the element picked next by player 2 is $y$. I claim that the potential-sum after these two moves weakly increases:
The potential of a triplet that contains neither $x$ nor $y$ does not change.
The potential of a triplet that contains both $x$ and $y$ changes from $P(a,b)$ to $P(a+1,b+1)$, which is always at least as large.
The potential of a triplet that contains $x$ and not $y$ increases by $P(a+1,b)-P(a,b)$;
The potential of a triplet that contains $y$ and not $x$ decreases by $P(a,b)-P(a,b+1)$; it is easy to check in the table that $P(a,b)-P(a,b+1)\leq P(a+1,b)-P(a,b)$ (the decrease when going right is at most the increase when going down).
All in all, the potential-sum increases by the sum of $P(a+1,b)-P(a,b)$ over all triplets that contain $x$, and decreases by (at most) the sum of $P(a+1,b)-P(a,b)$ over all triplets that contain $y$. By the choice of $x$, the first sum is weakly larger. So the potential-sum weakly increases.
So the final potential-sum is at least $3|T|/8$. At the end, a triplet has potential $1$ ($0$) iff it is won by player 1 (2), so the final potential-sum equals player 1's score.
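A minimal sketch of the potential-greedy move for player 1, with triplets stored as 3-tuples and potentials as exact fractions (the data layout is one possible choice, not fixed by the post):

```python
from fractions import Fraction as F

# Potential table from above: P[(a, b)] for a + b <= 3, where a and b count
# the triplet's element slots taken by players 1 and 2 respectively.
P = {(0, 0): F(3, 8), (0, 1): F(0), (0, 2): F(0), (0, 3): F(0),
     (1, 0): F(3, 4), (1, 1): F(1, 2), (1, 2): F(0),
     (2, 0): F(1), (2, 1): F(1), (3, 0): F(1)}

def greedy_move(free_elements, triplets, taken1, taken2):
    """Player 1's move: pick a free element maximizing the potential-sum gain."""
    def gain(x):
        g = F(0)
        for t in triplets:                 # t is a 3-tuple, possibly with repeats
            m = t.count(x)
            if m:
                a = sum(e in taken1 for e in t)
                b = sum(e in taken2 for e in t)
                g += P[(a + m, b)] - P[(a, b)]
        return g
    return max(free_elements, key=gain)
```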
gt.game-theory combinatorial-game-theory
Erel Segal-Halevi
It's quite unlikely that there's one simple strategy, as is the case with most games where strategy-stealing proves that the first player can always win. – domotorp Nov 8 '18 at 9:25
I agree with domotorp. I suspect "take the element with the highest number of occurrences" is the right basic heuristic, though the number of occurrences isn't exactly the right thing to be counting. Strategy-stealing arguments usually mean that if you follow a certain heuristic, you're always able to play defensively when challenged and end up winning. The issue is figuring out how and when to play defensively. – Stella Biderman Nov 8 '18 at 17:04
To add to the previous commenters, it would be very interesting if a game of this type (with strategy-stealing in a natural framework other than "I cut you choose") were proven PSPACE-complete (with, for example, $T$ → a winning first move being PSPACE-complete). – Dmytro Taranovsky Nov 9 '18 at 5:06
This isn't a complete proof, but here's some justification for why known conjectures imply that the game may be computationally hard to solve. Namely, I'm going to argue that finding the correct first move is already probably tricky.
As a first step, we argue that the triplets game is harder (in the appropriate sense) than the $\textrm{Denser Induced Subgraph}$ game defined as follows.
Two players, A and B, alternate picking vertices on a common graph G. Vertices can only be picked once. When no more vertices remain to be picked, the subgraphs induced by each player's choices are compared. The player with the larger number of induced edges is declared the winner.
Proof outline:
Given an instance of the $\textrm{Denser Induced Subgraph}$ game with graph $G = (V,E)$, we construct a $\textrm{Triplets}$ instance as follows. Without loss of generality, assume $G$ has no isolated vertices. The set of elements in our instance will be $V \cup (E \times \{0,1\})$. For each edge $e \in E$ between vertices $u$ and $v$, we have two triplets of the form $(u, v, (e, 0))$ and $(u, v, (e, 1))$. Additionally, for each vertex $v \in V$, we throw in four additional triplets of the form $(v,v,v)$. This completes the reduction.
Now imagine the proceedings of the $\mathrm{Triplets}$ game. As long as some vertex from $V$ has not been picked, the choice of such a vertex strictly dominates that of any element from $E$. Indeed, picking an element from $E \times \{0,1\}$ only ever gives a potential score increase of $1$ (and also blocks the opponent from at most $1$ point), while picking an element from $V$ automatically gives a score increase of $4$, with potential for more.
Therefore, under optimal play, the first $|V|$ rounds will correspond to both players picking elements from $V$. After these rounds, the players alternate picking up the even-sized collection of triples that have not yet been claimed, which correspond to exactly the edges whose endpoints have been picked once by each player. Any reasonable strategy here, for either player, ends up picking up exactly half of those available triplets. The game ends with a sequence of NOOP moves on the already-picked-up triplets.
Let $V_A$ be the vertices chosen by player A, and $V_B$ those chosen by B. The score for player A is the sum of (i) four points per vertex chosen from the $(v,v,v)$ triplets (ii) two points per induced edge created from these vertices, and (iii) one point for each split edge. Therefore, the score is $4|V|/2 + 2|E[V_A]| + (|E| - |E[V_A]| - |E[V_B]|)$, where $E[S]$ is the set of edges induced by $S$. Since the first and last terms are ultimately equal for both players, the player with the larger induced subgraph wins. $\square$
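To make the reduction concrete, here is a small Python sketch (my own illustration, not from the answer) that builds the Triplets instance from a graph and evaluates the score expression $4|V|/2 + 2|E[V_A]| + (|E| - |E[V_A]| - |E[V_B]|)$ on a toy partition; the function names are invented.

```python
def triplets_from_graph(vertices, edges):
    """Build the Triplets instance described above: for each edge e = {u, v},
    add (u, v, (e, 0)) and (u, v, (e, 1)); for each vertex v, add four
    copies of (v, v, v)."""
    triplets = []
    for e in edges:
        u, v = tuple(e)
        triplets.append((u, v, (e, 0)))
        triplets.append((u, v, (e, 1)))
    for v in vertices:
        triplets.extend([(v, v, v)] * 4)
    return triplets

def induced_edges(edges, chosen):
    return {e for e in edges if set(e) <= chosen}

def player_a_score(vertices, edges, V_A, V_B):
    """Player A's score per the analysis: 4|V|/2 from vertex triplets,
    2 per edge induced by V_A, 1 per split edge."""
    E_A, E_B = induced_edges(edges, V_A), induced_edges(edges, V_B)
    return 4 * len(vertices) // 2 + 2 * len(E_A) + (len(edges) - len(E_A) - len(E_B))

# Toy check on a 4-cycle split evenly between the players.
V = {1, 2, 3, 4}
E = [frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 1)]]
print(len(triplets_from_graph(V, E)))        # 2|E| + 4|V| = 24 triplets
print(player_a_score(V, E, {1, 2}, {3, 4}))  # 8 + 2 + 2 = 12
```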
With this in mind, we can appeal to some of the work in the literature of detecting dense subgraphs. There's a ton of relevant work out there on this that one can appeal to, but for simplicity of analysis I'll appeal to a particular conjecture on the difficulty of finding dense random graphs in sparse random graphs (I believe that this dependence can be removed with just a little more thought, but this is not meant to be a formal proof).
The Planted Dense Subgraph Problem (informal). Let $G = (V,E)$ be a random graph sampled from the Erdos-Renyi distribution $G(n, 1/\sqrt{n})$. With probability $1/2$, we return $G$ as is. Otherwise, we let $V'$ be a uniformly random subset of $V$ of size $\sqrt{n}$. For each $u,v$ pair in $V'$, we add an edge $(u,v)$ to $E$ independently with probability $n^{-1/4}$. Only then do we return $G$. The problem is to, given only the output of the above, correctly identify whether or not the Erdos-Renyi graph was augmented.
The Planted Dense Subgraph Conjecture (informal). No polynomial-time algorithm can solve the Planted Dense Subgraph problem with probability at least $51\%$.
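For intuition, the following sketch samples an instance of the planted dense subgraph problem exactly as stated informally above; the function and parameter names are my own.

```python
import math
import random
from itertools import combinations

def sample_pds_instance(n, seed=None):
    """Sample G(n, 1/sqrt(n)); with probability 1/2, plant a denser subgraph
    on a random sqrt(n)-subset by adding each pair with probability n^(-1/4)."""
    rng = random.Random(seed)
    p, k, q = 1 / math.sqrt(n), round(math.sqrt(n)), n ** (-1 / 4)
    edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}
    planted = rng.random() < 0.5
    if planted:
        dense = rng.sample(range(n), k)
        edges |= {frozenset(e) for e in combinations(dense, 2) if rng.random() < q}
    return edges, planted  # the distinguisher sees only `edges`

edges, planted = sample_pds_instance(400, seed=0)
print(len(edges), planted)
```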
Suppose that the graph was augmented, and there is an unusually dense component. Since no poly-time algorithm can reliably detect this dense subgraph's presence, it also cannot reliably sample a vertex from this dense component (e.g. due to self-reducibility). Therefore, since (from Player A's perspective) it is selecting a random vertex from a pure Erdos-Renyi graph, it does not matter much which vertex A picks (up to a small change in its scoring that will end up not mattering1). However, if Player B is omniscient, it can reliably sample a vertex from the dense component on its first shot. This process repeats a superconstant number of times before B's choices begin unveiling the dense component to A (otherwise, a polynomial-time algorithm can traverse every path in this game tree to constant depth in order to solve the Planted Dense Subgraph problem). If the process repeated $r$ times before A catches on, then the first $r-1$ rounds can be seen as "freebie" rounds for B, while the $r$th round is the beginning of A and B fighting over the dense component, with B getting the first move and (by your strategy stealing argument) a winning subset.
Once the dense component is exhausted, the two players resume fighting over the rest of the graph. While A has chosen $r$ more vertices here than B has, B's first $r$ vertices are worth $\Omega(n^{1/4})$ times as much, and thus B is ultimately the winner.
1. By some type of concentration and pigeonhole argument, the difference between making the first choice and the second choice should not be more than $O(1)$ in the final score.
Therefore, despite the game being very weakly solved for player A, it's unlikely that it's computationally feasible for A to play out even the first move of the winning strategy.
An approach based on the hardness of the "normal" densest subgraph problem should not be difficult to attain here, either, and composing the reduction with a hardness of approximation result can likely be used to get some kind of hardness based on more mainstream conjectures (e.g., ETH). I'm not sure what the difficulty of moving up to NP-hardness (or beyond) may be.
Yonatan N
$\begingroup$ The reduction is very cool. Can you give a reference to this "The Planted Dense Subgraph" conjecture? $\endgroup$ – Erel Segal-Halevi Nov 12 '18 at 17:14
$\begingroup$ The conjecture has appeared a number of times under slightly varying flavors, including cc.gatech.edu/~klai9/FinalThesis.pdf (Conjecture 2), users.cs.duke.edu/~rongge/derivatives_ics.pdf (Densest Subgraph Assumption), proceedings.mlr.press/v40/Hajek15.pdf (PC Hypothesis), math.ias.edu/files/ABW10_STOC.pdf (DUE Assumption), core.ac.uk/download/pdf/62922882.pdf (Planted Dense Subgraph Conjecture), among others. The end results are similar enough that the above construction needs almost no modification to be adapted to the chosen flavor. $\endgroup$ – Yonatan N Nov 12 '18 at 17:40
$\begingroup$ Interesting, thanks! I have just added to the question, an explicit strategy by which the first player can win a score of at least $3|T|/8$. So $3|T|/8$ is computable but $|T|/2$ is not computable (assuming the conjecture is true). Do you have an idea, what is the largest computable score? $\endgroup$ – Erel Segal-Halevi Nov 12 '18 at 17:50
$\begingroup$ Not off the top of my head, but I'll see if I can think about it some more over the weekend. Nice 3/8 argument! $\endgroup$ – Yonatan N Nov 14 '18 at 17:44
Research on semi-supervised multi-graph classification algorithm based on MR-MGSSL for sensor network
Yang Gang1,
Zhang Na1,
Jin Tao1,
Wang Dawei1,
Kang Yinzhu1 &
Gao Feng2
In the era of networked information, the volume of data continues to grow, and data classification has become particularly important. Current semi-supervised multi-graph classification methods cannot classify information quickly and accurately. This paper therefore proposes the MR-MGSSL algorithm and applies it to semi-supervised multi-graph classification. After establishing the basic idea and computational framework of the MR-MGSSL algorithm, the algorithm is compared with other semi-supervised multi-graph classification methods, using the mining of optimal feature subsets in multi-graphs and the multi-graph vectorization time as benchmarks. The performance evaluation shows that, compared with other classification methods, the MR-MGSSL algorithm has lower sensitivity to the number of feature subgraphs and a shorter vectorization time. Finally, the method is used to extract and detect clouds in remote sensing images (GF-1 and GF-2).
With the rapid development of network information technology, network text, image, and other information resources are growing rapidly. Although the number of such resources is huge, only a small fraction have identified category labels; their dimensionality is relatively high, and many training samples are needed to obtain an ideal classifier [1]. Therefore, classifying these information resources quickly and accurately is very important. Semi-supervised learning is used to obtain a learner with good performance and thereby realize the automatic classification of large-scale images. It not only makes up for the shortcomings of traditional information search, but also classifies information by similarity, making information search simpler and more convenient [2].
At present, methods for semi-supervised multi-graph classification mainly include the decision tree method and the Bayesian method [3]. These methods detect text and image information resources efficiently, but they require the sample information to be labeled according to its characteristics, and unlabeled information cannot be detected [4]. Some unsupervised multi-graph classification methods, such as clustering, avoid this shortcoming of the decision tree and Bayesian methods, but their detection rate is low and they cannot be widely applied to supervised multi-graph classification. In addition, Wang Jing proposed a traffic classification method based on semi-supervised learning, which uses the feature similarity of information resources and a small amount of labeled data in the clustering process to determine the mapping between clusters and traffic types, thereby classifying information resources. This method not only reduces the amount of labeled data required during detection, but also ensures detection accuracy [5].
In this paper, a new MGSSL algorithm based on a scoring function is proposed to solve the semi-supervised small-scale multi-graph classification problem. To also handle the semi-supervised large-scale multi-graph problem, we combine the MGSSL algorithm with MapReduce and propose the MR-MGSSL algorithm, which not only achieves high detection precision but also makes up for the shortcomings of other classification and detection methods. This is of far-reaching significance for the detection of the large volumes of existing text and image resources. At the end of this article, the method is used to extract and detect clouds in remote sensing images (GF-1 and GF-2).
The specific contributions of this paper include:
An MR-MGSSL algorithm is proposed and applied to semi-supervised multi-graph classification.
The basic idea and calculation framework of the MR-MGSSL algorithm are presented.
By mining the optimal feature subsets in multi-graphs and taking the execution time of multi-graph vectorization as an example, the algorithm is compared with other semi-supervised multi-graph classification methods.
The semi-supervised method is applied to cloud extraction and detection in remote sensing images.
The rest of this paper is organized as follows. Section 2 discusses related work, followed by the algorithm framework in Section 3. MR-MGSSL algorithm is discussed in Section 4. Section 5 shows the experiment, and Section 6 concludes the paper with a summary and future research directions.
Text classification algorithm based on semi-supervised learning
Text classification with semi-supervised learning assumes the content of the text is known, and the text is automatically classified according to its feature similarity and the specified class labels. Text that is not labeled is mapped to a labeled category according to its similarity; mathematically, this process is a text mapping. A given text can be associated with one or more texts according to its similarity [6]. In general, semi-supervised text classification consists of two parts, the training process and the classification process. The training process constructs a classifier from manually classified samples using some classification algorithm. The classification process then assigns unclassified samples to categories using the classification method and the classifier built during training.
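As a minimal illustration of this two-stage process, here is a sketch using scikit-learn; it is a generic supervised baseline rather than the paper's method, and the corpus and labels are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Training stage: build a classifier from manually labeled documents.
train_docs = ["stock markets rally", "team wins the final", "election results in"]
train_labels = ["finance", "sports", "politics"]
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_docs, train_labels)

# Classification stage: assign categories to unlabeled documents.
print(clf.predict(["the cup final kicks off", "shares fall sharply"]))
```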
Text classification is widely used in semi-supervised multi-graph classification [7]. It can reduce the detection range of the samples and has high detection accuracy, but some deficiencies remain. The general semi-supervised multi-graph classification algorithm focuses only on the text labeled with class information and pays little attention to unlabeled text [8]. Although unlabeled text can be correlated through feature similarity to improve classification accuracy, labeling text is generally done manually, and the cost is relatively high [9].
Image recognition is the use of computers to process, analyze, and comprehend images in order to identify targets and objects in various patterns [10]. It focuses on the classification and description of various images. The purpose of image recognition is to allow the computer to automatically process the corresponding image information, without human intervention, to accomplish the tasks of image recognition and classification [11]. The basic task of image recognition is to analyze and process the original input image so that one or more objects of interest in the image may be extracted [12].
In everyday life, people complete the process of image recognition without noticing it, but letting computers implement automatic image recognition was a difficult problem for a long time [13]. The main difficulties are as follows. First, the algorithms themselves were not mature enough to identify complex images; classical image recognition frameworks involve many steps, including image pre-processing, target detection and segmentation, feature extraction, and classifier design [14]. Second, the program operating environment was limited, mainly by restrictions on computer hardware [15]. In recent years, digital image processing technology has continued to develop, pattern recognition theory has been continuously introduced, and CPU speed and memory capacity have increased by several orders of magnitude, so the above two issues have gradually eased [16]. Image processing and recognition technology, with its extensive application needs, will surely gain more attention from scholars at home and abroad in the near future [17].
Since the twenty-first century, image processing and recognition have been applied more and more in social networks, medical equipment, geographic information systems, information security, office automation systems, industrial automation, traffic control, postal systems, satellite photo transmission, and analysis [18]. In recent years, the computer technology, image processing technology, artificial intelligence, pattern recognition theory, etc. have become increasingly mature, and the image processing and recognition technology has been rapidly developed [19]. People are increasingly aware that image processing and recognition technologies have become inseparable from our daily lives [20].
Improved semi-supervised learning algorithm
There are some shortcomings in the existing classification methods for semi-supervised learning. The main improved methods of semi-supervised learning are as follows:
Dynamic clustering method: a small part of the text is labeled as training samples and taken as clustering centers, and other text information is gathered around them by feature similarity. Learning files are built from a small mixed set of labeled and unlabeled text, a text classifier is trained, and the text classification is then completed [21].
Multi-graph collaborative training method: an online-video semi-supervised classification method based on multi-view co-training. First, two representative features, textual and visual, are selected for each view, and the view feature vectors are used to build a network video classifier, giving a classification model. To obtain classification predictions, each view is propagated by a linear neighborhood propagation method. A co-training strategy selects unlabeled samples between the different views to update the classifier continuously, with relatively high classification accuracy [22].
Traffic classification: the traffic classification method uses a small amount of labeled data to guide the clustering process, determines the mapping relationship between clusters and traffic types, and ultimately classifies application-layer traffic [23]. It can explore unknown regions and its coverage is extensive, which makes up for the shortcomings of other semi-supervised classification methods in the unlabeled case and improves detection accuracy, while its requirements on data labeling are relatively low [24].
Integrated direct push (transductive ensemble) method [25]: several random subspaces are first formed inside the information resource, discriminant subspaces are derived from them, a neighborhood graph is constructed and a classifier is trained for each discriminant subspace, and finally these classifiers are fused by voting [26]. Experimental studies show that the method is not only more accurate but also more robust in parameter selection, and it classifies information resources better. In addition, it has an intuitive multiple-graph building strategy and can be coupled with other semi-supervised multi-graph algorithms [27].
Algorithm framework
Among semi-supervised multi-graph classification algorithms, MR-MGSSL is a multi-graph classification algorithm for datasets containing both labeled and unlabeled data [28]. The basic idea is to select some feature subgraphs from part of the multi-graph dataset, represent the multi-graphs as vectors according to these feature subgraphs, and then classify them with existing semi-supervised learning methods. This is summarized by two multi-graph feature subgraph measurement models, one for labeled and one for unlabeled data [29].
Characteristic subgraph measurement
First, the establishment of the characteristic subgraph selection model is as follows:
Given the multi-graph dataset $NF = \{NF_1, NF_2, \dots, NF_n\}$, the graph set $F_y = \{F \mid F \in NF_i,\ NF_i \in N_y\}$, the subgraph collection of $F_y$, $YF = \{yf \mid yf \subseteq F,\ F \in F_y\}$, and a feature subgraph set $R = \{r_1, \dots, r_n\} \subseteq YF$, the optimal feature subset is the most valuable set of feature subgraphs. The feature subgraph selection model is:
$$ R^{*}=\arg \max_{R} Y(R), \quad \text{s.t.}\ \left|R\right|=n $$
The value of the feature subgraph set $R$ is evaluated through $Y(R)$: the larger $Y(R)$, the more valuable the feature subgraphs. In addition, the feature subgraphs should satisfy the must-link, cannot-link, and separation constraints at the bag level and graph level, respectively [30].
The value $Y(R)$ of the feature subgraph set $R$ is generally defined as [31]:
$$ \begin{aligned} Y(R)={}&\frac{1}{2B}\sum_{t=1}^{n}\sum_{x_i x_j=-1}\left({\left({y_i}^{E_t}\right)}^{Ny}-{\left({y_j}^{E_t}\right)}^{Ny}\right)^2\\ &-\frac{1}{2C}\sum_{t=1}^{n}\sum_{x_i x_j=1}\left({\left({y_i}^{E_t}\right)}^{Ny}-{\left({y_j}^{E_t}\right)}^{Ny}\right)^2\\ &+\frac{1}{2{\left|{Ny}^{v}\right|}^2}\sum_{t=1}^{n}\sum_{NE_i,\,NE_j\in {Ny}^{v}}\left({\left({y_i}^{E_t}\right)}^{Ny}-{\left({y_j}^{E_t}\right)}^{Ny}\right)^2\\ &-\frac{1}{2{\left|{Ny}^{-}\right|}^2}\sum_{t=1}^{n}\sum_{E_i,\,E_j\in {Ny}^{-}}\left({\left({y_i}^{E_t}\right)}^{Ey}-{\left({y_j}^{E_t}\right)}^{Ey}\right)^2\\ &+\frac{1}{2{\left|{Ny}^{+}\right|}^2}\sum_{t=1}^{n}\sum_{E_i,\,E_j\in {Ny}^{+}}\left({\left({y_i}^{E_t}\right)}^{Ey}-{\left({y_j}^{E_t}\right)}^{Ey}\right)^2 \end{aligned} $$
where $B = \sum_{x_i x_j = -1} 1$ and $C = \sum_{x_i x_j = 1} 1$.
Through this value definition, the feature subgraph problem is transformed into finding the $n$ optimal feature subgraphs, and the following auxiliary matrices are constructed [32]:
${U}^{Ny}={\left[{u}_{ij}^{Ny}\right]}_{\left|Ny\right|\times \left|Ny\right|}$ and ${U}^{Ey}={\left[{u}_{ij}^{Ey}\right]}_{\left|Ey\right|\times \left|Ey\right|}$, where ${u}_{ij}^{Ny}$ and ${u}_{ij}^{Ey}$ are defined as follows:
$$ u_{ij}^{Ny}=\begin{cases} \dfrac{1}{B} & x_i x_j=-1\\[4pt] -\dfrac{1}{C} & x_i x_j=1\\[4pt] \dfrac{1}{{\left|{Ny}^{v}\right|}^2} & E_i, E_j\in {Ny}^{v}\\[4pt] 0 & \text{otherwise} \end{cases} $$
$$ u_{ij}^{Ey}=\begin{cases} -\dfrac{1}{{\left|{Ny}^{-}\right|}^2} & E_i E_j\in {Ny}^{-}\\[4pt] \dfrac{1}{{\left|{Ny}^{+}\right|}^2} & E_i E_j\in {Ny}^{+}\\[4pt] 0 & \text{otherwise} \end{cases} $$
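Before continuing the derivation, here is a minimal NumPy sketch of how $U^{Ny}$ could be assembled from the cases above; the precedence between overlapping cases (a pair that is both labeled and in $Ny^{v}$) is an assumption, since the paper leaves it implicit, and $U^{Ey}$ would be built analogously.

```python
import numpy as np

def build_U_Ny(x, Nv):
    """Assemble U^{Ny} from the case definition above.

    x  : bag labels in {-1, +1} (0 for unlabeled bags); assumes both
         label classes are present, so B and C are nonzero.
    Nv : index set Ny^v (a set of bag indices)."""
    m = len(x)
    B = sum(1 for i in range(m) for j in range(m) if x[i] * x[j] == -1)
    C = sum(1 for i in range(m) for j in range(m) if x[i] * x[j] == 1)
    U = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if x[i] * x[j] == -1:
                U[i, j] = 1 / B
            elif x[i] * x[j] == 1:
                U[i, j] = -1 / C
            elif i in Nv and j in Nv:
                U[i, j] = 1 / len(Nv) ** 2
    return U
```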
$$ \begin{aligned} Y(R)&=Y(R)^{Ny}+Y(R)^{Ey}\\ &=\frac{1}{2}\sum_{t=1}^{n}\sum_{x_i x_j}\left({\left({y_i}^{E_t}\right)}^{Ny}-{\left({y_j}^{E_t}\right)}^{Ny}\right)^2 u_{ij}^{Ny}+\frac{1}{2}\sum_{t=1}^{n}\sum_{E_i E_j}\left({\left({y_i}^{E_t}\right)}^{Ey}-{\left({y_j}^{E_t}\right)}^{Ey}\right)^2 u_{ij}^{Ey} \end{aligned} $$
Expanding the first term yields [33]:
$$ \begin{aligned} Y(R)^{Ny}&=\frac{1}{2}\sum_{t=1}^{n}\sum_{x_i x_j}\left({\left({y_i}^{E_t}\right)}^{Ny}-{\left({y_j}^{E_t}\right)}^{Ny}\right)^2 u_{ij}^{Ny}\\ &=\sum_{t=1}^{n}\sum_{x_i x_j}\left({\left({\left({y_i}^{E_t}\right)}^{Ny}\right)}^2 u_{ij}^{Ny}-{\left({y_i}^{E_t}\right)}^{Ny}{\left({y_j}^{E_t}\right)}^{Ny} u_{ij}^{Ny}\right)\\ &=\sum_{t=1}^{n}\left({\left({r}_{E_t}^{Ny}\right)}^{S}{C}_{Ny}\,{r}_{E_t}^{Ny}-{\left({r}_{E_t}^{Ny}\right)}^{S}{U}_{Ny}\,{r}_{E_t}^{Ny}\right)\\ &=\sum_{t=1}^{n}{\left({r}_{E_t}^{Ny}\right)}^{S}{M}_{Ny}\,{r}_{E_t}^{Ny} \end{aligned} $$
where $C_{Ny}$ is a diagonal matrix with diagonal entries $c_{ii}^{Ny}={\sum}_{j=1}^{\left| Ny\right|}{u}_{ij}^{Ny}$ and $M_{Ny}=C_{Ny}-U_{Ny}$. The vector ${r}_{E}^{Ny}={\left[{r}_{E}^{Ny_1},{r}_{E}^{Ny_2},\dots,{r}_{E}^{Ny_{\left| Ny\right|}}\right]}^{S}$ indicates which multi-graphs contain the subgraph: an entry has weight $1$ if and only if the corresponding multi-graph contains it. Similarly, at the graph level [34]:
$$ \begin{aligned} Y(R)^{Ey}&=\frac{1}{2}\sum_{t=1}^{n}\sum_{E_i E_j}\left({\left({y_i}^{E_t}\right)}^{Ey}-{\left({y_j}^{E_t}\right)}^{Ey}\right)^2 u_{ij}^{Ey}\\ &=\sum_{t=1}^{n}\left({\left({r}_{E_t}^{Ey}\right)}^{S}{C}_{Ey}\,{r}_{E_t}^{Ey}-{\left({r}_{E_t}^{Ey}\right)}^{S}{U}_{Ey}\,{r}_{E_t}^{Ey}\right)\\ &=\sum_{t=1}^{n}{\left({r}_{E_t}^{Ey}\right)}^{S}{M}_{Ey}\,{r}_{E_t}^{Ey} \end{aligned} $$
Combining the two results above gives:
$$ \begin{aligned} Y(R)&=Y(R)^{Ny}+Y(R)^{Ey}\\ &=\sum_{t=1}^{n}\left({\left({r}_{E_t}^{Ny}\right)}^{S}{M}_{Ny}\,{r}_{E_t}^{Ny}+{\left({r}_{E_t}^{Ey}\right)}^{S}{M}_{Ey}\,{r}_{E_t}^{Ey}\right)\\ &=\sum_{t=1}^{n}{\left({r}_{E_t}\right)}^{S}M\,{r}_{E_t} \end{aligned} $$
Thus, the value of a single characteristic subgraph can be expressed as:
$$ Y\left(E_t\right)={\left({r}_{E_t}\right)}^{S}M\,{r}_{E_t} $$
$$ Y(R)=\sum_{t=1}^{n}Y\left(E_t\right) $$
$$ R^{*}=\arg \max_{R} \sum_{t=1}^{n}Y\left(E_t\right) $$
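Once $M = C - U$ is formed from each auxiliary matrix, selection reduces to scoring every candidate subgraph through its indicator vector and keeping the $n$ best. A sketch, assuming the 0/1 indicator matrices are given (the function names are mine):

```python
import numpy as np

def laplacian_like(U):
    """M = C - U, with C diagonal holding the row sums of U (as above)."""
    return np.diag(U.sum(axis=1)) - U

def select_top_n(R_Ny, R_Ey, U_Ny, U_Ey, n):
    """R_Ny[:, t] (resp. R_Ey[:, t]) is the 0/1 indicator of which bags
    (resp. graphs) contain candidate subgraph t. Returns the indices of
    the n candidates maximizing Y(E_t) = r^S M_Ny r + r^S M_Ey r."""
    M_Ny, M_Ey = laplacian_like(U_Ny), laplacian_like(U_Ey)
    scores = (np.einsum("it,ij,jt->t", R_Ny, M_Ny, R_Ny)
              + np.einsum("it,ij,jt->t", R_Ey, M_Ey, R_Ey))
    return np.argsort(scores)[::-1][:n]
```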
Characteristic subgraph selection algorithm: MGSSL algorithm
MGSSC is a general feature subgraph selection method. Its main procedure is to select, in the information resource, the feature subgraphs with weight 1 and weight 0, use these as starting nodes, and run a depth-first search over the information resource until the search is complete. The MGSSL algorithm is described in detail below [35]:
The MGSSL algorithm selects the optimal feature subset $R$ with the MGSSC algorithm, transforms each multi-graph in the training dataset $N_Y$ into a vector, and finally performs classification with a traditional semi-supervised classification method [36].
The main calculation steps of MGSSL comprise an input part and an output part, as follows:
Input part: the training multi-graph dataset $N_y$ and the test multi-graph dataset $S$, together with the support threshold $r$ for frequent subgraphs and the number $n$ of optimal feature subgraphs [37].
Output part: the class labels of the multi-graphs in the test dataset; the procedure is divided into a training phase and a test phase.
Training stage: first select the optimal feature subgraphs $R = \mathrm{MGSSC}(N_y, \gamma, n)$, then represent the multi-graph data in $N_y$ with the vector set $X$, and finally obtain the classification model $F$.
Test phase: transform each multi-graph in $S$ into a vector $X_t$, then predict the class label of $X_t$ according to the model $F$, until all class labels have been predicted.
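Putting the training and test phases together, a minimal pipeline might look as follows; `mgssc_select` and `vectorize` are hypothetical stand-ins for the selection and vectorization routines described above, and scikit-learn's LabelSpreading is used purely as an example of a traditional semi-supervised classifier, not as the paper's specific choice.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def classify_multigraphs(train_bags, train_labels, test_bags, gamma, n,
                         mgssc_select, vectorize):
    """train_labels uses -1 for unlabeled bags (scikit-learn's convention).
    mgssc_select(bags, gamma, n) -> feature subgraphs R;
    vectorize(bag, R) -> 0/1 feature vector."""
    R = mgssc_select(train_bags, gamma, n)            # training: pick subgraphs
    X = np.array([vectorize(b, R) for b in train_bags])
    model = LabelSpreading().fit(X, train_labels)     # training: fit model F
    Xt = np.array([vectorize(b, R) for b in test_bags])
    return model.predict(Xt)                          # test: predict class labels
```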
The data processing flow chart is shown in Fig. 1.
Flow chart of data processing
MR-MGSSL algorithm
The MGSSL algorithm usually performs classification by centralized processing and cannot compute directly when dealing with semi-supervised large-scale multi-graph classification. To address this shortcoming, the MR-MGSSL algorithm, which combines the MapReduce framework with MGSSL, is proposed for semi-supervised large-scale multi-graph classification.
MR-MGSSL semi-supervised large-scale multi-graph classification algorithm
In semi-supervised large-scale multi-graph classification, MR-MGSSL is generally divided into the three steps shown below (Fig. 2).
Steps of classification calculation method MR-MGSSL
Training data vectorization
The existing MGSSL algorithm cannot be applied directly to semi-supervised large-scale multi-graph classification. We must first select the feature subgraphs and transform the multi-graph data into feature vectors, then use the MGSSL algorithm to find rules in the transformed feature vectors and construct a subgraph model for prediction.
On the basis of the MR-MGSSL algorithm, an algorithm is proposed in the paper to select the optimal feature subset.
At present, the selection of feature subsets is determined by the scoring function applied to single records. Therefore, in the semi-supervised multi-graph classification problem, we first determine the score of each single frequent subgraph and then select the $n$ optimal feature subgraphs with the largest scores.
In general, during the selection of the feature subset, a frequently occurring subgraph is selected first and its value is calculated. Calculating the score of $r_{e_i}$ requires the matrices $M_{Ny}$, $M_{Ey}$, $r_{Ny}$, and $r_{Ey}$. Within one text message, $M_{Ny}$ and $M_{Ey}$ are the same for every frequent subgraph $r_{e_i}$, so it is only necessary to compute which subgraphs are contained in $N_y$ and $E_t$. The value of each feature subgraph is then calculated according to the formula $Y\left({r}_{ei}\right)={r}_{Ny}^{S}{M}_{Ny}{r}_{Ny}+{r}_{E_t}^{S}{M}_{E_t}{r}_{E_t}$. Finally, starting from the partial optimal feature subgraphs, the values of the feature subgraphs of all the text information are calculated and expressed as vectors.
Pre-calculate the matrices $M_{Ny}$ and $M_{Ey}$ and the values of the frequent feature subgraphs.
Pre-calculation method
Calculate the matrices $M_{Ny}$ and $M_{Ey}$, the ids of the text information, and the lists $Bag\text{-}list$ and $Gra\text{-}list$ of $E_t$. The multi-graph is represented by the function record of the graph selection stage. For a labeled multi-graph, when the class label is positive it is expressed as the pairs $\langle 1, 1\rangle$ and $\langle 4, |graph|\rangle$ (lines 2–3); when it is negative, as $\langle 2, 1\rangle$ and $\langle 5, |graph|\rangle$ (lines 4–5). An unlabeled multi-graph is expressed as $\langle 3, 1\rangle$ and $\langle 6, |graph|\rangle$ (line 6). Keys 1 to 8 jointly support the calculation of $|Ny^{+}|$, $|Ny^{-}|$, $|Ny^{v}|$, $|E_t^{+}|$, $|E_t^{-}|$, $|E_t^{v}|$, $Bag\text{-}list$, and $Gra\text{-}list$, which are computed from these key values in lines 12 to 14. Finally, from these quantities, $M_{Ny}$ and $M_{Ey}$ are calculated.
The MR-MGSSL algorithm is used for this pre-calculation.
MR-MGSSL algorithm:
In the prediction method, the multi-graphs and super multi-graphs are obtained first, and then it is determined whether the frequency of each frequent subgraph has already been calculated. If so, it is output directly according to the calculation step; otherwise, it is judged again until it can be computed and output. Finally, the calculated frequency is compared with the threshold, and the multi-graphs and super multi-graphs of the frequent subgraphs are output.
Selection and value calculation of the optimal feature subgraphs: a feature subgraph is a frequent subgraph that occurs with the highest frequency in the text information. To select frequent feature subgraphs, the frequency of each subgraph occurring in the text information is calculated first, and the multi-graphs and super multi-graphs of the frequent subgraphs are then determined from these frequencies. In general, the text information is divided into blocks and the frequency of each subgraph within each block is determined; once determined, it is output, otherwise the subgraph frequency is recomputed until it is determined and then output. Finally, the frequency over all the text information is obtained from the known per-block output frequencies, and the optimal feature subset of the whole text information is determined by comparison with the maximum and minimum thresholds.
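The per-block frequency counting maps naturally onto MapReduce. A schematic mapper/reducer pair (plain Python, with the driver omitted and `enumerate_subgraphs` a hypothetical stand-in for the subgraph enumeration) might look like this:

```python
from collections import defaultdict

def mapper(split, enumerate_subgraphs):
    """Map task: emit (subgraph_key, 1) for every occurrence in this split
    of the multi-graph data."""
    for graph in split:
        for sg in enumerate_subgraphs(graph):
            yield sg, 1

def reducer(pairs, min_support):
    """Reduce task: sum the per-split counts and keep only the subgraphs
    whose total frequency meets the support threshold."""
    counts = defaultdict(int)
    for key, count in pairs:
        counts[key] += count
    return {key: c for key, c in counts.items() if c >= min_support}
```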
In general, the selection of the optimal feature subgraph mainly uses the MR-MGSSL algorithm.
MR-MGSSL algorithm
The optimal feature subgraphs are usually solved by working from local to global. The basic idea is to first output the frequent subgraphs of each partition, then obtain the feature subgraphs among these partial frequent subgraphs, and finally obtain the optimal feature subgraphs of the whole text information. The specific calculation method is as follows.
Input: information on the optimal feature subgraphs $H= \mathrm{list}\left(u,{N}_{(u)},{NRE}_u^1\right)$, ${N}_y=\left\{{NE}_1,\dots, {NE}_{NY}\right\}$, and $E_y = \{E_1, \dots, E_{NY}\}$
Output: optimal feature subgraphs $H$ and $NE$, and the feature vector set $U$ based on $H$.
1. $U = \varnothing$
2. While $NE_1 \in NG_y$, continue
3. Denote the zero-dimensional vector of $H$ by $\theta$
4. While $u_h \in H_1$, continue
5. When ${NE}_1\in {YNE}_{uh}^1$, continue
6. Set the corresponding weight of $\theta$ to $1$
7. $U = U \cup \{\theta\}$
Graph vectorization for testing generally proceeds through the following steps (a code sketch follows the list below).
Input: test multi-graph set ${N}_y = \{NE_1, \dots, NE_j\}$.
Output: the matrix corresponding to the test multi-graphs.
1. $US = \varnothing$
2. When $NE_i \in NE_s$, continue
3. Set the vector corresponding to $NE_i$ as $u_i$
4. $u_i = EU(HE, NE)$
5. $US = US \cup \{u_i\}$
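Concretely, the containment test behind these steps can be written as a short routine: each multi-graph (a bag of graphs) maps to a 0/1 vector whose $t$-th entry records whether any graph in the bag contains the $t$-th feature subgraph. Here `contains` is a hypothetical subgraph-isomorphism test:

```python
def vectorize_bag(bag, feature_subgraphs, contains):
    """bag: iterable of graphs; feature_subgraphs: the selected set H;
    contains(graph, subgraph) -> bool. Returns the 0/1 feature vector."""
    return [1 if any(contains(g, sg) for g in bag) else 0
            for sg in feature_subgraphs]

def vectorize_test_set(bags, feature_subgraphs, contains):
    return [vectorize_bag(b, feature_subgraphs, contains) for b in bags]
```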
Graph vectorization is realized from the vectors of the frequent subgraphs of each block: at the map end, the feature frequent subgraphs of each block are input and output; at the reduce end, $Bag\text{-}list$ and $Gra\text{-}list$ are obtained; finally, all the subgraphs of the text information are collected and the training multi-graphs are vectorized.
The performance of the MR-MGSSL algorithm is evaluated by comparing it with the baseline algorithm and the MGSSL+M algorithm, mainly on two indicators: mining time and vectorization time.
Evaluation of mining time
The following figure shows the mining times of the optimal feature subset on 40 labeled multi-graph datasets for MR-MGSSL, MGSSL+M, and the baseline (Fig. 3).
Mining times of the optimal feature subset on 40 labeled multi-graph datasets for MR-MGSSL, MGSSL+M, and the baseline
From the 40 labeled multi-graph datasets (DBLP), we can see that when the number of feature subgraphs and the threshold are the same, the MR-MGSSL algorithm needs more time than the baseline and MGSSL+M under the same conditions: the baseline algorithm only needs to mine the feature subgraphs, while the MR-MGSSL algorithm needs to mine not only the feature subgraphs but also the feature subgraphs of $E_t$. The mining time increases with the amount of text information.
Vectorization time performance evaluation
The following figure shows the vectorization times of the optimal feature subset on the 40 labeled multi-graph datasets for MR-MGSSL, MGSSL+M, and the baseline.
It is clearly evident from Fig. 4 that the vectorization time of the MR-MGSSL algorithm is shorter than that of the other methods, as in the process it only needs to vectorize the feature subgraphs in order to vectorize the entire information text. The other two methods also need to test the similarity of all the data in the text information. In addition, when more feature subgraphs are mined from the text information, the other two methods need a longer time for multi-graph vectorization. In general, the sensitivity of the other two methods to the number of subgraphs is higher than that of the MR-MGSSL algorithm.
Vectorization times of the optimal feature subset on 40 labeled multi-graph datasets for MR-MGSSL, MGSSL+M, and the baseline
Algorithm application
The region growing method and the support vector machine method are selected as references, and GF-1 and GF-2 remote sensing images are selected for cloud detection experiments. The experimental data are shown in Table 1, and the comparison covers two aspects: visual effect and detection accuracy. The region growing method and the support vector machine method are compared with the method in this paper; the experimental results are shown in Fig. 4, where the red part is the detected cloud area.
Table 1 Experimental data
Figure 4 compares the experimental results of the region growing method, the support vector machine method, and the method in this paper. In the figure, the orange circles mark missed cloud areas and the blue circles mark falsely detected cloud areas. It can be seen that the visual effect of the method in this paper is the best. In the first image (1601), a small amount of thin cloud is missed by the support vector machine method (Fig. 5). In the second image (1602), a large amount of thin cloud is missed by the region growing algorithm. This shows that the method proposed in this paper effectively improves the accuracy of cloud detection.
Comparison of cloud detection algorithm results in remote sensing images. a Original image. b Region growth method. c SVM. d New method. e Real cloud
In the experiment, the actual cloud area was manually drawn. The accuracy of cloud detection was evaluated using three indicators: precision rate, recall rate, and error rate. The calculation formulas are
$$ \mathrm{PR}=\frac{\mathrm{TC}}{\mathrm{FA}} $$
$$ \mathrm{RR}=\frac{\mathrm{TC}}{\mathrm{TA}} $$
$$ \mathrm{ER}=\frac{\mathrm{TF}+\mathrm{FT}}{\mathrm{NA}} $$
in which PR is the precision rate, TC is the number of true cloud pixels accurately identified, FA is the total number of pixels identified as cloud, and RR is the recall rate. TA is the number of true cloud pixels. ER is the error rate, TF is the number of true cloud pixels misjudged as non-cloud, FT is the number of non-cloud pixels misjudged as cloud, and NA is the total number of pixels. The final results are shown in Table 2.
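Given boolean masks for the predicted and ground-truth cloud pixels, the three indicators reduce to a few lines of NumPy (interpreting TF as cloud misjudged as non-cloud and FT as the reverse, per the definitions above):

```python
import numpy as np

def cloud_metrics(pred, truth):
    """pred, truth: boolean arrays of equal shape, True = cloud pixel.
    Returns (PR, RR, ER) as defined above."""
    TC = np.sum(pred & truth)    # true cloud pixels correctly identified
    FA = np.sum(pred)            # total pixels identified as cloud
    TA = np.sum(truth)           # total true cloud pixels
    TF = np.sum(~pred & truth)   # cloud misjudged as non-cloud
    FT = np.sum(pred & ~truth)   # non-cloud misjudged as cloud
    NA = truth.size              # total number of pixels
    return TC / FA, TC / TA, (TF + FT) / NA
```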
Table 2 Comparison of accuracy indicators of different cloud detection algorithms (%)
Quantitative analysis of the cloud detection results is given in the figure and Table 2. The region growing algorithm is affected by the selection of seeds and the similar-region criterion, and it easily misses thin clouds at the edges, which leads to fewer accurately identified true cloud pixels TC and fewer total detected cloud pixels FA; its precision rate on both images is above 90%, but its recall rate is low. The results of the support vector machine method are affected by the selection and training of the samples: although the recall rate improves over the region growing algorithm, the overall error rate is higher. In the first image, the precision rate of the region growing method is as high as 99.22%, but the recall rate is only 49.92%, because large areas of cloud edges and thin clouds are missed; the support vector machine method misjudges houses as cloud. The algorithm in this paper has an obvious advantage in recall rate and error rate: the recall rate is around 90%, the highest error rate is 6.03%, and the lowest error rate is only 0.89%.
Based on an analysis of the existing problems of semi-supervised multi-graph classification, the MR-MGSSL algorithm is proposed, the calculation steps of each factor in the semi-supervised classification algorithm are determined, and an evaluation system is established. Comparing the proposed algorithm with other classification methods on mining time and vectorization time shows that, on the one hand, the proposed algorithm has a longer mining time for the optimal feature subgraphs, and this time increases with the amount of text information; on the other hand, it has a shorter subgraph vectorization time, which is positively correlated with the number of optimal feature subgraphs, and a lower sensitivity to the number of subgraphs. This affirms the feasibility of the MR-MGSSL algorithm for semi-supervised multi-graph classification, reducing communication cost and improving the efficiency of the algorithm.
W.J. Zheng, L.I. Lei, S.O. Science, Research on combined semi-supervised SVM cluster kernel algorithm based on graph. Computer Technology & Development (2014)
L. Jia, Semi-supervised multi-class classification algorithm based on local learning. J Comput Appl 32(12), 3308–3310 (2012)
J. Lv, Semi-supervised multi-class classification algorithm based on local learning// information engineering and applications. Springer London (2012)
X.Q. Wang, Research on multi-view semi-supervised learning algorithm based on co-learning// international conference on machine learning and cybernetics. IEEE 20(6), 1276–1280 (2016)
Y. Zhao, G. J. Wang, A multi-classification algorithm of semi-supervised support vector data description based on pairwise constraints// proceedings of 2013 Chinese intelligent automation conference. Springer Berlin Heidelberg 20(5), 531-538 (2013).
D.Q. Xue, The research on semi-supervised support vector data description multi-classification algorithm. Adv. Mater. Res. 26(5), 1115–1120 (2011)
S. Ding, H. Jia, L. Zhang, Research of semi-supervised spectral clustering algorithm based on pairwise constraints. Neural Comput. Applic. 24(1), 211–219 (2014)
K. Mardia, J. Kent, J. Bibby, Multivariate analysis. Academic Press, San Diego, CA, 300–325 (1980)
M. Grbovic, C. Dance, S. Vucetic, Sparse principal component analysis with constraints //Proc. of 26th AAAI , 935-941(2012).
W. Yue, K.C. Ho, Unified near-field and far-field localization for AOA and hybrid AOA-TDOA positionings. IEEE Trans. Wirel. Commun. 17(11), 1242–1254 (2018)
Z. Yi, Y. Wu, J. Yan, H. Wang, 3D inversion of full gravity gradient tensor data in spherical coordinate system using local north-oriented frame. Earth Planets Space 70(12), 58–58 (2018)
J. Wang, X.J. Cheng, J.Q. Liu, Y.J. Wen, A enhanced algorithm based on RSSI and quasi Newton method for the node localization in wireless sensor networks. Comput. Knowl. Technol. 12(8), 222–225 (2016)
G.Q. Zhou, L.J. YANG, Z. Liu, Analysis on the influence of base station layout on the fuzzy region distribution and positioning accuracy based on TDOA positioning. J. Nav. Univ. Eng. 29(11), 96–101 (2017)
Y. Tuo, S. Wang, Wang, reliability-based robust online constructive fuzzy positioning control of a turret-moored floating production storage and offloading vessel. IEEE Access. 6(8), 36019–36030 (2018)
Y. Tuo, Y. Wang, S. Wang, Reliability-based robust online constructive fuzzy positioning control of a turret-moored floating production storage and offloading vessel. IEEE Access. 6(10), 36019–36030 (2018)
S. Song, W. Zhang, P. Han, D. Zou, Sliding window method for vehicles moving on a long track. Veh. Syst. Dyn. 56(1), 113–127 (2018)
A.N.Z. Rashed, A. Mohammed, H.A. Sharshar, A.M. El-Eraki, Fast routing algorithm in optical multistage interconnection networks using fast window method. Int J Advanced Res Electron Commun Eng 6(1), 37–43 (2017)
J. Kasza, K. Hemming, R. Hooper, J. Matthews, A. Forbes, Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials. Stat. Methods Med. Res. 28(3), 703–716 (2019)
I. Hanasaki, C. Hosokawa, Non-uniform stochastic dynamics of nanoparticle clusters at a solid–liquid interface induced by laser trapping. Japanese Journal of Applied Physics 58(SD), 07 (2019)
S. M. M. Gilani, T. Hong, W. Jin, G. Zhao, H. M. Heang, C. Xu, Mobility management in IEEE 802.11 WLAN using SDN/NFV technologies. EURASIP J. Wirel. Commun. Netw 67(12), 56-62 (2017).
K. Nahida, C. Yin, Y. Hu, Z.A. Arain, C. Pan, I. Khan, Y. Zhang, G.M.S. Rahman, Handover based on AP load in software defined Wi-fi systems. J. Commun. Netw. 19(6), 596–604 (2017)
T. Zahid, X. Hei, W. Cheng, A. Ahmad, P. Maruf, On the tradeoff between performance and programmability for software defined WiFi networks. Wirel. Commun. Mob. Comput 35-41 (2018).
L. Li, G. Oikonomou, M. Beach, R. Nejabati, D. Simeonidou, in Paper presented at IEEE International Conference on Communications. An SDN agent-enabled rate adaptation framework for WLAN (Shanghai, 2019).
K. Kostal, R. Bencel, M. Ries, P. Truchly, I. Kotuliak, High performance SDN WLAN architecture. Sensors 19(8), 18-25(2019).
E. Coronado, S.N. Khan, R. Riggio, 5G-EmPOWER: A software-defined networking platform for 5G radio access networks. IEEE Trans. Netw. Serv. Manag. 16(2), 715–728 (2019)
E. Coronado, E.T. Garriga, J. Villalon, A. Garrido, L. Goratti, R. Riggio, SDN@play: Software-defined multicasting in enterprise WLANs. IEEE Commun 57(7), 85–91 (2019)
A. Sen, K. M. Sivalingam, Testbed evaluation of a seamless handover mechanism for an SDN-based enterprise WLAN. Sadhana Acad 44(12), 243 (2019).
B. Dezfouli, V. Esmaeelzadeh, J. Sheth, M. Radi, A review of software-defined WLANs: Architectures and central control mechanisms. IEEE Commun 21(1), 431–463 (2019)
S. Zhu, Z. Sun, Y. Lu, L. Zhang, Y. Wei, G. Min, Centralized QoS routing using network calculus for SDN-based streaming media networks. IEEE Access 7(12), 146566–146576 (2019)
X. Zhong, L. Zhang, Y. Wei, Dynamic load-balancing vertical control for large-scale software-defined internet of things. IEEE Access 7(12), 140769–140780 (2019)
P. Dong, K. Gao, J. Xie, W. Tang, N. Xiong, A. Vasilakos, Receiver-side TCP countermeasure in cellular networks. Sensors 19(12), 27–32 (2019)
Z. Kuang, G. Liu, G. Li, X. Deng, Energy efficient resource allocation algorithm in energy harvesting-based D2D heterogeneous networks. IEEE Internet Things J. 6(1), 557–567 (2019)
Z.H. Huang, X. Xu, H.H. Zhu, M.C. Zhou, An efficient group recommendation model with multiattention-based neural networks. IEEE Transactions on Neural Networks and Learning Systems (2020)
R. Jiang, M. Y. Shi, W. Zhou, A privacy security risk analysis method for medical big data in urban computing. IEEE Access 7(12), 143841-143854(2019).
Y. Sun, C. Xu, G.F. Li, W.F. Xu, J.Y. Kong, D. Jiang, B. Tao, D.S. Chen, Intelligent Human Computer Interaction Based on Non Redundant EMG SignalAlexandria Engineering Journal (2020)
W. Wei, H. Song, W. Li, P. Shen, A. Vasilakos, Gradient-driven parking navigation using a continuous information potential field based on wireless sensor network. Information Sciences 408(2), 100-114(2017).
Z. Wan, N. Xiong, N. Ghani, A. V. Vasilakos, L. Zhou, Adaptive unequal protection for wireless video transmission over IEEE 802.11 e networks. Multimedia Tools and Applications 72(1), 541-571(2014).
Supported by the science and technology project of the State Grid Corporation of China, research on intelligent infrared image diagnosis of substation equipment (520530190003).
State Grid Shanxi Electric Power Research Institute, Taiyuan, 030001, China
Yang Gang, Zhang Na, Jin Tao, Wang Dawei & Kang Yinzhu
Modest Moistens & Harmonious Technology Co. Ltd, Beijing, 100193, China
Gao Feng
Yang Gang
Zhang Na
Jin Tao
Wang Dawei
Kang Yinzhu
Correspondence to Yang Gang.
We have no competing interests.
Gang, Y., Na, Z., Tao, J. et al. Research on semi-supervised multi-graph classification algorithm based on MR-MGSSL for sensor network. J Wireless Com Network 2020, 130 (2020). https://doi.org/10.1186/s13638-020-01745-x
Sensors network
Semi-supervised multi-graph
Feature subgraph
Smart Cyber-Physical Systems
The University of York (89)
Mathematics (York) (89)
Brown, Peter (2019) On constructions of quantum-secure device-independent randomness expansion protocols. PhD thesis, University of York.
Rana, Nimit (2019) A few problems on stochastic geometric wave equations. PhD thesis, University of York.
Rendell, Nicola (2019) Infrared behaviour of propagators in cosmological spacetimes. PhD thesis, University of York.
Albaity, Majed (2018) Representations of reflection monoids. PhD thesis, University of York.
Brown, Robert (2018) From Lie algebras to Chevalley groups. MSc by research thesis, University of York.
Dong, Wenfeng (2018) The Evaluation of Gas Sales Agreements. PhD thesis, University of York.
Dumbrell, Edward Mark (2018) On the Rescaled Hitting Time and Return Time Distributions to Asymptotically Small Sets. PhD thesis, University of York.
Hargreaves, Jessica (2018) Wavelet Analysis of Nonstationary Circadian Time Series. PhD thesis, University of York.
Harrington, Benen (2018) Cohomology of Burnside Rings. PhD thesis, University of York.
Ji, Zhongmei (2018) Estimation of Sparse Single Index Vector Autoregression Models. PhD thesis, University of York.
Kou, Xiaochen (2018) An Iterative Approach for Model Selection in A Class of Semiparametric Models. PhD thesis, University of York.
Martin, Samuel (2018) On the Clebsch-Gordan Problem in Prime Characteristic. PhD thesis, University of York.
Measures, Kayleigh (2018) Moments of distances between centres of Ford spheres. PhD thesis, University of York.
Parini, Robert Charles (2018) Classical integrable field theories with defects and near-integrable boundaries. PhD thesis, University of York.
Tosasukul, Jiraroj (2018) Nonparametric High-Dimensional Time Series: Estimation and Prediction. PhD thesis, University of York.
Wei, Lingling (2018) HOMOGENEITY PURSUIT AND STRUCTURE IDENTIFICATION IN FUNCTIONAL-COEFFICIENT MODELS. PhD thesis, University of York.
Wingham, Francis Leon (2018) Generalised Sorkin-Johnston and Brum-Fredenhagen States for Quantum Fields on Curved Spacetimes. PhD thesis, University of York.
Xu, Zhikang (2018) Option Pricing and Hedging with Regret Optimisation. PhD thesis, University of York.
Al-aadhami, Asawer (2017) Combinatorial Questions for $S\wr_{n} \mathcal{T}_n$ for a semigroup. PhD thesis, University of York.
Alfiniyah, Cicik (2017) The Role of Quorum Sensing in Bacterial Colony Dynamics. PhD thesis, University of York.
Allen, Demi Denise (2017) Mass Transference Principles and Applications in Diophantine Approximation. PhD thesis, University of York.
Brewer, Sky J (2017) Results in Metric and Analytic Number Theory. PhD thesis, University of York.
Dhami, Kymrun K. (2017) An evaluation on the gracefulness and colouring of graphs. MSc by research thesis, University of York.
Dhariwal, Gaurav (2017) A study of constrained Navier-Stokes equations and related problems. PhD thesis, University of York.
Gunns, Jos Mary Mayo (2017) Differentiating L-functions. PhD thesis, University of York.
Leong, Nicol (2017) Sums of Reciprocals and the Three Distance Theorem. MSc by research thesis, University of York.
Lock, Sarah Cloud Lauren (2017) Functional Data Analysis and QTL Detection Across Time within the Circadian Clock. MSc by research thesis, University of York.
Quinn-Gregson, Thomas (2017) Homogeneity and omega-categoricity of semigroups. PhD thesis, University of York.
Sueess, Fabian (2017) Simultaneous Diophantine approximation on affine subspaces and Dirichlet improvability. PhD thesis, University of York.
Weilenmann, Mirjam (2017) Quantum Causal Structure and Quantum Thermodynamics. PhD thesis, University of York.
Zafeiropoulos, Agamemnon (2017) Inhomogeneous Diophantine approximation, M_0 sets and projections of fractals. PhD thesis, University of York.
de la Rosa, Alejandro (2017) Symmetries of integrable open boundaries in the Hubbard model and other spin chains. PhD thesis, University of York.
kok, Tayfun (2017) Stochastic Evolution Equations in Banach Spaces and Applications to the Heath-Jarrow-Morton-Musiela Equation. PhD thesis, University of York.
Hills, Daniel (2016) Generating boundary conditions for integrable field theories using defects. PhD thesis, University of York.
King, Callum (2016) The spectral density for scalar fields in de Sitter space at one-loop. MSc by research thesis, University of York.
Li, Xiang (2016) Maximum Rank Correlation Estimation for Generalized Varying-Coefficient Models with Unknown Monotonic Link Function. PhD thesis, University of York.
Smith, Christopher Richard (2016) The hunt for Skewes' number. MSc by research thesis, University of York.
Box, John (2015) A Dynamic Structure for High Dimensional Covariance Matrices and its Application in Portfolio Allocation. PhD thesis, University of York.
Bullock, Tom (2015) From Incompatibility to Optimal Joint Measurability in Quantum Mechanics. PhD thesis, University of York.
Draper, Christopher Peter William (2015) The geodesic Gauss map of spheres and complex projective space. PhD thesis, University of York.
Dyer, Jacob (2015) Enumeration of rooted constellations and hypermaps through quantum matrix integrals. PhD thesis, University of York.
Gibbons, Jos (2015) Infrared problem in the Faddeev–Popov sector in Yang–Mills Theory and Perturbative Gravity. PhD thesis, University of York.
Hussain, Javed (2015) Analysis Of Some Deterministic and Stochastic Evolution Equations With Solutions Taking Values In An Infinite Dimensional Hilbert Manifold. PhD thesis, University of York.
Ke, Yuan (2015) Feature selection and structure specification in ultra-high dimensional semi-parametric model with an application in medical science. PhD thesis, University of York.
Kechrimparis, Spyridon (2015) Uncertainty Relations for Quantum Particles. PhD thesis, University of York.
Lupo, Umberto (2015) Aspects of (quantum) field theory on curved spacetimes, particularly in the presence of boundaries. PhD thesis, University of York.
Zappa, Emilio (2015) New group theoretical methods for applications in virology and quasicrystals. PhD thesis, University of York.
Biniok, Johannes CG (2014) Compatible and incompatible observables in the paradigmatic multislit experiments of quantum mechanics. PhD thesis, University of York.
Friswell, Robert Michael (2014) Harmonic Vector Fields on Pseudo-Riemannian Manifolds. PhD thesis, University of York.
Goodbourn, Oliver (2014) Reductive pairs arising from representations. PhD thesis, University of York.
Lang, Benjamin (2014) Universal constructions in algebraic and locally covariant quantum field theory. PhD thesis, University of York.
Stevens, Neil (2014) Concepts surrounding incompatibility in quantum physics. PhD thesis, University of York.
Strachan, Maxwell Alexander Wharton (2014) Harmonic vector fields on Riemannian manifolds. MSc by research thesis, University of York.
Waldron, James (2014) Lie Algebroids over Differentiable Stacks. PhD thesis, University of York.
YANG, DANDAN (2014) Free idempotent generated semigroups. PhD thesis, University of York.
Zenab, Rida-E (2014) Decomposition of semigroups into semidirect and Zappa-Szép products. PhD thesis, University of York.
Ferguson, Matthew T (2013) Aspects of Dynamical Locality and Locally Covariant Canonical Quantization. PhD thesis, University of York.
Heap, Winston (2013) Moments of the Dedekind zeta function. PhD thesis, University of York.
Li, Liang (2013) A STUDY OF STOCHASTIC LANDAU-LIFSCHITZ EQUATIONS. PhD thesis, University of York.
McNulty, Daniel (2013) Mutually Unbiased Product Bases. PhD thesis, University of York.
Salthouse, David Georges (2013) Quasilattice-Based Models for Structural Constraints on Virus Architecture. PhD thesis, University of York.
Yan, Hongjia (2013) Statistical Analysis of Spatial Dynamic Pattern in Spatial Data Analysis. PhD thesis, University of York.
Hunt, David Stephen (2012) The Quantization of Linear Gravitational Perturbations and the Hadamard Condition. PhD thesis, University of York.
Loveridge, Leon (2012) Quantum Measurements in the Presence of Symmetry. PhD thesis, University of York.
Potts, Thomas (2012) Properties of convolution operators on Lp(0,1). PhD thesis, University of York.
Regelskis, Vidas (2012) Quantum Algebras and Integrable Boundaries in AdS/CFT. PhD thesis, University of York.
Wang, Yanhui (2012) Beyond Regular Semigroups. PhD thesis, University of York.
ul Haq, Ahsan (2012) On a class of measures on configuration spaces. PhD thesis, University of York.
Anwar, Muhammad F (2011) Representations and Cohomology of Algebraic Groups. PhD thesis, University of York.
Bullock, David (2011) Klein-Gordon solutions on non-globally hyperbolic standard static spacetimes. PhD thesis, University of York.
Burrow, Jennifer (2011) Mechanistic models of recruitment variability in fish populations. PhD thesis, University of York.
Cornock, Claire (2011) Restriction Semigroups: Structure, Varieties and Presentations. PhD thesis, University of York.
Datta, Samik (2011) A mathematical analysis of marine size spectra. PhD thesis, University of York.
Dyson, Charles (2011) Implementing quantum algorithms using classical electrical circuits: Deutsch, Deutsch-Jozsa and Grover. MSc by research thesis, University of York.
Ghroda, Nassraddin (2011) Semigroups of I-quotients. PhD thesis, University of York.
Harrap, Stephen (2011) Diophantine approximation: the twisted, weighted and mixed theories. PhD thesis, University of York.
Ortiz Hernandez, Leonardo (2011) Quantum Fields on BTZ Black Holes. PhD thesis, University of York.
Shaheen, Lubna (2011) Axiomatisability problems for $S$-acts and $S$-posets. PhD thesis, University of York.
Stanislaus, Mariaseelan (2011) The Geodesic Gauss Map and Ruh-Vilms theorem for a Hypersurface in S^{n}. MSc by research thesis, University of York.
Serrano Perdomo, Rafael Antonio (2010) Optimal control of stochastic partial differential equations in Banach spaces. PhD thesis, University of York.
Walker, Philip (2010) Radiation and reaction in scalar quantum electrodynamics. PhD thesis, University of York.
Zhu, Jiahui (2010) A study of SPDEs w.r.t. compensated Poisson random measures and related topics. PhD thesis, University of York.
Brierley, Stephen (2009) Mutually Unbiased Bases in Low Dimensions. PhD thesis, University of York.
Faizal, Mir (2009) Perturbative Quantum Gravity and Yang-Mills Theories in de Sitter Spacetime. PhD thesis, University of York.
Heaton, Rachel Ann (2009) On Schur algebras, Doty coalgebras and quasi-hereditary algebras. PhD thesis, University of York.
Martin, Giles D. R (2007) Classical and Quantum Radiation Reaction. PhD thesis, University of York.
Neklyudov, Mikhail (2006) Navier-Stokes equations and vector advection. PhD thesis, University of York.
Barton, Christine H (2000) Magic squares of Lie algebras. PhD thesis, University of York.
Adjusted EBITDA Definition
What Is Adjusted EBITDA?
Adjusted EBITDA (earnings before interest, taxes, depreciation, and amortization) is a measure computed for a company that takes its earnings and adds back interest expenses, taxes, and depreciation charges, plus other adjustments to the metric.
Adjusted EBITDA is used to assess and compare related companies for valuation analysis and for other purposes. Adjusted EBITDA differs from the standard EBITDA measure in that a company's adjusted EBITDA is used to normalize its income and expenses since different companies may have several types of expense items that are unique to them.
Standardizing EBITDA by removing anomalies means the resulting adjusted or normalized EBITDA is more accurately and easily comparable to the EBITDA of other companies, and to the EBITDA of a company's industry as a whole.
The Formula for Adjusted EBITDA Is
$$
\begin{aligned}
&NI+IT+DA=EBITDA\\
&EBITDA +\!\!/\!\!- A = \text{Adjusted }EBITDA\\
&\textbf{where:}\\
&NI\ =\ \text{Net income}\\
&IT\ =\ \text{Interest \& taxes}\\
&DA\ =\ \text{Depreciation \& amortization}\\
&A\ =\ \text{Adjustments}
\end{aligned}
$$
How to Calculate Adjusted EBITDA
Start by calculating EBITDA, which begins with a company's net income. To this figure, add back interest expense, income taxes, and all non-cash charges including depreciation and amortization.
Next, either add back non-routine expenses, such as excessive owner's compensation, or deduct any additional typical expenses that would be present in peer companies but may not be present in the company under analysis. This could include salaries for necessary headcount in a company that is under-staffed, for example.
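The steps above amount to simple arithmetic, and a short sketch makes them concrete. All figures and function names below are hypothetical illustrations (in Python), not prescribed by any accounting standard:

```python
# Adjusted EBITDA arithmetic as described above. All numbers are made up
# for illustration (in $ millions).

def ebitda(net_income, interest, taxes, depreciation, amortization):
    # EBITDA = net income + interest + taxes + non-cash charges
    return net_income + interest + taxes + depreciation + amortization

def adjusted_ebitda(base_ebitda, add_backs=0.0, deductions=0.0):
    # add_backs: non-routine expenses (e.g., excess owner compensation)
    # deductions: typical peer expenses missing from this company
    return base_ebitda + add_backs - deductions

base = ebitda(net_income=2.0, interest=0.4, taxes=0.6,
              depreciation=0.7, amortization=0.3)   # 4.0
adj = adjusted_ebitda(base, add_backs=1.0)          # 5.0

# At a 6x valuation multiple (see the example below), the $1M add-back
# alone shifts the purchase-price estimate by $6M.
print(base, adj, 6 * (adj - base))                  # 4.0 5.0 6.0
```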
What Does Adjusted EBITDA Tell You?
Adjusted EBITDA, as opposed to the non-adjusted version, will attempt to normalize income, standardize cash flows, and eliminate abnormalities or idiosyncrasies (such as redundant assets, bonuses paid to owners, rentals above or below fair market value, etc.), which makes it easier to compare multiple business units or companies in a given industry.
For smaller firms, owners' personal expenses are often run through the business and must be adjusted out. The adjustment for reasonable compensation to owners is defined by Treasury Regulation 1.162-7(b)(3) as "the amount that would ordinarily be paid for like services by like organizations in like circumstances."
Other times, one-time expenses need to be added back, such as legal fees, real estate expenses such as repairs or maintenance, or insurance claims. Non-recurring income and expenses such as one-time startup costs that usually reduce EBITDA should also be added back when computing the adjusted EBITDA.
Adjusted EBITDA should not be used in isolation and makes more sense as part of a suite of analytical tools used to value a company or companies. Ratios that rely on adjusted EBITDA can also be used to compare companies of different sizes and in different industries, such as the enterprise value/adjusted EBITDA ratio.
The adjusted EBITDA measurement removes non-recurring, irregular and one-time items that may distort EBITDA.
Adjusted EBITDA provides valuation analysts with a normalized metric to make comparisons more meaningful across a variety of companies in the same industry.
Public companies report standard EBITDA in financial statement filings, since adjusted EBITDA is not required in GAAP financial statements.
Example of How to Use Adjusted EBITDA
The adjusted EBITDA metric is most helpful when used in determining the value of a company for transactions such as mergers, acquisitions or raising capital. For example, if a company is valued using a multiple of EBITDA, the value could change significantly after add-backs.
Assume a company is being valued for a sale transaction, using an EBITDA multiple of 6x to arrive at the purchase price estimate. If the company has just $1 million of non-recurring or unusual expenses to add back as EBITDA adjustments, this adds $6 million ($1 million times the 6x multiple) to its purchase price. For this reason, EBITDA adjustments come under much scrutiny from equity analysts and investment bankers during these types of transactions.
The adjustments made to a company's EBITDA can vary quite a bit from one company to the next, but the goal is the same. Adjusting the EBITDA metric aims to "normalize" the figure so that it is somewhat generic, meaning it contains essentially the same line-item expenses that any other, similar company in its industry would contain.
The bulk of the adjustments are often different types of expenses that are added back to EBITDA. The resulting adjusted EBITDA often reflects a higher earnings level because of the reduced expenses.
Common EBITDA adjustments include:
Unrealized gains or losses
Non-cash expenses (depreciation, amortization)
Litigation expenses
Owner's compensation that is higher than the market average (in private firms)
Gains or losses on foreign exchange
Goodwill impairments
Share-based compensation
This metric is typically calculated on an annual basis for a valuation analysis, but many companies will look at adjusted EBITDA on a quarterly or even monthly basis, though it may be for internal use only.
Analysts often use a three-year or five-year average adjusted EBITDA to smooth out the data. The higher the adjusted EBITDA margin, the better. Different firms or analysts may arrive at slightly different adjusted EBITDA due to differences in their methodology and assumptions in making the adjustments.
These figures are often not made available to the public, while non-normalized EBITDA is typically public information. It is important to note that adjusted EBITDA is not a generally accepted accounting principles (GAAP)-standard line item on a company's income statement.
Earnings Before Interest, Taxes, Depreciation and Amortization – EBITDA Definition
EBITDA, or earnings before interest, taxes, depreciation and amortization, is a measure of a company's overall financial performance and is used as an alternative to simple earnings or net income in some circumstances.
What the EBITDA-to-Sales Ratio Tells Us
The EBITDA-to-sales ratio is a financial metric used to assess a company's profitability by comparing its revenue with its operating income before interest, taxes, depreciation, and amortization.
Enterprise Value – EV Definition
Enterprise value (EV) is a measure of a company's total value, often used as a comprehensive alternative to equity market capitalization. EV includes in its calculation the market capitalization of a company but also short-term and long-term debt as well as any cash on the company's balance sheet.
What the Debt/EBITDA Ratio Tells You
Debt/EBITDA is a ratio measuring the amount of income generation available to pay down debt before deducting interest, taxes, depreciation, and amortization.
What EBITDAR Tells Us
EBITDAR—an acronym for earnings before interest, taxes, depreciation, amortization, and restructuring or rent costs—is a non-GAAP measure of a company's financial performance.
Net Debt-to-EBITDA Ratio
Net debt-to-EBITDA ratio is a measurement of leverage, calculated as a company's interest-bearing liabilities minus cash, divided by EBITDA.
| CommonCrawl
Journal of Environmental Health Sciences (한국환경보건학회지)
Korean Society of Environmental Health (한국환경보건학회)
Environment > Environmental Health
Journal of Environmental Health Sciences, JEHS, is an official journal of the Korean Society of Environmental Health. The mission of the journal is to promote research, policy, education, and practice in the field of environmental health by publishing papers of high scientific quality. Main research and development interests of the journal include, but are not limited to: exposure sciences, environmental monitoring, environmental epidemiology, toxicology and biomonitoring, risk assessment, environmental engineering, and environmental health policy in both general environments and workplaces. Categories of submission papers are original articles, reviews, brief reports, case reports, special topics, editorials, letters, meeting reports, news and book reviews.
Volume 33 Issue 2 Serial No. 95
Pharmaceuticals in Environment and Their Implication in Environmental Health
Choi, Kyung-Ho;Kim, Pan-Gyi;Park, Jeong-Im 433
https://doi.org/10.5668/JEHS.2009.35.6.433
Pharmaceuticals in the aquatic environment are trace contaminants of growing importance in environmental health due to their physiologically active nature. Pharmaceuticals could affect non-target species and might eventually damage the sustainability of susceptible populations in the ecosystem. Potentials for health consequences among susceptible human populations cannot be ruled out since long-term exposure to cocktails of pharmaceuticals, which might be present in drinking water, is possible. Selection of antibiotic resistant microorganisms is another concern. In order to understand, and if needed, to properly address the environmental health issues of pharmaceutical residues, knowledge gaps need to be filled. Knowledge gaps exist in many important areas such as prioritization of target pharmaceuticals for further risk studies, occurrence patterns in different environments, chronic toxicities, and toxicities of pharmaceutical mixtures. Appropriate treatment technologies for drinking water and wastewater could be developed when they are deemed necessary. One of the simplest, yet most efficient measures that could be undertaken is to implement a return program for unused or expired drugs. In addition, implementation of environmental risk assessment frameworks for pharmaceuticals would make it possible to efficiently manage potential environmental health problems associated with pharmaceutical residues in the environment.
Comparison of Commuters' PM10 Exposure Using Different Transportation Modes of Bus and Bicycle
Kim, Won;Kim, Sung-Yeon;Lee, Ji-Yeon;Kim, Seong-Keun;Lee, Ki-Young 447
Cycling has been lately recommended as an alternative commuting mode because it is believed to be good for health and the environment. However, the exposure to environmental pollutants, such as fine particulates, could be a potential problem for cycling in urban environments. In this study, we compared commuters' $PM_{10}$ exposure using the different transportation modes of bicycle and bus. When a bicycle was used as a commuting mode, the additional $PM_{10}$ exposure due to transportation was about 3.5 times higher than that when using a bus. The difference of additional $PM_{10}$ exposures by cycling and bus was statistically significant (p<0.01). The $PM_{10}$ exposure during cycling was significantly correlated with atmospheric $PM_{10}$ concentration (r=0.98, p<0.01) and its correlation coefficient was higher than that of bus (r=0.55, p<0.05). The results of this study demonstrated that the main reasons of higher $PM_{10}$ exposure when using the bicycle as the mode of transport were its vicinity to road traffic and routes that were unavoidably close to road traffic. Bicycle commuting along the road side may not be good for health. Exclusive bicycle lanes away from road traffic are recommended.
PM10 and CO2 Concentrations in the Seoul Subway Carriage
Sohn, Hong-Ji;Ryu, Kyong-Nam;Im, Jong-Kwon;Jang, Kyung-Jo;Lee, Ki-Young 454
The subway is the major public transportation system in Seoul with 2.2 million people using it everyday. Indoor air pollution in the subway can be a significant part of population exposure because of the number of people using the subway, time spent in transit and potentially high exposure for certain pollutants. The Korea Ministry of Environment has established the level 2 of recommended standards of $PM_{10}$ and $CO_2$ in subway trains. The aims of this study were to determine the airborne levels of $PM_{10}$, $CO_2$ and any correlation between pollutant levels and number of passenger in a subway train. The airborne $PM_{10}$ and $CO_2$ were measured on the inside of trains on line #4 for 4 different days from October to November in 2008. Average $PM_{10}$ and $CO_2$ levels were $113{\pm}25{\mu}g/m^3$ and $1402{\pm}442$ ppm, respectively. These levels did not exceed the level 2 of recommended standards of $250{\mu}g/m^3$ for $PM_{10}$ and 3500 ppm for $CO_2$. $PM_{10}$ level was not correlated with the number of passengers, while $CO_2$ levels were positively correlated with the number of passengers. The findings suggested that $PM_{10}$ in subway trains may have sources other than those directly associated with the number of passengers.
Use of Portable Global Positioning System (GPS) Devices in Exposure Analysis for Time-location Measurement
Lee, Ki-Young;Kim, Joung-Yoon;Putti, Kiran;Bennett, Deborah H.;Cassady, Diana;Hertz-Picciotto, Irva 461
Exposure analysis is a critical component of determining the health impact of pollutants. Global positioning systems (GPS) could be useful in developing time-location information for use in exposure analysis. This study compares four low cost GPS receivers with data logging capability (Garmin 60, Garmin Forerunner 201, GeoStats GeoLogger and Skytrx minitracker MT4100) in terms of accuracy, precision, and ease of use. The accuracy of the devices was determined at two known National Geodetic Survey points. The coordinates logged by the devices were compared when the devices were carried while walking and driving. The Garmin 60 showed better accuracy and precision than the GeoLogger when they were placed at the geodetic points. The Forerunner and Skytrx did not record when they were kept stationary. When the subject wore the devices while walking, the location of the devices differed by about 8 m on average between any two device combinations involving the four devices. The distance between the coordinates logged by the devices decreased when the devices were carried with their antennas facing the sky. All the devices showed similar routes when they were used in a car. All the devices except the Forerunner had satisfactory signal reception when they were worn and when they were carried in the car. The GeoLogger is less comfortable for the subject because of specific wearing requirements. This evaluation found that the Garmin 60 and the Skytrx may be useful in personal exposure analysis studies to record time-location data.
The Association of Subjective Symptoms of Students and Indoor Air Quality in Private Academic Facilities
Jung, Kyung-Sick;Kim, Nam-Soo;Lee, Jong-Dae;HwangBo, Young;Son, Bu-Soon;Lee, Byung-Kook 468
To evaluate the current indoor air quality condition of private academic facilities in Korea and investigate its association with subjective symptoms of students residing at the same academic facilities, air quality monitoring was carried out in a total of 20 academic facilities located in Seoul, Daejon and Chungnam from the beginning of January to the end of April, 2009. To assess the air quality condition of academic facilities, 6 air pollutants with temperature and humidity were measured simultaneously inside and outside of academic facilities. The rates of exceeding the Indoor Air Quality (IAQ) guideline concentrations for the 6 air pollutants were 5%, 85%, 15%, 5%, 10% and 30% for CO, $CO_2$, PM10, HCHO, TVOCs and TBC, respectively. A questionnaire on 16 subjective symptoms related to indoor air quality was given to 342 students who studied at the 20 academic facilities. The most frequent symptom of students was 'I feel easily tired or sleepy', and this was followed by 'I feel muscular pain or stiffness on shoulder, back and neck'. The association of net difference (subjective symptoms at the academic facility - subjective symptoms of the usual situation) with air pollutants was analyzed using Spearman rank correlation. In logistic analysis using the proportional odds method, the students whose indoor air concentration of HCHO was ${\geq}60{\mu}g/m^3$ had significant odds of having more subjective symptoms of 'My eyes are dry or feel irritated or itching' (OR=5.026: CI=1.587-15.911), 'I feel easily tired or sleepy' (OR=2.956: CI=1.072-8.152), 'I lose my concentration and I feel my memory is falling' (OR=7.745: CI=1.938-30.955) and 'I feel dizzy' (OR=4.424: CI=1.292-15.149) than those of <$60{\mu}g/m^3$.
Assessment of Airborne Fungi Concentrations in Subway Stations in Seoul, Korea
Cho, Jun-Ho;Paik, Nam-Won 478
This study was performed to assess airborne fungi concentrations during fall in eight subway stations in Seoul, Korea. The purpose of this study was to investigate appropriate culture media and evaluate factors affecting airborne fungi concentrations. Results indicated that airborne fungi concentrations showed log-normal distribution. Thus, geometric mean (GM) and geometric standard deviation (GSD) were calculated. The GM of airborne fungi concentrations cultured on malt extract agar (MEA) media was 466 $cfu/m^3$ (GSD 3.12; Range 113~4,172 $cfu/m^3$) and the GM of concentrations cultured on DG18 media was 242 $cfu/m^3$ (GSD 4.75; Range 49~6,093 $cfu/m^3$). Both of GM values exceeded 150 $cfu/m^3$, the guideline of World Health Organization (WHO). There was no significant difference between two fungi concentrations cultured on MEA and DG18 media, respectively. Two factors, such as relative humidity and depths of subway stations were significantly related to airborne fungi concentrations. It is recommended that special consideration should be given to deeper subway stations for improvement of indoor air quality.
Evaluation of Seawater Quality from Incheon Offshore Using Early Development Systems of A Sea Urchin
Yu, Chun-Man 486
In January 2009, the offshore water quality around the Incheon coast was evaluated by bioassay using early development systems of a sea urchin species, Hemicentrotus pulcherrimus. The results of performing biological evaluations on seawater samples from a total of thirteen sites showed that the formation rates of normal pluteus larva varied from 18% to 71%. In site 5 the seawater sample led to an average normal larva formation rate of 18%, corresponding to the highest rate of abnormal formation hindering the early embryo development of the experimental animal, while that of site 3 averaged 71%, the highest formation rate of normal larva. Seawater samples from sites 1, 2, 4, 7, 9, 10, 11 and 12 resulted in average formation rates of normal larva from 33% to 56%, which indicates that the developmental damage to early embryos was not severe. Seawater samples from sites 5, 6, 8 and 13 resulted in average formation rates of normal larva from 18% to 21%, indicating strong damage to the development of early embryos.
Optimization of the Turbidity Removal Conditions from TiO2 Solution Using a Response Surface Methodology in the Electrocoagulation/Flotation Process
Kim, Dong-Seog;Park, Young-Seek 491
The removal of turbidity from $TiO_2$ wastewater by an electrocoagulation/flotation process was studied in a batch reactor. The response surface methodology (RSM) was applied to evaluate the simple and combined effects of the three main independent parameters, current, NaCl dosage and initial pH of the $TiO_2$ solution, on the turbidity removal efficiency, and to optimize the operating conditions of the treatment process. The reaction of electrocoagulation/flotation was modeled by use of the Box-Behnken method, which was used for the fitting of a 2nd order response surface model. The application of RSM yielded the following regression equation, which is an empirical relationship between the turbidity removal efficiency of $TiO_2$ wastewater and test variables in uncoded units: Turbidity removal (%) $= 69.76 + 59.76\,\text{Current} + 11.98\,\text{NaCl} + 4.67\,\text{pH} + 5.00\,\text{Current}\times\text{pH} - 160.11\,\text{Current}^2 - 0.34\,\text{pH}^2$. The optimum current, NaCl dosage and pH of the $TiO_2$ solution to reach maximum removal rates were found to be 0.186 A, 0.161 g/l and 7.599, respectively. This study clearly showed that response surface methodology was one of the most suitable methods to optimize the operating conditions for maximizing the turbidity removal. Graphical response surface and contour plots were used to locate the optimum point.
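As a quick numerical check (a sketch in Python; coefficients copied verbatim from the regression equation above, units as reported in the abstract), the fitted model evaluated at the reported optimum predicts essentially complete removal:

```python
# Fitted response-surface model from the abstract, evaluated at the
# reported optimum (current 0.186 A, NaCl 0.161 g/l, pH 7.599).

def turbidity_removal(current, nacl, ph):
    return (69.76 + 59.76 * current + 11.98 * nacl + 4.67 * ph
            + 5.00 * current * ph
            - 160.11 * current**2 - 0.34 * ph**2)

print(round(turbidity_removal(0.186, 0.161, 7.599), 1))  # ~100.2
```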
Sampling Efficiency of Organic Vapor Passive Samplers by Diffusive Length
Lee, Byung-Kyu;Jang, Jae-Kil;Jeong, Jee-Yeon 500
Passive samplers have been used for many years for the sampling of organic vapors in work environment atmospheres. Currently, all passive samplers used in domestic occupational monitoring are foreign products. This study was performed to evaluate variable parameters for the development of passive organic samplers, which include the geometry of the device and diffusive length for the sampler design. Four prototype diffusive lengths; A-1(4.5 mm), A-2(7.0 mm), A-3(9.5 mm), A-4(12.0 mm) were tested for adsorption performance with a chemical mixture (benzene, toluene, trichloroethylene, and n-hexane) according to the US-OSHA's evaluation protocol. A dynamic vapor exposure chamber developed and verified by related research was used for this study. The results of the study are as follows. The results in terms of sampling rate and recommended sampling time test indicate that the most suitable model was A-3 (9.5 mm diffusive lengths on both sides) for passive sampler design in time weighted average (TWA) assessment. Sampling rates of this A-3 model were 45.8, 41.5, 41.4, and 40.3 ml/min for benzene, toluene, trichloroethylene, and n-hexane, respectively. The A-3 models were tested for reverse diffusion under conditions of low-humidity air (35% RH) and low concentrations (0.2 times the TLV). These conditions had no effect on the diffusion capacity of the samplers. In conclusion, the most suitable design parameters of the passive sampler are: 1) Geometry and structure - 25 mm diameter and 490 $mm^2$ cross sectional area of diffusion face with cylindrical form of two-sided opposite diffusion direction; 2) Diffusive length - 9.5 mm in both faces; 3) Amount of adsorbent - 300 mg of coconut shell charcoal; 4) Wind screen - using nylon net filters (11 ${\mu}m$ pore size).
Concentration Distribution of PBDEs and PCBs in Soil
Lee, Sung-Hee;Cho, Ki-Chul;Yeo, Hyun-Gu 510
Polybrominated diphenyl ethers (PBDEs) and polychlorinated biphenyls (PCBs) were measured in soil samples of Ansung in Kyonggi-province to investigate the concentration distribution of PBDEs and PCBs. The 10 soil samples were collected using a stainless steel hand-held corer that was cleaned before and after each sample using hexane. Total concentrations of PBDEs and PCBs were 2,205.3 and 348.1 pg/g dry weight (DW) in soil samples, respectively. BDE-209 was the most abundant congener in soil samples, which was related to the imported amount and usage amount of the deca-BDE technical mixture in Korea. Also, BDE-99, BDE-47 and BDE-100 deposition in soil samples was higher than that of other congeners and was related to the imported and usage amounts reported for the penta-BDE technical mixture in Korea. The correlation coefficient between PBDE contribution and technical mixture formulation (Bromokal 70-5DE) was significant (r=0.91, p<0.01), which suggests the influence of sources in this technical mixture.
Assessing Water Quality of Siheung Stream in Shihwa Industrial Complex Using Both Principal Component Analysis and Multi-Dimensional Scaling Analysis of Korean Water Quality Index and Microbial Community Data
Seo, Kyeong-Jin;Kim, Ju-Mi;Kim, Min-Jung;Kim, Seong-Keun;Lee, Ji-Eun;Kim, In-Young;Zoh, Kyung-Duk;Ko, Gwang-Pyo 517
The water quality of Lake Shihwa had been rapidly deteriorating since 1994 due to wastewater input from the watersheds, limited water circulation and the lack of a wastewater treatment policy. In 2000, the government decided to open the tidal embankment and make a comprehensive management plan to improve the water quality, especially inflowing stream water around the Shihwa and Banwol industrial complexes. However, the water quality and microbial community have not yet been fully evaluated. The purpose of this study is to investigate the influent water quality around the industrial area based on chemical and biological analysis; surface water samples were collected from the Siheung Stream, upstream to downstream through the industrial complex, in July 2009. The results show that the downstream site near the industrial complex had higher concentrations of heavy metals (Cu, Mn, Fe, Mg, and Zn) and organic matter than upstream sites. A combination of DGGE (Denaturing Gradient Gel Electrophoresis) gels, lists of K-WQI (Korean Water Quality Index), cluster analysis, MDS (Multi-Dimensional Scaling) and PCA (Principal Component Analysis) demonstrated clear clustering between Siheung stream sites 3 and 4 with high similarity, and detected metal-reducing bacteria (Shewanella spp.) and biodegrading bacteria (Acinetobacter spp.). These results suggest that use of both chemical and microbiological markers would be useful to fully evaluate the water quality.
Characteristics of the Food Waste and Wastewater Discharged from Food Waste Treatment Process
Kim, Young-Kwon;Kim, Se-Mi;Kim, Min-Kyu;Choi, Jin-Taek;Nam, Se-Yong 526
Waste generation was generally expected to steadily rise due to a rapid increase in population and economic growth. However, regulations on disposable goods and a volume-based waste fee system have led to a gradual reduction in the amount of waste. In the case of food waste, separation of food waste from other waste has been put in place since direct landfilling was banned in January 2005. The predicted generation amounts of food waste and wastewater in the model city were 54 ton/d and 127.3 ton/d by year 2020, respectively. However, appropriate treatment technologies for food waste and wastewater discharged from food waste treatment processes are yet to be established. In this study, the food waste and wastewater discharged from the food waste treatment process in the model city were characterized by literature and field investigation.
CRDS Study of Tropospheric Ozone Production Kinetics : Isoprene Oxidation by Hydroxyl Radical
Park, Ji-Ho 532
The tropospheric ozone production mechanism for the gas phase additive oxidation reaction of hydroxyl radical (OH) with isoprene (2-methyl-1,3-butadiene) has been studied using cavity ring-down spectroscopy (CRDS) at a total pressure of 50 Torr and 298 K. The applicability of CRDS was confirmed by observing a ring-down time ~4% shorter in the presence of hydroxyl radical than without the photolysis of hydrogen peroxide. The reaction rate constant, $(9.8{\pm}0.1){\times}10^{-11}molecule^{-1}cm^3s^{-1}$, for the addition of OH to isoprene is in good agreement with previous studies. In the presence of $O_2$ and NO, hydroxyl radical cycling has been monitored, and the simulation using the recommended elementary reaction rate constants as the basis for the OH cycling curve gives a reasonable fit to the data.
Asbestos and Environmental Disease
Ahn, Jong-Ju 538
Humans have a long history of asbestos use. There are reports from the Roman era of asbestos victims among the slaves who worked in asbestos mines. The fact that asbestos can induce lung cancer and mesothelioma was verified epidemiologically in the 1960s. Asbestos related diseases are predominantly occupational in nature but can be caused by environmental exposure. Environmental mesothelioma is mainly associated with tremolite asbestos, and this information comes from many countries including Turkey, Greece, Corsica, New Caledonia and Cyprus. In 1993, the first case of mesothelioma in Korea was reported in an asbestos textile worker. Recently, some asbestos disease victims who lived near an asbestos factory have their cases before the courts. A series of recent asbestos-related events in Korea, for example, the shocking revelation of asbestos-containing talc in baby powders, has caused the general public to become aware of the health risks of asbestos exposure. Asbestos related diseases are characterized by a long latency period; mesothelioma, in particular, has no threshold of safety. Hence the best strategy for preventing asbestos related diseases is to decrease asbestos exposure levels to as low as possible. | CommonCrawl
Methodology Article
An EM algorithm to improve the estimation of the probability of clonal relatedness of pairs of tumors in cancer patients
Audrey Mauguen ORCID: orcid.org/0000-0003-3236-60931 na1,
Venkatraman E. Seshan1,
Irina Ostrovnaya1 &
Colin B. Begg1
BMC Bioinformatics volume 20, Article number: 555 (2019) Cite this article
We previously introduced a random-effects model to analyze a set of patients, each of whom has two distinct tumors. The goal is to estimate the proportion of patients for whom one of the tumors is a metastasis of the other, i.e. where the tumors are clonally related. Matches of mutations within a tumor pair provide the evidence for clonal relatedness. In this article, using simulations, we compare two estimation approaches that we considered for our model: use of a constrained quasi-Newton algorithm to maximize the likelihood conditional on the random effect, and an Expectation-Maximization algorithm where we further condition the random-effect distribution on the data.
In some specific settings, especially with sparse information, the estimation of the parameter of interest is at the boundary a non-negligible number of times using the first approach, while the EM algorithm gives more satisfactory estimates. This is of considerable importance for our application, since an estimate of either 0 or 1 for the proportion of cases that are clonal leads to individual probabilities being 0 or 1 in settings where the evidence is clearly not sufficient for such definitive probability estimates.
The EM algorithm is a preferable approach for our clonality random-effect model. It is now the method implemented in our R package Clonality, making available an easy and fast way to estimate this model on a range of applications.
Many studies have been published over the past 20 years that involved examining pairs of tumors at the molecular level from a set of patients to determine if, for some patients, the tumors are clonal, i.e. one of the tumors is a metastasis of the other tumor. We focus in this article on the setting where the data comprise somatic mutations from a panel of genes. Various statistical methods have been proposed in the literature. One approach has been to characterize the evidence for clonality using an index of clonal relatedness (see [1] and [2]). However, in constructing the index these authors have focused solely on mutations that are shared between the two tumors, ignoring the information from mutations that occur in one tumor but not the other, evidence that argues against clonal relatedness. Other authors have used the proportion of observed mutations that are shared as the index [3, 4], while Bao et al. [5] formalized this idea by assuming that the matched mutations follow a binomial distribution. All of these approaches analyze each case independently. To our knowledge, the approach we discuss in this article, improving upon Mauguen et al. [6], is the only available method that models the data from all cases collectively to obtain parametric estimates of the proportion of cases in the population that are clonal. Also our method relies heavily on the recognition of the fact that the probabilities of occurrence of the observed mutations are crucially informative, especially for shared mutations.

Motivated by a study of contralateral breast cancer that will be described in more detail in the next section, we developed a random-effects model to simultaneously analyze each case for clonal relatedness and to obtain an estimate of how frequently this occurs [6]. The corresponding function mutation.rem has been added to the R package Clonality, originally described in Ostrovnaya et al. [7]. Overall, the properties of this model were demonstrated to be quite good, in the sense that the parameter estimation has generally low bias except in small samples, i.e. where only a few cases from the population are available [6]. Recently, in applying the model anecdotally, we noticed that in such small datasets, examples can arise where the maximum likelihood estimator of the proportion of clonal cases is zero, even when mutational matches have been observed in some cases. This tends to occur if the absolute number of cases with matches is small, either because the overall number of cases is small, or the proportion of cases that are clonal is small, or in clonal cases the proportion of mutations that are matches is small. This is problematic because it renders the probabilities of clonal relatedness to be exactly zero for all individual cases, an estimate that seems unreasonable, especially if matches on rare mutations have been observed. We thus became interested in alternate estimation methods. In this article we compare estimates obtained by the EM algorithm versus our first approach using a one-step estimate of the conditional likelihood.
We use data from a study that involved 49 women with presumed contralateral breast cancer [8]. That is, in all of these women the cancers in the opposite breasts were diagnosed clinically as independent primary breast cancers. The tumors were retrieved from the pathology archives at Memorial Sloan Kettering Cancer Center and subjected to sequencing using a panel of 254 genes known or suspected to be important in breast cancer. The key data, i.e. the numbers of mutations and matches for each case, as well as the probability of occurrence for the matched mutations, are reproduced in Table 1. The probabilities of occurrence of each specific mutation are considered known, but must actually be estimated from available sources, such as the Cancer Genome Atlas [9]. Six of the 49 cases had at least 1 mutational match, i.e. exactly the same mutation in both tumors. For 3 of these cases the match was observed at the common PIK3CA H1047R locus, known to occur in approximately 14% of all breast cancers. We note that common mutations like this one can vary by disease sub-type but we elect to use probabilities associated with breast cancer overall since the study has a mix of sub-types. Since it is plausible these common mutations could occur by chance in a pair of independent breast cancers, the evidence for clonal relatedness is much less strong than for the other 3 cases with matches at rarely occurring loci, something very unlikely to happen in independent tumors.
Table 1 Study of contralateral breast cancers
When we apply our random-effects analysis to these data, described in more detail in the "Methods" section, our estimate of the proportion of cases that are clonal (denoted henceforth by π) is 0.059, close to the proportion 3/49, reflecting the fact that the model appears to consider the 3 cases with rare matches as clonal and the 3 cases with the common matches as independent. Estimation problems can occur, however, in datasets very similar to this one. For example, when we eliminate from the analysis the two cases that are most clearly clonal, cases #36 and #48, the estimate of π is 0, despite the fact that case #8 possesses a very rare match pointing strongly to clonal relatedness. Thus, a different estimation method that reduces the frequency with which boundary estimates of π occur is advisable.
Simulations were conducted for sample sizes of 25, 50 and 100, with the population proportion of clonal cases (π) ranging from 0.10 to 0.75. The distribution of the clonality signal is characterized by 3 different lognormal distributions plotted in Fig. 1. These three scenarios represent, respectively, settings where a small proportion of mutations in a clonal case will be matched (scenario 1), where most of these mutations will be matched (scenario 3), and an intermediate scenario. Note that scenario 1 is particularly problematic for estimation, especially when π is small, since in this setting few of the cases will be clonal and these few clonal cases will tend to have few, if any, matches.
Log-normal distributions of the clonality signal
Table 2 presents the simulation results for the estimates of π averaged over 500 simulations for each setting, along with the standard deviations and ranges of the estimates. Biases can be obtained by comparing these averages with the true value of π in the second column of the table. These biases are generally modest, though it is noteworthy that our original one-step approach tends to have positive biases while the approach using the full likelihood and the EM algorithm generally leads to negative bias. More importantly, Table 2 also reports the numbers of times the estimates were exactly on the boundary, i.e. 0 or 1. These occurrences are much less frequent using the EM algorithm and are mostly limited to the small case sample (N =25), low π (0.10) setting. The columns on the right-hand side of Table 2 summarize the results using the EM approach for those datasets in which the one-step maximization produced an estimate of π of either 0 or 1. These estimates are similar to the true π, showing the improved performance with the EM estimation strategy.
Table 2 Simulation results
The EM approach was used to re-analyze the breast cancer dataset described in the motivating example. When the full dataset of 49 cases is analyzed both methods lead to the same estimate, \(\hat {\pi } = 0.059\). However, when cases #36 and #48 are removed, the EM approach leads to \(\hat {\pi } = 0.050\) while the one-step method leads to the boundary value of \(\hat {\pi } = 0\). This is a reassuring result and is congruent with the simulations in that for the preponderance of datasets the use of EM does not affect the results. However, when we move closer to a boundary, by for example removing 2 of the 3 cases with strong evidence of clonal relatedness (cases 36 and 48), the new approach corrects the estimation where the old approach was failing.
Our method provides a strategy for estimating, in a sample of cases with tumor pairs, the proportion of these cases that are clonally related, in addition to diagnostic probabilities for each case. As compared to other methods described in the introduction, the proposed model utilizes the information from a sample of patients, and includes all mutations observed in only one or in both tumors, in order to infer the probabilities of clonal relatedness. We now believe that an analysis of our proposed random-effects model should involve maximization of the likelihood using the EM algorithm rather than the one-step strategy based on conditioning on the latent clonality indicators that we had previously proposed. By doing so, we greatly reduce the chances that the estimator of the proportion of cases that are clonal will lead to an unsatisfactory boundary value. Of note, the increased performance comes at no cost regarding computation time. Our available R package Clonality [10] which includes the function to estimate the random-effects model, has been updated to adopt the EM strategy (version 1.32.0 and higher).
The EM algorithm is a preferable approach for our clonality random-effects model. It is now the method implemented in our R package Clonality, making available an easy and fast way to estimate this model on a range of applications.
The informative data Yj for case j of n cases encompasses a set of indicators for the presence of shared or private mutations in the tumor pair at genetic loci denoted by i. [Private mutations are those that occur in one tumor but not in its pair.] The sets Aj and Bj contain the shared and private mutations respectively. We denote Gj=Aj∪Bj. Each mutation i has a known probability of occurrence pi in a tumor. Let π denote the proportion of clonal cases in the population, and ξj the clonality signal for case j. The clonality signal represents the relative period of tumor evolution in which mutations accrued in the originating clonal cell, and thus represents the anticipated proportion of mutations observed in a case that are matches. The term Cj represents the true clonal status of the tumor pair, taking the value 1 when the case is clonal and 0 when the case is independent. Note that ξj=0 if Cj=0. In clonal cases, we assume that − log(1−ξj) has a lognormal density, with mean μ and standard-deviation σ. We use g(·) to denote density functions generically. As explained in Mauguen et al. [6], we previously used a conditional likelihood constructed in the following manner. Recognizing that
$$ {\begin{aligned} P\left(Y_{j} | \xi_{j}, C_{j} = 1 \right) = \prod_{i \in G_{j}} \!\left\{ \frac{\xi_{j} + (1-\xi_{j}) p_{i}}{\xi_{j} + (1-\xi_{j}) (2-p_{i})} \right\}^{I[i \in A_{j}]} \left\{ \frac{2(1-\xi_{j}) (1-p_{i})}{\xi_{j} + (1-\xi_{j}) (2-p_{i})} \right\}^{I[i \in B_{j}]} \end{aligned}} $$
$$ P\left(Y_{j} | C_{j}=0 \right) = \prod_{i \in G_{j}} \left(\frac{p_{i}}{2-p_{i}} \right)^{I[i \in A_{j}]} \left\{ \frac{2 (1-p_{i})}{2-p_{i}} \right\}^{I[i \in B_{j}]} $$
we elected to use case-specific likelihood contributions
$$L_{j}\left(\pi, \xi_{j} \right) = \pi P\left(Y_{j} | \xi_{j}, C_{j}=1 \right) + (1-\pi) P\left(Y_{j} | C_{j}=0 \right) $$
leading to
$$ L\left(\pi, \mu, \sigma \right) = \prod_{j=1}^{n} \int_{0}^{1} L_{j}\left(\pi, \xi_{j} \right) g(\xi_{j}) d\xi_{j}. $$
This allowed us to perform the maximization to estimate simultaneously the parameters π, μ, and σ using a one-step Box constrained quasi-Newton algorithm. However, although in simulations the properties of this process appear to indicate low bias, we found that it is not uncommon, especially in small datasets or those where π is close to a boundary of 0 or 1, for the parameter π to have a maximum likelihood estimate of 0 or 1, rendering the diagnostic probabilities for all cases to be either 0 or 1. This problem is caused by the fact that the simplified conditional likelihood in (3) above does not fully recognize the influences of the case-specific mutational profiles Yj on the case-specific clonality signals ξj and the individual levels of evidence regarding clonal relatedness Cj. In short, we used the parameter representing the overall probability of clonality π in (3) rather than the case-specific probabilities of clonality, P(Cj=1|ξj,π,μ,σ). To address this problem we employ a likelihood structure that permits a more specific use of these data from individual cases and have constructed a strategy involving the EM algorithm to estimate the parameters.
This approach recognizes the fact that the terms Cj and ξj are latent variables and that our goal is to maximize the likelihood that is not conditioned on these latent variables, i.e.
$$ L = \prod_{j=1}^{n} P\left(Y_{j} | \pi, \mu, \sigma \right). $$
To perform the estimation we first recognize the following:
$$\begin{array}{*{20}l} P\left(Y_{j}, \xi_{j}, C_{j} | \pi, \mu, \sigma \right) = P\left(Y_{j} | \xi_{j}, C_{j} \right) \times g\left(\xi_{j}, C_{j} | \pi, \mu, \sigma \right) \end{array} $$
$$\begin{array}{*{20}l} = g\left(\xi_{j}, C_{j} | Y_{j}, \pi, \mu, \sigma \right) \!\times\! P\left(Y_{j} | \pi, \mu, \sigma \right). \end{array} $$
Note that the likelihood contribution of case j to (4) is a component of the right-hand side of (6). The EM algorithm permits us to instead maximize (iteratively) the expectation of the logarithm of this full likelihood, averaged over the latent variables conditioned on the data. That is, the expected likelihood is given by
$$ {\begin{aligned} E = \prod_{j=1}^{n} \int_{0}^{1} \log \left\{ P\left(Y_{j}, \xi_{j}, C_{j} | \pi, \mu, \sigma \right) \right\} g\left(\xi_{j}, C_{j} | Y_{j}, \tilde{\pi}, \tilde{\mu}, \tilde{\sigma} \right) d (\xi_{j}, C_{j}) \end{aligned}} $$
where \(\tilde {\pi }\), \(\tilde {\mu }\), and \(\tilde {\sigma }\) are the current estimates of the parameters. After choosing starting values for these parameters the expectation and maximization steps proceed iteratively until convergence. To calculate E we recognize that \(P(Y_{j}, \xi _{j}, C_{j} | \tilde {\pi }, \tilde {\mu }, \tilde {\sigma })\) is obtained easily from the defined terms on the right-hand side of (5), represented by (1) and (2) and the parametric model used for the distribution of ξj. Further, \(g(\xi _{j}, C_{j} | Y_{j}, \tilde {\pi }, \tilde {\mu }, \tilde {\sigma })\) can be obtained from Bayes Theorem, i.e.
$${\begin{aligned} g\left(\xi_{j}, C_{j} | Y_{j}, \tilde{\pi}, \tilde{\mu}, \tilde{\sigma} \right) = \frac{g\left(\xi_{j}, C_{j} | \tilde{\pi}, \tilde{\mu}, \tilde{\sigma} \right) P\left(Y_{j} | \xi_{j}, C_{j} \right)} {\int_{0}^{1} g\left(\xi_{j}, C_{j} | \tilde{\pi}, \tilde{\mu}, \tilde{\sigma} \right) P\left(Y_{j} | \xi_{j}, C_{j} \right) d(\xi_{j}, C_{j})}. \end{aligned}} $$
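To make the expectation and maximization steps concrete, the following is a minimal numerical sketch (in Python). It discretizes ξj on a grid rather than integrating exactly, and every name in it is an illustrative assumption; the production implementation is the mutation.rem function in the R package Clonality [10].

```python
import numpy as np
from scipy.stats import lognorm

# Minimal sketch of the EM iteration above. Each case is a pair
# (shared, private) of index arrays into p, the vector of known
# mutation occurrence probabilities p_i.

def p_y_given_xi_clonal(shared, private, p, xi):
    # Eq. (1): profile likelihood for a clonal pair with signal xi
    num_a = xi + (1 - xi) * p[shared]
    num_b = 2 * (1 - xi) * (1 - p[private])
    den_a = xi + (1 - xi) * (2 - p[shared])
    den_b = xi + (1 - xi) * (2 - p[private])
    return np.prod(num_a / den_a) * np.prod(num_b / den_b)

def p_y_independent(shared, private, p):
    # Eq. (2): profile likelihood for an independent pair
    return (np.prod(p[shared] / (2 - p[shared]))
            * np.prod(2 * (1 - p[private]) / (2 - p[private])))

def em(cases, p, n_iter=100, grid=np.linspace(1e-4, 1 - 1e-4, 200)):
    pi, mu, sigma = 0.5, 0.0, 1.0                     # starting values
    for _ in range(n_iter):
        # density of xi implied by -log(1 - xi) ~ lognormal(mu, sigma)
        g = lognorm.pdf(-np.log(1 - grid), s=sigma,
                        scale=np.exp(mu)) / (1 - grid)
        g /= g.sum()
        post_c, post_xi = [], []
        for shared, private in cases:                 # E-step, per case
            lik_c = np.array([p_y_given_xi_clonal(shared, private, p, x)
                              for x in grid])
            lik_i = p_y_independent(shared, private, p)
            num_c = pi * np.sum(lik_c * g)
            post_c.append(num_c / (num_c + (1 - pi) * lik_i))
            post_xi.append(lik_c * g / np.sum(lik_c * g))
        pi = np.mean(post_c)                          # M-step for pi
        z = -np.log(1 - grid)                         # mu, sigma: weighted
        w = np.array(post_c)[:, None] * np.array(post_xi)  # MLE of log z
        mu = np.sum(w * np.log(z)) / np.sum(w)
        sigma = np.sqrt(np.sum(w * (np.log(z) - mu) ** 2) / np.sum(w))
    return pi, mu, sigma

# toy usage: four loci with known probabilities; two cases
p = np.array([0.14, 0.01, 0.02, 0.05])
cases = [(np.array([1]), np.array([0, 2])),            # one rare match
         (np.array([], dtype=int), np.array([0, 3]))]  # no matches
print(em(cases, p))
```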
Expectation-maximization
Teixeira MR, Ribeiro FR, Torres L, Pandis N, Andersen JA, Lothe RA, Heim S. Assessment of clonal relationships in ipsilateral and bilateral multiple breast carcinomas by comparative genomic hybridisation and hierarchical clustering analysis. Br J Cancer. 2004; 91(4):775–82. https://doi.org/10.1038/sj.bjc.6602021.
Schultheis AM, Ng CKY, De Filippo MR, Piscuoglio S, Macedo GS, Gatius S, Perez Mies B, Soslow RA, Lim RS, Viale A, Huberman KH, Palacios JC, Reis-Filho JS, Matias-Guiu X, Weigelt B. Massively Parallel Sequencing-Based Clonality Analysis of Synchronous Endometrioid Endometrial and Ovarian Carcinomas. J Natl Cancer Inst. 2016;108(6). https://doi.org/10.1093/jnci/djv427.
Perea J, García JL, Corchete L, Lumbreras E, Arriba M, Rueda D, Tapial S, Pérez J, Vieiro V, Rodríguez Y, Brandáriz L, García-Arranz M, García-Olmo D, Goel A, Urioste M, Sarmiento RG. Redefining synchronous colorectal cancers based on tumor clonality. Int J Cancer. 2019; 144(7):1596–608. https://doi.org/10.1002/ijc.31761.
Cereda M, Gambardella G, Benedetti L, Iannelli F, Patel D, Basso G, Guerra RF, Mourikis TP, Puccio I, Sinha S, Laghi L, Spencer J, Rodriguez-Justo M, Ciccarelli FD. Patients with genetically heterogeneous synchronous colorectal cancer carry rare damaging germline mutations in immune-related genes. Nat Commun. 2016; 7:12072. https://doi.org/10.1038/ncomms12072.
Bao L, Messer K, Schwab R, Harismendy O, Pu M, Crain B, Yost S, Frazer KA, Rana B, Hasteh F, Wallace A, Parker BA. Mutational Profiling Can Establish Clonal or Independent Origin in Synchronous Bilateral Breast and Other Tumors. PLoS ONE. 2015; 10(11):e0142487. https://doi.org/10.1371/journal.pone.0142487.
Mauguen A, Seshan VE, Ostrovnaya I, Begg CB. Estimating the probability of clonal relatedness of pairs of tumors in cancer patients. Biometrics. 2018; 74(1):321–330. https://doi.org/10.1111/biom.12710.
Ostrovnaya I, Seshan VE, Olshen AB, Begg CB. Clonality: an R package for testing clonal relatedness of two tumors from the same patient based on their genomic profiles. Bioinformatics. 2011; 27(12):1698–1699. https://doi.org/10.1093/bioinformatics/btr267.
Begg CB, Ostrovnaya I, Geyer FC, Papanastasiou AD, Ng CKY, Sakr RA, Bernstein JL, Burke KA, King TA, Piscuoglio S, Mauguen A, Orlow I, Weigelt B, Seshan VE, Morrow M, Reis-Filho JS. Contralateral breast cancers: Independent cancers or metastases? Int J Cancer. 2018; 142(2):347–356. https://doi.org/10.1002/ijc.31051.
Ellrott K, Bailey MH, Saksena G, et al. Scalable Open Science Approach for Mutation Calling of Tumor Exomes Using Multiple Genomic Pipelines. Cell Syst. 2018; 6(3):271–281.e7. https://doi.org/10.1016/j.cels.2018.03.002.
Ostrovnaya I. Clonality: Clonality testing. 2019. R package version 1.32.0.
The research was supported by the National Cancer Institute, awards CA124504, CA163251, and CA008748. The funding sources played no roles in the design of the study, and collection, analysis, and interpretation of data.
Audrey Mauguen and Venkatraman E. Seshan contributed equally to this work.
Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, 485 Lexington Ave, 2nd floor, New York, NY, 10017, USA
Audrey Mauguen, Venkatraman E. Seshan, Irina Ostrovnaya & Colin B. Begg
All authors designed the study and developed the model. AM and VES developed the software to estimate the model. IO integrated the code in the R package Clonality. AM and IO analyzed the clinical example data. AM conducted the simulation study. AM and CBB drafted the manuscript. All authors approved the final version.
Correspondence to Audrey Mauguen.
The motivating study involved genomic analyses of archived specimens and was conducted under a waiver of consent approved by the Memorial Sloan Kettering Cancer Center Institutional Review Board (WA0388-13).
Audrey Mauguen and Venkatraman E. Seshan are equal contributors.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Mauguen, A., Seshan, V.E., Ostrovnaya, I. et al. An EM algorithm to improve the estimation of the probability of clonal relatedness of pairs of tumors in cancer patients. BMC Bioinformatics 20, 555 (2019) doi:10.1186/s12859-019-3148-z
EM algorithm
Tumor mutation
Random effect model
Novel computational methods for the analysis of biological systems
| CommonCrawl
Periodic orbits and polynomials
There are two simple and classic enumerations that I'm still puzzled about. Let's start with a simple counting problem from a well-known dynamical system.
fact 1 Consider the "tent map" f:[0,1]→[0,1] with parameter 2, that is
f(x):=2min(x,1-x).
Clearly, it has 2 fixed points, and more generally, for any positive integer n, there are $2^n$ periodic points of period n (it's easy to count them as they are fixed points of the n-fold iteration of f, which is a piecewise linear function oscillating up and down between 0 and 1 the proper number of times). To count the number of periodic orbits of minimal period n, a plain and standard application of the Moebius inversion formula gives
Number of n-orbits of (I,f) = $\frac{1}{n}\sum_{d|n} \mu(d)2^{n/d}.$
(rmk: any function with similar behaviour would give the same result, e.g. f(x)=4x(1-x),...&c.)
Now let's leave for a moment dynamical systems and consider the following enumeration in the theory of finite fields.
fact 2 Clearly, there are $2^n$ polynomials of degree n in $\mathbb{F}_2[x]$. With a bit of field algebra it is not hard to compute the number I(n) of the irreducible ones. One can even make a completely combinatorial computation, just exploiting the unique factorization, expressed in the form:
$\frac{1}{1-2x}=\prod_{n=1}^\infty (1-x^n)^{-I(n)}.$
One finds:
Number of irreducible polynomials of degree n in $\mathbb{F} _ 2[x]$ = $\frac{1}{n}\sum_{d|n} \mu(d)2^{n/d}.$
Question: it's obvious by now: is there a natural and structured bijection between periodic orbits of f and irreducible polynomials in $\mathbb{F}_2[x]$? How is the structure of one context interpreted when transported into the other?
(rmk: of course, analogous identities hold for any p > 2)
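(For what it's worth, a quick brute-force check, a self-contained Python sketch, confirms numerically that the two counts coincide for small n:)

```python
# Both counts equal (1/n) * sum_{d|n} mu(d) * 2^(n/d): the number of
# n-orbits of the tent map (aperiodic binary necklaces, via the symbolic
# dynamics) and the number of irreducible degree-n polynomials over F_2.

def mobius(k):
    result, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0      # squared prime factor
            result = -result
        p += 1
    return -result if k > 1 else result

def count_formula(n):
    return sum(mobius(d) * 2 ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

def count_aperiodic_necklaces(n):
    # binary strings of length n whose minimal rotation period is n,
    # counted up to rotation
    def minimal_period(s):
        return next(p for p in range(1, n + 1)
                    if n % p == 0 and s == s[p:] + s[:p])
    return sum(1 for m in range(2 ** n)
               if minimal_period(format(m, f'0{n}b')) == n) // n

for n in range(1, 9):
    assert count_formula(n) == count_aperiodic_necklaces(n)
print([count_formula(n) for n in range(1, 9)])  # [2, 1, 2, 3, 6, 9, 18, 30]
```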
co.combinatorics ds.dynamical-systems polynomials finite-fields
Pietro Majer
Pietro MajerPietro Majer
How I see it is every fixed point comes from an equation $T^nx=x$. Denoting $f(x)=2x$ and $g(x)=2(1-x)$ we see that this corresponds to solving equations $$h_1\circ h_2\circ\cdots \circ h_n (x)=x$$ where $h_i\in \{f,g\}$. This is geometrically the intersection of two lines and thus gives a unique solution, and therefore a correspondence between periodic points of period $n$ and binary strings of length $n$. Now since concatenating a binary string $L$ with itself clearly gives the same $x$ with $x=L(x)=L\circ L(x)$ and you have a shift operator $L\circ h(x)=x \implies h\circ L(y)=y$ where $y=h(x)$ you get a bijection between periodic orbits and aperiodic cyclic sequences of zeros and ones, which are well known to be in bijection with the irreducible polynomials of $\mathbb F_2 [x]$.
Gjergji ZaimiGjergji Zaimi
In the paper, a review of which is cited below, the bijection between irreducible polynomials and periodic sequences is established. Periodic points of this map $f$ are more or less in obvious bijection :) with periodic sequences of 0,1: just use the symbolic version of $f$ (binary representation).
Golomb, Solomon W. Irreducible polynomials, synchronization codes, primitive necklaces, and the cyclotomic algebra. 1969 Combinatorial Mathematics and its Applications (Proc. Conf., Univ. North Carolina, Chapel Hill, N.C., 1967) pp. 358--370
Let $\alpha$ be a primitive root of $\text{GF}(q^n)$. The author observes that if $m=m_1q^{n-1}+m_2q^{n-2}+\cdots+m_n$ is the $q$-ary representation of the integer $m$, then the cyclic sequence (``necklace'') $m_1m_2\cdots m_n$ has no subperiod if and only if the minimal polynomial of $\alpha^m$ has degree $n$. Since a cyclic shift of the necklace corresponds to conjugation of $\alpha^m$, this exhibits an explicit one-to-one correspondence between the irreducible polynomials of degree $n$ over $\text{GF}(q)$ and the aperiodic necklaces with $n$ beads in $q$ colors. In Section 5, we learn that the integers $15^i (\text{mod}\,31)$, $i=0,1,\cdots,5$, when written in binary, form a maximal binary comma-free dictionary. In Sections 6 and 7, the author restricts himself to $q=2$, and presents an algorithm for obtaining the minimal polynomial of $\alpha^p$ from that of $\alpha$, if $p$ is a prime $>2$. This algorithm is very simple for $p=3$, requiring $O(n^3)$ operations for a polynomial of degree $n$, but the work involved grows exponentially with $p$.
Reviewed by R. J. McEliece
Fedor PetrovFedor Petrov
The irreducible polynomials of degree $n$ over $\mathbb{F}_2[x]$ can be identified with the periodic orbits of minimal period $n$ of the Frobenius map $x \mapsto x^2$ acting on $\overline{ \mathbb{F}_2 }$. I think one can't do better than this canonically.
Let me mention a third related enumeration that might help you out. For fixed $n$, let $\alpha$ be a primitive element for $\mathbb{F}_{2^n}$ as an extension of $\mathbb{F}_2$. Then orbits of the Frobenius map of minimal period $n$ can be identified with aperiodic words $\sum a_i \alpha^{p^i}$ of length $n$ on the alphabet $\{ 0, 1 \}$ up to cyclic permutation, i.e. Lyndon words. It might be easier to give an explicit bijection to Lyndon words; for one thing, Lyndon words on alphabets of any size make sense whereas irreducible polynomials only make sense over prime power fields.
Qiaochu YuanQiaochu Yuan
$\begingroup$ I think it's pretty easy to give the bijection from the dynamic system to the Lyndon words: write $x\in\left[0,1\right]$ in binary as $x=0.x_1x_2x_3...$ and notice that the sequence $\left(x_1-x_2,x_2-x_3,x_3-x_4,...\right)\in\mathbb F_2^{\mathbb N}$ gets shifted one member forward each time you apply $f$ to $x$. Now you have to care a bit for technical details (for a general $x$, we cannot uniquely restore $x$ from the sequence $\left(x_1-x_2,x_2-x_3,x_3-x_4,...\right)$, because there are two possible values summing up to $1$, but if we are looking for an $x$ which is a fixed point of $f^n$, $\endgroup$ – darij grinberg May 26 '10 at 22:02
$\begingroup$ then we can because it's impossible for $x$ and $1-x$ to be fixed points of $f^n$ simultaneously - except for the case $x=0$ or $x=1$, in which case the construction doesn't work anyway; these deviations should cancel each out in the end). $\endgroup$ – darij grinberg May 26 '10 at 22:02
$\begingroup$ Thanks darij. I figured something like that had to be true, but I didn't have time to work out the details. $\endgroup$ – Qiaochu Yuan May 26 '10 at 22:09
Not the answer you're looking for? Browse other questions tagged co.combinatorics ds.dynamical-systems polynomials finite-fields or ask your own question.
Number of irreducible polynomials of degree $r$ in $F_2[x]$
Is there a dynamical system such that every orbit is either periodic or dense?
Counting some polynomials that have a zero in $\mathbb{Z}_n[X]$
Symmetric group action on squarefree polynomials
Katok's conjecture on entropy and periodic orbits for generic $C^1$ diffeomorphisms
How "accidental" are equalities between parts of Ehrhart quasi-polynomials? When do they persist to Euler-Maclaurin?
Finite-space dynamical systems
What about of periodic points of $\sum_{n=1}^\infty\frac{\mu(n)}{n}x^n$, $0<x<1$, where $\mu(n)$ is the Möbius function? | CommonCrawl |
Improving upon the ideal gas law
The ideal gas law,
PV = nRT
(where P = pressure, V = volume, n = number of moles of gas, R = the molar gas constant, and T = Kelvin temperature), is an extremely useful relationship. For the most part, it is quite accurate and will suffice for most types of calculations.
The ideal gas law is, however, built on a few key assumptions about gases that may not hold for all gases.
The table below lists some of the key assumptions and how they can fail.
Below, we'll consider another gas equation, very much like the ideal gas law, that can account for some of these non-ideal behaviors.
Assumptions used to develop the ideal gas law, and the corresponding real gas behavior:

Assumption: Gas particles have no volume.
Real gas behavior: Gas particles are atoms of finite size; all have measurable volumes.

Assumption: Gas particles are hard spheres.
Real gas behavior: Gas particles are surrounded by electron "clouds," which can be deformed, polarized &c.

Assumption: Collisions between gas particles are elastic.
Real gas behavior: Gas particles have a finite attraction for one another, making collisions somewhat "sticky."

Assumption: There are no attractive or repulsive forces between gas particles or with the walls of the container.
Real gas behavior: Intermolecular forces, attractive or repulsive, can be quite strong between certain atoms & molecules.
The van der Waals equation
In 1873, Johannes Diderik van der Waals devised a new equation of state for gases based on some logical assumptions about how real gases behave.
Later efforts used the principles of statistical mechanics to derive the equation. That derivation is beyond the scope of this page, but we can still take a look at the equation and rationalize its parts. Here is the van der Waals equation:
$$\left( P + \frac{an^2}{V^2} \right) (V - nb) = nRT$$
Notice how similar it is to PV = nRT; only the pressure and volume terms have been modified a bit.
van der Waals included two adjustable parameters in his equation, a and b, to represent the finite size of gas particles (b) and a characteristic "stickiness" or attraction between particles (a).
Let's try to unpack the equation a bit. First notice that if both a and b are zero, we're right back to the ideal gas law.
In the first parentheses, the pressure is augmented by a term proportional to parameter a and the square of the number of moles of gas, and inversely proportional to the square of the volume. This reflects the fact that real gases, which have some amount of inter-particle attraction, are more compressible than ideal gases. In small volumes, the term added to the pressure can be relatively large.
In the second parentheses, the volume available to the gas (the volume of the container) is reduced by the finite volume of the gas particles themselves, where b is the excluded volume per mole of particles.
Here is a limited table of van der Waals a and b parameters for some common gases.

Gas     a (L²·atm·mol⁻²)    b (L·mol⁻¹)
N₂      1.39                0.0391
O₂      1.382               0.03186
CO₂     3.658               0.04286
H₂O     5.537               0.03049
He      0.0346              0.0238
CCl₄    20.01               0.1281
SF₆     7.857               0.08786
Xe      4.192               0.05156
Example – water
Here is an example of the difference between ideal gas behavior and more realistic behavior, as modified by the van der Waals equation. The graph shows pressure vs. volume for a mole of water molecules at 300 K.
Beyond a volume of about 1 liter, both the ideal gas law (black curve) and the van der Waals equation give approximately the same result: for a given volume, the pressure is roughly the same. But as the water is further compressed, notice that the pressure decreases dramatically. This is due to the intermolecular attraction of water molecules for one another. Ultimately, that behavior leads to complete condensation into the liquid state, in which few water molecules are able to contribute to pressure on the walls of a container.
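As a quick numerical check, here is a minimal sketch (using the H₂O parameters from the table above) that compares the two pressures at a few volumes; the van der Waals pressure follows from solving the equation above for P:

```python
# Ideal-gas vs. van der Waals pressure for 1 mol of water vapor at 300 K.
R = 0.082057            # gas constant, L·atm·mol⁻¹·K⁻¹
a, b = 5.537, 0.03049   # van der Waals parameters for H2O (table above)
n, T = 1.0, 300.0       # moles, kelvin

def p_ideal(V):
    return n * R * T / V

def p_vdw(V):
    # Solve the van der Waals equation for P:
    # P = nRT / (V - nb) - a n^2 / V^2
    return n * R * T / (V - n * b) - a * n ** 2 / V ** 2

for V in (10.0, 2.0, 1.0, 0.5, 0.2):   # volume in liters
    print(f"V = {V:5.2f} L  ideal = {p_ideal(V):7.2f} atm  vdW = {p_vdw(V):7.2f} atm")
```

At 10 L the two results agree to within a few percent, while at 0.2 L the attraction term an²/V² overwhelms the ideal pressure, reproducing the dramatic pressure drop seen in the graph.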
You can see from the table of van der Waals constants that we'd expect even larger effects for carbon tetrachloride (CCl₄) and sulfur hexafluoride (SF₆).
[Figure: pressure vs. volume for 1 mole of H₂O at 300 K]
The graph shows that for dilute gases, the ideal gas law isn't bad, even for water, which is quite sticky to itself.
A parameter is an adjustable constant in the definition of a function, distinct from the independent variable(s). Parameters are not independent variables. For example, in the quadratic function

$$f(x) = Ax^2 + Bx + C$$

A, B and C are parameters which change the shape of the graph of the function; x is the independent variable. A, B and C are fixed for any particular version of f(x), but x can range from −∞ to +∞.
Mathematical Biosciences and Engineering, 2014, 11(5): 1115-1137. doi: 10.3934/mbe.2014.11.1115.
Primary: 58F15; Secondary: 53C35.
Transmission dynamics and control for a brucellosis model in Hinggan League of Inner Mongolia, China
Mingtao Li, Guiquan Sun, Juan Zhang, Zhen Jin, Xiangdong Sun, Youming Wang, Baoxu Huang, Yaohui Zheng
1. Department of Mathematics, North University of China, Taiyuan, Shanxi 030051
2. Complex Systems Research Center, Shanxi University, Taiyuan, Shanxi 030051
3. Department of Mathematics, North University of China, Taiyuan 030051, P.R. China
4. China Animal Health and Epidemiology Center, Qingdao, Shandong 266032
5. Hinggan League Animal Sanitation Supervision Stations, Ulanhot, Inner Mongolia, 137400
Abstract
Brucellosis is one of the major infectious and contagious bacterial diseases in Hinggan League of Inner Mongolia, China. The number of newly infected human brucellosis cases in this area has increased dramatically in the last 10 years. In this study, in order to explore effective control and prevention measures, we propose a deterministic model to investigate the transmission dynamics of brucellosis in Hinggan League. The model describes the spread of brucellosis among sheep and from sheep to humans. The model simulations agree with the newly infected human brucellosis data from 2001 to 2011, and the trend of newly infected human brucellosis cases is given. We estimate that the control reproduction number $\mathcal{R}_{c}$ is about $1.9789$ for the brucellosis transmission in Hinggan League, and we compare the predicted number of newly infected human brucellosis cases with and without mixed cross infection between basic ewes and other sheep. Our study demonstrates that a combination of prohibiting mixed feeding between basic ewes and other sheep, vaccination, detection and elimination is a useful strategy for controlling human brucellosis in Hinggan League.
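(To illustrate the general shape of such a transmission model, here is a deliberately simplified toy system. It is NOT the authors' model, which additionally tracks basic ewes, other sheep, vaccination and culling; all parameter values below are invented.)

```python
# Toy sheep-to-human brucellosis dynamics; illustrative only.
from scipy.integrate import solve_ivp

beta_ss = 0.9    # hypothetical sheep-to-sheep transmission rate (1/year)
beta_sh = 0.02   # hypothetical sheep-to-human transmission rate (1/year)
gamma   = 0.5    # hypothetical removal rate of infectious sheep (1/year)

def rhs(t, y):
    S, I, C = y                   # susceptible sheep, infectious sheep,
                                  # cumulative human cases (all as fractions)
    new_inf = beta_ss * S * I     # mass-action incidence among sheep
    return [-new_inf, new_inf - gamma * I, beta_sh * I]

sol = solve_ivp(rhs, (0.0, 10.0), [0.99, 0.01, 0.0], max_step=0.1)
# In this toy model the basic reproduction number is simply beta_ss / gamma.
print("toy R0 =", beta_ss / gamma)
print("cumulative human cases after 10 years:", sol.y[2, -1])
```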
Keywords: brucellosis; vaccination and detection; control strategy; basic reproduction number
Citation: Mingtao Li, Guiquan Sun, Juan Zhang, Zhen Jin, Xiangdong Sun, Youming Wang, Baoxu Huang, Yaohui Zheng. Transmission dynamics and control for a brucellosis model in Hinggan League of Inner Mongolia, China. Mathematical Biosciences and Engineering, 2014, 11(5): 1115-1137. doi: 10.3934/mbe.2014.11.1115
Copyright Info: 2014, Mingtao Li, et al., licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).
Opt. Express, Vol. 27, Issue 19, pp. 27136–27150 (2019) • https://doi.org/10.1364/OE.27.027136
Tuning color and saving energy with spatially variable laser illumination
Jingjing Zhang,1,2 Kevin A. G. Smet,3 and Youri Meuret3,*
1School of Automation, China University of Geosciences, Wuhan 430074, China
2Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
3ESAT/Light & Lighting Laboratory, KU Leuven, Ghent 9000, Belgium
*Corresponding author: youri.meuret@kuleuven.be
Jingjing Zhang, Kevin A. G. Smet, and Youri Meuret, "Tuning color and saving energy with spatially variable laser illumination," Opt. Express 27, 27136-27150 (2019)
Original Manuscript: July 25, 2019
Revised Manuscript: August 19, 2019
Manuscript Accepted: August 27, 2019
Abstract

Previous studies have shown that the radiant flux that needs to be emitted by an illumination system can be significantly reduced by optimizing its spectral power distribution for the object reflectance spectra, without inducing perceptible chroma or hue shifts of the illuminated objects. In this paper, the idea of varying the spectral power distribution at different positions in the illuminated scene is explored, in order to tailor the color appearance of objects. For this, a spatially variable, laser diode based illumination system is considered with three primaries and a large color gamut. The color rendering performance of the illumination system is quantified via the IES TM-30-2018 method. It is shown that it is possible to reach the maximal color gamut score that is theoretically allowed by the corresponding color fidelity score. This is a unique property of an illumination system with a spatially variable spectral power distribution. The radiant flux requirements of this laser diode based illumination system are theoretically investigated for various color rendering settings, showing reduced power requirements for higher color gamut. The possibility to tune color rendering is also demonstrated experimentally with a set-up that consists of a commercially available laser projector and a hyperspectral camera. By including a feedback optimization algorithm, it is possible to reach the targeted color rendering performance.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction

For many lighting applications, color rendering is one of the most important properties of a light source [1]. Different metrics exist to evaluate color rendering [2–4]. Most of these metrics quantify the color differences that result when a set of test color samples is illuminated by the light source, compared to illumination with a reference light source [5–8].
The idea of optimizing the spectral power distribution (SPD) of the illumination source for specific object reflectance spectra was theoretically explored in several papers [9–11]. When considering a single object reflectance spectrum, it is possible to minimize the required radiant flux of the illumination source while assuring the same or very similar color and brightness appearance as under a reference light source. This implies that the optimized SPD induces the same color rendering as the reference light source. With this approach, energy saving ratios of 40% and higher were reported for different object reflectance spectra. Higher energy saving ratios are possible when allowing noticeable color shifts but this typically corresponds with reduced color rendering performance. These theoretical results were also validated with a practical set-up in which specific colored objects were illuminated with optimized SPD's that were generated by combining nine narrowband LEDs [12].
Of course, a typical illuminated scene contains many different objects, all with different reflectance spectra [13]. So, to make practical use of variable SPD's, a smart lighting system is needed which monitors all object reflectance spectra in the scene and spatially adapts the SPD of the illumination to the object reflectance spectrum at each specific position. Some conceptual ideas for lighting systems capable of generating different illumination spectra in space are described in [14] and the practical aspect of visual clarity for such point-by-point light projection systems has recently been addressed [15].
Indeed, digital video projectors could in fact be seen as lighting systems capable of generating spatially variable illumination spectra. Even more, commercial projection systems are already being used to combine spot lighting, ambient illumination and still or moving images [16]. Unfortunately, most current video projectors use spatial light modulators (e.g. Digital Micromirror Device (DMD), Liquid Crystal Display (LCD) or Liquid Crystal on Silicon (LCOS)) that selectively block the light flowing from the light source through the projection system in order to create RGB images [17]. It is obvious that such a system can never be an energy-efficient lighting luminaire. However, by combining direct laser diodes with advanced spatial light modulators, it could become possible to create very efficient point-by-point light projection systems in the not too distant future.
An energy-efficient method to generate high-resolution light patterns is, e.g., to employ a phase-modulating spatial light modulator. Such a phase-only element redirects light when forming the light pattern, rather than absorbing or blocking light. While research on projection systems based on this technology is ongoing [18–20], these types of systems are not yet able to provide full-color, high-quality images. Another efficient projector approach that is already available are scanned laser pico-projectors [21]. Also in this case, light is redirected rather than absorbed, which allows high efficiency. The light output of these systems, however, is currently quite limited. What both types of projection systems have in common is the fact that single-color laser diodes (typically red (R), green (G) and blue (B)) are the used light source technology and that the system redirects parts of their flux towards specific positions.
Due to the fact that laser diodes emit light with a very narrow spectrum, a wide color gamut can be obtained. This means that a spatially variable lighting system based on multiple laser diodes allows a profound color tuning of the illuminated scene. Indeed, if the object reflectance spectrum at a specific position is known, then, by illuminating that position with a certain ratio of narrow-band red, green and blue laser light, almost any color can be obtained for the light that is reflected from the object surface at that position. This implies that the color appearance of the complete illuminated scene can be fully tailored. For lighting applications, this is an aspect which has never been considered before.
In this paper, the color tuning possibilities and intrinsic energy saving potential of such laser diode based illumination systems are investigated for different lighting scenarios, both theoretically and experimentally. It is shown that exceptional color rendering performance can be realized with a spatially variable laser illumination, beyond what is possible with any other static illumination system. When analyzing the energy requirements of the system, only the optical power requirements are considered, which is similar to the approach followed in other studies on SPD optimization for lighting.
2. Spatially variable laser illumination
2.1 Lighting setup
The lighting setup that is considered in this paper is conceptually shown in Fig. 1. A colorful scene is illuminated by a laser diode based lighting system. This system is capable of illuminating different positions in the scene with a variable spectrum S(x,y)(λ). A camera system monitors the object reflectance spectra R(x,y)(λ) at every position in the illuminated scene, using the incident light of a calibrated broadband light source. Two methods can be envisaged to do this in practice. The most straightforward option is to use a hyperspectral camera [22,23], and this approach is used for the experiments. An alternative and cheaper method could be to use state-of-the-art spectral reflectance estimation algorithms that use a common RGB camera [24–26]. The position of the camera should be close to the position of the observer(s) of the illuminated scene, such that the monitored object reflectance spectra correspond with the reflectances of the incident laser diode light towards the observer(s).
Fig. 1. Conceptual illustration of the considered smart lighting setup.
2.2 Spectral power distribution
The SPD of the laser diode based illumination system onto a certain spatial area with a specific object reflectance is given by
$$S_{(x,y)}(\lambda) = p_\mathrm{b}\,S_\mathrm{B}(\lambda) + p_\mathrm{g}\,S_\mathrm{G}(\lambda) + p_\mathrm{r}\,S_\mathrm{R}(\lambda). \tag{1}$$
In this equation, SB(λ), SG(λ) and SR(λ) correspond with the SPD of the maximal amount of blue, green and red light that can be emitted by the system towards the considered spatial area. The values pb, pg and pr are thus the fractions of blue, green and red light. For the sake of simplicity, it is assumed that the radiant fluxes corresponding with SB(λ), SG(λ) and SR(λ) are the same, and equal to 1 W. This implies that the total radiant flux is simply given by
$$\Phi_\mathrm{e} = p_\mathrm{b} + p_\mathrm{g} + p_\mathrm{r} \quad [\text{in watts}]. \tag{2}$$
In the theoretical analysis that is presented in section 3, the radiant flux requirements of this laser diode based illumination model are calculated for various object reflectances, in order to obtain the same color and brightness appearance as when illuminated by a reference light source. The CIE Standard Illuminant D50, with a total radiant flux of 1 W over the wavelength range from 380 nm to 780 nm, is always used as a reference. This means that the energy saving potential can simply be evaluated by comparing the result of Eq. (2) with the reference value of 1 W.
For all calculations, with one exception, the peak wavelengths of the laser diodes are chosen as 467 nm, 532 nm, and 630 nm. These peak wavelengths are derived from the Rec. 2020 standard [27]. By choosing these three primaries, the color gamut of the considered illumination covers 63.3% of all chromaticities and 99.9% of Pointer's gamut [28] in the CIE 1931 chromaticity diagram (see Fig. 2). The SPD's of the blue, green, and red laser diode light (SB(λ), SG(λ) and SR(λ)) are modelled by Ohno's model (at a 1 nm interval) [29] with spectral widths equal to 3 nm. This spectral model and width, however, have only a marginal impact on the results, as long as the spectral width of the chosen primaries is sufficiently small. This is a condition that typically holds for direct laser diodes.
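As a minimal numerical sketch of this three-primary model (a plain Gaussian line shape is used here instead of Ohno's model, which is an acceptable simplification for such narrow peaks; the code is illustrative, not the authors' implementation):

```python
# Three-primary SPD model of Eqs. (1)-(2) on a 1 nm wavelength grid.
import numpy as np

wl = np.arange(380, 781)                       # wavelengths, 1 nm steps
peaks = {'B': 467.0, 'G': 532.0, 'R': 630.0}   # Rec. 2020-derived peaks (nm)
fwhm = 3.0                                     # spectral width (nm)

def primary(center):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    s = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return s / s.sum()        # normalize to unit radiant flux (1 W) on 1 nm bins

S = {name: primary(c) for name, c in peaks.items()}

def spd(pb, pg, pr):
    """S_(x,y)(lambda) of Eq. (1); the total flux is pb + pg + pr, Eq. (2)."""
    return pb * S['B'] + pg * S['G'] + pr * S['R']

print("total radiant flux:", spd(0.2, 0.3, 0.25).sum())   # 0.75 W
```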
Fig. 2. (a) An example of the SPD of the laser diode illumination. (b) The SPD of the reference illuminant D50. (c) The color gamut of the laser based illumination covers almost entirely Pointer's gamut.
2.3 Tuning color
With a spatially variable laser illumination with a wide color gamut, almost any color can be obtained for the light reflected by objects. This means that if the object reflectance is known, that object can be illuminated by the needed fractions of blue, green and red laser diode light, in order to render exactly the same color as when the object would be illuminated by the reference illuminant. In other words, the CRI Ra of a spatially variable laser lighting system can be made equal to 100.
The recent IES TM-30-2018 method [30] constitutes a two-measure system for evaluating light sources' color rendition. It is quantified by a color fidelity score and a color gamut score. The color fidelity score Rf is an improved version of the CIE color rendering index Ra. To quantify the color differences of illumination by the test light source compared to a reference illuminant, the wide range of object reflectances that occur in practice is sampled by a set of 99 test color samples (TCS's). This is a significant improvement compared to CRI Ra, which considers only 8 TCS's. These 99 TCS's were selected on the basis of color space uniformity and spectral uniformity [31] and were derived from various types of objects [32].
The color gamut score Rg, on the other hand, quantifies the average change in chroma (in CAM02-UCS a'b' space [33]) of the test light source compared to a reference illuminant, in which saturating color shifts correspond to a color gamut score increase [32,34]. The underlying idea of this metric is the fact that various studies have reported that light sources that increase chroma are described as more pleasant by most observers [5,34–38]. Considering both color fidelity and color gamut results in a two-axis system for evaluating color rendering with IES TM-30-2018 (see Fig. 3(a)). Depending on the illumination task at hand, higher fidelity Rf or higher gamut Rg is desired.
Fig. 3. (a) Tradeoff between fidelity and gamut; light sources can only reside in the non-shaded area. With a spatially variable laser illumination an optimal trade-off between fidelity and gamut can be achieved: i.e. a combined fidelity/gamut score along the red dotted line. (b) The normalized (total radiant flux = 1W) spectral power distribution of the considered phosphor-converted LED (c) Color gamut shape of this LED and the reference illuminant (D50).
However, for all light sources there is a fundamental limiting relationship between fidelity and gamut. Perfect fidelity (Rf = 100) can only be obtained when colors exactly match those under the reference illuminant, thus yielding no variation in chroma (→ Rg = 100). There is also a maximum amount of chroma that can be gained (or lost) for a given color shift, and this is the case if all color shifts are in the positive (or negative) radial direction. Therefore, there is a theoretical maximum and minimum gamut Rg that can be achieved for a given fidelity Rf, as can also be seen in Fig. 3(a). The Rf and Rg values of light sources with an SPD that is optimal in terms of both color fidelity and color gamut should therefore be close to this theoretical trade-off between both quantities with Rg maximal [39,40], indicated by the red dotted line in Fig. 3(a).
In practice, it is possible to optimize the SPD to enhance color gamut. However, ensuring color shifts in the positive radial direction for all object reflectances is not possible with a light source with a static SPD. This is illustrated in Fig. 3(c) for a specific case. There one can see the induced color shifts of a typical phosphor-converted LED compared to reference illuminant D50, for the 16 hue bins that are used in IES TM-30-2018. Together with undesired changes in hue, one notices both saturating and de-saturating color shifts. The light source SPD can certainly be optimized to have better color rendering performance, but a static SPD that induces only positive chroma shifts and no changes in hue for all object reflectances cannot be attained.
However, this fundamental limitation disappears when considering a variable laser illumination with a wide color gamut, because any color can be obtained for the light reflected by different objects. This means that purely saturating color shifts with respect to the reference illuminant can be generated for each object of which the object reflectance is known. This further implies that for each scene with known object reflectances, the illumination can be tuned such that a chosen loss of color fidelity is maximally translated into an increase of color gamut. This is a unique property of a spatially variable laser illumination, which can never be realized with light sources with a fixed SPD.
In the following section, the radiant flux requirements in order to achieve a certain color rendering performance are investigated for a spatially variable laser illumination system. For evaluating color rendering, the IES TM-30-2018 metric is used. In this metric color fidelity and color gamut are calculated from color differences with respect to a reference illuminant. This reference illuminant normally depends on the correlated color temperature (CCT) of the test light source. In this case, the variable laser illumination can generate different SPD's, and thus it has no inherent CCT. In this paper, the reference illuminant is chosen to be the CIE Standard Illuminant D50, which is the reference illuminant for sources with CCT = 5000 K [30].
3. Theoretical analysis
3.1 Calculation methods
The IES TM-30-2018 metric for color rendition relies on a set of 99 TCS's, for which the CAM02-UCS color coordinates (J′, a′, b′) [33] are calculated for the light reflected by these samples when illuminated by the test light source and by the reference illuminant. The color-appearance difference of the ith TCS is then calculated as
$$\Delta E_i = \sqrt{\Delta J_i^{\prime\,2} + \Delta a_i^{\prime\,2} + \Delta b_i^{\prime\,2}}\,, \tag{3}$$
where $\Delta J'_i$, $\Delta a'_i$ and $\Delta b'_i$ refer to the differences in the CAM02-UCS color coordinates of the ith TCS when illuminated by the test light source and by the reference illuminant. The arithmetic mean of these color-appearance differences over all TCS's gives the average color difference ΔE, from which the color fidelity can be calculated as follows
$$R_\mathrm{f} = 10 \ln\!\left(e^{(100 - c_\mathrm{f}\,\Delta E)/10} + 1\right), \quad \text{with } c_\mathrm{f} = 6.73. \tag{4}$$
For the color gamut score, the (J′, a′, b′) color coordinates of the 99 TCS's are grouped into 16 hue bins of equal width. In each bin the average values of a' and b' are computed, resulting in two 16-point polygons in the (a′, b′) plane for the test source and reference illuminant, respectively. The color gamut score is then equal to:
$$R_\mathrm{g} = 100 \times A_\mathrm{test}/A_\mathrm{ref}, \tag{5}$$
with $A_\mathrm{test}$ and $A_\mathrm{ref}$ the areas of the test and reference polygons in the (a′, b′) plane.
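A minimal sketch of this gamut computation (assuming the per-bin mean (a′, b′) values are already available; the shoelace formula gives the polygon areas):

```python
# Gamut score of Eq. (5) from the two 16-point hue-bin polygons.
import numpy as np

def polygon_area(a, b):
    """Shoelace formula for the polygon with vertices (a[i], b[i])."""
    return 0.5 * abs(np.dot(a, np.roll(b, -1)) - np.dot(b, np.roll(a, -1)))

def gamut_score(ab_test, ab_ref):
    """ab_*: arrays of shape (16, 2) holding the mean (a', b') per hue bin."""
    A_test = polygon_area(ab_test[:, 0], ab_test[:, 1])
    A_ref = polygon_area(ab_ref[:, 0], ab_ref[:, 1])
    return 100.0 * A_test / A_ref
```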
When analyzing the performance of the variable laser illumination configuration, each TCS is illuminated by an SPD that is described by Eq. (1), with (pb, pg, pr) as the only variables. These three values should be calculated in such a way that the resulting SPD generates the desired color coordinates for that sample, in order to comply with the desired Rf and Rg values.
Consider e.g. the case in which both Rf and Rg should be equal to 100, meaning that the resulting color coordinates of each TCS when illuminated by the laser diode SPD should be equal to those when the TCS is illuminated by the reference illuminant. First the XYZ tristimulus values (CIE 1964 10° Standard Observer) are calculated of the reflected light under the reference illumination. Since the (J′, a′, b′) color coordinates are directly related to these tristimulus values, it suffices to realize the same tristimulus values with the laser diode SPD. The XYZ values that are obtained with the laser illumination SPD, can be calculated from the XcYcZc values (with c = b, g, and r) that are obtained by illuminating the TCS with the SPD's SB(λ), SG(λ) and SR(λ) of the individual laser diode peaks (see Eq. (1)). This relation can be expressed as
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_\mathrm{b} & X_\mathrm{g} & X_\mathrm{r} \\ Y_\mathrm{b} & Y_\mathrm{g} & Y_\mathrm{r} \\ Z_\mathrm{b} & Z_\mathrm{g} & Z_\mathrm{r} \end{bmatrix} \cdot \begin{bmatrix} p_\mathrm{b} \\ p_\mathrm{g} \\ p_\mathrm{r} \end{bmatrix}, \tag{6}$$
in which pb, pg and pr correspond again with the fractions of the illumination by the different laser diodes. Therefore, these (necessary) fractions can be easily calculated for each TCS by
$$\begin{bmatrix} p_\mathrm{b} \\ p_\mathrm{g} \\ p_\mathrm{r} \end{bmatrix} = \begin{bmatrix} X_\mathrm{b} & X_\mathrm{g} & X_\mathrm{r} \\ Y_\mathrm{b} & Y_\mathrm{g} & Y_\mathrm{r} \\ Z_\mathrm{b} & Z_\mathrm{g} & Z_\mathrm{r} \end{bmatrix}^{-1} \cdot \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}. \tag{7}$$
As mentioned before, the sum of pb, pg and pr is equal to the total radiant flux onto the considered TCS, and this SPD will give the same brightness and color appearance as illuminating the TCS with the reference illuminant D50 with a total radiant flux of 1 W.
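As a minimal sketch (illustrative, with numpy standing in for whatever solver is used in practice), Eqs. (6) and (7) amount to one 3×3 linear solve per sample:

```python
# Per-sample primary fractions (pb, pg, pr) from Eqs. (6)-(7).
import numpy as np

def primary_fractions(XYZ_target, XYZ_b, XYZ_g, XYZ_r):
    """XYZ_c: tristimulus values of the sample under 1 W of primary c alone;
    XYZ_target: tristimulus values of the sample under the reference illuminant."""
    M = np.column_stack([XYZ_b, XYZ_g, XYZ_r])   # the 3x3 matrix of Eq. (6)
    p = np.linalg.solve(M, XYZ_target)           # Eq. (7)
    if np.any(p < 0):
        # negative flux means the target color lies outside the laser gamut
        raise ValueError("target not reachable with nonnegative primary fluxes")
    return p   # (pb, pg, pr); p.sum() is the required radiant flux, Eq. (2)
```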
Next, we consider the case in which a loss of color fidelity is maximally translated into an increase of color gamut. The XYZ values for each TCS under reference illumination can be transformed to the corresponding (J′, a′, b′) coordinates in the CAM02-UCS color-appearance space [30]. The J′ value refers to the lightness. The angle that the projected (J′, a′, b′) point makes in the (a′, b′) plane with respect to the positive a′-axis indicates the hue, and the distance to the origin indicates the chroma. When calculating the SPD in order to enhance the chroma of the ith TCS, the lightness and hue should remain constant in order to ensure minimal reduction of the color fidelity. This implies that the color-appearance difference corresponds with a radial displacement in the (a′, b′) plane. This makes it possible to calculate the target ($J'_{i,T}$, $a'_{i,T}$, $b'_{i,T}$) values that should be obtained for each TCS when illuminated by the laser diode SPD.
When using Eq. (4), the chosen color fidelity value Rf can be converted to a color-appearance difference ΔE with the following equation:
$$\Delta E = \frac{100 - 10\,\ln\!\left(e^{R_\mathrm{f}/10} - 1\right)}{c_\mathrm{f}}. \tag{8}$$
Because the lightness and hue are fixed, the target values for the CAM02-UCS coordinates can be calculated from ΔE as
$$\begin{cases} J'_{i,T} = J'_i \\[4pt] a'_{i,T} = a'_i + \dfrac{\operatorname{sign}(a'_i)\,\Delta E_i}{\sqrt{1 + {b'_i}^{2}/{a'_i}^{2}}} \\[4pt] b'_{i,T} = k \cdot a'_{i,T}, \qquad k = \dfrac{b'_i}{a'_i} \end{cases} \tag{9}$$
Then, these target color appearance coordinates for the ith TCS are transformed back to ($X_{i,T}$, $Y_{i,T}$, $Z_{i,T}$) tristimulus values. The needed fractions pb, pg and pr can again be calculated with Eq. (7).
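A minimal sketch of this target computation (Eqs. (8) and (9)): radially scaling (a′, b′) outward by ΔE is equivalent to Eq. (9), since it leaves the hue angle and J′ untouched.

```python
# Chroma-boosting CAM02-UCS target for one TCS, per Eqs. (8)-(9).
import numpy as np

CF = 6.73  # c_f of Eq. (4)

def delta_E_from_Rf(Rf):
    """Eq. (8): the color-difference budget for a chosen fidelity score."""
    return (100.0 - 10.0 * np.log(np.exp(Rf / 10.0) - 1.0)) / CF

def chroma_boost_target(Jp, ap, bp, Rf):
    """Push (a', b') radially outward by delta E; J' and hue stay fixed."""
    dE = delta_E_from_Rf(Rf)
    C = np.hypot(ap, bp)              # current chroma
    if C == 0.0:
        return Jp, ap, bp             # neutral sample: nothing to saturate
    scale = (C + dE) / C
    return Jp, ap * scale, bp * scale

print(chroma_boost_target(50.0, 10.0, -5.0, 60.0))
```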
3.2 Results

First, the relevant benchmark configuration of a static laser diode illumination system with three narrow spectral peaks is considered. Such a light source can be realized by mixing the light of three different laser diodes, and its SPD can also be described by Eq. (1). In this case, however, the peak wavelengths are not chosen to span a large color gamut, but are optimized using the Nelder–Mead method in combination with the tools provided in [41], in order to give maximal color fidelity for a CCT of 5000 K. The resulting peak wavelengths are 459 nm, 531 nm, and 602 nm. The resulting color fidelity score amounts to 74.5, while the gamut score is 103.2. When considering these values in Fig. 3(a), it can be seen that this static SPD does not reach the maximal color gamut score for the obtained color fidelity score.
With these peak wavelengths, the required radiant flux such that the light reflected by a perfect white sample (R(λ) = 1) gives the same tristimulus values as when the sample would be illuminated by the CIE Standard Illuminant D50 (at 1 W) is equal to 0.555 W. When comparing the radiant flux requirements of the spatially variable laser diode illumination, both this value and the reference value of 1 W (D50) are relevant.
In the case of the spatially variable laser illumination with peak wavelengths chosen to allow a maximal color gamut (see section 2.2), it is now possible to vary the amount of blue, green and red laser diode light across different positions, depending on the object reflectance spectrum at each position. The wide range of object reflectances that occur in practice is sampled by the 99 TCS's of the IES TM-30-2018 metric. For each TCS, the SPD can be calculated such that the resulting color/brightness appearance corresponds with a specific color fidelity score and color gamut score, as explained in the previous section. It is e.g. possible to calculate the SPD's such that the resulting (J′, a′, b′) coordinates of the light reflected by the 99 TCS's are exactly the same as under the reference illuminant. In that case, the variable laser illumination has a color fidelity score and a color gamut score of 100.
The total radiant flux of each SPD for a specific TCS is shown in Fig. 4(a), as well as the average radiant flux for all 99 TCS's. This average flux value is clearly lower than the reference value of 1 W, which implies that perfect color fidelity can be achieved with a significant reduction of the required radiant flux. When compared to the static laser illumination benchmark, more radiant flux is needed, but one has to keep in mind that the static laser illumination, although optimized for maximal color fidelity, reaches a fidelity score of only 74.5.
Fig. 4. (a) The required radiant flux for all 99 TCS's to obtain a color fidelity Rf = 100. (b) The variation of the average radiant flux (average for 99 TCS's) and the color gamut index as a function of Rf. (c) The a'b' coordinate shifts for all TCS's when the Rf goes from 100 to 60.
Similar calculations were then performed for the case in which the SPD's for each TCS are adapted in order to obtain an Rf going from 100 to 60 in steps of 10, with the corresponding theoretically maximal Rg. The a'b' coordinate shifts for the 99 TCS's, when Rf changes from 100 to 60, are shown in Fig. 4(c). It can be seen that the chroma (colorfulness) of the 99 TCS's increases, while their hue remains constant. The variation of the average radiant flux and the color gamut index, as a function of Rf, is shown in Fig. 4(b). It is found that the average radiant flux (averaged over the 99 TCS's) decreases from 0.73 W to 0.68 W and that the color gamut index increases from 100 to 158 when the color fidelity index decreases from 100 to 60. This implies that an increase of the color gamut index with the variable laser illumination also results in a reduction of the necessary optical power to yield similar lightness values (even without taking into account that more saturated colors are typically perceived as brighter due to the Helmholtz-Kohlrausch effect [42]).
The reason why the average radiant flux decreases with decreasing Rf values can be easily explained by considering the example of a reddish sample, meaning that it has a low reflectance for blue and green light and a high reflectance for red light. The (red) color saturation of that sample can be enhanced (Rg goes up) by increasing the amount of red light in the illumination and reducing the amount of blue and green light. Because of the higher reflectance for red light as compared to green and blue, a variation of the red illumination has a bigger impact on the lightness value than a variation of blue and green. So, in order to reach the same lightness value but a more saturated red color, the red light should be increased by a smaller amount than the blue and green light should be reduced, resulting in a reduction of the total required radiant flux.
4.1 Experimental results
Experiments were conducted in order to prove the feasibility of tuning the color rendering performance of a spatially variable laser illumination system.
The used test setup is shown in Fig. 5 and consists of a portable laser projector illuminating a Macbeth ColorChecker. The light reflected by a certain patch of the ColorChecker is captured by a spectrometer that is equipped with a direct view telescope. The projected image onto the ColorChecker is calibrated such that each pixel of the projected image is connected to a specific patch of the ColorChecker or the black border in between different patches. The reflectance spectrum of each patch is found by illuminating the ColorChecker with a calibrated light source and measuring the absolute SPD of the reflected light. The SPD's of the laser projector primaries onto each patch were characterized as a function of the RGB values of the input image. This allows calculating the necessary RGB values for each pixel in the projected image, such that the color coordinates of the light reflected by each patch correspond with a specific color rendering performance.
Fig. 5. (a) Test set-up in order to demonstrate the feasibility of tuning the color rendering performance of a spatially variable laser illumination system: 1. Macbeth ColorChecker, 2. Laser projector, 3. Laptop, 4. Spectrometer, 5. Spectral telescope. The Macbeth ColorChecker under (b) static laser diode illumination, (c) tuned laser diode illumination for Rf = 100, and (d) tuned laser diode illumination for Rf = 60.
First, the color tuning capacities of the system were tested for individual color patches. A color rendering performance with Rf = Rg = 100 was targeted, and the RGB values were thus calculated to assure that the SPD of the light reflected by a specific patch gave color coordinates that are "equal" to the color coordinates when the patch is illuminated by the reference illuminant. The term "equal" means in this case that the CIE 1976 chromaticity differences (Δu'v') and luminance differences (ΔY) between the measured and targeted chromaticity and luminance values are smaller than the just noticeable color difference (i.e. Δu'v' < 0.003 [43,44]) and the just noticeable luminance difference (i.e. ΔY/YT < 1% [45] – for YT under 1.9∼51), respectively. The CIE 1976 u'v' chromaticity coordinates are chosen for this analysis because the associated chromaticity diagram is very uniform for color and luminance differences.
It was noticed, however, that the targeted chromaticity and luminance values were not reached when using the calculated RGB values for the projected image. This is directly related to the non-linear relationship between the RGB values and the SPD's of the projector primaries, which changes over time and as a function of temperature. As such, it is very difficult to take this behavior fully into account. A solution to this problem is offered by including a simple feedback optimization algorithm in the system that optimizes the initial RGB values according to the measured spectrum of the reflected light. In this way it is possible to achieve the targeted color rendering performance. In Fig. 6(a) one can see the result for the green patch of the ColorChecker (row 3, column 2). The initial Δu'v' value (= 0.0129) and ΔY/YT value (= 0.088%) correspond with a noticeable appearance difference. One can see how the Δu'v' and ΔY/YT (%) values improve after a couple of optimization iterations and stabilize below the just noticeable color and luminance differences. The time in between two successive iterations is mainly determined by the time it takes to capture the SPD of the reflected light with the spectrometer. Similar results were obtained for each of the other color patches and for other color rendering targets (Rf ≠ Rg ≠ 100).
Fig. 6. The Δu'v' and ΔY/YT values (target Rf = 100) as a function of time for (a) the green color patch, when measured with the spectrometer and (b) the orange-yellow color patch, when measured with the hyperspectral camera.
The experimental setup was then adapted to prove the color tuning efficiency for multiple color patches simultaneously. For this, the spectrometer with direct view telescope was replaced by a hyperspectral camera (GagaField-V10, Sichuan Dualix Spectral Image Technology Co. Ltd, China), which can simultaneously measure the SPD of the light reflected by the 24 patches. Also in this case it was necessary to include the feedback optimization algorithm in order to decrease the initial differences between the measured and targeted chromaticity and luminance values, for all 24 patches at the same time. The variation of the chromaticity and luminance differences over multiple optimization iterations is shown in Fig. 6(b), for the orange-yellow patch of the ColorChecker (row 2, column 6). It is apparent that the feedback optimization algorithm needs significantly more iterations in this case in order to reach chromaticity and luminance differences below the just noticeable thresholds. This is mainly due to the less accurate and less stable measurements of the reflected SPDs by the hyperspectral camera. This loss in accuracy, however, is compensated by the smaller integration time that is needed to capture these SPDs.
Photographs of the Macbeth ColorChecker are shown in Fig. 5 for three different cases. In Fig. 5(b) one can see the ColorChecker under homogeneous illumination by the laser projector. Similar to the static benchmark case in the theoretical analysis, the SPD of the emitted light is calculated such that the light reflected by a perfect white sample gives the same chromaticity and luminance values as when the sample would be illuminated by the reference illuminant. This results in a very poor color fidelity score of 25 and a color gamut score of 130; the color appearance is clearly very unnatural. In Fig. 5(c) the color appearance of the scene is shown for the case when the laser projector is optimized to reach perfect color fidelity (Rf = Rg = 100). This situation is reached after sufficient feedback optimization cycles. In Fig. 5(d) one can see the color appearance of the scene when the laser projector is optimized to reach a color fidelity score of 60 and an increased color gamut score. The colors of the different patches are saturated and vivid. Depending on the application, the color fidelity could be judged as being acceptable or not.
Finally, the radiant flux requirements of the laser projector were investigated for the 18 chromatic patches of the Macbeth ColorChecker for the case with Rf = 100. Again, these radiant flux requirements can be compared with a radiant flux of 1 W for the reference illuminant D50. The results are shown in Fig. 7(b). It can be seen that the radiant flux requirements with the real projector are on average not below the level of the reference illuminant. The reason lies in the significant difference between the SPDs of the real laser projector primaries and the theoretical laser diode primaries that are considered in Eq. (1). The measured SPDs of the laser projector (for R, G, B values equal to 125) are shown in Fig. 7(a). It can be seen that apparently two different green and two different red laser diodes are used in the laser projector, with slightly different peak wavelengths. Some light leakage from the green channel (RGB = [0,125,0]) to the red channel (RGB = [125,0,0]) is also visible. Furthermore, there is a significant difference between the peak wavelengths in this laser projector and the laser diode peak wavelengths that were considered in the theoretical calculations. The used laser projector is therefore not optimal, neither for reaching minimal radiant flux requirements nor for maximal color tuning potential. Indeed, the chromaticity coordinates of the laser projector primaries are not located on the boundaries of the CIE 1931 chromaticity diagram (see Fig. 7(c)). This results in a narrower color gamut than in the theoretical case (see Fig. 2(c)).
Fig. 7. (a) The measured spectral power distribution of the laser projector primaries (for R, G, B values equal to 125 separately). (b) Required radiant flux for the 18 chromatic Macbeth color patches, with the real laser projector, and with the considered spatially variable laser diode illumination system with Rf = 100. (c) The color gamut of the laser projector.
4.2 Experimental methods
The portable laser projector that is used for the experimental results is a PicoBit (Celluon Inc., USA), which offers a resolution of 1280 pixels by 720 pixels. It is used to generate the spatially variable SPD by varying the RGB values of the pixels in the projected image. The incident light of the projector that is reflected by a certain patch of the Macbeth ColorChecker is collected by a direct view telescope (Bentham Instruments Inc., UK), and then measured with a spectrometer QE65 Pro (Ocean Optics Inc., USA).
An important aspect of the experimental set-up was to derive the non-linear relationship between the RGB values of the projected image and the SPD's SB(λ), SG(λ) and SR(λ) of the three primaries of the laser projector onto each patch. This was done by measuring the light from the projector that is reflected by each patch with the wavelength and radiometrically calibrated spectrometer, taking the measured reflectance spectra of the different patches into account. The knowledge of SB(λ), SG(λ) and SR(λ) onto each patch for maximal RGB values makes it possible to calculate the required laser diode ratios pb, pg and pr for each patch, such that the desired color rendering performance is reached. These ratios correspond with certain RGB values, and the knowledge of SB(λ), SG(λ) and SR(λ) for those specific RGB values can be used to update pb, pg and pr for each patch. A couple of these iterations help to compensate for the non-linear dependence between the RGB values and the projector primaries' SPD's, but the resulting RGB values were still not sufficiently accurate. Therefore, a feedback optimization algorithm was included.
A continuous conditional loop is used that relies on the measurement of the reflected light with the spectrometer or hyperspectral camera. From the measured SPD, the luminance and chromaticity values Y and (u′, v′) are calculated and compared with the target values. If the difference between the measured and target luminance values is not below the just noticeable luminance difference (i.e. ΔY/YT < 1% [45] – for YT under 1.9∼51), the R, G and B values are all increased or decreased by 1 simultaneously. Additionally, the R values are adapted if the difference between the measured and target u' values is above the just noticeable color difference (i.e. Δu'v' < 0.003 [43,44]), and the G values are adapted if the difference between the measured and target v' values is above the just noticeable color difference.
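A minimal sketch of one iteration of this loop (measure_Yuv() and the target tuple are hypothetical placeholders for the camera readout and the desired values; the exact assignment of R to the u' axis and G to the v' axis is an assumption, not taken from the paper):

```python
# One feedback iteration: nudge (R, G, B) by +/-1 towards the target Y, u', v'.
JND_UV, JND_Y = 0.003, 0.01   # just noticeable differences used in the text

def clamp(v):
    return max(0, min(255, v))

def feedback_step(rgb, target_Yuv, measure_Yuv):
    Y, u, v = measure_Yuv()            # placeholder: spectrometer/camera readout
    Yt, ut, vt = target_Yuv
    r, g, b = rgb
    if abs(Y - Yt) / Yt > JND_Y:       # luminance: step all channels together
        step = 1 if Y < Yt else -1
        r, g, b = r + step, g + step, b + step
    if abs(u - ut) > JND_UV:           # chromaticity: one channel per axis
        r += 1 if u < ut else -1       # assumed R <-> u' mapping
    if abs(v - vt) > JND_UV:
        g += 1 if v < vt else -1       # assumed G <-> v' mapping
    return clamp(r), clamp(g), clamp(b)
```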
5. Conclusion and discussion
This paper investigates the color tuning potential and related energy requirements of a spatially variable laser illumination system, in view of the new IES TM-30-2018 method for color rendering. It is explained how such a system makes it possible to reach the maximal color gamut score that is theoretically allowed by the corresponding chosen color fidelity score. Calculations of the optical energy requirements reveal that the needed radiant flux with such a system is below that of the chosen reference illuminant, while it realizes exactly the same color appearance. Energy requirements are further reduced when higher color gamut scores are realized. An experimental setup in which a hyperspectral camera is combined with a commercial laser projector demonstrates that accurate color tuning is also feasible in practice, at least for simple illuminated scenes and when real-time feedback is included.
However, a few relevant remarks are certainly in order. A first important remark is related to the energy saving potential of the system. It is clear that the energy consumption of a lighting system is not only determined by the emitted radiant flux but also by the radiant efficiency. As mentioned in the introduction, the theoretical results focus only on the optical power requirements of the illumination. However, since this study explicitly assumes the use of direct laser diodes as light sources, it should be pointed out that the radiant efficiency of direct laser diodes is currently well below 50% for emission in the visible domain. Similar to LEDs, the radiant efficiency of green-emitting laser diodes is even below that of blue or red-emitting devices [46], reaching efficiencies of "only" 15% [47]. But direct laser diode technology is maturing fast and radiant efficiencies are improving continuously. Apart from the radiant efficiency, also the system efficiency of the point-by-point light projection system should be taken into account. Spatial light modulators that redirect light, such as phase-only elements or scanning mirrors, are from an efficiency point of view superior to devices that block light. These technologies are currently not yet ready to be used as lighting systems, but that could change in the near future.
This means that from a practical point of view, saving energy is clearly not the main reason for considering a spatially variable laser-based illumination system, at this moment in time. Its current potential lies in allowing a color rendering performance that is not achievable with other illumination systems. Already at this stage, one could imagine using a laser projector for shop or museum lighting in order to tailor the color appearance of a certain scene in a very flexible and dynamic manner. Apart from reducing energy consumption or tuning color, optimizing the local SPD with a spatially variable illumination system can help to minimize the absorption of light by sensitive objects such as artwork [48] or enhancing contrast in the illuminated scene.
The fact that some time is needed to evolve the system towards the desired color performance is not a real hindrance for such applications. The cost of a hyperspectral camera, however, might be an issue, and for that reason it could be very interesting to investigate the possibility of spectral estimation algorithms with a common RGB camera. In order to have real-time color tuning of dynamic scenes, further research is certainly needed. There is room for improvement on both the hardware side (faster camera capture) and the software side (faster feedback optimization algorithms). But in order to realize a real-time system, one should also provide an answer to the question of how to combine the object reflectance spectra measurements (which need a calibrated broadband illumination) with the adaptive laser illumination.
A final topic that warrants further research is the impact of the camera position relative to the observer position. In our study it is assumed that the monitored object reflectance spectra correspond with the reflectances of the laser diode light towards the observer. This assumption is certainly not always valid for many materials (e.g. glossy materials) if there is a relatively large angle between the observation direction and the camera monitoring direction. This effect can further complicate the already difficult alignment of the pixels of the camera system with the "pixels" of the point-by-point illumination system.
National Natural Science Foundation of China (61604135); China Scholarship Council (201706415021); Fundamental Research Funds for the Central Universities (CUGL180404).
The work was carried out at the ESAT/Light & Lighting Laboratory, KU Leuven, Ghent, Belgium. We thank Prof. Peter Hanselaer for authorizing the use of equipment at the lab, and Dr. Jan Audenaert and Shining Ma for help with the experiments.
1. J. H. Oh, S. J. Yang, and Y. R. Do, "Healthy, natural, efficient and tunable lighting: four-package white LEDs for optimizing the circadian effect, color quality and vision performance," Light: Sci. Appl. 3(2), e141 (2014). [CrossRef]
2. W. Davis and Y. Ohno, "Color quality scale," Opt. Eng. 49(3), 033602 (2010). [CrossRef]
3. K. A. G. Smet, W. R. Ryckaert, M. R. Pointer, G. Deconinck, and P. Hanselaer, "A memory colour quality metric for white light sources," Energy. Buildings. 49, 216–225 (2012). [CrossRef]
4. K. Houser, M. Mossman, K. Smet, and L. Whitehead, "Tutorial: Color rendering and its applications in lighting," Leukos 12(1–2), 7–26 (2016). [CrossRef]
5. K. A. G. Smet and P. Hanselaer, "Memory and preferred colours and the colour rendition of white light sources," Lighting Res. Technol. 48(4), 393–411 (2016). [CrossRef]
6. K. A. G. Smet, J. Schanda, L. Whitehead, and R. M. Luo, "CRI2012: A proposal for updating the CIE colour rendering index," Lighting Res. Technol. 45(6), 689–709 (2013). [CrossRef]
7. K. A. G. Smet, L. Whitehead, J. Schanda, and R. M. Luo, "Toward a replacement of the CIE color rendering index for white light sources," Leukos 12(1–2), 61–69 (2016). [CrossRef]
8. A. David, "Color fidelity of light sources evaluated over large sets of reflectance samples," Leukos 10(2), 59–75 (2014). [CrossRef]
9. D. Durmus and W. Davis, "Optimising light source spectrum for object reflectance," Opt. Express 23(11), A456–A464 (2015). [CrossRef]
10. J. Zhang, R. Hu, B. Xie, X. Yu, X. Luo, Z. Yu, L. Zhang, H. Wang, and X. Jin, "Energy-saving light source spectrum optimization by considering object's reflectance," IEEE Photonics J. 9(2), 1–11 (2017). [CrossRef]
11. D. Durmus and W. Davis, "Appearance of achromatic colors under optimized light source spectrum," IEEE Photonics J. 10(6), 1–11 (2018). [CrossRef]
12. D. Durmus and W. Davis, "Object color naturalness and attractiveness with spectrally optimized illumination," Opt. Express 25(11), 12839–12850 (2017). [CrossRef]
13. F. David and A. Kinjiro, "Hyperspectral imaging in color vision research: tutorial," J. Opt. Soc. Am. A 36(4), 606–627 (2019). [CrossRef]
14. J. Y. Tsao, M. H. Crawford, M. E. Coltrin, A. J. Fischer, D. D. Koleske, G. S. Subramania, G. T. Wang, J. J. Wierer, and R. F. Karlicek, "Toward smart and ultra-efficient solid-state lighting," Adv. Opt. Mater. 2(9), 809–836 (2014). [CrossRef]
15. D. Durmus and W. Davis, "Blur perception and visual clarity in light projection systems," Opt. Express 27(4), A216–A223 (2019). [CrossRef]
16. Panasonic Corp. "What's a space player?" https://panasonic.net/cns/projector/products/spaceplayer.
17. M. S. Brennesholtz and E. H. Stupp, Projection displays (Wiley Publishing, 2008).
18. G. Damberg and W. Heidrich, "Efficient freeform lens optimization for computational caustic displays," Opt. Express 23(8), 10224–10232 (2015). [CrossRef]
19. W. F. Hsu and M. H. Weng, "Compact holographic projection display using liquid-crystal-on-Silicon spatial light modulator," Materials 9(9), 768–776 (2016). [CrossRef]
20. M. Bawart, S. Bernet, and M. Ritsch-Marte, "Programmable freeform optical elements," Opt. Express 25(5), 4898–4906 (2017). [CrossRef]
21. M. Freeman, M. Champion, and S. Madhavan, "Scanned laser pico-projectors: Seeing the big picture (with a small device)," Opt. Photonics News 20(5), 28–34 (2009). [CrossRef]
22. J. Shen, S. Chang, H. Wang, and Z. Zheng, "Optimal illumination for visual enhancement based on color entropy evaluation," Opt. Express 24(17), 19788–19800 (2016). [CrossRef]
23. C. Chi, H. Yoo, and M. Ben-Ezra, "Multi-spectral imaging by optimized wide band illumination," Int. J. Comput. Vis. 86(2–3), 140–151 (2010). [CrossRef]
24. S. Han, I. Sato, T. Okabe, and Y. Sato, "Fast spectral reflectance recovery using DLP projector," Int. J. Comput. Vis. 110(2), 172–184 (2014). [CrossRef]
25. S. W. Oh, M. S. Brown, M. Pollefeys, and S. J. Kim, "Do it yourself hyperspectral imaging with everyday digital cameras," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 2461–2469.
26. Z. Liu, Q. Liu, G. A. Gao, and C. Li, "Optimized spectral reconstruction based on adaptive training set selection," Opt. Express 25(11), 12435–12445 (2017). [CrossRef]
27. ITU-R Recommendation BT.2020, "Parameter values for ultra-high definition television systems for production and international programme exchange," (2012).
28. M. R. Pointer, "The gamut of real surface colours," Color Res. Appl. 5(3), 145–155 (1980). [CrossRef]
29. Y. Ohno, "Spectral design considerations for white LED color rendering," Opt. Eng. 44(11), 111302 (2005). [CrossRef]
30. Illuminating engineering society of north America, "TM-30-18: IES method for evaluating light source color rendition," (2018).
31. K. A. G. Smet, A. David, and L. Whitehead, "Why color space uniformity and sample set spectral uniformity are essential for color rendering measures," Leukos 12(1–2), 39–50 (2016). [CrossRef]
32. A. David, P. T. Fini, K. W. Houser, Y. Ohno, M. P. Royer, K. A. G. Smet, W. Minchen, and L. Whitehead, "Development of the IES method for evaluating the color rendition of light sources," Opt. Express 23(12), 15888–11590 (2015). [CrossRef]
33. M. R. Luo, G. Cui, and C. Li, "Uniform colour spaces based on CIECAM02 colour appearance model," Color Res. Appl. 31(4), 320–330 (2006). [CrossRef]
34. T. Esposito and K. Houser, "Models of colour quality over a wide range of spectral power distributions," Lighting Res. Technol. 51(3), 331–352 (2019). [CrossRef]
35. X. Feng, W. Xu, Q. Han, and S. Zhang, "Colour-enhanced light emitting diode light with high gamut area for retail lighting," Lighting Res. Technol. 49(3), 329–342 (2017). [CrossRef]
36. K. A. G. Smet, W. R. Ryckaert, M. R. Pointer, G. Deconinck, and P. Hanselaer, "Memory colours and colour quality evaluation of conventional and solid-state lamps," Opt. Express 18(25), 26229–26244 (2010). [CrossRef]
37. M. Wei, K. W. Houser, A. David, and M. R. Krames, "Perceptual responses to LED illumination with colour rendering indices of 85 and 97," Lighting Res. Technol. 47(7), 810–827 (2015). [CrossRef]
38. Y. Ohno, M. Fein, and C. Miller, "Vision experiment on chroma saturation for colour quality preference," Light Eng. 23(4), 6–14 (2015).
39. M. P. Royer, A. Wilkerson, and M. Wei, "Human perceptions of colour rendition at different chromaticities," Lighting Res. Technol. 50(7), 965–994 (2018). [CrossRef]
40. M. Royer, A. Wilkerson, M. Wei, K. Houser, and R. Davis, "Human perceptions of colour rendition vary with average fidelity, average gamut, and gamut shape," Lighting Res. Technol. 49(8), 966–991 (2017). [CrossRef]
41. K. A. G. Smet, "Tutorial: The LuxPy Python toolbox for lighting and color science," Leukos. (2019). [CrossRef]
42. S. Hermans, K. A. G. Smet, and P. Hanselaer, "Color appearance model for self-luminous stimuli," J. Opt. Soc. Am. A 35(12), 2000–2009 (2018). [CrossRef]
43. M. Luo, G. Cui, and M. Georgoula, "Colour difference evaluation for white light sources," Lighting Res. Technol. 47(3), 360–369 (2015). [CrossRef]
44. Commission internationale de l'éclairage, "CIE Technical Note 001:2014 Chromaticity difference specification for light sources," (2014).
45. G. Wyszecki and W. S. Stiles, Color science: concepts and methods, quantitative data and formulas, 2nd edition (John Wiley & Sons. Press, 2000).
46. D. Sizov, R. Bhat, and C. E. Zah, "Gallium indium Nitride-based green lasers," J. Lightwave Technol. 30(5), 679–699 (2012). [CrossRef]
47. Nichia Corp, "Laser diode," https://www.nichia.co.jp/en/product/laser.html (2019).
48. D. Durmus, D. Abdalla, A. Duis, and W. Davis, "Spectral optimization to minimize light absorbed by artwork," Leukos. (2018).
Article Order
J. H. Oh, S. J. Yang, and Y. R. Do, "Healthy, natural, efficient and tunable lighting: four-package white LEDs for optimizing the circadian effect, color quality and vision performance," Light: Sci. Appl. 3(2), e141 (2014).
[Crossref]
W. Davis and Y. Ohno, "Color quality scale," Opt. Eng. 49(3), 033602 (2010).
K. A. G. Smet, W. R. Ryckaert, M. R. Pointer, G. Deconinck, and P. Hanselaer, "A memory colour quality metric for white light sources," Energy. Buildings. 49, 216–225 (2012).
K. Houser, M. Mossman, K. Smet, and L. Whitehead, "Tutorial: Color rendering and its applications in lighting," Leukos 12(1–2), 7–26 (2016).
K. A. G. Smet and P. Hanselaer, "Memory and preferred colours and the colour rendition of white light sources," Lighting Res. Technol. 48(4), 393–411 (2016).
K. A. G. Smet, J. Schanda, L. Whitehead, and R. M. Luo, "CRI2012: A proposal for updating the CIE colour rendering index," Lighting Res. Technol. 45(6), 689–709 (2013).
K. A. G. Smet, L. Whitehead, J. Schanda, and R. M. Luo, "Toward a replacement of the CIE color rendering index for white light sources," Leukos 12(1–2), 61–69 (2016).
A. David, "Color fidelity of light sources evaluated over large sets of reflectance samples," Leukos 10(2), 59–75 (2014).
D. Durmus and W. Davis, "Optimising light source spectrum for object reflectance," Opt. Express 23(11), A456–A464 (2015).
J. Zhang, R. Hu, B. Xie, X. Yu, X. Luo, Z. Yu, L. Zhang, H. Wang, and X. Jin, "Energy-saving light source spectrum optimization by considering object's reflectance," IEEE Photonics J. 9(2), 1–11 (2017).
D. Durmus and W. Davis, "Appearance of achromatic colors under optimized light source spectrum," IEEE Photonics J. 10(6), 1–11 (2018).
D. Durmus and W. Davis, "Object color naturalness and attractiveness with spectrally optimized illumination," Opt. Express 25(11), 12839–12850 (2017).
F. David and A. Kinjiro, "Hyperspectral imaging in color vision research: tutorial," J. Opt. Soc. Am. A 36(4), 606–627 (2019).
J. Y. Tsao, M. H. Crawford, M. E. Coltrin, A. J. Fischer, D. D. Koleske, G. S. Subramania, G. T. Wang, J. J. Wierer, and R. F. Karlicek, "Toward smart and ultra-efficient solid-state lighting," Adv. Opt. Mater. 2(9), 809–836 (2014).
D. Durmus and W. Davis, "Blur perception and visual clarity in light projection systems," Opt. Express 27(4), A216–A223 (2019).
Panasonic Corp. "What's a space player?" https://panasonic.net/cns/projector/products/spaceplayer .
M. S. Brennesholtz and E. H. Stupp, Projection displays (Wiley Publishing, 2008).
G. Damberg and W. Heidrich, "Efficient freeform lens optimization for computational caustic displays," Opt. Express 23(8), 10224–10232 (2015).
W. F. Hsu and M. H. Weng, "Compact holographic projection display using liquid-crystal-on-Silicon spatial light modulator," Materials 9(9), 768–776 (2016).
M. Bawart, S. Bernet, and M. Ritsch-Marte, "Programmable freeform optical elements," Opt. Express 25(5), 4898–4906 (2017).
M. Freeman, M. Champion, and S. Madhavan, "Scanned laser pico-projectors: Seeing the bg picture (with a small device)," Opt. Photonics News 20(5), 28–34 (2009).
J. Shen, S. Chang, H. Wang, and Z. Zheng, "Optimal illumination for visual enhancement based on color entropy evaluation," Opt. Express 24(17), 19788–19800 (2016).
C. Chi, H. Yoo, and M. Ben-Ezra, "Multi-spectral imaging by optimized wide band illumination," Int. J. Comput. Vis. 86(2–3), 140–151 (2010).
S. Han, I. Sato, T. Okabe, and Y. Sato, "Fast spectral reflectance recovery using DLP projector," Int. J. Comput. Vis. 110(2), 172–184 (2014).
S. W. Oh, M. S. Brown, M. Pollefeys, and S. J. Kim, "Do it yourself hyperspectral imaging with everyday digital cameras," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 2461–2469.
Z. Liu, Q. Liu, G. A. Gao, and C. Li, "Optimized spectral reconstruction based on adaptive training set selection," Opt. Express 25(11), 12435–12445 (2017).
BT2020, I. T. U. R. "Parameter values for ultra-high definition television systems for production and international programme exchange," (2012)
M. R. Pointer, "The gamut of real surface colours," Color Res. Appl. 5(3), 145–155 (1980).
Y. Ohno, "Spectral design considerations for white LED color rendering," Opt. Eng. 44(11), 111302 (2005).
Illuminating engineering society of north America, "TM-30-18: IES method for evaluating light source color rendition," (2018).
K. A. G. Smet, A. David, and L. Whitehead, "Why color space uniformity and sample set spectral uniformity are essential for color rendering measures," Leukos 12(1–2), 39–50 (2016).
A. David, P. T. Fini, K. W. Houser, Y. Ohno, M. P. Royer, K. A. G. Smet, W. Minchen, and L. Whitehead, "Development of the IES method for evaluating the color rendition of light sources," Opt. Express 23(12), 15888–11590 (2015).
M. R. Luo, G. Cui, and C. Li, "Uniform colour spaces based on CIECAM02 colour appearance model," Color Res. Appl. 31(4), 320–330 (2006).
T. Esposito and K. Houser, "Models of colour quality over a wide range of spectral power distributions," Lighting Res. Technol. 51(3), 331–352 (2019).
X. Feng, W. Xu, Q. Han, and S. Zhang, "Colour-enhanced light emitting diode light with high gamut area for retail lighting," Lighting Res. Technol. 49(3), 329–342 (2017).
K. A. G. Smet, W. R. Ryckaert, M. R. Pointer, G. Deconinck, and P. Hanselaer, "Memory colours and colour quality evaluation of conventional and solid-state lamps," Opt. Express 18(25), 26229–26244 (2010).
M. Wei, K. W. Houser, A. David, and M. R. Krames, "Perceptual responses to LED illumination with colour rendering indices of 85 and 97," Lighting Res. Technol. 47(7), 810–827 (2015).
Y. Ohno, M. Fein, and C. Miller, "Vision experiment on chroma saturation for colour quality preference," Light Eng. 23(4), 6–14 (2015).
M. P. Royer, A. Wilkerson, and M. Wei, "Human perceptions of colour rendition at different chromaticities," Lighting Res. Technol. 50(7), 965–994 (2018).
M. Royer, A. Wilkerson, M. Wei, K. Houser, and R. Davis, "Human perceptions of colour rendition vary with average fidelity, average gamut, and gamut shape," Lighting Res. Technol. 49(8), 966–991 (2017).
K. A. G. Smet, "Tutorial: The LuxPy Python toolbox for lighting and color science," Leukos. (2019).
S. Hermans, K. A. G. Smet, and P. Hanselaer, "Color appearance model for self-luminous stimuli," J. Opt. Soc. Am. A 35(12), 2000–2009 (2018).
M. Luo, G. Cui, and M. Georgoula, "Colour difference evaluation for white light sources," Lighting Res. Technol. 47(3), 360–369 (2015).
Commission internationale de l'éclairage, "CIE Technical Note 001:2014 Chromaticity difference specification for light sources," (2014).
G. Wyszecki and W. S. Stiles, Color science: concepts and methods, quantitative data and formulas, 2nd edition (John Wiley & Sons. Press, 2000).
D. Sizov, R. Bhat, and C. E. Zah, "Gallium indium Nitride-based green lasers," J. Lightwave Technol. 30(5), 679–699 (2012).
Nichia Corp, "Laser diode," https://www.nichia.co.jp/en/product/laser.html (2019).
D. Durmus, D. Abdalla, A. Duis, and W. Davis, "Spectral optimization to minimize light absorbed by artwork," Leukos. (2018).
Abdalla, D.
Bawart, M.
Ben-Ezra, M.
Bernet, S.
Bhat, R.
Brennesholtz, M. S.
Brown, M. S.
Champion, M.
Chang, S.
Chi, C.
Coltrin, M. E.
Corp, Nichia
Crawford, M. H.
Cui, G.
Damberg, G.
David, A.
David, F.
Davis, R.
Davis, W.
Deconinck, G.
Do, Y. R.
Duis, A.
Durmus, D.
Esposito, T.
Fein, M.
Feng, X.
Fini, P. T.
Fischer, A. J.
Freeman, M.
Gao, G. A.
Georgoula, M.
Han, S.
Hanselaer, P.
Heidrich, W.
Hermans, S.
Houser, K.
Houser, K. W.
Hsu, W. F.
Hu, R.
Jin, X.
Karlicek, R. F.
Kim, S. J.
Kinjiro, A.
Koleske, D. D.
Krames, M. R.
Li, C.
Liu, Q.
Liu, Z.
Luo, M.
Luo, M. R.
Luo, R. M.
Luo, X.
Madhavan, S.
Miller, C.
Minchen, W.
Mossman, M.
Oh, J. H.
Oh, S. W.
Ohno, Y.
Okabe, T.
Pointer, M. R.
Pollefeys, M.
Ritsch-Marte, M.
Royer, M.
Royer, M. P.
Ryckaert, W. R.
Sato, I.
Sato, Y.
Schanda, J.
Shen, J.
Sizov, D.
Smet, K.
Smet, K. A. G.
Stiles, W. S.
Stupp, E. H.
Subramania, G. S.
Tsao, J. Y.
Wang, G. T.
Wang, H.
Wei, M.
Weng, M. H.
Whitehead, L.
Wierer, J. J.
Wilkerson, A.
Wyszecki, G.
Xie, B.
Xu, W.
Yang, S. J.
Yoo, H.
Yu, X.
Yu, Z.
Zah, C. E.
Zhang, J.
Zhang, L.
Zhang, S.
Zheng, Z.
Adv. Opt. Mater. (1)
Color Res. Appl. (2)
Energy. Buildings. (1)
IEEE Photonics J. (2)
Int. J. Comput. Vis. (2)
J. Lightwave Technol. (1)
J. Opt. Soc. Am. A (2)
Leukos (4)
Light Eng. (1)
Light: Sci. Appl. (1)
Lighting Res. Technol. (8)
Opt. Eng. (2)
Opt. Express (9)
Opt. Photonics News (1)
Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here.
Alert me when this article is cited.
Click here to see a list of articles that cite this paper
View in Article | Download Full Size | PPT Slide | PDF
Equations on this page are rendered with MathJax. Learn more.
(1) $$S_{(x,y)}(\lambda) = p_b S_B(\lambda) + p_g S_G(\lambda) + p_r S_R(\lambda).$$
(2) $$\Phi_e = p_b + p_g + p_r \quad [\text{in Watt}].$$
(3) $$\Delta E_i = \sqrt{\Delta J_i'^2 + \Delta a_i'^2 + \Delta b_i'^2},$$
(4) $$R_f = 10 \cdot \ln\left(e^{(100 - c_f \cdot \Delta E)/10} + 1\right) \quad \text{with } c_f = 6.73.$$
(5) $$R_g = 100 \times A_{\text{test}} / A_{\text{ref}},$$
(6) $$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_b & X_g & X_r \\ Y_b & Y_g & Y_r \\ Z_b & Z_g & Z_r \end{bmatrix} \cdot \begin{bmatrix} p_b \\ p_g \\ p_r \end{bmatrix},$$
(7) $$\begin{bmatrix} p_b \\ p_g \\ p_r \end{bmatrix} = \begin{bmatrix} X_b & X_g & X_r \\ Y_b & Y_g & Y_r \\ Z_b & Z_g & Z_r \end{bmatrix}^{-1} \cdot \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}.$$
(8) $$\Delta E = \frac{100 - 10 \cdot \ln\left(e^{R_f/10} - 1\right)}{c_f},$$
(9) $$\begin{cases} J'_{i,\mathrm{new}} = J'_i \\ a'_{i,\mathrm{new}} = a' + \mathrm{sign}(a') \cdot \dfrac{\Delta E_i}{\sqrt{1 + b'^2/a'^2}} \\ b'_{i,\mathrm{new}} = k \cdot a'_{i,\mathrm{new}}, \quad k = \dfrac{b'_i}{a'_i}. \end{cases}$$
Homotopy Type Theory
Semi-simplicial types
One interesting open problem (considered by [[Vladimir Voevodsky]] and others): define _semi-simplicial_ types in Homotopy Type Theory. (Here is [[Vladimir Voevodsky]]'s [code](http://uf-ias-2012.wikispaces.com/file/view/semisimplicial.v/386291744/semisimplicial.v) for a proposed definition.) Classically, a _semi-simplicial object_ in a category is like a simplicial object, but without degeneracy maps; i.e. a contravariant functor from the category $\Delta{}i$ of finite nonempty ordinals and just injections between them. Can we define these internally to the type theory?

**Update 3/20**: here is a [note by Hugo Herbelin](http://uf-ias-2012.wikispaces.com/file/view/semi-simplicial.pdf/416038766/semi-simplicial.pdf) on a proposed construction.

**Update 4/12**: [A note on semisimplicial sets](https://uf-ias-2012.wikispaces.com/file/view/semisimplicialsets.pdf/421930564/semisimplicialsets.pdf) by Benno van den Berg.

**Update 4/15**: [A note on a presheaf model for simplicial sets](http://uf-ias-2012.wikispaces.com/file/view/countermodel.pdf/423334002/countermodel.pdf) by Marc Bezem and Thierry Coquand.

**Update 6/24**: Accompanying files to Update 4/15:

* [CL.pl](https://uf-ias-2012.wikispaces.com/file/view/CL.pl/439408128/CL.pl), the prover for coherent logic (requires [SWI-Prolog](http://www.swi-prolog.org/))
* [X.in](http://uf-ias-2012.wikispaces.com/file/view/X.in/440060710/X.in), a formalization of the model in coherent logic
* [X.model](http://uf-ias-2012.wikispaces.com/file/view/X.model/440062146/X.model), the edges, fill1 and fill2, day by day, generated by the prover from input X.in
* [X.v](http://uf-ias-2012.wikispaces.com/file/view/X.v/439412350/X.v), a Coq script verifying the proof that no homotopy equivalence between the fibers can exist in the model

## Finite-dimensional parts

For small values of $n$, it is straightforward to define $n$-semi-simplicial types (a sketch of the 2-dimensional case as a dependent record is given at the end of this page).

* A 0-semi-simplicial type is just a type $X_0\,:\,Type$.
* A 1-semi-simplicial type: $X_0\,:\,Type$, and $X_1\,:\,X_0 \rightarrow X_0 \rightarrow Type$.
* A 2-semi-simplicial type: $X_0\,:\,Type$; $X_1\,:\,X_0 \rightarrow X_0 \rightarrow Type$; $X_2\,:\,\forall\:(x\, y\, z\,:\,X_0)\:(f\,:\,X_1\, x\, y)\:(g\,:\,X_1\, y\, z)\:(h\,:\,X_1\, x\, z),\ Type$.
* A 3-semi-simplicial type: $(X_0, X_1, X_2)$ as before, and $X_3\,:\,\forall\ (\text{tetrahedral configurations from } X_0 \ldots X_2),\ Type$.

And so on. Each of these can be tupled up as a single type.

* Can we define a function $\text{Semi-simplicial}\,:\,nat \rightarrow Type$, such that for $n = 0, 1, 2, 3$, $\text{Semi-simplicial}\ n$ is (equivalent to) the objects explicitly defined here?
* Can we define a type of semi-simplicial types (i.e. infinity-semi-simplicial types, with $n$-simplices for all $n$)?

Note: this is only one possible approach! Other approaches to the problem are also possible, and may be better.

## More precise success criteria

Obviously, the above specifications admit trivial solutions. Can we give a precise formulation of the goal? Idea: something like "in the simplicial model, or more generally in other homotopy-theoretic models, they should be equivalent to coherent (possibly: Reedy-fibrant) semi-simplicial objects".

## Difficulties with some approaches

Why not imitate the classical definition, using internal functors from the internally-defined category $\Delta{}i$? Giving a reasonable definition of $\Delta{}i$ is not too hard: the objects $[n]$ are very well-behaved. But defining functors out of it is problematic, because there are coherence issues.
One would have to specify the functoriality laws using equations, i.e. inhabitants of equality types, which homotopically we treat as paths. But specified paths in homotopy theory have to be coherent to define a useful notion of functor; thus we need equations between our equations (associativity pentagons, etc.), then higher equations between those, and so on forever. Thus we end up with the same problem we had before: specifying infinitely much data using a finite description in type theory. If one restricts the types involved to h-sets, then this problem should go away; but then one has only defined semi-simplicial h-sets, which are (presumably) strictly less general.

## Why semi-simplicial, not simplicial?

With semi-simplicial sets, the "iterated dependency" approach gives us at least a candidate approach for tackling coherence issues. With simplicial sets, it's hard to see how one might tackle or avoid them. (Compared to the semi-simplicial approach, one additionally requires degeneracy maps and equations between them, which lets loose the spectre of coherence again.) Furthermore, reasoning about Kan simplicial sets seems to insist on classical logic. For example, the classical result stating the homotopy equivalence of fibers of a Kan fibration cannot be proved constructively ([pdf](http://uf-ias-2012.wikispaces.com/file/view/countermodel.pdf/423334002/countermodel.pdf)). In homotopy-theoretic terms, this is because $\Delta{}i$ is a direct category, while $\Delta$ is not.

[[!redirects semisimplicial types]]

[Semi-simplicial Types in Logic-enriched Homotopy Type Theory, Fedor Part and Zhaohui Luo](http://arxiv.org/abs/1506.04998) (Submitted on 16 Jun 2015)
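Returning to the finite-dimensional definitions above: the following is a minimal sketch of the 2-truncated case as a dependent record in Lean 4. This is our own illustration, not code from the page (and Lean 4 is not a HoTT system), but the iterated-dependency structure of the definition is the same:

```lean
universe u

/-- A 2-truncated semi-simplicial type: a type of points, a family of
    edge types over each ordered pair of points, and a family of
    triangle-filler types over each compatible triple of edges. -/
structure SemiSimplicial2 where
  X0 : Type u
  X1 : X0 → X0 → Type u
  X2 : {x y z : X0} → X1 x y → X1 y z → X1 x z → Type u
```

Each extra dimension adds one field whose type depends on all the previous fields, which is exactly the pattern that resists a uniform definition for all $n$.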
PICO Entity Extraction For Preclinical Animal Literature
Qianying Wang, Jing Liao, Mirella Lapata, Malcolm Macleod
https://doi.org/10.21203/rs.3.rs-1008099/v1
posted 28 Oct, 2021
Background: Natural language processing could assist multiple tasks in systematic reviews to reduce workload, including the extraction of PICO elements such as study populations, interventions and outcomes. The PICO framework provides a basis for the retrieval and selection for inclusion of published evidence relevant to a specific systematic review question, and automatic approaches to PICO extraction have been developed, particularly for reviews of clinical trial findings. Given the differences between preclinical animal studies and clinical trials, developing a separate approach is necessary. Facilitating preclinical systematic reviews will inform the translation from preclinical to clinical research.
Methods: We randomly selected 400 abstracts from the PubMed Central Open Access database which described in vivo animal research and manually annotated these with PICO phrases for Species, Strain, model Induction, Intervention, Comparator and Outcome. We developed a two-stage workflow for preclinical PICO extraction. First, we fine-tuned BERT with different pre-trained modules for PICO sentence classification. Then, after removing text irrelevant to PICO features, we explored LSTM-, CRF- and BERT-based models for PICO entity recognition. We also explored a self-training approach because of the small training corpus.
Results: For PICO sentence classification, BERT models using all pre-trained modules achieved an F1 score over 80%, and models pre-trained on PubMed abstracts achieved the highest F1 of 85%. For PICO entity recognition, fine-tuning BERT pre-trained on PubMed abstracts achieved an overall F1 of 71% and satisfactory F1 scores for Species (98%), Strain (70%), Intervention (70%) and Outcome (67%). The scores for Induction and Comparator are less satisfactory, but the F1 for Comparator can be improved to 50% by applying self-training.
Conclusions: Our study indicates that, of the approaches tested, BERT pre-trained on PubMed abstracts performs best for both PICO sentence classification and PICO entity recognition in preclinical abstracts. Self-training yields better performance for identifying comparators and strains.
preclinical animal study
named entity recognition
information extraction
self-training
Systematic review attempts to collate all relevant evidence to provide a reliable summary of findings relevant to a pre-specified research question [1]. When conducting information extraction from the clinical literature, the key elements of interest are Population/Problem, Intervention, Comparator and Outcome, which compose the established PICO framework [2]. This has been used as the basis for the retrieval, inclusion and classification of published evidence, and empirical studies have shown that use of the PICO framework facilitates more complex search strategies and yields more precise search results in systematic reviews [3]. As the number of publications describing experimental studies has increased, the time taken to extract information manually has increased to the point that many reviews are out of date by the time they are published. The evidence-based research community has responded by advocating the use of automated approaches to assist systematic reviews, and PICO extraction tools have been developed, particularly for clinical trials [4].
Preclinical animal studies differ from clinical trials in many respects. The aim of animal studies is to explore new hypotheses for drug or treatment development, so there is more variation in how PICO elements are defined. For example, in animal studies disease is not naturally present but often induced; different species can be used; and outcomes of interest can include survival, behavioural, histological and biochemical outcomes [5]. Considering these differences, and following the lead of clinical research, the SYRCLE group developed a framework definition of preclinical PICO, where "Population" includes animal species and strain, and any method of inducing a disease model; and several outcomes can be considered [6]. Importantly, the "Comparator" for animal studies is usually simply an untreated control cohort, although the exact choice of control is sometimes a variable of interest.
Here we report the development of automatic PICO extraction approaches for preclinical animal studies, which may encourage the use of preclinical PICO and facilitate the translation from preclinical to clinical research.
To our knowledge, while automated PICO extraction in clinical reports is relatively well-explored, no method has been developed or evaluated for preclinical animal literature.
Most of the previous work on the clinical trial literature casts PICO element extraction as a sentence classification task. Wallace et al. use logistic regression with distant supervision to train on PICO sentences derived from clinical articles [7]. More recent approaches have used neural networks for PICO sentence classification, which require less manual feature engineering. Such approaches include the bidirectional long short-term memory network (BiLSTM) [8] with some variations [9-11]. More precise extraction of PICO phrases or snippets is cast as a named-entity recognition task, for which BiLSTM with a conditional random field (CRF) layer [12] is a common approach [13-14]. Some advanced methods, including graph learning [15] and BERT [16], enhance the performance.
We downloaded 2,207,654 articles from PubMed Central Open Access Subset database published from 2010 to 2019 and used a citation screening filter trained to identify in vivo research from title and abstract (developed by EPPI-Centre, UCL [17]). We chose an inclusion cut-point which gave 99% precision and obtained 50,653 abstracts describing in vivo animal experiments. We randomly selected 400 abstracts for the annotation task and another 10,000 for the self-training experiments.
We used the online platform tagtog for PICO phrase annotation. In addition to Intervention, Comparator and Outcome, we divided the Population category into three components: the Species, the Strain, and the method of Induction of the disease model. After the initial annotation process and discussion with a senior clinician, we proposed some general rules for the annotation task:
Only PICO spans describing in vivo experiments are annotated, i.e. interventions or treatments should be conducted within an entire, living organism. Interventions applied to tissues derived from an animal or in cell culture (ex vivo or in vitro experiments) should not be annotated;
Texts describing introduction, conclusion or objectives should not be annotated in most cases because these might relate to work other than that described in the publication. They should be annotated only when the remaining text lacks a clear description of the method, or where the text gives the meaning of abbreviations;
The first occurrence of abbreviation should be annotated together with the parent text. For example, "vascular endothelial growth factor (VEGF)" should be tagged as one entity for its first occurrence; in the remainder of the text, "VEGF" or "vascular endothelial growth factor" could be annotated separately if they are not mentioned together;
Any extra punctuation between phrases (such as commas) should not be annotated. However, if the entity appears only once in the text, punctuation can be included in a long span of text which consists of several phrases;
Entity spans cannot overlap. Annotations in tagtog are output in EntitiesTsv format, which resembles the tsv output of the Stanford NER tool [18]; this format does not support overlapping entities.
Figure 1 shows an example of an abstract annotated using tagtog. After excluding the title, the introduction sentence, the first part of the objective sentence and the conclusion sentence, which do not explicitly describe experimental elements, PICO entities are extracted from the remaining sentences: 1) Species: mice; 2) Strain: C57BL/6; 3) Induction: fed normal chow (NC), fed high-fat diet (HFD); 4) Intervention: aerobic exercise training, exercise, treadmill running; 5) Comparator: sedentary; and 6) Outcome: protein spots.
In total, 6837 entities were annotated across the 400 abstracts, and the distribution of PICO entities is imbalanced (Table 1). Fewer than 50% of the sentences in each abstract contain PICO phrases, so using entire abstracts to train an entity recognition model is not efficient. Therefore, we split the PICO phrase extraction task into two independent subtasks: 1) PICO sentence classification, and 2) PICO entity recognition.
Table 1. Statistics of the 400 annotated PICO abstracts: average number of PICO sentences per abstract and distribution of PICO entities.
PICO sentence classification
Text from the 400 abstracts was split into 4,247 sentences by scispaCy [19], and sentences containing at least one PICO entity were labelled 'True' for PICO sentence. Individual sentences were randomly allocated to training, validation and test sets (80%/10%/10%). For the sentence-level classification task, we use BERT, a contextualized representation model where a deep bidirectional encoder is trained on a large text corpus. The encoder structure is derived from the powerful transformer based on multi-head self-attention, which dispenses with issues arising from recurrence and convolutions [20]. The pre-trained BERT can be fine-tuned with a simple additional output layer for downstream tasks and achieves state-of-the-art performance on many natural language processing tasks [16]. We explore the effects of using different text corpora and methods for pre-training, including: 1) BERT-base, the original BERT trained on the combination of BookCorpus and English Wikipedia [16]; 2) BioBERT, which trains BERT on the combination of BookCorpus, English Wikipedia, PubMed abstracts and PubMed Central full-text articles [21]; 3) PubMedBERT-abs, which trains BERT on PubMed abstracts only; and 4) PubMedBERT-full, trained on a combination of PubMed abstracts and PubMed Central full-text articles [22].
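As a minimal sketch (not the authors' released code), fine-tuning such a checkpoint for binary PICO-sentence classification with the Hugging Face Transformers library could look as follows; the checkpoint name is the publicly released PubMedBERT (abstracts-only) model, and the example sentence and label are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

sentences = ["Mice were fed a high-fat diet for 12 weeks."]  # illustrative input
labels = torch.tensor([1])  # 1 = contains a PICO entity, 0 = does not

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
loss.backward()  # gradients for one fine-tuning step
```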
Training seeks to minimise cross-entropy loss using the AdamW algorithm [23]. We use a slanted triangular learning rate scheduler [24] with a maximum learning rate of 5e-5 for 10 epochs of training. We apply gradient clipping [25] with a threshold norm of 0.1 to rescale gradients, and gradient accumulation every 16 steps (mini-batches) to reduce memory consumption.
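A sketch of this optimisation recipe in PyTorch, reusing `model` from the sketch above and assuming a hypothetical `train_loader`; Transformers' linear warmup-then-decay schedule stands in here for the slanted triangular scheduler, which has the same triangular shape:

```python
import torch
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup

EPOCHS, ACCUM_STEPS, MAX_LR, CLIP_NORM = 10, 16, 5e-5, 0.1
optimizer = AdamW(model.parameters(), lr=MAX_LR)
total_steps = EPOCHS * len(train_loader) // ACCUM_STEPS
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=total_steps // 10, num_training_steps=total_steps)

for epoch in range(EPOCHS):
    for step, batch in enumerate(train_loader):
        loss = model(**batch).loss / ACCUM_STEPS  # accumulate over 16 mini-batches
        loss.backward()
        if (step + 1) % ACCUM_STEPS == 0:
            torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP_NORM)
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
```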
PICO entity recognition
Identifying specific PICO phrases is cast as a named-entity recognition (NER) task. We convert all entity annotations to the standard BIO format [26]: each word/token is labelled 'B-XX' if it is the beginning word of an 'XX' entity, 'I-XX' if it is a non-initial word inside the entity, or 'O' if it is outside any PICO entity. Hence there are 13 unique tags for the 6 PICO entities (two tags for each entity, plus the tag 'O'), and a NER model is trained to assign one of the 13 tags to each token in the PICO text.
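A minimal sketch of this conversion, assuming the annotations are available as character offsets (the helper and its inputs are illustrative, not the paper's code):

```python
def to_bio(token_offsets, entity_spans):
    """token_offsets: list of (start, end) character offsets per token;
    entity_spans: list of (start, end, entity_type). Returns one tag per token."""
    tags = ["O"] * len(token_offsets)
    for s, e, ent in entity_spans:
        covered = [i for i, (ts, te) in enumerate(token_offsets) if ts >= s and te <= e]
        for rank, i in enumerate(covered):
            tags[i] = ("B-" if rank == 0 else "I-") + ent
    return tags

tokens = [(0, 7), (8, 12), (13, 17)]            # "C57BL/6 mice were"
spans = [(0, 7, "Strain"), (8, 12, "Species")]
print(to_bio(tokens, spans))                    # ['B-Strain', 'B-Species', 'O']
```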
One classic NER model is the bidirectional long short-term memory network (BiLSTM) with a CRF layer on top (BiLSTM-CRF) [27]. LSTM belongs to the family of recurrent neural networks, which process word embeddings sequentially. In the hidden layer, by combining the weighted hidden representations from adjacent words through a tanh operation, a basic recurrent neural structure can retain information from neighbouring text. However, when the document is long, retaining information from very early or late words is difficult because of the exploding or vanishing gradient problem, which stops the network learning efficiently [28]. LSTM is designed to solve this long-term dependency problem: it uses a cell state and three gates (forget gate, input gate and output gate) for each word embedding to control which information flows straight through, is forgotten, or is stored and passed to the next step [8]. BiLSTM combines information from words in both directions, by processing hidden vectors from previous words towards the current word and hidden vectors from future words back to the current word.
CRF is a type of discriminative probabilistic model which is often added on top of LSTMs to model dependencies and learn transition constraints among the tags predicted from the LSTM output. For example, if the tag of a word in the sequence is 'I-Outcome', the tag of the previous word can only be 'B-Outcome' or 'I-Outcome'; it cannot be 'I-Intervention' or 'O' in a real sample. Models without a CRF layer may lose these constraints and make unnecessary transition errors. We explore BiLSTM models with and without CRF layers. For text representation in these models, tokens are mapped to 200-dimensional vectors by word2vec [29] induced on a combination of PubMed, PMC texts and English Wikipedia [30].
Similar to the PICO sentence classification, we also fine-tuned BERT with different pre-trained weights for the entity recognition task, using the BertForTokenClassification module from the Hugging Face Transformers library [31]. We also explored the effect of adding CRF and LSTM layers on top of BERT.
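A sketch of the corresponding token-classification setup, reusing `checkpoint` and `tokenizer` from the sentence-classification sketch above (the input text is illustrative):

```python
from transformers import AutoModelForTokenClassification

NUM_TAGS = 13  # B-/I- tags for the six PICO entities, plus 'O'
ner_model = AutoModelForTokenClassification.from_pretrained(
    checkpoint, num_labels=NUM_TAGS)

enc = tokenizer("C57BL/6 mice received treadmill running", return_tensors="pt")
logits = ner_model(**enc).logits       # shape: (1, sequence_length, 13)
pred_tag_ids = logits.argmax(dim=-1)   # one predicted tag id per wordpiece
```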
For more efficient training and to achieve the best results on the entity recognition task, we removed sentences without any PICO annotation from each abstract and trained NER models on the remaining text, which consisted of PICO sentences only. For prediction in future applications, sentences in an individual abstract are classified by the best PICO sentence classifier from the first task, and the non-PICO sentences are then removed automatically. The workflow is illustrated in Figure 2.
For LSTM/CRF models, we tuned the hidden dimension from 32 to 512, and compared the Adam and AdamW optimizers with constant or slanted triangular learning rate schedulers. We froze word embeddings because we found this achieves better performance on the validation set. Models were trained for 20 epochs and the learning rate depended on the specific model (1e-3 for BiLSTM and 5e-3 for BiLSTM-CRF). For BERT models, we fine-tuned BERT for 20 epochs with a learning rate of 1e-3, and BERT-CRF for 30 epochs and BERT-LSTM-CRF for 60 epochs, both with a learning rate of 1e-4; other settings are similar to those of the PICO sentence classification task. These settings were determined by checking for overfitting or convergence issues in the learning curves. For evaluation, we used entity-level metrics [32] for each PICO text (truncated abstract):
$$\mathrm{Precision}_i = \frac{\text{number of predicted correct entities}}{\text{number of predicted entities}}$$
$$\mathrm{Recall}_i = \frac{\text{number of predicted correct entities}}{\text{number of true entities}}$$
$$\mathrm{F1}_i = \frac{2 \times \mathrm{Precision}_i \times \mathrm{Recall}_i}{\mathrm{Precision}_i + \mathrm{Recall}_i}$$
These individual metrics were then averaged across all validation/test samples to obtain the overall metrics.
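For illustration, the seqeval library cited above [32] computes exactly these entity-level metrics from BIO tag sequences; the snippet below shows the corpus-level call (the per-abstract averaging described above would simply wrap it per sample), with made-up tags:

```python
from seqeval.metrics import precision_score, recall_score, f1_score

y_true = [["B-Species", "I-Species", "O", "B-Outcome"]]
y_pred = [["B-Species", "I-Species", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0: 1 predicted entity, 1 correct
print(recall_score(y_true, y_pred))     # 0.5: 1 of 2 true entities found
print(f1_score(y_true, y_pred))         # 0.67: harmonic mean of the two
```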
One limitation of the previous method is the small amount of training data, so we also explored a semi-supervised learning strategy, self-training, which uses an unlabelled dataset to generate pseudo labels for training [33]. We use the 400 annotated abstracts as "gold" data, and 10,000 unlabelled abstracts from the 50,653 in vivo animal records as "silver" data. Non-PICO sentences were removed from the unlabelled text by the best PICO sentence classification model, and these truncated texts were used for self-training. As Figure 3 shows, we first used the PICO entity recognizer fine-tuned on the gold set (80% of the 400 labelled records for training, 10% for validation) to predict entities for each token in the silver set. For each abstract in the silver set, we calculated the average prediction probability over all tokens within that abstract. Silver records with average probabilities larger than a threshold (0.95 or 0.99) were then combined with the original gold training/validation set, and the enlarged dataset was used to fine-tune a newly initialised PICO entity recognizer. We then repeated the prediction, pseudo-data generation, data selection and supervised fine-tuning procedures until no more unlabelled records with average prediction probabilities above the threshold were identified. Note that in every data-enlarging step, newly included silver records are split into a training set (80%) and validation set (20%), then combined with the existing gold training records (initially 320 records) and gold validation records (initially 40 records) respectively. This guarantees that the initial gold validation set is only ever used for validation. The original gold test set is used for final evaluation. All experiments were conducted on an Ubuntu machine with a 16-core CPU.
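A schematic of this self-training loop; every name here (`train`, `predict_with_confidence`, the loaders) is a hypothetical placeholder rather than code from the paper:

```python
THRESHOLD = 0.95
gold_train, gold_val = load_gold_splits()       # hypothetical data loaders
silver_pool = load_unlabelled_texts()

model = train(gold_train, gold_val)             # initial fine-tuning on gold data
while silver_pool:
    scored = [(t, *predict_with_confidence(model, t)) for t in silver_pool]
    confident = [(t, tags) for t, tags, conf in scored if conf > THRESHOLD]
    if not confident:                           # stopping criterion
        break
    silver_pool = [t for t, _, conf in scored if conf <= THRESHOLD]
    k = int(0.8 * len(confident))               # 80/20 split of new silver records
    model = train(gold_train + confident[:k],   # retrain a fresh model on the
                  gold_val + confident[k:])     # enlarged gold+silver dataset
```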
1 https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist
2 https://www.tagtog.net
The results of the PICO sentence classification models on the test set (425 sentences) are shown in Table 2 (see validation performance in Appendix Table 1). All BERT models achieve an F1 score greater than 80% regardless of the pre-training corpus used, and PubMedBERT trained on PubMed abstracts achieves the highest F1 score of 85.4%. Biomedical-domain BERT improves the F1 score by 4% compared with general-domain BERT, and BERT with purely biomedical-domain pre-training (the two PubMedBERT variants) can identify more PICO sentences than BERT with general pre-training (BERT-base) or mixed-domain pre-training (BioBERT), as recall increased by 7%. Therefore, we selected BERT trained on PubMed abstracts as the best PICO sentence classifier for the self-training experiments and prediction.
Table 2. Performance of PICO sentence classification by BERT with different pre-trained weights on the test set (rows: BERT-base, BioBERT, PubMedBERT-abs, PubMedBERT-full).
For PICO entity recognition, for each model we used the settings which achieved the best performance on the validation set, and then evaluated them on the test set (40 truncated abstracts). As Table 3 shows, BERT models (BERT, BERT-CRF, BERT-BiLSTM-CRF) outperformed LSTM models (BiLSTM, BiLSTM-CRF), with F1 scores improved by between 3% and 27%. The use of a CRF layer improves the F1 score of BiLSTM by 14%, but does not enhance performance in BERT models. Compared with the benefit from large-scale pre-trained domain knowledge, the advantage of a CRF layer may therefore be trivial. Among the BERT models, biomedical BERT models improve F1 by at least 4% compared to the general BERT, and the difference among the three biomedical pre-trained weights is not obvious. We selected PubMedBERT pre-trained on PubMed abstracts and full texts as the best PICO entity recognizer based on the validation results (see Appendix Table 2), and the test performance for each PICO entity is reported in Table 4 ('original scores'). The F1 score for identifying Species is 98%; this entity has a limited number of potential responses, so its identification is not complicated. For Intervention and Outcome, the performance is satisfactory, with F1 around 70%. The F1 scores of Strain and Induction are 63% and 49% respectively, so there remains room for improvement. The F1 score for identifying the Comparator is only 16%, which may be due to the relative lack of Comparator instances in the training corpus, and unclear boundaries in the definitions of comparator and intervention in some complicated manuscripts. For instance, a manuscript may describe two experiments, and what is an intervention in the first may become a comparator in the second.
Table 3. Overall performance of PICO entity recognition models on the test set (rows: BiLSTM, BiLSTM-CRF, BERT with PubMed-abs and PubMed-full weights, BERT-CRF, BERT-BiLSTM-CRF).
In the self-training experiments, we used the best PICO sentence classifier (BERT pre-trained on PubMed abstracts) to remove non-PICO sentences from the unlabelled data, and the best PICO entity recognizer (BERT pre-trained on PubMed abstracts and full texts) to identify PICO phrases and calculate prediction scores across all tokens in each individual text. We explored two thresholds (0.95, 0.99) for record selection, and the results are reported in Figure 4. When the threshold is 0.99, no more silver records are included in the training set beyond the first iteration, and self-training did not improve performance. When the threshold is 0.95, the performance fluctuates and the best F1 score is improved by 5% and 1% on the gold validation set and test set respectively, achieved at the sixth iteration step. We terminated the training program after 15 iterations because the training size tends to saturate and the improvement in performance is very limited. For specific PICO entities, the main improvement from self-training was in the F1 scores for Comparator and Strain, which increased by 32% and 7% respectively ('self-training scores' in Table 4).
Table 4. Entity-level performance of PubMedBERT on the gold test set. Original scores refer to performance of the model before self-training; self-training scores refer to performance at the best iteration (6th) of self-training. 'R' and 'P' refer to recall and precision respectively.
We have developed an interactive application via Streamlit for potential use (see Figure 5). When the user inputs a PMID from the PubMed Open Access Subset, the app calls the PubMed Parser [34] to return the article's title and abstract. The background sentence model classifies and removes non-PICO sentences, and the entity recognizer then identifies the PICO phrases in the remaining PICO sentences. This gives a quick overview of the PICO elements of an experimental study.
3 https://streamlit.io/
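A minimal sketch of such an app follows; `classify_sentences` and `extract_entities` are hypothetical wrappers around the two trained models, while `parse_xml_web` is the PubMed Parser function for fetching a record by PMID [34]:

```python
# streamlit_app.py — run with: streamlit run streamlit_app.py
import streamlit as st
from pubmed_parser import parse_xml_web

st.title("Preclinical PICO extraction")
pmid = st.text_input("Enter a PubMed ID")
if pmid:
    record = parse_xml_web(pmid)                             # dict with title/abstract
    st.subheader(record["title"])
    pico_sentences = classify_sentences(record["abstract"])  # hypothetical model wrapper
    st.write(extract_entities(" ".join(pico_sentences)))     # hypothetical NER wrapper
```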
In this work we show the feasibility of automated PICO sentence classification and PICO entity recognition in abstracts describing preclinical animal studies. For sentence classification, BERT models with different pre-trained weights have generally good performance (F1 over 80%), and biomedical BERT (BioBERT or PubMedBERT) performs slightly better than general BERT. For PICO entity recognition, all BERT models outperform BiLSTM with or without a CRF layer, with the improvement in F1 ranging from 3–27%. It is unnecessary to use a more complicated structure on top of BERT, as the results of BERT, BERT-CRF and BERT-BiLSTM-CRF do not differ greatly, while the latter two bring a cost in longer training time and resources. Within LSTM-based models, adding a CRF layer is beneficial: recall is increased by 16% and precision by 9%. The training time of LSTM-based models is much shorter than fine-tuning BERT, so they could be a quick alternative solution when computing resources are limited, at the cost of a reduction in performance of 3% and 12% compared to the general BERT and PubMedBERT respectively. The self-training approach helps to identify more comparators and strains, but does not help much with the overall performance. At the entity level, F1 scores are generally good for identifying Species (over 80%), satisfactory for Intervention, Outcome and Strain (around or over 70%), and acceptable for Induction and Comparator (around 50%).
One limitation of our work is that the training corpus is at the level of the abstract, but some PICO elements in preclinical animal studies are often not described in the abstract. This limits the usefulness of our application, and we cannot transfer it to full-text identification without further evaluation. Of note, the same limitation applies to manual approaches to identifying PICO elements based on the abstract alone; in a related literature we have shown, for instance, that manual screening for inclusion based on title and abstract has substantially lower sensitivity than manual screening of full texts (https://osf.io/nhjeg). Another limitation is that the amount of training, validation and test data is small. Although our best models do not show very inconsistent results between the validation and test sets (except for "Comparator"), conclusions drawn from a small dataset may still be biased. Previous studies show that self-training can propagate both knowledge and error from high-confidence predictions on unlabelled samples [35]; training from larger annotated corpora may reduce this error propagation and boost performance. Large datasets also open up possibilities for exploring more complicated models which have proved effective in other tasks.
In future work we will evaluate our PICO sentence classification and entity recognition models on full-text publications, to observe any heuristic implications. We will also evaluate existing clinical PICO extraction tools on preclinical text to identify interventions and outcomes, because these two categories may be more similar across preclinical and clinical studies than other PICO elements. As the training corpora for clinical PICO are relatively larger and in more standard forms, we think that training on a combined preclinical/clinical corpus may yield better performance.
We demonstrate a workflow for PICO extraction from preclinical animal text using LSTM- and BERT-based models. Without feature engineering, BERT pre-trained on PubMed abstracts is optimal for PICO sentence classification, and BERT pre-trained on PubMed abstracts and full texts is optimal for PICO entity recognition in preclinical abstracts. PICO entities including Intervention, Outcome, Species and Strain have acceptable precision and recall (around or over 70%), while Comparator and Induction have less satisfactory scores (around 50%). We encourage the collection of a more standard PICO annotation corpus and the use of natural language processing models for PICO extraction in preclinical animal studies, which may achieve better results for publication retrieval, reduce the workload of preclinical systematic reviews, and narrow the gap between preclinical and clinical research.
BERT: Bidirectional Encoder Representations from Transformers
BiLSTM: Bidirectional Long-Short Term Memory
BIO: Beginning-Inside-Outside tagging format
CRF: Conditional Random Field
NER: Named Entity Recognition
The datasets supporting the current study are available in the Preclinical PICO extraction repository, https://osf.io/2dqcg.
This work is jointly funded by the China Scholarships Council and a John Climax UK Reproducibility Network PhD Studentship.
JL and QW collected and processed the in vivo abstracts. QW and MM conducted PICO phrases annotations. QW developed and implemented the classification, entity recognition and self-training models. ML was involved in the design of self-training experiments. QW analysed and evaluated results. All authors reviewed and provided comments on preliminary versions. All authors read and approved the final manuscript.
1. Higgins JPT, Green S, (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. 2011.
2. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP journal club. 1995;123. doi:10.7326/acpjc-1995-123-3-a12.
3. Huang X, Lin J, Demner-Fushman D. Evaluation of PICO as a knowledge representation for clinical questions. AMIA Annu Symp Proc. 2006;2006:359–63. http://www.fpin.org/. Accessed 29 Mar 2021.
4. Marshall IJ, Wallace BC. Toward systematic review automation: A practical guide to using machine learning tools in research synthesis. Systematic Reviews. 2019;8:163. doi:10.1186/s13643-019-1074-9.
5. Hooijmans CR, Rovers MM, De Vries RBM, Leenaars M, Ritskes-Hoitinga M, Langendam MW. SYRCLE's risk of bias tool for animal studies. BMC Med Res Methodol. 2014;14:43. doi:10.1186/1471-2288-14-43.
6. Hooijmans CR, De Vries RBM, Ritskes-Hoitinga M, Rovers MM, Leeflang MM, IntHout J, et al. Facilitating healthcare decisions by assessing the certainty in the evidence from preclinical animal studies. PLoS One. 2018;13.
7. Wallace BC, Kuiper J, Sharma A, Zhu MB, Marshall IJ. Extracting PICO Sentences from Clinical Trial Reports using Supervised Distant Supervision. J Mach Learn Res. 2016;17. http://www.ncbi.nlm.nih.gov/pubmed/27746703. Accessed 3 Mar 2019.
8. Hochreiter S, Schmidhuber J. Long Short-Term Memory. Neural Comput. 1997;9:1735–80. doi:10.1162/neco.1997.9.8.1735.
9. Jin D, Szolovits P. PICO Element Detection in Medical Text via Long Short-Term Memory Neural Networks. In: Proceedings of the BioNLP 2018 workshop. Stroudsburg, PA, USA: Association for Computational Linguistics; 2018. p. 67–75. doi:10.18653/v1/W18-2308.
10. Chabou S, Iglewski M. Combination of conditional random field with a rule based method in the extraction of PICO elements. BMC Med Inform Decis Mak. 2018;18:128. doi:10.1186/s12911-018-0699-2.
11. Jin D, Szolovits P. Advancing PICO Element Detection in Biomedical Text via Deep Neural Networks. Bioinformatics. 2018;36:3856–62. http://arxiv.org/abs/1810.12780. Accessed 6 Feb 2021.
12. Sutton C, McCallum A. An introduction to conditional random fields. Found Trends Mach Learn. 2011;4:267–373. doi:10.1561/2200000013.
13. Brockmeier AJ, Ju M, Przybyła P, Ananiadou S. Improving reference prioritisation with PICO recognition. BMC Med Inform Decis Mak. 2019;19:256. doi:10.1186/s12911-019-0992-8.
14. Nye B, Yang Y, Li JJ, Marshall IJ, Patel R, Nenkova A, et al. A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature. In: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). 2018. p. 197–207. doi:10.18653/v1/p18-1019.
15. Perozzi B, Al-Rfou R, Skiena S. DeepWalk: Online Learning of Social Representations. Proc ACM SIGKDD Int Conf Knowl Discov Data Min. 2014;:701–10. doi:10.1145/2623330.2623732.
16. Devlin J, Chang M-W, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR. 2018. https://github.com/tensorflow/tensor2tensor. Accessed 21 Oct 2019.
17. Liao J, Ananiadou S, Currie GL, Howard BE, Rice A, Sena ES, et al. Automation of citation screening in pre-clinical systematic reviews. bioRxiv. 2018;:280131. doi:10.1101/280131.
18. Finkel JR, Grenager T, Manning C. Incorporating non-local information into information extraction systems by Gibbs sampling. In: ACL-05 - 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference. Association for Computational Linguistics (ACL); 2005. p. 363–70. doi:10.3115/1219840.1219885.
19. Neumann M, King D, Beltagy I, Ammar W. ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing. Association for Computational Linguistics (ACL); 2019. p. 319–27.
20. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention is all you need. In: Advances in Neural Information Processing Systems. 2017. p. 5999–6009. http://arxiv.org/abs/1706.03762. Accessed 26 Aug 2019.
21. Lee J, Yoon W, Kim S, Kim D, Kim S, So CH, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. 2019. doi:10.1093/bioinformatics/btz682.
22. Gu Y, Tinn R, Cheng H, Lucas M, Usuyama N, Liu X, et al. Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing. 2020. http://arxiv.org/abs/2007.15779. Accessed 18 Sep 2020.
23. Loshchilov I, Hutter F. Decoupled Weight Decay Regularization. 7th Int Conf Learn Represent ICLR 2019. 2017. http://arxiv.org/abs/1711.05101. Accessed 1 Oct 2020.
24. Howard J, Ruder S. Universal language model fine-tuning for text classification. In: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). 2018. p. 328–39. doi:10.18653/v1/p18-1031.
25. Zhang J, He T, Sra S, Jadbabaie A. Why gradient clipping accelerates training: A theoretical justification for adaptivity. 2019. http://arxiv.org/abs/1905.11881. Accessed 1 Oct 2020.
26. Ramshaw LA, Marcus MP. Text Chunking using Transformation-Based Learning. 1995;:157–76. http://arxiv.org/abs/cmp-lg/9505040. Accessed 7 May 2021.
27. Lample G, Ballesteros M, Subramanian S, Kawakami K, Dyer C. Neural Architectures for Named Entity Recognition. 2016 Conf North Am Chapter Assoc Comput Linguist Hum Lang Technol NAACL HLT 2016 - Proc Conf. 2016;:260–70. http://arxiv.org/abs/1603.01360. Accessed 19 Apr 2021.
28. Pascanu R, Mikolov T, Bengio Y. On the difficulty of training Recurrent Neural Networks. 30th Int Conf Mach Learn ICML 2013. 2012; PART 3:2347–55. http://arxiv.org/abs/1211.5063. Accessed 18 Nov 2020.
29. Mikolov T, Chen K, Corrado G, Dean J. Efficient Estimation of Word Representations in Vector Space. 2013. http://ronan.collobert.com/senna/. Accessed 1 Apr 2019.
30. Pyysalo S, Ginter F, Moen H, Salakoski T, Ananiadou S. Distributional Semantics Resources for Biomedical Text Processing. Proc 5th Lang Biol Med Conf (LBM 2013). 2013;:39–44.
31. Wolf T, Debut L, Sanh V, Chaumond J, Delangue C, Moi A, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing. 2019. http://arxiv.org/abs/1910.03771. Accessed 13 Feb 2021.
32. Hiroki Nakayama. seqeval: A Python framework for sequence labeling evaluation. 2018. https://github.com/chakki-works/seqeval. Accessed 7 May 2021.
33. Ruder S, Plank B. Strong Baselines for Neural Semi-supervised Learning under Domain Shift. ACL 2018 - 56th Annu Meet Assoc Comput Linguist Proc Conf (Long Pap. 2018;1:1044–54. http://arxiv.org/abs/1804.09530. Accessed 16 Apr 2021.
34. Achakulvisut T, Acuna D, Kording K. Pubmed Parser: A Python Parser for PubMed Open-Access XML Subset and MEDLINE XML Dataset XML Dataset. J Open Source Softw. 2020;5:1979. doi:10.21105/joss.01979.
35. Gao S, Kotevska O, Sorokine A, Christian JB. A pre-training and self-training approach for biomedical named entity recognition. PLoS One. 2021;16 2 February. doi:10.1371/journal.pone.0246310.
Appendix.docx
Appendix Table 1: Performance of PICO sentence classification by BERT with different pre-trained weights on the validation set.
Appendix Table 2: Overall performance of PICO entity recognition models on the validation set.
Appendix Table 3: Entity-level performance of PubMedBERT on the gold validation set. Original scores refer to performance of the model before self-training; self-training scores refer to performance at the best iteration (6th) of self-training. 'R' and 'P' refer to recall and precision respectively.
The goal of this lesson is to familiarize the reader with the properties of the operations on rational numbers and, before that, with how this set is constructed and defined.
On the set of natural numbers we could not define the operation $'-'$ for every two natural numbers. Likewise, in the set $\mathbb{Z}$, for given $x, y \in \mathbb{Z}$ we cannot always find a number $a \in \mathbb{Z}$ such that $y = a \cdot x$. Right here we find the motivation for extending the set of integers $\mathbb{Z}$.
We consider the Cartesian product $\mathbb{Z} \times \mathbb{Z}^{+}$ with a relation $\sim$ defined on it as follows. For two pairs $(a, b), (c, d) \in \mathbb{Z} \times \mathbb{Z}^{+}$ we say that they are in the relation $\sim$ if the following holds:
$$(a, b) \sim (c, d) \Longleftrightarrow ad = bc.$$
$\sim$ defines an equivalence relation on $\mathbb{Z} \times \mathbb{Z}^{+}$.
A quotient set $\mathbb{Q} = \mathbb{Z} \times \mathbb{Z}^{+}/_{\sim}$ is called a set of rational numbers and its elements rational numbers, that is
$$\mathbb{Q} = \mathbb{Z} \times \mathbb{Z}^{+}/_{\sim} = \left\{\frac{x}{y} : x \in \mathbb{Z},\ y \in \mathbb{Z}^{+} \right\}.$$
We define $[(x, y)] := \frac{x}{y}, (y \neq 0)$.
Addition of rational numbers
Definition 1. The addition operation $+$ on $\mathbb{Q}$ is defined as follows:
$$+: \mathbb{Q} \times \mathbb{Q} \to \mathbb{Q},$$
$$[(x, y)] + [(u, w)] = [(x \cdot w + y \cdot u, y \cdot w)].$$
In fraction notation:
$$\frac{x}{y} + \frac{u}{w} = \frac{x \cdot w + y \cdot u}{y \cdot w}.$$
The operation of addition on $\mathbb{Q}$ is well defined and closed:
$$\forall x, y \in \mathbb{Q}: x + y \in \mathbb{Q}.$$
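Well-definedness can be checked concretely: the pair-based addition rule sends equivalent representatives to equivalent results. A small Python illustration of this (our own, following the definitions of this lesson):

```python
def equiv(p, q):
    # (a, b) ~ (c, d)  iff  a*d == b*c
    (a, b), (c, d) = p, q
    return a * d == b * c

def add(p, q):
    # [(x, y)] + [(u, w)] = [(x*w + y*u, y*w)]
    (x, y), (u, w) = p, q
    return (x * w + y * u, y * w)

p, p2 = (1, 2), (3, 6)      # both represent 1/2
q, q2 = (-2, 3), (-4, 6)    # both represent -2/3
assert equiv(p, p2) and equiv(q, q2)
assert equiv(add(p, q), add(p2, q2))   # equivalent inputs give equivalent sums
print(add(p, q))                       # (-1, 6), i.e. 1/2 + (-2/3) = -1/6
```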
Theorem 1.(Commutativity) For any $x, y \in \mathbb{Q}$ the following is valid:
$$x + y = y + x.$$
Let $[(a,b)] = x$ and $[(c,d)]= y \in \mathbb{Q}$. Then we have
$$x + y = [(a,b)] + [(c,d)] $$
$$ = [(a \cdot d + b \cdot c, b \cdot d)]$$
$$y+x = [(c,d)] + [(a,b)] $$
$$ = [(c \cdot b +d \cdot a, d \cdot b)] $$
$$= [(a \cdot d + b \cdot c, b \cdot d)]. $$
We can notice that the left and right side of equality are equal, therefore, the commutativity of addition of rational numbers is proven.
Theorem 2. (Associativity) For any $x, y, z \in \mathbb{Q}$ the following is valid:
$$(x + y) + z = x + (y + z).$$
Let $x = [(a, b)], y = [(c, d)], z= [(e, f)] \in \mathbb{Q}$. Then we have:
$$(x + y) + z = ([(a, b)] + [(c, d)]) + [(e, f)] $$
$$= ([(a \cdot d + b \cdot c, b \cdot d)]) + [(e, f)] $$
$$ = \left [ \left ( \left ( a \cdot d + b \cdot c\right) \cdot f + \left (b \cdot d \right ) \cdot e, \left (b \cdot d \right) \cdot f\right ) \right] $$
$$ =\left[ \left( \left( \left( a \cdot d \right) \cdot f + \left ( b \cdot c\right) \cdot f \right) + \left(b \cdot d \right) \cdot e, \left (b \cdot d \right) \cdot f\right ) \right]$$
$$x+ (y +z) = [(a, b)] + \left ([(c, d)] + [(e, f)] \right) $$
$$ = [(a, b)] + ([(c \cdot f + e \cdot d, d \cdot f)]) $$
$$=[(a \cdot (d \cdot f) + b \cdot (c \cdot f + e \cdot d), b \cdot (d \cdot f))] $$
$$=\left[ \left( a \cdot (d \cdot f) + \left(b \cdot (c \cdot f) + b \cdot (e \cdot d)\right), b \cdot (d \cdot f) \right) \right] $$
$$ =\left[ \left( (a \cdot d) \cdot f + \left((b \cdot c) \cdot f + (b \cdot d) \cdot e\right), (b \cdot d) \cdot f \right) \right]$$
$$= \left[ \left( \left( \left( a \cdot d \right) \cdot f + \left ( b \cdot c\right) \cdot f \right) + \left(b \cdot d \right) \cdot e, \left (b \cdot d \right) \cdot f\right ) \right]. $$
We obtain that $(x + y) + z = x + (y + z), \forall x, y, z \in \mathbb{Q}$, therefore, the associativity of addition of rational numbers is valid.
Theorem 3. There exists $e \in \mathbb{Q}$ such that $\forall x \in \mathbb{Q}$ the following is valid:
$$e + x = x + e = x.$$
Before the proof, we need to highlight two lemmas, stated here without proof.
Lemma 4. For any $x \in \mathbb{Z}$ is valid $x \cdot 1 = x$.
Lemma 5. For any $x \in \mathbb{Z}$ is valid $x \cdot 0 = 0$.
Let $e=[(0,1)]$ and $x = [(a, b)]$. Then we have:
$$e + x = [(0,1)] + [(a, b)] $$
$$ = [(0 \cdot b + 1 \cdot a, 1 \cdot b)] $$
$$= [(0 \cdot b + a \cdot 1, b \cdot 1)] $$
$$= [(0 \cdot b + a, b)] $$
$$= [(a, b)] $$
$$= x$$
$$x + e = [(a, b)] + [(0,1)]$$
$$= [(a \cdot 1 + b \cdot 0, b \cdot 1)] $$
$$= [(a + b \cdot 0, b)] $$
$$ = [(a, b)] $$
$$= x. $$
Both sides of the equality are equal to $x$; it follows that the statement of the theorem is true.
The element $e = [(0,1)] = \frac{0}{1} = 0$ is called the identity element for addition in the set $\mathbb{Q}$.
Theorem 4. For any $x \in \mathbb{Q}$ there exists $-x \in \mathbb{Q}$ such that
$$x+ (-x) = (-x) + x = e.$$
Let $x= [(a, b)], -x= [(-a, b)], e=[(0,1)] \in \mathbb{Q}$. Then we have:
$$x + (-x) = [(a, b)] + [(-a, b)] $$
$$ = [(a \cdot b + b \cdot (-a), b\cdot b)] $$
$$= [(0, b \cdot b)]$$
$$(-x) + x = [(-a, b)] + [(a, b)] $$
$$ = [((-a) \cdot b + a \cdot b, b \cdot b)] = [(0, b \cdot b)]. $$
What remains is to prove that $[(0, b \cdot b)] = [(0,1)] = e$.
According to the definition of the set of rational numbers, the relation $\sim$ is defined as $(a, b) \sim (c, d) \Leftrightarrow a \cdot d = b \cdot c$. We conclude that pairs $(0,1)$ and $(0, b \cdot b)$ are in the relation because $0 \cdot (b \cdot b) =1 \cdot 0$, that is $0= 0$, is valid. Therefore, $[(0, b \cdot b)] = [(0,1)]$.
This means that $x=[(a, b)]$ has an additive inverse in $\mathbb{Q}$: $-x= [(-a, b)]$.
We have proven that the properties of associativity and commutativity of addition hold on the set of rational numbers, that there exists an identity element for addition, and that every element has an additive inverse; therefore, the ordered pair $(\mathbb{Q}, +)$ has the structure of an Abelian group.
Properties of multiplication in $\mathbb{Q}$
Definition 2.
The multiplication operation $\cdot$ on $\mathbb{Q}$ is defined as follows:
$$[(a, b)] \cdot [(c, d)] = [(a \cdot c, b \cdot d)], \quad [(a, b)], [(c, d)] \in \mathbb{Z} \times \mathbb{Z}^{+}.$$
The operation of multiplication on $\mathbb{Q}$ is well defined and closed:
$$\forall x, y \in \mathbb{Q}: x \cdot y \in \mathbb{Q}.$$
Theorem 5. (Commutativity of multiplication) For any $x, y \in \mathbb{Q}$ is valid $x \cdot y = y \cdot x$.
Let $x = [(a, b)], y=[(c, d)] \in \mathbb{Q}$. Then we have
$$x \cdot y = [(a, b)] \cdot [(c, d)] $$
$$= [(a \cdot c, b \cdot d)] \in \mathbb{Q}$$
$$y \cdot x = [(c, d)] \cdot [(a, b)] $$
$$ = [(c \cdot a, d \cdot b)] $$
$$= [(a \cdot c, b \cdot d)].$$
Since the left and right sides of the equality are equal, the statement of the theorem follows.
We will mention the following properties of multiplication in $\mathbb{Q}$ without proof.
Theorem 6. (Associativity of multiplication) For any $x, y, z \in \mathbb{Q}$ the following is valid:
$$(x \cdot y) \cdot z = x \cdot (y \cdot z).$$
Theorem 7. There exists $e \in \mathbb{Q}$ such that for every $x \in \mathbb{Q}$ the following is valid:
$$x \cdot e = e \cdot x = x.$$
The element $e \in \mathbb{Q}$ is called the identity element for multiplication in $\mathbb{Q}$, namely $e=[(1,1)]$. In fraction notation, $e=\frac{1}{1} = 1$ is the neutral element.
Theorem 8. For any $x \in \mathbb{Q} \setminus \{0\}$ there exists $x^{-1} \in \mathbb{Q}$ such that the following is valid:
$$x \cdot x^{-1} = x^{-1} \cdot x = e.$$
The element $x^{-1} \in \mathbb{Q}$ is called a multiplicative inverse for a number $x= [(a, b)] \in \mathbb{Q} \setminus \{0\}$, whereby (see the sketch after the two cases below):
$(1.)$ If $a> 0$ then $x^{-1} = [(b, a)]$,
$(2.)$ if $a < 0$ then $x^{-1} = [(-b, -a)]$.
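The two cases can be checked mechanically; the following small Python helper (our own illustration, with a hypothetical function name) returns the representative of $x^{-1}$ inside $\mathbb{Z} \times \mathbb{Z}^{+}$:

```python
def inverse(a: int, b: int) -> tuple:
    """Multiplicative inverse of [(a, b)] in Q without 0, keeping the
    representative inside Z x Z+ (second component positive)."""
    if a == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse in Q")
    return (b, a) if a > 0 else (-b, -a)

assert inverse(2, 3) == (3, 2)    # case a > 0
assert inverse(-2, 3) == (-3, 2)  # case a < 0: (-b, -a) keeps the denominator positive
```

In both cases the product $x \cdot x^{-1}$ reduces to a pair $(t, t)$ with $t > 0$, which is equivalent to $(1,1) = e$.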
Theorem 9. There is no multiplicative inverse for $0 \in \mathbb{Q}$.
Since $0 \in \mathbb{Q}$ has no multiplicative inverse, the ordered pair $(\mathbb{Q}, \cdot)$ is a commutative monoid; however, $(\mathbb{Q} \setminus \{0\}, \cdot)$ is an Abelian (commutative) group.
Theorem 10. (Distributivity) For all $x, y, z \in \mathbb{Q}$ the following is valid:
$$(x + y) \cdot z = x \cdot z + y \cdot z.$$
Ordering on $\mathbb{Q}$
Let $[(a, b)], [(c, d)] \in \mathbb{Q}$. Assuming $b, d > 0$ in $\mathbb{Z}$, we say that $[(a, b)] < [(c, d)]$ if and only if $ad < bc$. We denote by $<$ our ordering relation on $\mathbb{Q}$, which is different from the usual ordering on $\mathbb{Z}$.
Theorem 11. If $(a, b) \sim (c, d)$ and $(a', b') \sim (c', d')$, then
$$[(a, b)] < [(c, d)] \Longleftrightarrow [(a', b')] < [(c', d')],$$
that is, ordering on $\mathbb{Q}$ is well defined.
Theorem 12. The relation $<$ is an ordering of the rational numbers.
Theorem 13. The rational numbers form an ordered field.
Density property of rational numbers
If $x$ and $y$ are rational numbers such that $x<y$, then there exists a rational number $a$ such that $x < a < y$.
Most simply, we can take $a$ to be the arithmetic mean of the numbers $x$ and $y$, as in the sketch below. This property is another one that distinguishes the set of rational numbers from the set of integers.
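As a quick illustration of the density property, here is a short Python check using the standard `fractions` module (the helper name `between` is ours):

```python
from fractions import Fraction

def between(x: Fraction, y: Fraction) -> Fraction:
    """Return a rational strictly between x and y (x < y) via the arithmetic mean."""
    assert x < y
    return (x + y) / 2

x, y = Fraction(1, 3), Fraction(1, 2)
a = between(x, y)
assert x < a < y  # a = 5/12
```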
Cantor's Theorem holding simply because every power set includes a singleton set for each element, and the empty set?
I have just been learning about Cantor's Theorem, which is stated in my book as "the cardinality of every set is strictly less than the cardinality of its power set", and I have a question about the theorem.
In the proof I have been given for Cantor's Theorem, the argument is put forward that the power set contains a singleton set corresponding to each element of the original set, and hence cardX $\le$ cardP(X). They then must just prove that cardX $\ne$ cardP(X), that is, that there exists no surjective function from X to P(X), and hence there can exist no bijective function between them, so the theorem must be true.
Is this introductory step correctly stated? If it is, then I don't understand why Cantor's Theorem doesn't trivially hold true. Using the same reasoning, could it not be said that since the power set contains a singleton set corresponding to each element in the original set, but also contains the empty set, the power set must have strictly greater cardinality than the original set?
Does this reasoning perhaps not apply when dealing with infinite sets? If not, please try and provide some intuitive reason why the argument I have supplied above fails.
elementary-set-theory
$\begingroup$ With your arguments you would get that $\mathbb Q$ has a strictly larger cardinality than $\mathbb N$, since the natural numbers are strictly contained in the rationals. But as is well known, they have the same cardinality! $\endgroup$ – StefanH Dec 14 '15 at 12:38
$\begingroup$ This works for finite sets, but not infinite ones. For example the set $\{1,2,3,\dots\}$ is in bijection with the set $\{0,1,2,3,\dots\}$ (the bijection being $n\to n-1$) despite the latter having nominally one more element than the former. $\endgroup$ – lulu Dec 14 '15 at 12:39
$\begingroup$ You have noticed that there is a bijective function from $X$ to a proper subset of $P(X)$. So what? Note that there is a bijective function from $\mathbb R$ (the real line) to a proper subset of $\mathbb R$, namely the function $e^x$. Can you conclude from that that the real line has greater cardinality than itself? $\endgroup$ – bof Dec 14 '15 at 12:41
$\begingroup$ Maybe this famous characterisation of the infinite is of interest here: en.wikipedia.org/wiki/Dedekind-infinite_set Then your argument would imply that all infinite sets have the same cardinality, which certainly is not the case, as it is also well known that $\mathbb R$ and $\mathbb Q$ have different cardinalities. Or that it is not well defined to speak about cardinalities of infinite sets, as your argument would show that every set has a higher cardinality than itself, a contradiction. $\endgroup$ – StefanH Dec 14 '15 at 12:44
Two sets have the same cardinality if there exists a bijection between them. And we can say a set $B$ has cardinality at least that of $A$ if there exists an injection from $A$ to $B$ (or, equivalently, a surjection from some subset of $B$ onto $A$).
Your argument is a naive extrapolation from the finite case; this was discussed early on in mathematics, see for example Hilbert's Hotel and also the definition of Dedekind-infinite in one of my comments. Also see the examples given by the others. I would also suggest reading carefully the two famous diagonal proofs of Cantor: that $\mathbb N$ and $\mathbb Q$ have the same cardinality, and that $\mathbb R$ has a strictly larger cardinality than $\mathbb N$.
For completeness I add a proof that there can be no bijection between $X$ and its power set $\mathcal P(X)$. Suppose we had such a bijection $f : X \to \mathcal P(X)$, and define $M := \{ x \in X : x \notin f(x) \}$. As $M \in \mathcal P(X)$ and $f$ is surjective, there exists some $z \in X$ with $f(z) = M$. If $z \in M$, then this would imply $z \notin M$ by definition; otherwise, if $z \notin M$, we would have $z \in M$ by definition. In both cases we get a contradiction, showing that no such bijection can exist.
Compare this scheme with the diagonal argument for the real number from here, they are closely related.
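As a concrete supplement (my own illustration, not part of the original proof): for a small finite set one can brute-force every function $f : X \to \mathcal P(X)$ and verify that the diagonal set $M$ is never in the image, e.g. in Python:

```python
from itertools import product

def powerset(xs):
    """All subsets of xs as frozensets."""
    ps = [frozenset()]
    for x in xs:
        ps += [s | {x} for s in ps]
    return ps

X = [0, 1, 2]
PX = powerset(X)  # 8 subsets

# enumerate every function f : X -> P(X) and exhibit the diagonal witness M
for images in product(PX, repeat=len(X)):
    f = dict(zip(X, images))
    M = frozenset(x for x in X if x not in f[x])
    assert all(f[x] != M for x in X)  # M is never hit, so f is not surjective
```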
StefanH
Let me make Stefan's remark even more concrete: your argument would show that the natural numbers have strictly smaller cardinality than the naturals themselves, because you can send 0 to 1, 1 to 2, 2 to 3, 3 to 4, and so on, and nothing is sent to 0. So the codomain has "room" for one more item.
It's exactly the infinitude that makes this work, and it's why dealing with cardinalities of infinite sets requires a bit more careful formalization than do those of finite sets.
John Hughes
$\begingroup$ So is the introductory step in the proof incorrect then? In case you have it handy to look up the entire proof, the book I am referencing is Zorich's Mathematical Analysis 1, pg 26, where it says "Since P(X) contains all one-element subsets of X, cardX $\le$ cardP(X)". Or is this argument allowed, but attempting to make the argument stronger, as I did above, is not allowed? $\endgroup$ – Guest Dec 14 '15 at 12:45
$\begingroup$ The introductory step is correct, as by giving an injection $f : A \to B$ between two sets $A$ and $B$ would give that $|A| \le |B|$ (i.e. $A$ is isomorphic to a subset of $B$). $\endgroup$ – StefanH Dec 14 '15 at 12:48
$\begingroup$ So have they essentially said that there exists an injective function f:A→B that is something along the lines of f(x)={x}, and so |A|≤|B|? $\endgroup$ – Guest Dec 14 '15 at 12:53
$\begingroup$ That's right. And to prove that $|A| = |B|$, they'd have to give an injective function in the other direction, so to show that $|A| \ne |B|$, they have to show that there's *no* injective function in the other direction...which is where the diagonal argument (in some form) comes in. $\endgroup$ – John Hughes Dec 14 '15 at 12:56
$\begingroup$ Ah ok, now that first statement makes much more sense now, thank you! $\endgroup$ – Guest Dec 14 '15 at 12:57
Your counter-argument is "since the map from elements to corresponding singletons omits the empty set, the power set is strictly larger". I.e., you are observing that one specific map isn't a bijection. That fails to consider the possibility that some other map could be a bijection. With infinite sets, that is a distinct possibility (side note: it is worth your time to try proving that it ISN'T a possibility for finite sets [hint: induction]). In order to establish a size difference, you need to show that every possible map fails to be a bijection (which is what the standard proof shows).
[note: the elements-to-singletons map is cited only to show that a set and its power-set can in fact be compared in size, and that the latter is at least as large as the former]
PMar
Joint detection scheme for cooperative spectrum sensing in cognitive radio network
Yijiang Nan ORCID: orcid.org/0000-0002-1745-117X1,
Chenglin Zhao1 &
Bin Li1
In this paper, a new architecture of cognitive radio network (CRN) is presented for future dynamic spectrum sharing in time-variant flat fading (TVFF) channels. We consider a practical scenario where secondary users (SUs) access the idle spectrum through secondary user routers (SU_Rs). Managed by a fusion center (FC), the SU_Rs can work together to capture the idle spectrum and then assign it to the SUs. Besides, it is imperative to guarantee the wireless communication quality between the primary base station (P_BS) and the SU_Rs. Therefore, a new cooperative spectrum sensing (CSS) algorithm is suggested to recursively estimate the channel state information (CSI) while capturing the idle licensed band. The unified mathematical model relies on a dynamic state-space model (DSM) and a Bernoulli filter (BF) algorithm. TVFF channels are modeled as finite-state Markov channels (FSMC). In order to reduce the complexity of CSS, the particles are manipulated and reconstructed. Experimental simulations demonstrate that, by exploiting dynamic CSI, the sensing performance of the new CSS algorithm surpasses traditional schemes, and the new architecture can be used in a realistic spectrum sharing system.
In radar sensor networks, recent advances in waveform design and diversity [1, 2] have made spectrum sensing possible for applications such as radar sensor selection [3] and target detection [4]. Similarly, spectrum sensing is also possible for wireless networks. With tremendous growth in the wireless network market, the number of subscribers and the demand for high data rates have escalated greatly, exacerbating the scarcity of spectrum resources in wireless networks [5]. However, the traditional approach of fixed spectrum allocation to a licensed network leads to spectrum underutilization. According to the report from the Federal Communications Commission (FCC), the utilization of allocated spectrum is as low as 15 % [6]. As a result, this defect motivates the development of cognitive radio networks (CRN) that allow secondary users to seek idle licensed bands cooperatively and share them dynamically. Without causing serious interference to the primary users (PUs), such dynamic spectrum sharing will considerably improve frequency utilization [7]. In fact, it has been widely adopted by both FCC policy initiatives and IEEE standardizations, such as IEEE 802.22 [8] and IEEE 802.11k [9].
Despite the enhancement of spectrum utilization, transmissions from secondary users (SUs) can interfere negatively with the PU. To guarantee interference-free spectrum access, the SUs are required to monitor the primary spectrum so as to capture the idle time in PU transmissions (also called spectrum holes). In CRN, cooperative spectrum sensing (CSS), carried out by a group of spatially distributed SUs, has been widely adopted in previous sensing algorithms [10–12]. Compared with a single cognitive radio, CSS algorithms use multiple sensors and combine their measurements into one decision. Thus, CSS can overcome some limitations [7] of single-sensor spectrum sensing, such as energy constraints, and is more suitable for a CRN consisting of multiple nodes.
Figure 1 shows the new architecture of CRN designed in this paper, in which the sensing module of each SU is separated out and centralized at a fusion center (FC). In this case, spectrum sensing is carried out by the FC, and the SUs use the idle licensed band without taking part in the sensing process. To illustrate, this new architecture consists of an FC and a set of secondary user routers (SU_Rs), which play the role of sensing. In the coverage region of a primary base station (P_BS), the SU_Rs cooperate with one another to detect the idle spectrum. In the coverage region of each SU_R, SUs may communicate with the SU_R by a classical wireless technique, e.g., 802.11n. As soon as there are enough idle bands, the SU_R will forward the data from the SUs to the P_BS.
Architecture of CRN. The yellow dotted lines stand for cognitive links; the black solid line stands for the primary link
We consider time-variant flat fading (TVFF) channels in the CRN. In the considered scenario, TVFF channels may deteriorate the sensing performance and reduce the quality of wireless communication between the P_BS and the SU_Rs. Therefore, in realistic TVFF scenarios, besides the idle licensed band, the time-variant channel state information (CSI) on the cognitive links, as important network information, needs to be estimated to ensure good sensing performance and to help the SU_Rs acquire better communication quality. It is thus essential to propose a new CSS algorithm which not only detects the PU and estimates the varying CSI jointly but is also applicable to the new architecture.
However, most existing sensing algorithms are less attractive. By increasing the uncertainty in the CSI, TVFF will remarkably degrade the sensing performance. Although some existing schemes take the statistical properties of fading effects into account [13], they can only reduce the detrimental effects to some extent and fail to track the evolution of such dynamic TVFF channels. From another aspect, since the received signals from the P_BS disappear intermittently, CSI is also hard to acquire in CRN systems. Some feasible techniques such as machine learning methods [14] have been adopted; nevertheless, a separated CSI estimation increases the computational complexity considerably, which limits their applications.
To confront the above challenges, a new CSS algorithm is proposed. The main contributions are threefold.
We establish a dynamic state-space model (DSM) to characterize the processes of change in the CRN by Markov processes, including the PU work state and the varying TVFF channels. Within the coverage of an ordinary base station, synchronization can be obtained via the Global Positioning System (GPS) [15]; thus, synchronization and an a priori pilot are assumed to be available. The matched filter (MF) output, as the optimal observation, is used as the measurement.
Considering that the measurement is determined by both the PU work state and the CSI, a time-evolving Bernoulli random finite set (BRFS) is adopted to characterize the complex DSM procedure. Based on Bayes theory and sequential estimation schemes, the cardinality (i.e., the PU work state) and the element (i.e., the CSI) of the BRFS are estimated recursively. A particle filter is adopted to implement the above scheme.
As the number of SU_Rs increases, the growing dimension of the CSI vector renders an exponential growth of complexity. To combat this challenge, we further manipulate the construction of particles and rebuild new particles, based on the independence of the channels. In this way, the computational burden of an existing joint-estimation-based sensing scheme can be greatly alleviated.
The remainder of this work is organized as follows. The DSM is designed in Section 2. Section 3 presents the CSS algorithm, a two-step BF mechanism, which has its foundation in the Bayesian theory and the BRFS. In Section 4, the implementation of each step and manipulation of particles will be described in detail. In Section 5, compared numerical simulations and performance analysis are provided. Finally, we conclude the investigation in Section 6.
The stochastic DSM is adopted for describing the dynamic complex CRN system as follows:
$$ {s}_n=S\left({s}_{n-1}\right) $$
$$ {\boldsymbol{\upalpha}}_n=H\left({\boldsymbol{\upalpha}}_{n-1}\right) $$
$$ {\mathbf{y}}_n=G\left({\boldsymbol{\upalpha}}_n,{s}_n,{\mathbf{d}}_{n,m},{\mathbf{z}}_{n,m}\right) $$
The block diagram of the DSM is depicted in Fig. 2. We define a total of three equations for the DSM, including two dynamic equations $S(\cdot)$ and $H(\cdot)$ and one measurement equation $G(\cdot)$. The transitional function $S(\cdot)$ characterizes the stochastic evolution of the PU's state $s_n \in S = \{0,1\}$ at the $n$th discrete time as a first-order Markov process. The other transition function $H(\cdot)$ specifies the dynamic behavior of the CSI vector $\boldsymbol{\upalpha}_n = [\alpha_{0,n},\dots,\alpha_{u,n},\dots,\alpha_{U-1,n}]^T$ $(u = 0,\dots,U-1)$, where $\alpha_{u,n}$ denotes the channel between the P_BS and the $u$th SU_R. The observation function $G(\cdot)$ then describes $\mathbf{y}_n = [y_{0,n},\dots,y_{u,n},\dots,y_{U-1,n}]^T$, where $y_{u,n}$ denotes the matched-filter output of the $u$th SU_R, and $\mathbf{d}_{n,m} = [d_{n,0},\dots,d_{n,m},\dots,d_{n,M-1}]^T$ denotes the pilot sequence signal of the PU at the $n$th discrete time, where $M$ signifies the length of the pilot sequence. Random noise is involved in the measurement process, denoted by $\mathbf{z}_{n,m} = [z_{0,n,m},\dots,z_{u,n,m},\dots,z_{U-1,n,m}]^T$ and viewed as zero-mean additive white Gaussian noise (AWGN).
Architecture of CRN; primary user transmitter (PU_T)
Additionally, three assumptions are made in the DSM for the ease of analysis.
The distance between different SU_Rs is far enough that the elements of the CSI vector $\boldsymbol{\upalpha}_n$ have little correlation. Therefore, $\alpha_{0,n},\dots,\alpha_{u,n},\dots,\alpha_{U-1,n}$ are assumed to be independent and identically distributed (i.i.d.) variables.
In a slow-varying case, each fading channel gain $\alpha_{u,n}$ is assumed to remain unchanged over $L$ successive sensing slots. The static duration $L$ is related to the maximum Doppler frequency shift $f_D$, to which it is inversely proportional. Because the channel and the PU work state are asynchronous, we let $n'$ denote the channel slot index of $\alpha_{u,n}$. To connect the two time indices, $l$ is defined as the $l$th $(0 < l < L)$ PU switching slot within slot $n'$; thus, $n = L(n'-1) + l$.
The PU's state is assumed to remain invariant within each sensing slot.
Evolution of the PU work state
The PU work state takes two forms, active and idle, and is modeled as a two-valued variable $s_n \in S$, where $s_n = 1$ denotes the active state and $s_n = 0$ the idle state. The survival probability $p_s = \Pr\{s_{n+1} = 1 | s_n = 1\}$ is the probability that the PU remains active from slot $n$ to slot $n+1$. Similarly, the birth probability $p_b = \Pr\{s_{n+1} = 1 | s_n = 0\}$ is the probability of switching to the active state. Therefore, the transitional probability matrix of the PU work state is given by
$$ \varPi =\left[\begin{array}{cc}\hfill 1-{p}_{\mathrm{b}}\hfill & \hfill {p}_{\mathrm{b}}\hfill \\ {}\hfill 1-{p}_{\mathrm{s}}\hfill & \hfill {p}_{\mathrm{s}}\hfill \end{array}\right] $$
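For intuition, the two-state chain of eq. (4) is straightforward to simulate; the following Python sketch (the function name and the seed are our own choices, not from the paper) samples the PU work state and checks its empirical duty cycle against the stationary value $p_b/(1 + p_b - p_s)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pu(p_b: float, p_s: float, n_slots: int, s0: int = 0) -> np.ndarray:
    """Sample the two-state PU Markov chain with P(1|0)=p_b and P(1|1)=p_s."""
    s = np.empty(n_slots, dtype=int)
    s[0] = s0
    for n in range(1, n_slots):
        p_active = p_s if s[n - 1] == 1 else p_b
        s[n] = rng.random() < p_active
    return s

states = simulate_pu(p_b=0.4, p_s=0.7, n_slots=10000)
print("empirical duty cycle:", states.mean())  # close to p_b/(1+p_b-p_s) = 4/7
```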
Cognitive TVFF channel model
In CRN, wireless propagation suffers from white Gaussian noise and TVFF. In this investigation, we make full use of the finite-state Markov channel (FSMC) [16] to specify the cognitive channels, because its Markov property can effectively characterize the dynamic nature of TVFF and match its statistical model.
In the FSMC model, the fading channel gain is quantized into $K$ discrete states, and each state transfers to the others with specified probabilities. Let $A_k \in \mathbf{A} = \{A_0, A_1,\dots, A_{K-1}\}$ denote each channel state. Correspondingly, the state transitions can be characterized by a transitional probability matrix $\boldsymbol{\Pi}_{K\times K} = \{\pi_{i\to j},\ i, j \in 0,1,\dots, K-1\}$:
$$ {\boldsymbol{\Pi}}_{K\times K}=\left[\begin{array}{cccc} \pi_{0\to 0} & \pi_{0\to 1} & \cdots & \pi_{0\to (K-1)} \\ \pi_{1\to 0} & \pi_{1\to 1} & \cdots & \pi_{1\to (K-1)} \\ \vdots & \vdots & \ddots & \vdots \\ \pi_{(K-1)\to 0} & \pi_{(K-1)\to 1} & \cdots & \pi_{(K-1)\to (K-1)} \end{array}\right] $$
where $\pi_{k'\to k}$ accounts for the transitional probability from state $k'$ at channel slot $n'-1$ to state $k$ at $n'$:
$$ {\pi}_{k\hbox{'}\to k}= \Pr \left\{{a}_{n\hbox{'}}={A}_k\Big|{a}_{n\hbox{'}-1}={A}_{k\hbox{'}}\right\} $$
Considering the indecomposable FSMC model, the evolution of the channel is taken as a stationary Markov process. We denote the stationary probability vector as $\boldsymbol{\pi} = [\pi_0,\dots,\pi_k,\dots,\pi_{K-1}]^T$, where $\pi_k = \Pr\{\alpha_{n'} = A_k\}$ is the stationary probability of channel state $A_k$; this vector can be solved from $\boldsymbol{\Pi}_{K\times K}^T \boldsymbol{\pi} = \boldsymbol{\pi}$. The nonnegative channel amplitude is partitioned into $K$ non-overlapping regions $V = \{[v_0,v_1], [v_1,v_2],\dots,[v_{K-1},v_K]\}$. Given the PDF $f(\alpha)$ of the fading channel (e.g., a Rayleigh or Rician distribution), the stationary probability of each channel state can be computed by integrating over the region $[v_{k-1},v_k]$:
$$ {\pi}_k={\displaystyle {\int}_{v_{k-1}}^{v_k} f\left(\alpha \right)d\alpha } $$
Under the equal partition property π k = 1/ K, the partitioning bounds are easily derived from
$$ {v}_k=\sqrt{-2{\sigma}^2\cdot \ln \left(1-\raisebox{1ex}{$k$}\!\left/ \!\raisebox{-1ex}{$K$}\right.\right)} $$
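As a sanity check of the closed-form bounds above, the following Python sketch (a minimal illustration with our own helper names) computes the partition bounds for a Rayleigh channel and verifies numerically that each region carries stationary probability $1/K$:

```python
import numpy as np

def rayleigh_bounds(sigma2: float, K: int) -> np.ndarray:
    """Partition bounds v_0..v_K yielding equal stationary probabilities pi_k = 1/K."""
    k = np.arange(K + 1)
    with np.errstate(divide="ignore"):
        return np.sqrt(-2.0 * sigma2 * np.log(1.0 - k / K))  # v_K = inf

sigma2, K = 0.1, 5
v = rayleigh_bounds(sigma2, K)

# numerical check: each region [v_k, v_{k+1}] carries probability 1/K
cdf = lambda a: 1.0 - np.exp(-a ** 2 / (2.0 * sigma2))  # Rayleigh CDF
masses = np.diff([cdf(b) if np.isfinite(b) else 1.0 for b in v])
assert np.allclose(masses, 1.0 / K)
```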
For simplicity, we adopt a first-order FSMC, which is fully sufficient to match statistical fading models (e.g., Clarke's model [16]). Then, the transitional probability from channel state $A_{k'}$ to $A_k$ can be determined by
$$ {\pi}_{k\hbox{'}\to k}=\frac{1}{\pi_{k\hbox{'}}}\times {\displaystyle {\int}_{v_{k\hbox{'}}}^{v_{k\hbox{'}+1}}{\displaystyle {\int}_{v_k}^{v_{k+1}}f\left({\alpha}_{n\hbox{'}-1},{\alpha}_{n\hbox{'}}\right)d{\alpha}_{n\hbox{'}-1}d{\alpha}_{n\hbox{'}}}},\quad k=0,1,\dots, K-1 $$
where $f(\alpha_{n'-1}, \alpha_{n'})$ is the bivariate joint PDF.
Observation model
For ease of implementation, we adopt a matched filter sensing scheme [17]. Each SU_R's observing system is independent of the others. Therefore, spectrum sensing can be formulated as the following hypothesis test at each SU_R:
$$ {y}_{u,n}={\displaystyle \sum_{m=1}^M\left({\alpha}_{u,n}{s}_n{d}_{n,m}+{z}_{u,n,m}\right){d}_{n,m}}=\left\{\begin{array}{l}{\displaystyle {\sum}_{m=1}^M{z}_{u,n,m}{d}_{n,m}}\kern4.75em {H}_0\\ {}{\displaystyle {\sum}_{m=1}^M\left({\alpha}_{u,n}{d}_{n,m}+{z}_{u,n,m}\right){d}_{n,m}}\kern1.25em {H}_1\end{array}\right. $$
Here, $H_0$ and $H_1$ represent the two opposite hypotheses, i.e., the idle and the active work state; $M$ is the length of the pilot signal; $y_{u,n} \in \mathbf{y}_n$ is the measurement at the $u$th SU_R; $d_{n,m} \in \mathbf{d}_{n,m}$ is the amplitude of the $m$th pilot symbol in the $n$th sensing slot; and the AWGN is $z_{u,n,m} \in \mathbf{z}_{n,m}$ with zero mean and variance $\sigma^2$, i.e., $z_{u,n,m} \sim N(0, \sigma^2)$.
Because the received signals at different SU_Rs are independent of one another, the likelihood function $\varphi(\mathbf{y}_n | s_n, \boldsymbol{\upalpha}_n)$ factorizes into the product of the per-SU_R likelihoods $\varphi_u(y_{u,n} | s_n, \alpha_{u,n})$ $(u = 0,\dots,U-1)$:
$$ \varphi \left({\mathbf{y}}_n\left|{s}_n,{\boldsymbol{\upalpha}}_n\right.\right)={\displaystyle \prod_{u=0}^{U\hbox{-} 1}{\varphi}_u\left({y}_{u,n}\left|{s}_n,{\alpha}_{u,n}\right.\right)} $$
Conditioned on the fading channel $\alpha_{u,n}$ and the PU work state $s_n$, the sub-likelihood function $\varphi_u(y_{u,n} | s_n, \alpha_{u,n})$ follows a Gaussian distribution with zero mean under $H_0$ $(s_n = 0)$ and with nonzero mean under $H_1$ $(s_n = 1)$:
$$ {\varphi}_u\left({y}_{u,n}\left|{s}_n,{\alpha}_{u,n}\right.\right)=\left\{\begin{array}{l}\frac{1}{\sqrt{2\pi M{\sigma}^2}} \exp \left(-\frac{y_{u,n}^2}{2M{\sigma}^2}\right),\kern5em {H}_0\hfill \\ {}\frac{1}{\sqrt{2\pi M{\sigma}^2}} \exp \left(-\frac{{\left({y}_{u,n}-M{\alpha}_{u,n}\right)}^2}{2M{\sigma}^2}\right),\kern1.75em {H}_1\hfill \end{array}\right. $$
where $\sigma^2$ represents the variance of the Gaussian noise.
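To make eqs. (10) and (12) concrete, here is a minimal Python sketch (our own illustration; the unit-amplitude pilot is an assumption) that generates one matched-filter measurement and evaluates the two Gaussian likelihoods:

```python
import numpy as np

rng = np.random.default_rng(1)

def matched_filter(d: np.ndarray, s: int, alpha: float, sigma2: float) -> float:
    """One SU_R measurement y = sum_m (alpha*s*d_m + z_m) * d_m, as in eq. (10)."""
    z = rng.normal(0.0, np.sqrt(sigma2), size=d.shape)
    return float(np.sum((alpha * s * d + z) * d))

def likelihood(y: float, s: int, alpha: float, M: int, sigma2: float) -> float:
    """Gaussian likelihood of eq. (12): mean 0 under H0, mean M*alpha under H1."""
    mean = M * alpha if s == 1 else 0.0
    return float(np.exp(-(y - mean) ** 2 / (2.0 * M * sigma2))
                 / np.sqrt(2.0 * np.pi * M * sigma2))

M, sigma2, alpha = 100, 1.0, 0.3
d = np.ones(M)  # unit-amplitude pilot sequence (an assumption for this sketch)
y = matched_filter(d, s=1, alpha=alpha, sigma2=sigma2)
print(likelihood(y, 1, alpha, M, sigma2), likelihood(y, 0, alpha, M, sigma2))
```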
Joint cooperative spectrum sensing algorithm
In classical CSS schemes, e.g., the optimal soft combination scheme [10], such a combination of detection and estimation is not implemented, since most of them are threshold-based techniques that only solve the detection problem. Therefore, a Bayesian stochastic approach is an effective solution for estimating the two parameters jointly. Our task is to estimate recursively the posterior PDF of the state $\mathbf{s}_n = \{s_n, \boldsymbol{\upalpha}_n\}$.
Optimal soft combination scheme
The optimal soft combination scheme [10] estimates the PU work state by a threshold decision. Without loss of generality, the classical scheme maximizes the detection probability for a given false alarm probability. Thus, the Neyman–Pearson (NP) criterion is applied, which is equivalent to a likelihood ratio test. The corresponding likelihood ratio $LR(\mathbf{y}_n)$ between the two hypotheses and the decision function are expressed as
$$ LR\left({\mathbf{y}}_n\right)=\frac{ \Pr \left({\mathbf{y}}_n\left|{H}_1\right.\right)}{ \Pr \left({\mathbf{y}}_n\left|{H}_0\right.\right)}={\displaystyle \prod_{u=0}^{U\hbox{-} 1}\frac{ \Pr \left({y}_{u,n}\left|{H}_1\right.\right)}{ \Pr \left({y}_{u,n}\left|{H}_0\right.\right)}}\underset{H_0}{\overset{H_1}{\gtrless }}h $$
where $h$ is the threshold determined by the given false alarm probability.
In the considered scenario, the SU_Rs send their original sensing information to the fusion center without any local processing, and the measurements are then accumulated with appropriate weights. According to eqs. (11)–(12) and the NP criterion, the threshold judgment is accomplished by the likelihood ratio $LR(\mathbf{y}_n)$, which is expressed as
$$ LR\left({\mathbf{y}}_n\right)={\displaystyle \prod_{u=0}^{U\hbox{-} 1}\frac{ \Pr \left({y}_{u,n}\left|{H}_1\right.\right)}{ \Pr \left({y}_{u,n}\left|{H}_0\right.\right)}}= \exp \left({\displaystyle \sum_{u=0}^{U\hbox{-} 1}\frac{\alpha_{u,n}{y}_{u,n}}{\sigma^2}}-{\displaystyle \sum_{u=0}^{U\hbox{-} 1}\frac{M{\alpha}_{u,n}^2}{2{\sigma}^2}}\right) $$
Therefore, the original decision criterion given in eq. (13) is rewritten as
$$ y{\mathit{\hbox{'}}}_n={\displaystyle \sum_{u=0}^{U\hbox{-} 1}w{\hbox{'}}_{u,n}{y}_{u,n}}={\displaystyle \sum_{u=0}^{U\hbox{-} 1}\frac{\sqrt{\gamma_{u,n}}}{\sigma }{y}_{u,n}}\underset{H_0}{\overset{H_1}{\gtrless }} \ln (h)+{\displaystyle \sum_{u=0}^{U\hbox{-} 1}\frac{1}{2}}{\gamma}_{u,n}M $$
where $y'_n$ is the total accumulated measurement, a weighted summation over the SU_Rs whose weights $w'_{u,n}$ correspond to the instantaneous signal-to-noise ratios (SNR) $\gamma_{u,n}$ at time slot $n$. Thus, we obtain the optimal soft combination in our DSM, sketched in code below.
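A compact sketch of the resulting decision rule might look as follows in Python (an illustration only; the function name and interface are ours):

```python
import numpy as np

def soft_combine_decision(y: np.ndarray, gamma: np.ndarray,
                          sigma: float, M: int, h: float) -> int:
    """Optimal soft combination: weight each SU_R measurement by sqrt(gamma_u)/sigma
    and compare against the SNR-dependent threshold of the decision rule above."""
    stat = np.sum(np.sqrt(gamma) / sigma * y)
    thresh = np.log(h) + 0.5 * M * np.sum(gamma)
    return int(stat > thresh)  # 1 -> decide H1 (PU active), 0 -> decide H0

# usage with hypothetical numbers: 5 SU_Rs, equal SNRs
print(soft_combine_decision(y=np.full(5, 30.0), gamma=np.full(5, 0.1),
                            sigma=1.0, M=100, h=1.0))
```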
Bayesian scheme
From the Bayesian point of view, given a prior transitional function $p(\mathbf{s}_n | \mathbf{s}_{n-1})$ and the likelihood function $\varphi(\mathbf{y}_n | \mathbf{s}_n)$, the joint posterior distribution is propagated sequentially via the well-known two-step procedure, assuming that the initial density of the state, $p(\mathbf{s}_0)$, is known.
The first, predicting step is integrated via the Chapman–Kolmogorov equation as follows:
$$ {p}_{n\left|n-1\right.}\left({\mathbf{s}}_n\left|{\mathbf{y}}_{0:n-1}\right.\right)={\displaystyle \int {p}_{n-1\left|n-1\right.}\left({\mathbf{s}}_{n-1}\left|{\mathbf{y}}_{0:n-1}\right.\right)\cdot \phi \left({\mathbf{s}}_n\left|{\mathbf{s}}_{n-1}\right.\right)}d{\mathbf{s}}_{n-1} $$
where $p_{n-1|n-1}(\mathbf{s}_{n-1}|\mathbf{y}_{0:n-1})$ denotes the posterior density of $\mathbf{s}$ at the $(n-1)$th slot, and the transitional function $\phi(\mathbf{s}_n|\mathbf{s}_{n-1})$ can be decomposed as
$$ \phi \left({\mathbf{s}}_n\left|{\mathbf{s}}_{n-1}\right.\right)= \Pr \left({s}_n\left|{s}_{n-1}\right.\right)\times {\displaystyle \prod_{u=0}^{U\hbox{-} 1}{\pi}_{n\left|n-1\right.}\left({\alpha}_{u,n}\left|{\alpha}_{u,n-1}\right.\right),\kern0.5em u=0,1,\dots, U-1} $$
The second, updating step applies the Bayes rule to update the predicted PDF. Taking the measurement vector $\mathbf{y}_n$ into account, the current posterior density can be expressed as
$$ {p}_{n\left|n\right.}\left({\mathbf{s}}_n\left|{\mathbf{y}}_{0:n}\right.\right)=\frac{p_{n\left|n-1\right.}\left({\mathbf{s}}_n\left|{\mathbf{y}}_{0:n-1}\right.\right)\cdot \varphi \left({\mathbf{y}}_n\left|{\mathbf{s}}_n\right.\right)}{{\displaystyle \int {p}_{n\left|n-1\right.}\left({\mathbf{s}}_n\left|{\mathbf{y}}_{0:n-1}\right.\right)\cdot \varphi \left({\mathbf{y}}_n\left|{\mathbf{s}}_n\right.\right)}d{\mathbf{s}}_n} $$
Knowing the posterior $p_{n|n}(\mathbf{s}_n|\mathbf{y}_{0:n})$, we can estimate the CSI and the PU work state according to the maximum a posteriori (MAP) criterion.
Bernoulli random finite sets
Despite the independence of the two estimated variables, we cannot simply model them as $\mathbf{s}_n = [s_n, \boldsymbol{\upalpha}_n]^T$, since the likelihood of the measurement $\mathbf{y}_n$ switches alternately with the appearance/disappearance of the PU. Thus, the BRFS [18] $\mathcal{F}$, a stochastic variable that takes a value as an empty or singleton set, is fully suitable for the construction of the state $\mathbf{s}_n$.
The cardinality of $\mathcal{F}$ is random, modeled by a Bernoulli distribution $\rho(k) = \Pr\{|\mathcal{F}| = k\}$, $k \in \{0,1\}$, and can be identified with the PU work state. Meanwhile, its element is completely specified by a spatial distribution $p_n(\boldsymbol{\upalpha}_n)$ conditioned on the cardinality being one, and stands for the CSI vector $\boldsymbol{\upalpha}_n$, whose distribution is related to the TVFF. To describe the statistical nature of such an $\mathcal{F}$, we adopt Mahler's approach [18], the finite set statistics (FISST) PDF. The $n$th FISST PDF is uniquely determined by $\rho(k)$ and $p_n(\boldsymbol{\upalpha}_n)$:
$$ {p}_{\mathcal{F}}\left({\mathcal{F}}_n\right)=\left\{\begin{array}{ll} q\cdot {p}_n\left({\boldsymbol{\upalpha}}_n\right) & {\mathcal{F}}_n=\left\{{\boldsymbol{\upalpha}}_n\right\} \\ 1-q & {\mathcal{F}}_n=\varnothing \end{array}\right. $$
where ℱ is empty with probability 1 − q or has a singleton element with probability q. The set integral is defined as
$$ {\displaystyle {\int}_{\mathcal{F}}{p}_{\mathcal{F}}\left({\mathcal{F}}_n\right)}\delta \mathcal{F}={p}_{\mathcal{F}}\left(\varnothing \right)+{\displaystyle \int {p}_{\mathcal{F}}\left({\upalpha}_n\right)d{\upalpha}_n} $$
Here, eq. (20) defines the set integration on $\mathcal{F}$, and it integrates to one. Correspondingly, the transitional function of $\mathcal{F}$ has to suit its architecture. Given the four state transitions in eq. (4) and the PU work state at slot $n-1$, we can write the prior transitional density:
$$ {\phi}_{n|n-1}\left({\mathcal{F}}_n|\varnothing\right)=\left\{\begin{array}{ll} 1-{p}_b & \left|{\mathcal{F}}_n\right|=0 \\ {p}_b\cdot {b}_{n|n-1}\left({\boldsymbol{\upalpha}}_n\right) & \left|{\mathcal{F}}_n\right|=1 \\ 0 & \left|{\mathcal{F}}_n\right|\ge 2 \end{array}\right. $$
$$ {\phi}_{n|n-1}\left({\mathcal{F}}_n|\left\{{\boldsymbol{\upalpha}}_{n-1}\right\}\right)=\left\{\begin{array}{ll} 1-{p}_s & \left|{\mathcal{F}}_n\right|=0 \\ {p}_s\cdot {\pi}_{n|n-1}\left({\boldsymbol{\upalpha}}_n|{\boldsymbol{\upalpha}}_{n-1}\right) & \left|{\mathcal{F}}_n\right|=1 \\ 0 & \left|{\mathcal{F}}_n\right|\ge 2 \end{array}\right. $$
where $\pi_{n|n-1}(\cdot)$ is the transitional density of the CSI $\boldsymbol{\upalpha}_n$ obtained from the FSMC model, and $b_{n|n-1}(\cdot)$ is the birth density, which represents the initial density of the CSI $\boldsymbol{\upalpha}_n$ when the PU becomes active again.
Bernoulli filters algorithm
A stochastic algorithm, the Bernoulli filter, is a useful tool for estimating the posterior FISST PDF of the BRFS recursively. According to eq. (19), the FISST PDF of the BRFS $\mathcal{F}_n$ at the $n$th slot is made up of two important distributions: the posterior probability of the PU's appearance, $q_{n|n}$, and the posterior spatial PDF of the CSI $\boldsymbol{\upalpha}_n$, $f_{n|n}(\boldsymbol{\upalpha}_n)$.
Within the well-known two-step procedure of the Bayesian estimation framework, the Bernoulli filter (BF) [19], as a sequential Bayesian estimator, propagates the two posterior terms recursively. The specific approach is likewise divided into a predict step and an update step.
Predict step
Assume that the posterior FISST PDF of the BRFS $\mathcal{F}_{n-1}$ at the $(n-1)$th slot is $p_{n-1|n-1}(\mathcal{F}_{n-1}|\mathbf{y}_{0:n-1})$, which comprises the two terms $q_{n-1|n-1}$ and $f_{n-1|n-1}(\boldsymbol{\upalpha}_{n-1})$ above. The predicted FISST PDF of the BRFS, $p_{n|n-1}(\mathcal{F}_n|\mathbf{y}_{0:n-1})$, originates from eq. (16). Based on the Mahler approach, eq. (20), it is derived from
$$ \begin{array}{l} {p}_{n|n-1}\left({\mathcal{F}}_n|{\mathbf{y}}_{0:n-1}\right) \\ ={\displaystyle \int {\phi}_{n|n-1}\left({\mathcal{F}}_n|{\mathcal{F}}_{n-1}\right){p}_{n-1|n-1}\left({\mathcal{F}}_{n-1}|{\mathbf{y}}_{0:n-1}\right)\delta {\mathcal{F}}_{n-1}} \\ ={\phi}_{n|n-1}\left({\mathcal{F}}_n|\varnothing\right){p}_{n-1|n-1}\left(\varnothing|{\mathbf{y}}_{0:n-1}\right) \\ +{\displaystyle \int {\phi}_{n|n-1}\left({\mathcal{F}}_n|\left\{{\boldsymbol{\upalpha}}_{n-1}\right\}\right){p}_{n-1|n-1}\left(\left\{{\boldsymbol{\upalpha}}_{n-1}\right\}|{\mathbf{y}}_{0:n-1}\right)d{\boldsymbol{\upalpha}}_{n-1}} \end{array} $$
By examining its cardinality, we consider two different cases, i.e., $\mathcal{F}_n = \varnothing$ and $\mathcal{F}_n = \{\boldsymbol{\upalpha}_n\}$. The two predictions $q_{n|n-1}$ and $f_{n|n-1}(\boldsymbol{\upalpha}_n)$ of $p_{n|n-1}(\mathcal{F}_n|\mathbf{y}_{0:n-1})$, based on the FISST PDF of the BRFS in eq. (19), simplify to
$$ {q}_{n\left|n-1\right.}={p}_b\left(1-{q}_{n-1\left|n-1\right.}\right)+{p}_s{q}_{n-1\left|n-1\right.} $$
$$ \begin{array}{l}{f}_{n\left|n-1\right.}\left({\boldsymbol{\upalpha}}_n\right)=\frac{p_b\left(1-{q}_{n-1\left|n-1\right.}\right){b}_{n\left|n-1\right.}\left({\boldsymbol{\upalpha}}_n\right)}{q_{n\left|n-1\right.}}\\ {}+\frac{p_s{q}_{n-1\left|n-1\right.}{\displaystyle \int {\pi}_{n\left|n-1\right.}\left({\boldsymbol{\upalpha}}_n\left|{\boldsymbol{\upalpha}}_{n-1}\right.\right)\cdot }\ {f}_{n-1\left|n-1\right.}\left({\boldsymbol{\upalpha}}_{n-1}\right)d{\boldsymbol{\upalpha}}_{n-1}}{q_{n\left|n-1\right.}}\end{array} $$
where f n|n − 1(α n ) consists of two parts, i.e., birth part and survival part.
Update step
The update processing of the BF for the MF-based measurement vector $\mathbf{y}_n$ follows from eq. (10). Starting from the predicted FISST PDF $p_{n|n-1}(\mathcal{F}_n|\mathbf{y}_{0:n-1})$, the updated FISST PDF $p_{n|n}(\mathcal{F}_n|\mathbf{y}_{0:n})$ is obtained from the Bayes rule:
$$ {p}_{n\left|n\right.}\left({\mathrm{\mathcal{F}}}_n\left|{\mathbf{y}}_{0:n}\right.\right)=\frac{p_{n\left|n-1\right.}\left({\mathrm{\mathcal{F}}}_n\left|{\mathbf{y}}_{0:n-1}\right.\right)\cdot \varphi \left({\mathbf{y}}_n\left|{\mathrm{\mathcal{F}}}_n\right.\right)}{p\left({\mathbf{y}}_n\left|{\mathbf{y}}_{0:n-1}\right.\right)} $$
Using the standard Chapman–Kolmogorov function and set integration, the denominator of eq. (25) is simplified as
$$ \begin{array}{l} p\left({\mathbf{y}}_n|{\mathbf{y}}_{0:n-1}\right)={\displaystyle \int \varphi \left({\mathbf{y}}_n|{\mathcal{F}}_n\right)\cdot {p}_{n|n-1}\left({\mathcal{F}}_n|{\mathbf{y}}_{0:n-1}\right)\delta {\mathcal{F}}_n} \\ =\left(1-{q}_{n|n-1}\right)\varphi \left({\mathbf{y}}_n|\varnothing\right)+{q}_{n|n-1}{\displaystyle \int \varphi \left({\mathbf{y}}_n|\left\{{\boldsymbol{\upalpha}}_n\right\}\right)\cdot {f}_{n|n-1}\left({\boldsymbol{\upalpha}}_n\right)d{\boldsymbol{\upalpha}}_n} \end{array} $$
Similarly, the update step also considers the two cases of $\mathcal{F}_n$. Therefore, the current probability of PU appearance $q_{n|n}$ and the spatial PDF $f_{n|n}(\boldsymbol{\upalpha}_n)$ can be constructed as
$$ {q}_{n\left|n\right.}=\frac{q_{n\left|n-1\right.}{\displaystyle \int {r}_n\left({\mathbf{y}}_n\left|\left\{{\boldsymbol{\upalpha}}_n\right\}\right.\right){f}_{n\left|n-1\right.}\left({\boldsymbol{\upalpha}}_n\right)d{\boldsymbol{\upalpha}}_n}}{1\hbox{-} {q}_{n\left|n-1\right.}+{q}_{n\left|n-1\right.}{\displaystyle \int {r}_n\left({\mathbf{y}}_n\left|\left\{{\boldsymbol{\upalpha}}_n\right\}\right.\right){f}_{n\left|n-1\right.}\left({\boldsymbol{\upalpha}}_n\right)d{\boldsymbol{\upalpha}}_n}} $$
$$ {f}_{n\left|n\right.}\left({\boldsymbol{\upalpha}}_n\right)=\frac{\varphi \left({\mathbf{y}}_n\left|\left\{{\boldsymbol{\upalpha}}_n\right\}\right.\right){f}_{n\left|n-1\right.}\left({\boldsymbol{\upalpha}}_n\right)}{{\displaystyle \int \varphi \left({\mathbf{y}}_n\left|\left\{{\boldsymbol{\upalpha}}_n\right\}\right.\right){f}_{n\left|n-1\right.}\left({\boldsymbol{\upalpha}}_n\right)d{\boldsymbol{\upalpha}}_n}} $$
where r n (.) represents the measurement likelihood ratio between two hypotheses:
$$ {r}_n\left({\mathbf{y}}_n|\left\{{\boldsymbol{\upalpha}}_n\right\}\right)=\frac{\varphi \left({\mathbf{y}}_n|\left\{{\boldsymbol{\upalpha}}_n\right\}\right)}{\varphi \left({\mathbf{y}}_n|\varnothing\right)}={\displaystyle \prod_{u=0}^{U-1}\frac{\varphi_u\left({y}_{u,n}|{s}_n=1,{\alpha}_{u,n}\right)}{\varphi_u\left({y}_{u,n}|{s}_n=0,{\alpha}_{u,n}\right)}} $$
Note that eq. (28) is effectively the same as the conventional Bayesian estimation update eq. (18).
Before implementation of BF, some crucial problem should be taken fully into account. One problem is the asynchrony between the PU work state and CSI, which motivate us to refine the BF algorithm. On the other hand, how to eliminate the exponential growth of complexity associated with increasing number of SU_Rs is a key point for the application.
Asynchrony of BF algorithm
Since the channel state remains unchanged periodically, the channel state $\alpha_{u,n}$ evolves from the channel state of the previous channel slot $n'-1$, not from the channel state at sensing slot $n-1$. Thus, the predicted function in eq. (24) is reconstructed as
$$ \begin{array}{l}{f}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}\right)=\frac{p_b\left(1-{q}_{n-1\left|n-1\right.}\right){b}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_n\right)}{q_{n\left|n-1\right.}}\hfill \\ {}+\frac{p_s{q}_{n-1\left|n-1\right.}{\displaystyle \int {\pi}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_n\left|{\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}\right.\right)\cdot }\ {f}_{L\left(n\hbox{'}-2\right)+{l}_l\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}\right)d{\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}}{q_{n\left|n-1\right.}}\hfill \end{array} $$
where $l_l$ denotes the last PU-active sensing slot within channel slot $n'-1$, at which the CSI was last estimated. This handy manipulation solves the non-synchronization problem without introducing error, owing to the independence of the two terms $q$ and $f_{n|n}(\boldsymbol{\upalpha}_n)$.
Implementation of Bernoulli filters
A PF provides a general framework for the implementation of the BF (Fig. 3). This framework approximates the posterior distribution of the CSI $\boldsymbol{\upalpha}_n$, $f_{n|n}(\boldsymbol{\upalpha}_n)$, via a group of discrete particles $\{w_n^{(i)}, \boldsymbol{\upalpha}_n^{(i)}\}$ $(i = 1,\dots,T)$, where $\boldsymbol{\upalpha}_n^{(i)}$ is the state of the $i$th particle of $f_{n|n}(\boldsymbol{\upalpha}_n)$, $T$ is the total number of particles, and $w_n^{(i)}$ is the corresponding weight. Since $f_{n|n}(\boldsymbol{\upalpha}_n)$ is a probability density function, the weights are normalized so that the $w_n^{(i)}$ sum to 1.
Algorithm flow of Bernoulli filters
Suppose that at slot $n-1$ the probability of existence $q_{n-1|n-1}$ and the spatial PDF $f_{n-1|n-1}(\boldsymbol{\upalpha}_{n-1})$ are given, the latter approximated by
$$ {f}_{n-1\left|n-1\right.}\left({\boldsymbol{\upalpha}}_{n-1}\right)={\displaystyle \sum_{i=1}^T{w}_{n-1}^{(i)}{\delta}_{{\boldsymbol{\upalpha}}_{n-1}^{(i)}}\left({\boldsymbol{\upalpha}}_{n-1}\right)} $$
where δ b (.) is the Dirac delta function that is concentrated at point b.
Predicted step
In the predict step, the FISST PDF of $\mathcal{F}_n$ is implemented as follows. The prior probability of the PU work state $q_{n|n-1}$ is straightforward; see eq. (23). According to eq. (30), the predict step for $f_{n|L(n'-2)+l_l}(\boldsymbol{\upalpha}_n)$ involves the sum of two terms, the birth and survival parts, which must be approximated numerically owing to the intractable integration over the continuous distribution. It can therefore be described as
$$ {f}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_n\right)\approx {\displaystyle \sum_{i=1}^{T+B}{w}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)}{\delta}_{{\boldsymbol{\upalpha}}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)}}\left({\boldsymbol{\upalpha}}_n\right)} $$
where $B$ denotes the number of birth particles. The two groups of particles above will be simulated accordingly from
$$ {\boldsymbol{\upalpha}}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)}=\left\{\begin{array}{l}{\pi}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left|{\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}^{(i)},{\mathbf{y}}_{1:n-1}\right.\right),\kern0.5em i=1,\dots, T\hfill \\ {}{\beta}_n\left({\boldsymbol{\upalpha}}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left|{\mathbf{y}}_{1:n-1}\right.\right),\kern6.5em i=T+1,\dots, T+B\hfill \end{array}\right. $$
where the birth particles are drawn from the birth density $\beta_n(\cdot)$. At the same time, we adopt the prior transitional density of the CSI $\boldsymbol{\upalpha}_n$ as the sequential importance sampling density; thus, the weight $w_{n-1}^{(i)}$ corresponding to particle $\boldsymbol{\upalpha}_n^{(i)}$ will be predicted according to [19]:
$$ {w}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)}=\left\{\begin{array}{l}\frac{p_s{q}_{n-1\left|n-1\right.}}{q_{n\left|n-1\right.}}{w}_{L\left(n\hbox{'}-2\right)+{l}_l}^{(i)},\kern3em i=1,\dots, T\hfill \\ {}\frac{p_b\left(1-{q}_{n-1\left|n-1\right.}\right)}{q_{n\left|n-1\right.}}{w}_{L\left(n\hbox{'}-2\right)+{l}_l}^{(i)},\kern1em i=T+1,\dots, T+B\hfill \end{array}\right. $$
The transition of survival particles has been introduced in detail through the transitional function. As for the prior birth density of the CSI $\boldsymbol{\upalpha}_n$, $b_{n|L(n'-2)+l_l}(\boldsymbol{\upalpha}_n)$, in the absence of prior knowledge a more effective way is to use the previous measurements to build birth particles adaptively [20]; they follow the standard Chapman–Kolmogorov prediction, the same as the survival transition in eq. (16):
$$ \begin{array}{l}{b}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_n\left|{\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}\right.\right)\\ {}={\displaystyle \int {\pi}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_n\left|{\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}\right.\right)\cdot {b}_{L\left(n\hbox{'}-2\right)+{l}_l}\left({\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}\right)}d{\boldsymbol{\upalpha}}_{L\left(n\hbox{'}-2\right)+{l}_l}\end{array} $$
where the particles from $b_{L(n'-2)+l_l}(\cdot)$ can be sampled from the prior transitional density $f_{L(n'-2)+l_l|L(n'-3)+l_l}(\boldsymbol{\upalpha}_{L(n'-2)+l_l})$ at slot $L(n'-2)+l_l$ with equal weights $w_{L(n'-2)+l_l}^{(i)} = 1/B$.
Updated step
In the update step, the processing is similar to the predict step. The weight $w_{n|L(n'-2)+l_l}^{(i)}$ in eq. (34) is renewed by the likelihood function:
$$ {\tilde{w}}_n^{(i)}={\varphi}_n\left({\mathbf{y}}_n\left|\left\{{\boldsymbol{\upalpha}}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)}\right\}\right.\right){w}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)} $$
Then, the normalized weight of ith particle is derived as
$$ {w}_n^{(i)}=\frac{{\tilde{w}}_n^{(i)}}{{\displaystyle \sum_{i=1}^{T+B}{\tilde{w}}_n^{(i)}}} $$
Finally, α n (i) is equal to α L(n' − 2) + ll (i) with its corresponding weight w n (i).
On the other hand, the posterior density of the PU's appearance $q_{n|n}$ is constructed recursively in eq. (27). In this equation, the integral of the likelihood ratio against the predicted density is approximated by
$$ \begin{array}{c}\hfill {I}_n={\displaystyle \int {r}_n\left({\mathbf{y}}_n\left|\left\{{\boldsymbol{\upalpha}}_n\right\}\right.\right){f}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_n\right)d{\boldsymbol{\upalpha}}_n}\hfill \\ {}\hfill \approx {\displaystyle \sum_{i=1}^{T+B}{r}_n\left({\mathbf{y}}_n\left|\left\{{\boldsymbol{\upalpha}}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)}\right\}\right.\right)}{w}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)}\hfill \end{array} $$
Then, q n|n is obtained with ease.
MAP estimation
Based on the MAP criterion, the threshold $\gamma_n$ at slot $n$ is set empirically to 0.5, and the CSI $\boldsymbol{\upalpha}_n$ can be estimated from the spatial PDF $f_{n|n}(\boldsymbol{\upalpha}_n)$:
$$ {\widehat{s}}_n=\left\{\begin{array}{c}\hfill 1,\kern1.75em {q}_{n\left|n\right.}\ge {\gamma}_n\left(\mathrm{active}\right)\hfill \\ {}\hfill 0,\kern1.75em {q}_{n\left|n\right.}<{\gamma}_n,\left(\mathrm{idle}\right)\hfill \end{array}\right. $$
$$ {\widehat{\boldsymbol{\upalpha}}}_n= \arg \underset{\alpha_{i,n}\in \mathbf{A}}{max}{f}_{n\left|n\right.}\left({\boldsymbol{\upalpha}}_n\right) $$
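Putting eqs. (23)–(24) and (27)–(29) together, a compressed single-SU_R sketch of one BF recursion is given below (Python; an illustrative simplification of the paper's scheme: it ignores the channel/PU asynchrony of eq. (30), draws birth particles from a uniform density over the FSMC states, and resamples at every step; all function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def bernoulli_filter_step(q, w, a, y, A, Pi, p_b, p_s, M, sigma2, B=50):
    """One predict/update cycle of the Bernoulli filter for a single SU_R.

    q    : posterior existence probability q_{n-1|n-1}
    w, a : particle weights and channel-gain states approximating f_{n-1|n-1}
    y    : matched-filter measurement at slot n
    A    : FSMC state values (must be sorted ascending); Pi: transition matrix
    Returns (q_new, w_new, a_new).
    """
    # ---- predict, eqs. (23)-(24) ----
    q_pred = p_b * (1.0 - q) + p_s * q
    idx = np.searchsorted(A, a)                      # map state values to indices
    a_surv = A[[rng.choice(len(A), p=Pi[i]) for i in idx]]
    a_birth = rng.choice(A, size=B)                  # simplistic birth density
    a_all = np.concatenate([a_surv, a_birth])
    w_all = np.concatenate([p_s * q * w,
                            p_b * (1.0 - q) * np.full(B, 1.0 / B)]) / q_pred

    # ---- update, eqs. (27)-(29): per-particle likelihood ratio H1/H0 ----
    r = np.exp((2.0 * y * M * a_all - (M * a_all) ** 2) / (2.0 * M * sigma2))
    I_n = np.sum(r * w_all)
    q_new = q_pred * I_n / (1.0 - q_pred + q_pred * I_n)
    w_new = r * w_all
    w_new /= w_new.sum()

    # ---- resample back to the original particle count ----
    keep = rng.choice(len(a_all), size=len(a), p=w_new)
    return q_new, np.full(len(a), 1.0 / len(a)), a_all[keep]

# usage with hypothetical numbers: 5 FSMC states, uniform initial particles
A = np.linspace(0.1, 0.9, 5); Pi = np.full((5, 5), 0.2)
q, w, a = 0.5, np.full(100, 0.01), rng.choice(A, size=100)
q, w, a = bernoulli_filter_step(q, w, a, y=30.0, A=A, Pi=Pi,
                                p_b=0.4, p_s=0.7, M=100, sigma2=1.0)
print("posterior PU-active probability:", q)  # compare against the 0.5 threshold
```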
Reducing complexity
In the CSS algorithm, the particle number $(T+B)$ conditions the accuracy of the spatial PDF $f_{n|n}(\boldsymbol{\upalpha}_n)$. To ensure that every kind of particle transition exists, all kinds of particles should be fully represented. With an increasing number of SU_Rs, the exponential growth of particles seriously deteriorates the computational efficiency.
According to the MAP criterion, we need only find the optimal particle that maximizes the spatial PDF $f_{n|n}(\boldsymbol{\upalpha}_n)$, rather than approximate the whole PDF. Thus, as shown in Fig. 4, we can manipulate particles via detaching, sorting, and assembling, which polarizes the particles between best and worst. Based on the independence of the different channels, we can detach each particle of $f_{n|L(n'-2)+l_l}(\boldsymbol{\upalpha}_n)$ as in eq. (40):
$$ \begin{array}{l}{f}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\boldsymbol{\upalpha}}_{n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\right)={\displaystyle \prod_{u=1}^U{f}_{u,n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\left({\alpha}_{u,n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\right)}\hfill \\ {}={\displaystyle \prod_{u=1}^U{\displaystyle \sum_{i=1}^{T+B}{w}_{n-1}^{(i)}{\delta}_{\alpha_{u,n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}^{(i)}}\left({\alpha}_{u,n\left|L\left(n\hbox{'}-2\right)+{l}_l\right.}\right)}}\hfill \end{array} $$
where we detach the prior spatial PDF $f_{n|L(n'-2)+l_l}(\boldsymbol{\upalpha}_n)$ into $U$ marginal density functions, one per channel. Then, we calculate the sub-likelihood of each sub-particle by eq. (12) and sort them in descending order. Finally, by combining the associated sub-particles across channels, we reconstruct the particles and recalculate their likelihoods using eq. (11). With this scheme, we only consider the $K$ kinds of particles within one channel, avoiding the exponential growth of particles, and after resampling the majority of the particles of $f_{n|n}(\boldsymbol{\upalpha}_n)$ concentrate around the optimal value.
Process of reducing complexity: different colors identify the sub-particles belonging to different channels; the solid squares signify the optimal sub-particle whose sub-likelihood ratio value is larger than others, while the hollow ones represent the other sub-optimal or worse sub-particles
At first glance, minimizing the particle number by ignoring sub-optimal particles seems to cut across the particle-diversity principle of particle filters. Nevertheless, this is not the case. When the channels are independent, the variety of particles in the CSI vector is determined per channel, and the polarization does not reduce the sub-particle variety within any single channel. Hence, the scheme is entirely consistent with the PF principle; a sketch of the detach-sort-assemble step follows.
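The following Python fragment sketches one reading of the detach-sort-assemble step (our own interpretation, not the paper's code; keeping the `top` best states per channel is a tunable assumption, with `top = 1` retaining only the optimal particle):

```python
import numpy as np

def reconstruct_particles(sub_lr: np.ndarray, A: np.ndarray, top: int) -> np.ndarray:
    """Detach-sort-assemble heuristic for U independent channels.

    sub_lr[u, k] is the sub-likelihood ratio of channel state A[k] at SU_R u,
    computed per channel from eq. (12). Instead of enumerating all K**U joint
    particles, keep only the `top` best states per channel and recombine them,
    so the candidate count grows as top**U instead of K**U.
    """
    U, K = sub_lr.shape
    order = np.argsort(-sub_lr, axis=1)[:, :top]     # best state indices per channel
    grids = np.meshgrid(*[A[order[u]] for u in range(U)], indexing="ij")
    return np.stack([g.ravel() for g in grids], axis=1)  # candidate CSI vectors

# usage with hypothetical numbers: 3 channels, 5 states, keep 2 states each
cands = reconstruct_particles(np.random.rand(3, 5), np.linspace(0.1, 0.9, 5), top=2)
print(cands.shape)  # (8, 3) candidate CSI vectors instead of 125
```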
In this section, we present several cooperative experiments in realistic TVFF channels. Without loss of generality, we assume the channel states follow the commonly used time-varying Rayleigh fading model. The PDF of the channel fading gain is therefore given by
$$ {f}_R\left(\alpha \right)=\frac{\alpha }{\sigma_R^2} \exp \left(-\frac{\alpha^2}{2{\sigma}_R^2}\right) $$
where σ R 2 is the variance of Rayleigh fading.
Here, we configure the variance $\sigma_R^2 = 0.1$ and the number of discrete channel states $K = 5$. Since the CSS algorithm operates recursively under the Bayesian criterion, we adopt as a metric the total detection probability $P_D$ from [21], which accounts for correct detection under both hypotheses $H_1$ and $H_0$:
$$ {P}_D=1-p\left({H}_1\right){P}_m-p\left({H}_0\right){P}_f $$
Besides, the mean square error (MSE) is defined as the other criterion, measuring the accuracy of the CSI estimation. The equation is given by
$$ \mathrm{M}\mathrm{S}\mathrm{E}=\frac{1}{N}{\displaystyle \sum_{n=1}^N\left[{\displaystyle \sum_{i=1}^I{\left({\widehat{\alpha}}_{i,n}-{\alpha}_{i,n}\right)}^2}/{\displaystyle \sum_{i=1}^I{\alpha}_{i,n}^2}\right]} $$
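For reproducibility, the two metrics $P_D$ and MSE defined above can be computed from simulated trajectories as follows (a minimal Python sketch with our own function names):

```python
import numpy as np

def total_detection_prob(s_true: np.ndarray, s_hat: np.ndarray) -> float:
    """Empirical P_D = 1 - p(H1)*P_m - p(H0)*P_f, as defined above."""
    p1 = np.mean(s_true == 1)
    P_m = np.mean(s_hat[s_true == 1] == 0) if p1 > 0 else 0.0   # missed detection
    P_f = np.mean(s_hat[s_true == 0] == 1) if p1 < 1 else 0.0   # false alarm
    return float(1.0 - p1 * P_m - (1.0 - p1) * P_f)

def mse(alpha_hat: np.ndarray, alpha: np.ndarray) -> float:
    """Normalized MSE as defined above; rows = sensing slots, cols = SU_Rs."""
    return float(np.mean(np.sum((alpha_hat - alpha) ** 2, axis=1)
                         / np.sum(alpha ** 2, axis=1)))
```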
Different SU_R numbers
In the first experiment, the performance in terms of $P_D$ and MSE is investigated for different numbers of SU_Rs. In the simulation, the pilot signal length $M$ is set to 100, the transitional probabilities $(p_b, p_s)$ to $(0.4, 0.7)$, and the static length $L$ to 20. We set the numbers of survival particles $T$ and birth particles $B$ to 50 each.
Figure 5 shows the total detection probability curves of the CSS algorithm for different numbers of SU_Rs, $U = 3, 5,$ and $7$. Note that $P_D$ increases with increasing SNR and eventually reaches 1. Understandably, the more SU_Rs cooperate, the better the $P_D$ performance: with more SU_Rs, the FC obtains more observations about the P_BS, and the sensing accuracy is accordingly enhanced.
Sensing performance of the CSS algorithm under different numbers of SU_R
The MSE curves are shown in detail in Fig. 6. Similar to the detection curves in Fig. 5, the MSE performance also improves with more SU_Rs, and the curves converge to the same value as the SNR increases. Note, however, that a compromise must be taken into account, since more SU_Rs also imply a higher-dimensional CSI and hence more complexity. For example, in Fig. 6, the performance gain from $U = 5$ to $U = 7$ is far smaller than that from $U = 3$ to $U = 5$.
MSE of the CSS algorithm under different numbers of SU_R
Comparative analysis in sensing performance
For a comparative analysis, the classical optimal soft combination scheme of [10] is investigated. In this simulation, the number of SU_Rs is set to 5, the static length $L$ is 20, and the pilot signal length is 100. The threshold in the classical scheme is obtained according to the observation model in this paper. With $P_f = 0.1$, its detection probability $P_d = \Pr\{LR(\mathbf{y}_n) > h | H_1\}$ curves are shown in Fig. 7. Since the CSS algorithm is essentially Bayesian, the false alarm probability is absorbed into the total detection probability $P_D$ that it maximizes, so $P_f$ can hardly be fixed directly. Nevertheless, for comparing the sensing performance of the two schemes, the cooperative joint scheme can still be evaluated under the NP criterion. The blue solid curve of the new scheme shows an advantage over the soft combination scheme at the fixed $P_f = 0.1$.
The new CSS scheme clearly outperforms the classical one, since precise estimation of the CSI ensures the accuracy of the likelihood ratio between the two hypotheses $H_1$ and $H_0$ (Fig. 7). Crucially, the estimated CSI is also responsible for the communication quality of the cognitive link.
Comparison of sensing performance between the optimal soft combination and the new CSS algorithm
This work presents a new framework for dynamic spectrum sharing in TVFF channels. The central question addressed in the paper is how to effectively detect the occupancy of the PU's band and estimate the CSI jointly. Based on a BF scheme, we design a joint CSS algorithm that achieves harmony and complementarity between detection and estimation. Experimental simulations have clearly validated the new scheme. The scheme also provides a promising solution for sensing other dynamic parameters, such as the location of primary users in a CRN: by casting these parameters into the DSM, they can be estimated recursively together by the BF algorithm. Thus, the new CSS scheme is applicable to more realistic CRNs.
Q Liang, Automatic target recognition using waveform diversity in radar sensor networks. Pattern Recogn Lett 29(3), 377–381 (2008)
Q Liang, Waveform design and diversity in radar sensor networks: theoretical analysis and application to automatic target recognition. In sensor and ad hoc communications and networks, 2006. SECON'06. 2006 3rd annual IEEE communications society on Vol. 2 (IEEE, Reston, VA, 2006), pp. 684-689
Q Liang, X Cheng, SC Huang, D Chen, Opportunistic sensing in wireless sensor networks: theory and application. IEEE Trans Comput 63(8), 2002–2010 (2014)
S Singh, Q Liang, D Chen, L Sheng, Sense through wall human detection using UWB radar. EURASIP J Wirel Commun Netw 2011(1), 1–11 (2011)
S Haykin, Cognitive radio: brain-empowered wireless communications. IEEE J Sel Areas Commun 23(2), 201–220 (2005)
Federal Communications Commission, Spectrum policy task force report, FCC 02-155 (2002)
E Axell, G Leus, EG Larsson, HV Poor, Spectrum sensing for cognitive radio: state-of-the-art and recent advances. IEEE Signal Processing Mag 29(3), 101–116 (2012)
C.R. Stevenson, C. Cordeiro, E. Sofer, G. Chouinard, Functional requirements for the 802.22 WRAN standard: IEEE 802.22-05/0007r46 (IEEE, Piscataway, 2005)
IEEE, Draft supplement to standard for telecommunications and information exchange between systems—LAN/MAN specific requirements—Part 11 (2003) wireless medium access control (MAC) and physical layer (PHY) specifications: specification for radio resource measurement: IEEE 802.11k/D0.7 (IEEE, Piscataway, 2003)
J Ma, G Zhao, Y Li, Soft combination and detection for cooperative spectrum sensing in cognitive radio networks. IEEE Trans Wirel Commun 7(11), 4502–4507 (2008)
SM Mishra, A Sahai, RW Brodersen, Cooperative sensing among cognitive radios. In Communications, 2006. ICC'06. IEEE International Conference on Vol. 4. (IEEE, Istanbul, 2006), pp. 1658-1663
Z Quan, S Cui, AH Sayed, Optimal linear cooperation for spectrum sensing in cognitive radio networks. IEEE J Selected Top Signal Processing 2(1), 28–40 (2008)
FF Digham, MS Alouini, MK Simon, On the energy detection of unknown signals over fading channels. In Communications, 2003. ICC'03. IEEE international conference on Vol. 5. (IEEE, 2003), pp. 3575-3579
M Bkassiny, Y Li, SK Jayaweera, A survey on machine-learning techniques in cognitive radios. IEEE Commun Surv Tutor 15(3), 1136–1159 (2013)
W Lewandowski, J Azoubib, WJ Klepczynski, GPS: primary tool for time transfer. Proc IEEE 87(1), 163–172 (1999)
P Sadeghi, RA Kennedy, PB Rapajic, R Shams, Finite-state Markov modeling of fading channels-a survey of principles and applications. IEEE Signal Processing Mag 25(5), 57–80 (2008)
C Zhao, M Sun, B Li, L Zhao, X Peng, Blind spectrum sensing for cognitive radio over time-variant multipath flat-fading channels. EURASIP J Wirel Commun Netw 2014(1), 1–13 (2014)
RP Mahler, Statistical multisource-multitarget information fusion. (Artech House, Inc., Norwood, MA, USA, 2007)
B Ristic, BT Vo, BN Vo, A Farina, A tutorial on Bernoulli filters: theory, implementation and applications. IEEE Trans Signal Processing 61(13), 3406–3430 (2013)
B Ristic, S Arulampalam, Bernoulli particle filter with observer control for bearings-only tracking in clutter. IEEE Trans Aerosp Electron Syst 48(3), 2405–2415 (2012)
B Li, S Li, A Nallanathan, Y Nan, C Zhao, Z Zhou, Deep sensing for next-generation dynamic spectrum sharing: more than detecting the occupancy state of primary spectrum. IEEE Trans Commun 63(7), 2442–2457 (2015)
This work was supported by NSFC (61379016) and SRFDP (20130005110016).
School of Information and Communication Engineering (SICE), Beijing University of Posts and Telecommunications (BUPT), Beijing, 100876, China
Yijiang Nan, Chenglin Zhao & Bin Li
Correspondence to Yijiang Nan.
Nan, Y., Zhao, C. & Li, B. Joint detection scheme for cooperative spectrum sensing in cognitive radio network. J Wireless Com Network 2016, 79 (2016) doi:10.1186/s13638-016-0570-z
Accepted: 24 February 2016
Cognitive radio network
Cooperative spectrum sharing
Time-variant flat fading
Dynamic state-space model
Examining proximity to death and health care expenditure by disease: a Bayesian-based descriptive statistical analysis from the National Health Insurance database in Japan
Yuji Hiramatsu (ORCID: orcid.org/0000-0001-6158-9806)1,2, Hiroo Ide1, Atsuko Tsuchiya3 & Yuji Furui1
Health Economics Review volume 12, Article number: 6 (2022)
Japan is one of the Organization for Economic Co-operation and Development (OECD) countries where population aging and increasing health care expenditures (HCE) are urgent issues. Recent studies have identified factors other than age, such as proximity to death and morbidity, as contributing factors to the increase in medical costs. It is important to assess HCE by disease and analyze their factors to estimate and improve future HCE.
We extracted individual records spanning approximately 2 years prior to the death of persons aged 65 to 95 years from the National Health Insurance data in Japan, and used a Bayesian approach to decompose monthly HCE into five disease groups (circulatory, chronic kidney disease, neoplasms, respiratory, and others). The relationship between proximity to death and average HCE in each disease group was analyzed, stratified by sex and age, using a descriptive statistical method similar to the two-part model.
The average HCE increased rapidly as death approached in most disease groups, but the pattern of increase differed greatly among disease groups, sexes, and age groups. The effect of proximity to death on average HCE was small for chronic diseases but large for lethal diseases. When stratified by age and sex, younger and male decedents tended to have higher average HCE, but the extent of this varied by disease group. The two-year cumulative average HCE for neoplasms in the 65–75 years age group was about six times larger than that in the 85–95 years age group.
The results suggest that, in Japan, disease, proximity to death, age, and sex may all contribute to HCE. However, these factors interact in a complex manner, so it is important to analyze HCE by disease. In addition, preventing or delaying the progression of diseases with high medical burdens in younger people may be effective in reducing future terminal care costs. These findings have important implications for future projections and improvements of HCE.
Currently, health care expenditures (HCE) are on the rise in most Organization for Economic Co-operation and Development (OECD) countries, and the upward trend in the ratio of HCE to GDP has continued for more than 30 years [1]. Population aging has long been identified as a factor in the increase in HCE [2], and it is still underway in many countries [1]. For example, in Japan, the percentage of the total population aged 65 and over increased from 23.0 to 28.4% over the 10 years from 2010 to 2019. Many studies have examined the relationship between age and HCE [3,4,5,6,7]. It has been pointed out that the ratio of end-of-life care expenditures for decedents to total healthcare costs is greater than the ratio of decedents to the total population [8, 9]. According to Lubitz et al. [8], although Medicare decedents accounted for only 6% of the total population, their end-of-life HCE accounted for 28% of total annual HCE, most of which was concentrated within the months before death. This indicates the importance of analyzing end-of-life HCE and the factors that influence them in order to estimate future healthcare costs. It has also been pointed out that the cost of end-of-life care, including medical care costs, decreases with increasing age [8,9,10], suggesting that we cannot simply conclude that aging leads to an increase in medical costs.
After much debate about the relationship between aging and HCE, a "red herring" hypothesis was proposed [11], which suggested that although age and HCE are positively correlated, proximity to death (PTD), rather than age itself, is the main factor driving HCE, and numerous articles related to this debate have been published [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44]. While Zweifel et al. [11] found that PTD, and not age itself, was an important factor in HCE, Seshamani et al. [23] reported that both PTD and age are important factors. By adopting a two-part model, they solved the problems of multicollinearity and of the inappropriate handling of records with no medical costs in the Heckman model adopted by Zweifel et al. [11]. Other studies [15, 16, 32] have made the same point: PTD only reduces the overestimation of the effect of age on HCE, and both PTD and age are important.
Werblow et al. [13] decomposed HCE into seven healthcare services (ambulatory care, prescriptions, hospitalization, outpatient care, nursing home care, home care, and other services) and examined the association between PTD or age and HCE for each service. They reported that age was not significant for HCE for most services, while PTD was significant. However, they also noted that age had a significant effect on HCE associated with long-term care services. They coined the term "school of red herrings" for the finding that the associations between PTD or age and HCE differed depending on whether or not the health care services were related to long-term care. Polder et al. [21] used health insurance data covering 13.4% of the Dutch population to analyze the total costs of medical care and nursing care in the year before death by gender and the disease that caused death. Results showed that among the deceased, younger age was associated with higher medical care costs, while older age was associated with higher nursing care costs; in both men and women, costs were higher when malignant neoplasms were the cause of death, especially in the younger age group. When the correlation between age and HCE was examined by controlling for diseases that caused death, morbidity, number of comorbidities, length of hospital stay, and PTD, HCE was reported to be lower in older age groups [20, 22,23,24,25,26,27,28, 42]. The reasons for lower HCE in older age groups include nursing care services substituting for medical services in the elderly and reduced medical intensity [18, 22, 23, 35]. Wong et al. [24] analyzed hospital HCE for 94 different diseases among Dutch survivors and decedents and found that the significance and effect of PTD and age differed for each disease, which they termed the "carpaccio of red herring." Dormont et al. [14] also analyzed the relationship between aging, changes in morbidity and medical technology, and HCE, and concluded that the impact of aging on HCE is smaller than that of advanced medical care. In addition, some studies have reported that the closer one is to death, the more likely one is to have diseases, and that PTD itself is a substitute for morbidity [20, 27], suggesting that it is important to consider morbidities when analyzing the factors of HCE. Thus, although there have been many discussions until recently, no conclusion has been reached on the factors that determine HCE, and research is still being conducted that includes not only age and PTD, but also the effects of lifestyle, income, price, and medical environment [37,38,39,40,41, 44].
Japan is an OECD country where population aging is serious and at the same time, the increase in HCE has become a problem. In the early 2000s, the percentage of the total population aged 65 years or older was less than 20%, but in 2019, it had reached 28% [1], and verifying the correlation between age and HCE is an important issue in estimating future healthcare costs in Japan. Hashimoto et al. [18] examined the relationship between age, PTD, and health care and nursing care expenditures using claims data from the National Health Insurance (NHI) of the population aged 65 and older in the Kyushu district of Japan. They pointed out the possibility of both the contribution of PTD to increased HCE and the contribution of aging to increased nursing care expenditures. Hosoya [19] used macroeconomic data of 25 OECD countries, including Japan, and estimated the relationship between age and HCE in a fixed effects model after controlling for other macroeconomic variables such as GDP.
In this study, to examine the relationship between PTD and HCE, we stratified our data by sex and age group and examined trends in average HCE for each month from the month of death to 23 months prior. In Japan, reimbursement claims data are summed up monthly and HCE by disease is not recorded, making it unclear how much is spent on which diseases. Therefore, we used a Bayesian method to decompose the incurred HCE into five representative disease groups and analyzed the average per capita HCE for each disease group for each month prior to death. Our proposed method for appropriate allocation of costs by disease obtains the average HCE of representative diseases while accounting for parameter uncertainty, which enables analysis of HCE by disease and is useful for understanding trends in average HCE. Polder et al. [21] analyzed medical and care costs by the disease that caused death. In this study, however, we used our proposed method to allocate the incurred HCE to each disease group on an average basis. Ours is a novel method of analysis that is more objective in that it appropriately allocates costs to diseases other than those that cause death. In addition, due to the aforementioned problems with reimbursement claims data in Japan, there have been no studies on the relationship between PTD and HCE by disease [18], which is another novelty of this study.
Study objective
The purpose of this study was to clarify the relationship between PTD and end-of-life HCE for diseases using data from the Japanese National Health Insurance System. We analyzed the relationship between PTD and HCE, stratified by sex, age, and disease group, considering the results of studies outside Japan that have reported the importance of the relationship between morbidity and age [20,21,22,23,24,25,26,27,28]. The HCE for each stratified group were estimated for each month prior to death using a method similar to the two-part model that has been frequently used since Seshamani et al. [23, 33]. Finally, the estimated HCE were accumulated for approximately 2 years prior to death, and the impact of each disease group on the cost of medical care at the end of life was examined, and differences by sex, age, and disease were analyzed.
The data used in this study were information on enrollees and reimbursement claims data from the NHI in Shizuoka Prefecture, located in the center of Japan. The NHI is a public medical insurance program that requires self-employed and retired elderly people to join. Information on enrollees is anonymized individual-level data, including demographic information such as date of birth, date of death, and gender. Reimbursement claims data are a record of monthly total HCE and correspond to ICD-10 codes, which can be linked to information on enrollees by anonymized IDs. The billed HCE consists of inpatient and outpatient expenditures and prescription fees, but it is not broken down by disease. For the purpose of this study, it was necessary to estimate the average HCE for each of the five disease groups using the decomposition method for HCE, which will be explained later. The diseases were divided into five groups: circulatory (I00-I99), chronic kidney disease (N18), neoplasms (C00-D48), respiratory (J00-J99), and others, according to the WHO ICD-10 definition [45]. The three major causes of death among the elderly (65 years and older) in Japan are neoplasms, heart disease, and cerebrovascular disease, in descending order of frequency. Heart and cerebrovascular diseases are included in the circulatory group in this study. Others consisted of all ICD-10 codes except for circulatory, chronic kidney disease (CKD), neoplasms, and respiratory diseases. Among the ICD-10 codes categorized as others, there were some disease groups, such as diabetes mellitus and metabolic disorders, that are attributed to lifestyle and were of interest in this study. However, in the decomposition method for HCE described below, the estimation was not stable in some cases due to insufficient sample size, so they were included in others.
The NHI data recorded between November 2012 and October 2018 were used, and there were 4,292,759 samples including both surviving and deceased individuals. We selected insured individuals who had died during the study period, had at least 2 years of coverage, and were between 65 and 95 years of age at death. As a result, 122,318 samples were included in the analysis. The age at death was divided into three age groups: 65–75 years, 75–85 years, and 85–95 years. Seshamani et al. [33] pointed out that the effect of PTD becomes apparent 15 years before death, and Wyl [28] found that HCE increased significantly more than 2 years before death. In this study, the time to death (TTD) to be analyzed was 23 months (approximately 2 years), based on both the results of these previous studies and the sufficient data points required to decompose HCE in our method. Seshamani et al. [23] and Kolodziejczyk [34] pointed out the problem of bias due to right-censoring of survivors who did not die within the observation period, but this problem did not arise in this study because only decedents were included in the analysis.
Zweifel et al. [11] estimated the association of PTD and age with HCE using a two-step Heckman model [46] and found that PTD was significant and had a large effect. However, Salas et al. [29] and Seshamani et al. [23, 33] pointed out multicollinearity due to the inverse Mills ratio calculated using Heckman's method and the endogeneity of PTD, and avoided the multicollinearity problem by adopting a two-part model. The two-part model has long been known in actuarial science, where insurance premium rates are calculated, and is also called the "frequency-severity model" [47]. In the two-part model [48], the first step is to model the probability of incurring HCE (frequency), and the second is to model the HCE conditional on expenditures being incurred (the incurred health care expenditures, IHCE). In this study, following Klugman et al. [47], we refer to the model of the first step as the frequency model and the model of the second step as the severity model. The average HCE (AHCE) can then be calculated by multiplying the estimated frequency by the IHCE [49]. Since frequency and IHCE are estimated by stratified group, TTD (in months), and disease group, AHCE also represents the per capita estimate by stratified group, TTD, and disease group.
In the frequency model, we defined the estimated frequency $F_{.gtd}$ as the proportion of HCE occurrences by stratified group and disease group at the corresponding TTD, as shown in the following equations:
$$ F_{.gtd}=\frac{1}{N_g}\sum_{i=1}^{N_g} I_{igtd} $$
$$ N=\sum_{g=1}^{G} N_g $$
where $I_{igtd}$ is an indicator variable that takes the value 1 if the i-th subject in the g-th stratified group (1 ≤ g ≤ G) incurred HCE in disease group d (1 ≤ d ≤ 5, the five disease groups) in month t before death (0 ≤ t ≤ 23), and 0 otherwise. G represents the number of categories obtained by stratifying by sex, the three age groups, or both; the largest number of categories is six, the product of the numbers of sex and age-group categories (G = 2 × 3 = 6). N is the total number of individuals analyzed, and $N_g$ is the number of individuals belonging to the g-th stratified group.
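Because the frequency part is a simple proportion, it can be computed directly from the indicators; the R sketch below assumes a hypothetical binary array `ind_array` of dimensions $N_g \times 24 \times 5$ holding $I_{igtd}$ for one stratified group:

```r
# Minimal sketch of the frequency model for one stratified group g:
# averaging I_{igtd} over subjects i yields the 24 x 5 matrix of F_{.gtd}
# (rows: t = 0..23 months before death; columns: the five disease groups).
freq_gtd <- apply(ind_array, c(2, 3), mean)
```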
By contrast, in the severity model, the IHCE of each individual is summed by month, so the costs need to be allocated to the five disease groups. In Japan, a method called the "primary disease method" is often used to estimate IHCE by disease. This method allocates all the IHCE in the month to the disease in which most medical resources are considered to have been invested, and it has several drawbacks. First, it depends on the subjectivity of the person who judges the primary disease. Second, all medical resources spent on other coexisting diseases are also allocated to the primary disease, resulting in a bias in the estimation of IHCE by disease. Therefore, a method that enables an objective and appropriate cost allocation is necessary.
Wagner et al. [50] used Ordinary Least Squares (OLS) regression to estimate medical costs by diagnostic group, but they reported cases in which the estimated medical costs were negative. This is an unavoidable problem because OLS assumes a normal distribution. On the other hand, Zweifel et al. [11] and many other authors solved the problem of negative HCE by logarithmically transforming HCE. However, it has been pointed out that, when heteroscedasticity exists, bias occurs when retransforming back to the original cost scale, and Seshamani et al. [23] proposed a generalized linear model with a log-link function as an alternative.
The IHCE for the same disease may vary with the hospital where the patient is treated, the doctor's judgment, and comorbidities, so the mean IHCE may not be a single true value. In addition, if covariates representing these factors are not sufficiently available in the data, the methods described thus far are not necessarily optimal, and fitting that incorporates the uncertainty of the estimates is important. Furthermore, the total HCE is a sum of normally distributed components with different parameters, and a Bayesian method using Markov Chain Monte Carlo (MCMC) is well suited to estimating the parameters of each component in such cases. By adopting the MCMC-based method, the expected medical cost for each disease did not become negative, as was seen in Wagner et al. [50]. Figure 1 shows a network graph of the relationship between the parameters to be estimated and the summed IHCE at t months before the death of the i-th subject belonging to stratified group g.
Network graph representing the Severity Model
Yigt in Fig. 1 is the IHCE of the i-th subject in stratified group g at t months before death and is an observable value (gray in Fig. 1 indicates an observable value). Iigtd in Fig. 1 is an indicator variable that specifies whether or not the i-th subject in group g has IHCE for disease group d at t months before death, and is the same variable as in the frequency model. Yigtd in Fig. 1 is a random variable representing the IHCE of disease group d at t months before the death of the i-th subject, following a normal distribution with mean $\mu_{gtd}$ and variance $\sigma^2_{gtd}$, under the assumption that the IHCE of the disease groups are independent of each other. The parameters shown in purple in Fig. 1 are those to be estimated; the variance $\sigma^2_{gtd}$ is a nuisance parameter and is not of interest in this study. The structure shown in Fig. 1 can be expressed by the following equations:
$$ Y_{igt}=\sum_{d=1}^{5} Y_{igtd}\cdot I_{igtd} $$
$$ Y_{igtd}\sim \mathrm{Normal}\left(\mu_{gtd},\ \sigma_{gtd}^2\right) $$
We used R's RStan 2.19.3 [51] to estimate the posterior distributions of the parameters ($\mu_{gtd}$, $\sigma^2_{gtd}$) by MCMC sampling. With reference to the sample mean and variance of IHCE in each stratified group, weakly informative prior distributions were adopted for each parameter so that the computation would converge efficiently. HCE was normalized in units of 100,000 Japanese yen (JPY), and all subsequent figures related to medical costs in this study are shown as normalized values (1 JPY was equivalent to 0.0093 USD at the average exchange rate during the sample period). The prior distributions were the same for all disease groups and stratified groups, as follows:
$$ \mu_{gtd}\sim \mathrm{Normal}\left(0,\ 20^2\right) $$
$$ \sigma_{gtd}^2\sim \mathrm{LogNormal}\left(0,\ 10^2\right) $$
where LogNormal denotes the lognormal distribution, chosen so that the variance cannot take a negative value. For MCMC sampling, the number of chains was set to four and the number of samples in each chain to 6000, with the first 2000 steps discarded as a warm-up period. Convergence required that R̂ for all parameters and for the log posterior probability be less than 1.05 [52], and was confirmed in all calculations. Since Bayesian methods were used to analyze the IHCE, Bayesian credible intervals are reported in this paper instead of P values. In addition, the Bayes factor (BF) [53] for composite hypotheses was used to verify whether IHCE differed by sex and age group, adopting the evidence criteria in Table 1 based on Kass et al. [54].
Table 1 Evidence criteria for Bayes Factor
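The severity model can be written compactly, because a sum of independent normal components selected by 0/1 indicators is itself normal with summed means and variances. The following RStan sketch is a minimal illustration under the priors above, not the authors' actual code; the data objects (`y_vec`, `ind_mat`) are hypothetical:

```r
library(rstan)

# Severity model for one (g, t) cell: Y_igt = sum_d I_igtd * Y_igtd with
# Y_igtd ~ Normal(mu_gtd, sigma2_gtd), so each Y_igt is normal with mean
# and variance given by the indicator-weighted sums.
stan_code <- "
data {
  int<lower=1> N;                      // subjects with HCE in this (g, t) cell
  int<lower=1> D;                      // number of disease groups (5)
  vector[N] y;                         // total IHCE Y_igt, in 100,000 JPY
  matrix<lower=0, upper=1>[N, D] ind;  // indicators I_igtd (each row nonzero)
}
parameters {
  vector[D] mu;                        // mu_gtd: mean IHCE per disease group
  vector<lower=0>[D] sigma2;           // sigma^2_gtd: nuisance variances
}
model {
  mu ~ normal(0, 20);                  // weakly informative priors (as in the paper)
  sigma2 ~ lognormal(0, 10);
  y ~ normal(ind * mu, sqrt(ind * sigma2));
}
"

fit <- stan(model_code = stan_code, chains = 4, iter = 6000, warmup = 2000,
            data = list(N = length(y_vec), D = 5, y = y_vec, ind = ind_mat))
print(fit, pars = "mu")  # inspect Rhat (< 1.05 as in the paper) and mu_gtd
```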
Finally, based on the results obtained from the frequency and severity models, the average HCE (AHCE) for each disease group and the cumulative average HCE (CAHCE) over a period of approximately 2 years before death were calculated [49]. The formulation is as follows:
$$ \mathrm{AHCE}_{gtd}=\hat{\mu}_{gtd}\cdot F_{.gtd} $$
$$ \mathrm{CAHCE}_{gd}=\sum_{t=0}^{23}\mathrm{AHCE}_{gtd} $$
where $\hat{\mu}_{gtd}$ denotes the mean of the posterior distribution of $\mu_{gtd}$, $\mathrm{AHCE}_{gtd}$ denotes the AHCE of disease group d in stratified group g at t months before death, and $\mathrm{CAHCE}_{gd}$ denotes the CAHCE of disease group d in stratified group g.
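Combining the two parts is then elementwise arithmetic; the sketch below assumes hypothetical 24 x 5 matrices `mu_hat` (posterior means from the severity model) and `freq` (from the frequency model) for one stratified group:

```r
# AHCE_gtd = posterior-mean mu_gtd times F_{.gtd}; CAHCE_gd sums over t = 0..23.
ahce  <- mu_hat * freq   # elementwise product over (t, d)
cahce <- colSums(ahce)   # cumulative 2-year average HCE per disease group
```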
We classified age into three groups and stratified the subjects into up to six categories based on age group and sex. For each stratified category, a frequency model and severity model were created for each of the five disease groups for each TTD, and finally, AHCE and CAHCE were calculated for each disease group.
To verify whether the differences in frequency across categories were significant, a Chi-square test of independence was conducted between category and the number of HCE incurred. We also performed the Kruskal-Wallis test and the Wilcoxon test with Bonferroni correction to examine whether the difference in total IHCE, the sum of the IHCE of the five disease groups, between the age groups was significant. Similarly, the difference in total IHCE between the two sexes was analyzed using the Wilcoxon test. Pearson's correlation coefficient was calculated for each TTD to check whether the IHCE of the disease groups were independent of each other. The decomposed IHCE by disease group was then tested for differences between sex or age groups using the Bayes factor.
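A sketch of how these descriptive tests and the composite-hypothesis Bayes factor might look in R, with hypothetical objects (`df` holding one row per subject-month, the indicator matrix `ind_mat`, and posterior draws `draws_A`, `draws_B`); the BF line assumes prior odds of roughly 1, a simplification of the paper's computation:

```r
chisq.test(table(df$age_group, df$incurred))           # frequency vs. age group
kruskal.test(total_ihce ~ age_group, data = df)        # total IHCE across age groups
pairwise.wilcox.test(df$total_ihce, df$age_group,
                     p.adjust.method = "bonferroni")   # pairwise, Bonferroni-corrected
wilcox.test(total_ihce ~ sex, data = df)               # total IHCE by sex
cor(ind_mat)                                           # Pearson co-occurrence of HCE

# Bayes factor for the composite hypothesis mu_A > mu_B from posterior draws:
p_gt <- mean(draws_A > draws_B)
bf   <- p_gt / (1 - p_gt)
```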
The above analysis was performed by extracting the relevant data for each subject for each month from 0 to 23 months prior to death.
Basic statistics
The number of individuals used in this analysis was 122,318; they were stratified by sex, age group, or both to analyze the relationship between PTD and HCE for each disease group. Table 2 shows the number of individuals analyzed by sex and age group. The numbers of males and females were almost the same, but the proportion of older adults was larger among females, which is attributed to their longer life expectancy.
Table 2 Number of subjects by sex and age group
Table 3 shows the number of HCE incurred for each disease group when TTD is 0 to 6 months (TTD of 7 or more is not shown). Table 4 shows the mean total IHCE, the sum of IHCE over all diseases, when TTD is 0 to 6 months. As a general trend, the number of HCE incurred decreased in the following order: others, circulatory, respiratory, neoplasms, and CKD. The number of HCE incurred in the others group was the largest because it included the majority of the diseases. In the month of death (TTD = 0), the number of HCE incurred and total IHCE trended downward compared with the month before death (TTD = 1), because the effective period in the month of death was only half a month. Despite this, the number of respiratory cases was characteristically higher than in the month before death. Table 3 shows only the number of HCE incurred with TTD from 0 to 6, but values with TTD from 7 to 23 were also included in the calculation of frequency.
Table 3 Number of HCE (health care expenditures) incurred for each disease group
Table 4 Mean total IHCE (incurred health care expenditures)
Table 5 shows the admission rate based on TTD. The admission rate increased rapidly as the month of death approached. The older the group, the lower the admission rate tended to be across all TTD.
Table 5 Admission ratio by time to death
Table 6 shows the results of the Chi-square test for the age group being independent of the number of HCE incurred (the number of records with non-zero HCE) for each disease group with TTD from 0 to 6 months, based on the number of HCE incurred in Table 3. While there was a significant relationship between age group and the number of HCE incurred for most of the disease groups, there were several TTD for which there was no significant relationship with others. Table 7 shows the results of the Chi-square test for the independence between sex and the number of HCE incurred for each disease group with TTD from 0 to 6 months, based on the number of HCE incurred in Table 3. The relationship between sex and the number of HCE incurred tended to become more significant in older age across all disease groups.
Table 6 Chi-square test for the age group being independent of the number of HCE (health care expenditures) incurred
Table 7 Chi-square test for the independence of sex and the number of HCE (health care expenditures) incurred
Total IHCE was lower in older age groups, with a significant difference in the Kruskal-Wallis test for TTD between 0 and 6 months (p < 0.001). The more conservative Wilcoxon test with Bonferroni correction also showed a significant difference among all age groups (p < 0.001). Conversely, in both 75–85 and 85–95 age groups, the Wilcoxon test showed significant differences in total IHCE between males and females at TTD of 0 to 6 months (p < 0.001), but in the 65–75 age group, there were no significant differences in most TTD (Table 8).
Table 8 Wilcoxon test in total IHCE (incurred health care expenditures) between men and women
Tables 9 and 10 show Pearson's correlation coefficients of HCE occurrence for each disease group at TTD of 1 and 6 months. For most of the other TTD, there was not a strong correlation between the disease groups, suggesting that the probability of incurring HCE for each disease group was almost independent of each other, but when the TTD was 1, there was a tendency for circulatory and neoplasms not to co-occur (r = − 0.12).
Table 9 Pearson correlation coefficients of health care expenditure occurrence at time to death of 1
Table 10 Pearson correlation coefficients of health care expenditure occurrence at time to death of 6
Frequency and severity models
The results of the four types of analysis have been described in the following order: without stratification by age group and sex, with stratification by age group, with stratification by sex, and with stratification by both sex and age group.
Without stratification by age group and sex
Additional file 1 shows the frequency (a) and IHCE (b). The IHCE graph shows a 95% Bayesian confidence interval with a pale-colored band. Since the month of death has an effective period of only about half a month, the values of both frequency and IHCE decrease in the month of death for most of the disease groups. For all disease groups, the frequency tended to increase as the time of death approached, with a particularly large increase in the months prior to death in respiratory diseases, where frequency and IHCE 1 month before death were approximately 1.5 and 3.5 times higher than 12 months before death, respectively. This is a large value compared to other diseases and may represent the fact that many patients are subjected to medical treatment, such as intubation of a ventilator, as death approaches. In addition, except for others, the frequency was large for all TTD in the circulatory group, which may indicate that many diseases in the circulatory group are chronic. In IHCE, CKD had the highest medical costs for all TTD, except in the months immediately before death, probably because patients with CKD were on regular dialysis. Neoplasms were the next largest group, with relatively high medical costs across all TTD. The lowest IHCE in the circulatory group could be because most circulatory diseases are chronic, and the main medical cost involves prescriptions for antihypertensive drugs, which are less expensive than the treatment cost of lethal diseases such as neoplasms. In all disease groups, IHCE tended to increase rapidly in a nonlinear pattern in the months before death, except in the month of death.
Additional file 2 shows AHCE (a) and CAHCE (b) for each disease group. Comparing AHCE by disease group, the AHCE decreased in the following order: others, respiratory, neoplasms, CKD, and circulatory. The AHCE of the others group 1 month before death was about 2.5 times that 1 year before death, but the degree of increase differed among disease groups, and there was almost no change in the chronic diseases (CKD and circulatory). Although the frequency for the circulatory group is high, its AHCE is not large because of its small IHCE; the differences in AHCE between groups tended to disappear as TTD increased. In the CAHCE, others accounted for approximately 60% of the total CAHCE (3,200,000 JPY in 2 years), and respiratory and neoplasms each accounted for approximately 15%.
Table 11 shows a comparison of the estimated total average HCE and the corresponding actual total HCE for each TTD. The error rate is the difference between the total average HCE and the actual total HCE divided by the actual total HCE, indicating that the AHCE can be estimated with good accuracy (less than ±0.3%) across all TTD.
Table 11 Comparison of the estimated total average HCE (health care expenditures) with the corresponding actual values
With stratification by age group
Figure 2 shows the estimation results by age group. Figure 2c shows the difference in posterior distributions between the 85–95 and 75–85 age groups, and between the 75–85 and 65–75 age groups by TTD, and the pale bands indicate the 95% Bayesian credible intervals. The frequency was higher in the older age groups for all TTD in the circulatory and respiratory groups, and in others from the month of death to several months before death. In neoplasms, the frequency tended to be higher in the younger age groups. For IHCE, BF was calculated from the difference in posterior distributions between the age groups of 85–95 years and 75–85 years, and between the age groups of 75–85 years and 65–75 years, to verify the strength of evidence between the age groups. Table 12 shows the BF values for each disease group by TTD (from 0 to 12 months) among the age groups. According to the results, the difference in IHCE in others, CKD, and neoplasms between the age groups of 85–95 years and 75–85 years constituted strong evidence, with IHCE higher in the age group of 75–85 years; however, no differences in IHCE for circulatory and respiratory were found across all TTD. Conversely, between the age groups of 75–85 years and 65–75 years, there was strong evidence of a difference in CKD and neoplasms, with IHCE higher in the age group of 65–75 years, but there was no difference across all TTD for the other disease groups. Figure 3 shows the AHCE and CAHCE for each disease group by age group. In general, the younger age groups had larger values for both AHCE and CAHCE, but the older age groups tended to have larger values for respiratory diseases. In neoplasms, AHCE and CAHCE were remarkably larger in the 65–75 years age group than in the other age groups, and the effect of PTD was also large. In particular, the CAHCE of the 65–75 years age group was about three times larger than that of the 75–85 years age group and about six times larger than that of the 85–95 years age group.
Frequency and IHCE (incurred health care expenditures) with stratification by age group
Table 12 Bayes Factors for IHCE (incurred health care expenditures) difference with stratification by age group
AHCE (average health care expenditures) and CAHCE (cumulative average health care expenditures) with stratification by age group
With stratification by sex
Figure 4 shows the estimation results by sex. The bottom graph (c) shows the difference in posterior distributions between males and females by TTD, and the pale bands indicate the 95% Bayesian credible intervals. The frequency was higher for females in the circulatory and others groups, whereas it was higher for males in the remaining disease groups. For IHCE, BF was calculated from the difference in posterior distributions between males and females, and the strength of the evidence for a difference between males and females was tested. Table 13 shows the BF values for each TTD (from 0 to 12 months) for each disease group. The results show no strong evidence of a difference between males and females in circulatory and CKD, but differences were seen in respiratory, others, and neoplasms. For respiratory and others, IHCE was greater in males, but for neoplasms, IHCE was greater in females. Overall, the differences between the sexes were not as large as those between the age groups. Figure 5 shows the AHCE and CAHCE for each disease group by sex; males tended to have higher values for both.
Frequency and IHCE (incurred health care expenditures) with stratification by gender
Table 13 Bayes Factors for IHCE (incurred health care expenditures) difference with stratification by sex
AHCE (average health care expenditures) and CAHCE (cumulative average health care expenditures) with stratification by gender
By age and sex
Additional file 3 shows the AHCE and CAHCE obtained by stratifying by age group and sex and applying the frequency and severity models (the results for frequency and IHCE are not shown). Except for respiratory diseases, CAHCE was larger in females than in males between the ages of 65 and 85. In the respiratory group, CAHCE was greater in males than in females in all age groups. Except for others, the CAHCE of neoplasms in females aged 65–75 years was the largest.
The purpose of this study was to test the "red herring" hypothesis established by Zweifel et al. [11] and to find the driving factors of end-of-life care costs by disease in Japan, where the population is aging more rapidly than in other OECD countries. In this study, we stratified 122,318 decedents aged 65 to 95 years who were enrolled in the NHI in the Shizuoka Prefecture, by age group and sex, and decomposed medical costs by disease group based on Bayesian methods. Frequency, IHCE, and AHCE had different profiles for each disease group for TTD, but the values tended to increase as the month of death approached. The profiles of frequency, IHCE, and AHCE for TTD differed among the categories stratified by age group and sex, but the differences among age groups were more pronounced than those by sex.
Wong et al. [24] analyzed the association between PTD and hospital HCE by primary disease and found that the effect of PTD was stronger in lethal diseases. They concluded that the effect of PTD was stronger in neoplasms, similar to other studies [21, 25]. In our analysis, the increase in IHCE of neoplasms with the approach of the month of death was large, but the same was also true for respiratory diseases. This may be because the total IHCE in this analysis was cost-allocated to each disease group in the Bayesian method, and costs were allocated to medical treatments that were not primary diseases, such as ventilator intubation near death. In Japan, the two major circulatory diseases, stroke and heart disease, account for approximately 30% of total deaths, and according to Wong et al.'s argument [24], the effect of PTD is expected to be stronger for circulatory AHCE. However, this was not the case in our analysis. This may be because many circulatory diseases are chronic, and their symptoms are controlled by continuous medication. Therefore, it is important to consider the medical treatment for each disease and its cost (reimbursement price) when examining the factors of HCE. These discussions were made possible by using the Bayesian method to allocate costs to disease groups other than the primary disease group, suggesting the importance of cost allocation.
Overall, in the results obtained from the Japanese data used in this analysis, AHCE was larger as the month of death approached in each stratified category, which partially supported the "red herring" hypothesis proposed by Zweifel et al. [11, 30]. However, contrary to Zweifel et al.'s conclusion [30] that there was no effect of age on HCE among the deceased, AHCE differed among the age groups in the present study. In particular, AHCE and CAHCE in the age group of 65–75 years were larger than those in the age group of 85–95 years in all disease groups except for respiratory diseases, and the difference was especially pronounced in neoplasms. These findings suggest that both PTD and age affect HCE, which is consistent with several reports [15, 16, 18, 23, 32]. Hashimoto et al. [18] analyzed frequency and IHCE by inpatient and outpatient care in the year before death for individuals in the 65–75, 75–85, and 85+ years age groups in the Kyushu region of Japan, and reported that both frequency and IHCE were higher in younger age groups for almost all TTD for both inpatient and outpatient care. Meanwhile, in our study, frequency was higher in older patients with circulatory and respiratory diseases and higher in younger patients with neoplasms. Additionally, IHCE was larger in the younger age group for CKD and neoplasms, and the magnitude of the relationship between the age groups differed greatly by disease group. Although the trend of higher frequency in younger age groups was consistent in Hashimoto's study for both inpatient and outpatient care, the age profile of frequency varied by disease in our study. This suggests that disease affects HCE differently for each age group, and it is important to consider disease in the factor analysis of HCE. In addition, except for respiratory disease, AHCE and CAHCE were smaller in the older adults group, which may be due to a decrease in the intensity of inpatient care for them and a decrease in the hospitalization rate due to the use of nursing care [18, 22,23,24, 35].
Shugarman et al. [26, 36] analyzed the association between medical costs and age and gender among Medicare beneficiaries who died of lung cancer at age 68 years or older, and found that IHCE was greater in women than in men. In the present study, IHCE in the neoplasms group, including lung cancer, was higher in females across most TTD, consistent with the results of Shugarman's study. However, IHCE in males was larger than that in females for respiratory diseases and others, and it is important to note that the relationship between the magnitude of the profiles of males and females differs by disease. In particular, in others, which includes the majority of the diseases, frequency was greater in females and IHCE was greater in males across many TTD. This trend is consistent with Hashimoto's results [18]. Although women tend to be more likely to see a physician at the end of life, when men do see a physician, their illness tends to be more severe and their medical costs may be higher. As these indicate, it is important for estimating HCE before death to take into account the complex differences in frequency and IHCE by gender for each disease and TTD.
The impact of neoplasms on CAHCE, the medical cost in the 2 years prior to death, was greatest in the 65–75 age group. The CAHCE of females in the 65–75 age group was particularly large: if the age at which the disease strikes were delayed by about 10 years, medical costs would be reduced by about 500,000 yen per person per year. This is greater than the average annual medical cost for all Japanese people, which is approximately 350,000 yen [55]. To reduce the cost of end-of-life care, it is important to delay the onset of serious diseases as much as possible through preventive interventions such as lifestyle improvement and early detection and treatment in the categories in which most medical resources are invested.
In this study, we proposed a method to estimate the average HCE of each disease group by allocating the monthly aggregated HCE to disease groups using a Bayesian method. We found that the relationship between HCE and age and sex differed in each disease group, and that the terminal care cost of neoplasms was relatively higher in the younger age group. However, because the sample size was not large enough for the Bayesian method to converge, it was difficult to allocate the costs to more detailed disease groups or to inpatient and outpatient groups. This problem can be overcome by increasing the number of subjects in the analysis. In addition, we did not consider the fact that the distribution of medical costs is skewed, which may have caused bias. Furthermore, although we assumed that the IHCE of each disease group were independent of each other, the possibility of "super-additivity," in which the IHCE of comorbidities is larger than the sum of the independent IHCE of the underlying diseases, has been suggested [38], and this point may need to be considered. This analysis assumes that the relationship between age, sex, and medical costs by disease group is stationary during the period of analysis, which is about 6 years. For example, in Japan, stroke incidence and mortality rates have been on a gradual downward trend due to changes in health status over time [56, 57], and the relationship between age and disease rates has not been completely stationary over time. Therefore, when estimating future medical costs, it may be necessary to incorporate dynamic incidence rates that take into account changes in health, economic, and social conditions over time, as seen in Kasajima's study [58].
In this study, using data from decedents enrolled in the Japanese NHI, we used a Bayesian approach to decompose the aggregated monthly medical costs into HCE for each disease group, and examined the relationship between PTD and HCE by disease group, stratified by sex and age. As in recent studies, we found that HCE in most disease groups increased as death approached. However, the profiles differed greatly among disease, sex, and age groups, suggesting that these may be important driving factors for HCE. In addition, the large two-year cumulative medical cost of neoplasms in younger age groups suggests that preventive interventions such as lifestyle modification and early detection and treatment are important to reduce future medical costs in the end-of-life period. Not only for neoplasms, but also for other diseases that place a heavy burden on end-of-life care in the younger age group, the effect of delaying the onset of severe disease on reducing medical costs may not be negligible.
The data supporting the findings of this study were used under license for the current study from Shizuoka Prefecture in Japan. As such, they are not publicly available. However, data are available from the authors upon reasonable request and with permission from Shizuoka Prefecture in Japan. (Shizuoka Prefecture, Japan: https://www.pref.shizuoka.jp/index.html).
AHCE:
Average health care expenditures
CAHCE:
Cumulative average health care expenditures
CKD:
Chronic kidney disease
HCE:
Health care expenditures
IHCE:
Incurred health care expenditures
MCMC:
Markov Chain Monte Carlo
NHI:
National Health Insurance
PTD:
Proximity to death
TTD:
Time to death
OECD Health statistics. OECD 2020. https://stats.oecd.org/. Accessed 25 Dec 2020.
Bös D, von Weizsäcker RK. Economic consequences of an aging population. Eur Econ Rev. 1989;33(2–3):345–54. https://doi.org/10.1016/0014-2921(89)90112-8.
Hitiris T, Posnett J. The determinants and effects of health expenditure in developed countries. J Health Econ. 1992;11(2):173–81. https://doi.org/10.1016/0167-6296(92)90033-W.
Gerdtham U-G, Søgaard J, Andersson F, Jonsson B. An econometric analysis of health care expenditure across OECD countries. J Health Econ. 1992;11(1):63–84. https://doi.org/10.1016/0167-6296(92)90025-V.
Getzen TE. Population aging and the growth of health expenditures. J Gerontol. 1992;47(3):S98–104.
O'Connell JM. The relationship between health expenditures and the age structure of the population in OECD countries. Health Econ. 1996;5(6):573–8. https://doi.org/10.1002/(SICI)1099-1050(199611)5:6<573::AID-HEC231>3.0.CO;2-L.
Smith S, Newhouse JP, Freeland MS. Income, insurance, and technology: why does health spending outpace economic growth? Health Aff. 2009;28(5):1276–84. https://doi.org/10.1377/hlthaff.28.5.1276.
Lubitz J, Prihoda R. The use and costs of Medicare services in the last 2 years of life. Health Care Financ Rev. 1984;5(3):117–31.
Lubitz JD, Riley GF. Trends in medicare payments in the last year of life. N Engl J Med. 1993;328(15):1092–6. https://doi.org/10.1056/NEJM199304153281506.
Felder S, Meier M, Schmitt H. Health care expenditure in the last months of life. J Health Econ. 2000;19(5):679–95. https://doi.org/10.1016/S0167-6296(00)00039-4.
Zweifel P, Felder S, Meiers M. Ageing of population and health care expenditure: a red herring? Health Econ. 1999;8(6):485–96. https://doi.org/10.1002/(SICI)1099-1050(199909)8:6<485::AID-HEC461>3.0.CO;2-4.
Gregersen FA, Godager G. The association between age and mortality related hospital expenditures: evidence from a complete national registry. Nord J Heal Econ. 2014;2(1):203-18. https://doi.org/10.5617/njhe.656.
Werblow A, Felder S, Zweifel P. Population ageing and health care expenditure: a school of 'red herrings'? Health Econ. 2007;16(10):1109–26. https://doi.org/10.1002/hec.1213.
Dormont B, Grignon M, Huber H. Health expenditure growth: reassessing the threat of ageing. Health Econ. 2006;15(9):947–63. https://doi.org/10.1002/hec.1165.
Madsen J, Serup-Hansen N, Kristiansen IS. Future health care costs - do health care costs during the last year of life matter? Health Policy (New York). 2002;62(2):161–72. https://doi.org/10.1016/S0168-8510(02)00015-5.
Stearns SC, Norton EC. Time to include time to death? The future of health care expenditure predictions. Health Econ. 2004;13(4):315–27. https://doi.org/10.1002/hec.831.
Felder S, Werblow A, Zweifel P. Do red herrings swim in circles? Controlling for the endogeneity of time to death. J Health Econ. 2010;29(2):205–12. https://doi.org/10.1016/j.jhealeco.2009.11.014.
Hashimoto H, Horiguchi H, Matsuda S. Micro data analysis of medical and long-term care utilization among the elderly in Japan. Int J Environ Res Public Health. 2010;7(8):3022–37. https://doi.org/10.3390/ijerph7083022.
Hosoya K. Determinants of health expenditures: stylized facts and a new signal. Mod Econ. 2014;05(13):1171–80. https://doi.org/10.4236/me.2014.513109.
Howdon D, Rice N. Health care expenditures, age, proximity to death and morbidity: implications for an ageing population. J Health Econ. 2018;57:60–74. https://doi.org/10.1016/j.jhealeco.2017.11.001.
Polder JJ, Barendregt JJ, van Oers H. Health care costs in the last year of life-the Dutch experience. Soc Sci Med. 2006;63(7):1720–31. https://doi.org/10.1016/j.socscimed.2006.04.018.
I would like to thank Mr. Kasahara and Mr. Komoto for their patience and support over the years. I would also like to thank Editage (www.editage.com) for English language editing.
Commissioned research fees from Shizuoka Prefecture in Japan under the Shizuoka Prefecture Data Health Planning and Support Project Outsourcing Agreement. The funder had no role in the analysis, decision to publish, or preparation of the manuscript. The grant numbers are not applicable (Shizuoka Prefecture, Japan: https://www.pref.shizuoka.jp/index.html).
Institute for Future Initiatives, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Yuji Hiramatsu, Hiroo Ide & Yuji Furui
MCVP Division, AXA Life Insurance Co., Ltd, Tokyo, Japan
Yuji Hiramatsu
Health and Welfare Department, Shizuoka Prefectural Government, Shizuoka, Japan
Atsuko Tsuchiya
YH and HI designed the study and interpreted the findings. HI reviewed the manuscripts. YH curated the data, conducted the statistical analysis, and drafted the manuscript. AT and YF provided administrative support. All authors read and approved the final manuscript.
Correspondence to Yuji Hiramatsu.
The study protocol was approved by the ethics committees of the University of Tokyo, Institute for Future Initiatives (Approval No. 20–112). The requirement for informed consent was waived by the Shizuoka Prefecture government, which provided the data.
Frequency and IHCE (incurred health care expenditures) without stratification by sex and age group.
AHCE (average health care expenditures) and CAHCE (cumulative average health care expenditures) without stratification by sex and age group.
AHCE (average health care expenditures) and CAHCE (cumulative average health care expenditures) with stratification by age group and sex.
Hiramatsu, Y., Ide, H., Tsuchiya, A. et al. Examining proximity to death and health care expenditure by disease: a Bayesian-based descriptive statistical analysis from the National Health Insurance database in Japan. Health Econ Rev 12, 6 (2022). https://doi.org/10.1186/s13561-021-00353-9
July 2017, 16(4): 1169-1188. doi: 10.3934/cpaa.2017057
Existence, nonexistence and uniqueness of positive solutions for nonlinear eigenvalue problems
Gabriele Bonanno 1, Pasquale Candito 2, Roberto Livrea 2, and Nikolaos S. Papageorgiou 3
Department of Engineering, University of Messina, Messina, 98166, Italy
Department DICEAM, University of Reggio Calabria, Reggio Calabria, 89122, Italy
Department of Mathematics, National Technical University, Zografou Campus, Athens 15780, Greece
* Corresponding author: P. Candito
Received: May 2016; Revised: February 2017; Published: April 2017
Fund Project: The authors have been supported by the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
We study the existence of positive solutions for perturbations of the classical eigenvalue problem for the Dirichlet $p-$Laplacian. We consider three cases. In the first the perturbation is $(p-1)-$sublinear near $+\infty$, in the second the perturbation is $(p-1)-$superlinear near $+\infty$, and in the third we do not require any asymptotic condition at $+\infty$. Using variational methods together with truncation and comparison techniques, we show that for $\lambda\in (0, \widehat{\lambda}_1)$ (where $\lambda>0$ is the parameter and $\widehat{\lambda}_1$ is the principal eigenvalue of $\left(-\Delta_p, W^{1, p}_0(\Omega)\right)$) we have positive solutions, while for $\lambda\geq \widehat{\lambda}_1$, no positive solutions exist. In the "sublinear case" the positive solution is unique under a suitable monotonicity condition, while in the "superlinear case" we prove the existence of a smallest positive solution. Finally, we point out an existence result for a positive solution without requiring an asymptotic condition at $+\infty$, provided that the perturbation is damped by a parameter.
Keywords: p-Laplacian, first eigenvalue, generalized Picone's identity, nonlinear regularity, nonlinear maximum principle, variational methods.
Mathematics Subject Classification: Primary: 35J25; Secondary: 35J80.
Citation: Gabriele Bonanno, Pasquale Candito, Roberto Livrea, Nikolaos S. Papageorgiou. Existence, nonexistence and uniqueness of positive solutions for nonlinear eigenvalue problems. Communications on Pure & Applied Analysis, 2017, 16 (4) : 1169-1188. doi: 10.3934/cpaa.2017057
Exercises - Clocks, Groups, and Commutative Rings
Find the "addition table" and "multiplication table" for a $6$-hour clock arithmetic.
$\displaystyle{\begin{array}{c|cccccc} + & 0 & 1 & 2 & 3 & 4 & 5 \\\hline 0 & 0 & 1 & 2 & 3 & 4 & 5 \\ 1 & 1 & 2 & 3 & 4 & 5 & 0 \\ 2 & 2 & 3 & 4 & 5 & 0 & 1 \\ 3 & 3 & 4 & 5 & 0 & 1 & 2 \\ 4 & 4 & 5 & 0 & 1 & 2 & 3 \\ 5 & 5 & 0 & 1 & 2 & 3 & 4 \end{array}}$ $\displaystyle{\begin{array}{c|cccccc} \times & 0 & 1 & 2 & 3 & 4 & 5 \\\hline 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 2 & 3 & 4 & 5 \\ 2 & 0 & 2 & 4 & 0 & 2 & 4 \\ 3 & 0 & 3 & 0 & 3 & 0 & 3 \\ 4 & 0 & 4 & 2 & 0 & 4 & 2 \\ 5 & 0 & 5 & 4 & 3 & 2 & 1 \end{array}}$
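Not part of the original exercise, but if you'd like to check such tables mechanically, a short Python script along these lines (ours) reproduces both tables for any number of "hours" $n$:

```python
# Build the addition and multiplication tables for n-hour clock arithmetic,
# i.e., the integers modulo n.
def clock_tables(n):
    add = [[(a + b) % n for b in range(n)] for a in range(n)]
    mul = [[(a * b) % n for b in range(n)] for a in range(n)]
    return add, mul

add6, mul6 = clock_tables(6)
print(*add6, sep="\n")  # rows of the addition table
print()
print(*mul6, sep="\n")  # rows of the multiplication table
```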
Find the multiplicative inverse of $7$ in an $11$-hour clock arithmetic.
There are more elegant ways (using the branch of mathematics known as number theory) to find such multiplicative inverses, especially when the "clock" has a large number of hours on it. However, $11$ is a fairly small number of hours, so we can find the multiplicative inverse quickly, even using the following "brute-force" method:
Computing the products $1 \cdot 7, 2 \cdot 7, 3 \cdot 7$, etc., all in this $11$-hour clock arithmetic, we stop upon seeing the first result equal to $1$: $$\begin{array}{cccc} 1 \cdot 7 = 7, & 2 \cdot 7 = 3, & 3 \cdot 7 = 10, & 4 \cdot 7 = 6,\\ 5 \cdot 7 = 2, & 6 \cdot 7 = 9, & 7 \cdot 7 = 5, & 8 \cdot 7 = 1 \end{array}$$ Thus, $7^{-1} = 8$ in an $11$-hour clock arithmetic.
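This brute-force search is easy to mechanize; a small Python sketch (ours, not part of the exercise) is:

```python
def inverse_mod(x, n):
    """Brute-force search for the multiplicative inverse of x on an n-hour
    clock; returns None when x is not invertible (i.e., gcd(x, n) != 1)."""
    for k in range(1, n):
        if (k * x) % n == 1:
            return k
    return None

print(inverse_mod(7, 11))  # prints 8, matching the computation above
```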
Determine which elements don't have multiplicative inverses in a $6$-hour clock arithmetic.
After having fleshed out a "multiplication table" for $6$-hour arithmetic (see below), this question is quickly answered. Given that $x$ and $x^{-1}$ are inverses precisely when $x \cdot x^{-1} = 1 = x^{-1} \cdot x$, we simply look for rows (and columns) that don't contain $1$ (the multiplicative identity). In this way, we see that $0$, $2$, $3$, and $4$ don't have multiplicative inverses.
$$\begin{array}{c|cccccc} \times & 0 & 1 & 2 & 3 & 4 & 5 \\\hline 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 2 & 3 & 4 & 5 \\ 2 & 0 & 2 & 4 & 0 & 2 & 4 \\ 3 & 0 & 3 & 0 & 3 & 0 & 3 \\ 4 & 0 & 4 & 2 & 0 & 4 & 2 \\ 5 & 0 & 5 & 4 & 3 & 2 & 1 \end{array}$$
Below is the multiplication table for a $10$-hour clock arithmetic. Does the smaller set $S = \{1,3,7,9\}$ form a group using the same operation? If it does, is it an abelian group? Explain the reasoning behind both of your answers.
$$\begin{array}{c|cccccccccc} \times & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ 2 & 0 & 2 & 4 & 6 & 8 & 0 & 2 & 4 & 6 & 8 \\ 3 & 0 & 3 & 6 & 9 & 2 & 5 & 8 & 1 & 4 & 7 \\ 4 & 0 & 4 & 8 & 2 & 6 & 0 & 4 & 8 & 2 & 6 \\ 5 & 0 & 5 & 0 & 5 & 0 & 5 & 0 & 5 & 0 & 5 \\ 6 & 0 & 6 & 2 & 8 & 4 & 0 & 6 & 2 & 8 & 4 \\ 7 & 0 & 7 & 4 & 1 & 8 & 5 & 2 & 9 & 6 & 3 \\ 8 & 0 & 8 & 6 & 4 & 2 & 0 & 8 & 6 & 4 & 2 \\ 9 & 0 & 9 & 8 & 7 & 6 & 5 & 4 & 3 & 2 & 1 \\ \end{array}$$
Yes! It does form a group. To see this, consider its "multiplication" table below (which can be quickly obtained from the one above, by erasing the rows and columns corresponding to numbers not in set $S = \{1,3,7,9\}$. $$\begin{array}{c|cccc} \times & 1 & 3 & 7 & 9 \\\hline 1 & 1 & 3 & 7 & 9 \\ 3 & 3 & 9 & 1 & 7 \\ 7 & 7 & 1 & 9 & 3 \\ 9 & 9 & 7 & 3 & 1 \end{array}$$ Then, note:
$S$ is closed with respect to our $10$ hour "multiplication", as we only see $1$s, $3$s, $7$s, and $9$s in the body of the smaller table.
Associativity held in $10$-hour clock arithmetic, so given the table above involves the exact same calculations and is closed, associativity must also hold here.
The identity continues to be $1$ given that both the first row and column of products match the numbers being multiplied by $1$ to produce them.
Finally, inverses for all elements exist as there is a $1$ in each row and column.
Having satisfied the four defining properties of a group, that's exactly what the smaller table above describes!
Further, we know this is an abelian group given the symmetry of products across the (falling) diagonal in the table. That is to say, for every $a$ and $b$ in $S = \{1,3,7,9\}$, the product $a \cdot b$ equals the corresponding product $b \cdot a$.
Below is the addition table for a $10$-hour clock arithmetic. Does the smaller set $T = \{0,2,4,6,8\}$ form a group using the same operation? If it does, is it an abelian group? Explain the reasoning behind both of your answers. $$\displaystyle{\begin{array}{c|cccccccccc} + & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\hline 0 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ 1 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 0 \\ 2 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 0 & 1 \\ 3 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 0 & 1 & 2 \\ 4 & 4 & 5 & 6 & 7 & 8 & 9 & 0 & 1 & 2 & 3 \\ 5 & 5 & 6 & 7 & 8 & 9 & 0 & 1 & 2 & 3 & 4 \\ 6 & 6 & 7 & 8 & 9 & 0 & 1 & 2 & 3 & 4 & 5 \\ 7 & 7 & 8 & 9 & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ 8 & 8 & 9 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 9 & 9 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \end{array}}$$
First, let us find the addition table for $T = \{0,2,4,6,8\}$, by erasing the unneeded rows and columns of the table above: $$\begin{array}{c|ccccc} + & 0 & 2 & 4 & 6 & 8 \\\hline 0 & 0 & 2 & 4 & 6 & 8 \\ 2 & 2 & 4 & 6 & 8 & 0 \\ 4 & 4 & 6 & 8 & 0 & 2 \\ 6 & 6 & 8 & 0 & 2 & 4 \\ 8 & 8 & 0 & 2 & 4 & 6 \end{array}$$ Then, note:
$T$ is closed with respect to $10$-hour "addition", as we only see elements of $T$ in the body of the table.
Associativity holds as it held in the larger table.
The identity is $0$, as adding this to any value (even on a clock) leaves it unchanged.
Every element has an inverse as we see the identity $0$ in every row and column.
Thus, $T$ with addition on a $10$-hour clock forms a group.
Further, given the symmetry seen along the (falling) diagonal, where $a + b = b + a$ for all elements $a$ and $b$ in $T$, this is an abelian group.
Does the set $T = \{0,2,4,6,8\}$ form a commutative ring with respect to addition and multiplication on a $10$-hour clock?
Perhaps surprisingly, yes. One might object that $1 \notin T$, but look back at the multiplication table: $6$ acts as a multiplicative identity on $T$, since on a $10$-hour clock $6 \cdot 0 = 0$, $6 \cdot 2 = 2$, $6 \cdot 4 = 4$, $6 \cdot 6 = 6$, and $6 \cdot 8 = 8$. As $T$ is closed under both operations, forms an abelian group under addition (as shown above), and inherits the associativity, commutativity, and distributivity of $10$-hour clock multiplication, $T$ does form a commutative ring, with $6$ playing the role of the multiplicative identity.
Group Elements, Categorically
On Monday we concluded our mini-series on basic category theory with a discussion on natural transformations and functors. This led us to make the simple observation that the elements of any set are really just functions from the single-point set {✳︎} to that set. But what if we replace "set" by "group"? Can we view group elements categorically as well?
The answer to that question is the topic for today's post, written by guest-author Arthur Parzygnat. Arthur is a mathematics postdoctoral fellow at the University of Connecticut, and, incidentally, was the first person to introduce me to categories as an undergraduate!
An element $x$ of a set $X$ can equivalently be described in terms of a function $x:\{*\}\to X$ from any one element set into $X.$ Similarly, a point $x$ in a topological space $(X,\tau)$ can be described in terms of a function $x:\{*\}\to(X,\tau)$ from the single point space into $(X,\tau).$ This same idea works in several categories of mathematical objects.
Definition 1. Let $\mathscr{C}$ be a category with a terminal object $T$ and let $X$ be an object of $\mathscr{C}.$ A point of $X$ is a morphism $x:T\to X.$
In many examples, such as the ones mentioned above, this produces the usual notion of a point. However, it fails with categories whose objects have additional algebraic structure and the morphisms respect this algebraic structure. For instance, in the category of vector spaces and linear transformations, the terminal object is a $0$-dimensional vector space, which we'll denote by $\mathbf{0}.$ If $V$ is a real vector space, there is only a single linear transformation $\mathbf{0}\to V$ since the zero vector must be preserved. On the other hand, linear transformations of the form $\mathbb{R}\to V$ do describe elements of $V$ (the image of the number 1). Furthermore, $\mathbb{R}$ is the monoidal unit for the tensor product of vector spaces.
Definition 2. Let $(\mathscr{C},\otimes,I)$ be a monoidal category (the other data are not explicitly written here) with monoidal unit $I.$ Let $X$ be an object of $\mathscr{C}.$ A point of $X$ is a morphism $x:I\to X.$
With this definition and an appropriate choice of monoidal structure, all the examples from above describe elements in the usual sense. Unfortunately, this still does not describe elements in all categories with algebraic structure. For example, in the category of groups, the terminal object and the monoidal unit for the direct product of groups are both the group $\{e\}$ with a single element, the identity. For any group $G,$ there is only a single morphism $\{e\}\to G$ sending the identity to the identity. Although group elements in $G$ can be described by a morphism $\mathbb{Z}\to G,$ $\mathbb{Z}$ is not a terminal object, nor is it a unit for the monoidal structure. There are a few possible ways to proceed.
Find an appropriate monoidal structure on groups where $\mathbb{Z}$ plays the role of a unit.
Generalize our definition of a point even further.
Change the way we think about the totality of groups.
I have no idea how to do the first and if it's even possible. One possibility for the second option is known as the "functor of points" perspective, though we prefer another option. The third option is not only possible, but it offers an interesting perspective to how group elements are different from other types of elements. First note that every group can be viewed as a category with a single object. Normally, we think of the totality of groups as an ordinary category. However, viewing a group as a one object category, the totality of groups becomes a 2-category since the totality of categories has a canonical structure of a 2-category. Morphisms are functors, and the data of such a functor is equivalent to the data of a group homomorphism. A natural transformation
consists of a single element $h\in H$ satisfying $$ h\varphi(g)=\psi(g)h\qquad\forall\;g\in G. $$ In other words, the homomorphisms $\psi$ and $\varphi$ are conjugate by some element $h\in H.$ Composition of natural transformations corresponds to group multiplication.
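To check that last claim explicitly: if $h$ defines a natural transformation $\varphi\Rightarrow\psi$ and $h'$ defines one $\psi\Rightarrow\chi$, then for every $g\in G$ $$h'h\,\varphi(g) = h'\,\psi(g)\,h = \chi(g)\,h'h,$$ so the product $h'h$ in $H$ defines a natural transformation $\varphi\Rightarrow\chi$, and vertical composition is exactly multiplication in $H.$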
In particular, all natural transformations between groups are invertible. Hence, the set of all natural transformations from a group homomorphism to itself forms a group. (Can you guess which subgroup it is?) Of course, unless you've specifically chosen your group homomorphisms, it's probably not likely that there exists a natural transformation between them. But, in some special cases, there are many such natural transformations. In fact, the group of all natural transformations
is canonically isomorphic to $G$ itself. Here $!:\{e\}\to G$ is the unique group homomorphism from the single element group to $G.$ Therefore, group elements are more appropriately associated with "processes" instead of static "elements," which is appropriate anyway because we think of groups as symmetries of other mathematical objects. More precisely, we can make the following definition.
Definition 3. Let $\mathscr{C}$ be a 2-category with a terminal object $T.$ Let $C$ be an object of $\mathscr{C}$ and let $X:T\to C$ and $Y:T\to C$ be elements in $C.$ A process $f$ from $X$ to $Y$ in $C$ is a 2-morphism in $\mathscr{C}$ of the form
Let $\mathscr{C}$ be the 2-category of groups viewed as one-object categories. Then $T=\{e\}$ is a single element group. Set $C:=G$ to be any group $G.$ Set $X:=\;!$ and $Y:=\;!$ to be the unique group homomorphisms from $\{e\}$ to $G.$ Finally, set $f:=g$ to be any element $g$ of $G.$ This shows that a group element is an example of a process.
Let $\mathscr{C}$ be the 2-category of categories. Then $T$ is a 2-category with a single object, a single 1-morphism, and a single 2-morphism. Let $C$ be a category (such as the category of sets). Let $X$ and $Y$ be objects of $C$ (such as two sets). Let $f:X\to Y$ be a morphism in $C$ (a function in the case of sets). Then this is an example of a process, consistent with our usual notion of a process as a morphism between two objects in a category. If we wanted to, we could also replace the 2-category $\mathscr{C}$ and terminal object $T$ in the definition of a process by a monoidal 2-category with a unit in a similar fashion to what was done to include vector spaces in the discussion of elements. I encourage you to come up with other examples in this case.
Research article | Published: 13 August 2018
Using Baidu index to nowcast hand-foot-mouth disease in China: a meta learning approach
Yang Zhao, Qinneng Xu, Yupeng Chen & Kwok Leung Tsui
Hand, foot, and mouth disease (HFMD) has been recognized as one of the leading infectious diseases among children in China and has caused hundreds of deaths annually since 2008. In China, the reports of monthly HFMD cases usually have a delay of 1–2 months due to the time needed for collecting and processing clinical information. This time lag is far from optimal for policymakers making decisions. To alleviate this information gap, this study uses a meta learning framework combined with publicly available Internet-based information (Baidu search queries) for real-time estimation of HFMD cases.
We incorporate the Baidu index into our models to nowcast the monthly HFMD incidences in Guangxi, Zhejiang, and Henan provinces and in the whole of China. We develop a meta learning framework to select an appropriate predictive model based on statistical and time series meta features. Our proposed approach is assessed on the HFMD cases within the period from July 2015 to June 2016 using multiple evaluation metrics, including the root mean squared error (RMSE) and the correlation coefficient (Corr).
For the four areas: whole China, Guangxi, Zhejiang, and Henan, our approach is superior to the best competing models, reducing the RMSE by 37, 20, 20, and 30% respectively. Compared with all the alternative predictive methods, our estimates show the strongest correlation with the observations.
In this study, the proposed meta learning method significantly improves the HFMD prediction accuracy, demonstrating that: (1) Internet-based information offers the possibility of effective HFMD nowcasts; (2) the meta learning approach is capable of adapting to a wide variety of data, and enables selecting an appropriate method to improve nowcasting accuracy.
Hand, foot and mouth disease (HFMD), usually caused by enterovirus 71 (EV71) and coxsackievirus A16 (CoxA16), is an infectious disease that occurs most commonly among children under 5 years old [1–4]. The typical symptoms of HFMD patients include fever, skin eruptions on the hands and feet, and vesicles in the mouth. HFMD can cause mild to severe illness. Some patients, especially those infected by EV71, may rapidly deteriorate with life-threatening neurological and systemic complications, including neurological, cardiovascular, and respiratory problems. Several large outbreaks of HFMD have occurred in the Asia-Pacific region in recent decades, such as the 1997 pandemic in Malaysia, the 1998 pandemic in Taiwan, the 2000 pandemic in Japan, the 2008 pandemic in Singapore, Vietnam, Mongolia and Brunei, the 2008 to 2012 pandemics in China, the 2011 pandemic in Japan, the 2012 pandemic in Cambodia and the 2015 pandemic in Syria [5–11], posing a heavy burden on the public health and socioeconomic systems in the affected areas [12]. HFMD has been recognized as one of the leading infectious diseases among children in China, causing hundreds of deaths annually since 2008 [4, 13]. Real-time epidemiological surveillance and early warning of HFMD could enable timely interventions to prevent and control HFMD outbreaks, effectively minimizing morbidity and mortality and reducing the cost to the public health system.
China has built a surveillance system to report monthly HFMD cases and mortality, but the reports always have a 1–2 month delay, which poses a major challenge for policymakers trying to accurately estimate epidemics in an efficient, real-time manner. Therefore, an effective system that enables forecasting of current HFMD activity (i.e., nowcasting) is in urgent need. Up-to-date detection of an acute disease outbreak means more days gained and more lives and resources saved. In previous studies, various time series models have been employed for HFMD prediction based on historical reports, including the autoregressive integrated moving average (ARIMA) and seasonal ARIMA (SARIMA) models [14–18]. However, ARIMA-based models share a common disadvantage: they are essentially 'backward-looking', which results in poor prediction at turning points unless the turning point represents a return to a long-run equilibrium [19]. Several studies discovered correlations between the trend of HFMD and certain external variables, and constructed prediction models by incorporating external variables such as meteorological data and calendar variables [20–27]. However, one limitation of those models is that they can only be used in a relatively small area, such as a town, and may not be applicable in larger areas due to the geographical variability of those external variables among sub-areas. Thus, how to predict HFMD epidemics effectively at larger scales, such as in a province or the entire country, remains an open question for researchers.
With the arrival of the big data era, we encounter large streaming data more frequently than ever before. The availability of big data from multiple sources provides new opportunities and tools for evidence-supported decision making, such as infectious disease prediction. In 2008, Google developed an influenza surveillance web service, Google Flu Trends (GFT) [28], which used Google search queries as external variables to predict the weekly influenza-like illness (ILI) rate. The success of GFT motivated several studies aiming to assess current flu activity based on secondary data such as Internet search queries and electronic health records [29–35]. Several studies have been conducted on HFMD prediction using Baidu search queries [36–38]. In these works, Baidu search queries are incorporated into forecasting methods, and the HFMD prediction is made at either the provincial or the national level. In fact, both data-driven and knowledge-driven forecasting methods usually work well only under specific conditions, owing to the inherent diversity among data sets. Forecasting accuracy can vary considerably with differences in data structure, data size, time scale, and so on [39, 40]. Therefore, how to develop a robust method or framework with effective model selection for epidemic prediction is a major concern for many applications of public health surveillance.
Our contribution in this paper is twofold: (1) we comprehensively investigate the predictive utility of search queries from Baidu, the dominant search engine in China, for predicting the number of HFMD cases in China; and (2) we develop a novel meta learning (ML) framework that incorporates Internet big data and various parametric predictive models to improve the nowcasting accuracy for HFMD. We evaluate the prediction performance of our estimates in terms of the root mean squared error (RMSE) and correlation coefficient (Corr). The results show that the prediction performance of the predictive models can be significantly improved by utilizing Internet-based search data, and that the developed meta learning approach can automatically select a befitting model based on historical information and is more efficient than using a single model in terms of prediction power.
Data source and process
In this study, we focus on the problem of nowcasting monthly HFMD cases in areas with geographical variety, including Guangxi province, Zhejiang province, Henan province, and the whole of China. We choose these provinces because most HFMD cases occur in central and southern China [13]. The surveillance data for China, Guangxi, and Zhejiang cover four years from July 2012 to June 2016, and the data for Henan are from January 2013 to June 2016. We collect the monthly reported clinical cases of HFMD from the Chinese Center for Disease Control and Prevention (CDC) and the CDCs of the respective provinces. In medical informatics, an HFMD case is defined by clinical confirmation of papular or vesicular rashes on the hands, feet, mouth or buttocks, with or without fever [4].
Baidu is the most popular Web search engine in China, with over 80 percent of market share [41]. Among the various online services provided by Baidu, Baidu Index (https://index.baidu.com) is an online search tool that allows users to view how frequently specific keywords, subjects, and phrases have been queried over a time period. In this study, we use the HFMD-related search frequencies of keywords obtained from Baidu Index as external variables to predict HFMD epidemics. We select search terms or keywords that are closely correlated with HFMD epidemics from a keyword tool, 'Chinaz' (http://tool.chinaz.com) [12]. The keywords are obtained by calculating their pairwise correlation with the HFMD time series data, using semantic correlation analysis on the relevant queries in Baidu from available portal websites, blogs, and online reports. Finally, the 46 top keywords most correlated with the China HFMD cases are selected (the selected Chinese keywords are displayed in Additional file 1: Table S1). We collect the daily search queries of these keywords via Baidu Index, and then aggregate the data to a monthly basis for consistency. Figure 1 illustrates the HFMD-associated queries, where the monthly HFMD cases in China and the search frequency of the Chinese keyword 'hand-foot-mouth' are plotted for comparison. As can be seen in Fig. 1, the two time series are highly correlated.
Monthly HFMD cases in China and search frequency of 'hand-foot-mouth'. Blue: the variation trend of monthly HFMD incidences in China; Orange: Baidu search volume of 'hand-foot-mouth'
In our case, the response variable is the monthly HFMD incidence and the covariates are the Baidu indices of the selected search keywords. The correlation coefficients are calculated, and only those search terms whose correlation coefficients are higher than 0.5 are used in the subsequent predictive models. The keywords used may thus differ when predicting the HFMD cases in each month. Our proposed approach also employs autoregressive terms because of the intrinsic time series structure in the HFMD observations. Let $y_i$ denote the number of HFMD cases in month $i$; we calculate the correlation coefficients between the HFMD observations at lag 0 ($y_i$) and the observations at lags 1, 2, 3, 4, 5, and 6 ($y_{i-1},\ldots,y_{i-6}$), respectively. As can be seen in Table 1, the HFMD cases at lag 1 are most significantly associated with the current HFMD incidence in terms of the correlation coefficient. The autoregressive term $y_{i-1}$, together with the Baidu indices of the search keywords, comprises the covariates in our proposed approach.
Table 1 Correlation coefficients of HFMD cases at lag 0 with cases at lag 1, 2, 3, 4, 5, and 6
Since the number of covariates exceeds the number of cases in our data sets, least squares estimation may be ill-posed when using linear regression [42]. Three methods, including principal component analysis (PCA), least absolute shrinkage and selection operator (LASSO), and ridge regression (RR), are employed in our model to tackle this problem. In addition, we use the autoregressive integrated moving average (ARIMA) model to predict the incidence of HFMD in the four regions, because of the underlying time series structure of the HFMD observations.
Since the relationship between HFMD cases and the Baidu index is intrinsically dynamic, we adopt an adaptive form of out-of-sample forecasting in this study [43]. For PCA, LASSO, RR, and ARIMA, we use a 24-month window (i.e., two full years) to train the statistical models and then the upcoming months to perform out-of-sample prediction validation. As the available data are limited, the selected 24-month window length can also capture the yearly trend as well as the seasonal pattern. The model parameters are recomputed before predicting each point by using the training data from the previous 24 months.
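To make this scheme concrete, the following Python sketch (ours, not from the original study, which was implemented in R; function and variable names such as `rolling_nowcast` are hypothetical) illustrates the rolling-window keyword screening and refitting, with ridge regression standing in for any of the candidate models:

```python
import numpy as np
from sklearn.linear_model import Ridge

def rolling_nowcast(y, X, window=24, corr_thresh=0.5):
    """Adaptive out-of-sample nowcasting with a rolling 24-month window.

    y : (T,) array of monthly HFMD counts
    X : (T, k) array of Baidu index values for the candidate keywords
    Keywords are re-screened by their correlation with y on each training
    window, and the model is refit before predicting the next month.
    """
    preds = []
    for t in range(window + 1, len(y)):
        y_tr = y[t - window:t]
        X_tr = X[t - window:t]
        # keep only keywords whose correlation with y exceeds the threshold
        corr = np.array([np.corrcoef(X_tr[:, j], y_tr)[0, 1]
                         for j in range(X.shape[1])])
        keep = corr > corr_thresh
        # covariates: lag-1 HFMD cases plus the screened Baidu indices
        Z_tr = np.column_stack([y[t - window - 1:t - 1], X_tr[:, keep]])
        z_t = np.concatenate([[y[t - 1]], X[t, keep]])
        model = Ridge(alpha=1.0).fit(Z_tr, y_tr)
        preds.append(float(model.predict(z_t.reshape(1, -1))[0]))
    return np.array(preds)
```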
Evaluation metrics
Three metrics are employed to measure the prediction accuracy: root mean square error (RMSE), mean absolute percent error (MAPE), and correlation coefficient (Corr). For a series of predicted values $\hat {\boldsymbol {Y}}=(\hat {y}_{1}, \hat {y}_{2}, \ldots, \hat {y}_{n})$ and their corresponding real values $\boldsymbol{Y}=(y_1,y_2,\ldots,y_n)$, these metrics are
$$\begin{array}{@{}rcl@{}} RMSE&=&\sqrt{\frac{\sum_{i=1}^{n}\left(\hat{y}_{i}-y_{i}\right)^{2}}{n}},\\ MAPE&=&\frac{\sum_{i=1}^{n}\left(\left|\frac{\hat{y}_{i}-y_{i}}{y_{i}}\right|\right)}{n}\\ Corr&=&\frac{cov\left(\hat{\boldsymbol{Y}},\boldsymbol{Y}\right)}{\sigma_{\hat{\boldsymbol{Y}}} \sigma_{\boldsymbol{Y}}}. \end{array} $$
Smaller RMSE and MAPE indicate better prediction performance, while the higher the correlation the better.
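These metrics translate directly into code; a minimal Python version (ours, for illustration) is:

```python
import numpy as np

def rmse(y_hat, y):
    return np.sqrt(np.mean((np.asarray(y_hat) - np.asarray(y)) ** 2))

def mape(y_hat, y):
    y_hat, y = np.asarray(y_hat), np.asarray(y)
    return np.mean(np.abs((y_hat - y) / y))

def corr(y_hat, y):
    return np.corrcoef(y_hat, y)[0, 1]
```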
A meta learning approach for HFMD nowcasting
As discussed earlier, one major challenge of health forecasting is that no single algorithm performs best for all health conditions. Although four individual models are examined in this study, there is no guarantee that any one of them can always outperform the others. To achieve more accurate forecasting results, an important question is how to choose the best model for each time point in each location. Meta-learning, in this scenario, is a potential approach to automatically acquire empirical knowledge for supporting non-expert users in the algorithm selection task [44]. Meta-learning has proven to be effective in many forecasting applications [45–48], but its effectiveness in forecasting infectious diseases has rarely been investigated.
Meta-learning is defined as an automatic process of generating knowledge that associates the performance of algorithms with the characteristics of the problem [49]. The meta learner can simply be a single machine learning algorithm [50]. In this case, we employ a support vector machine (SVM) as the meta learner to build the recommendation system in meta learning. SVMs are a specific class of algorithms characterized by the use of kernels, the absence of local minima, and the sparseness of the solution in terms of the number of support vectors [51]. SVMs can be applied for both classification and regression purposes. In SVM classification, the goal is to find a maximal-margin hyperplane that separates data points from different classes as widely as possible in feature space. Besides linear classification, SVMs also work efficiently in cases of nonlinear separation via kernel transformations, which automatically map the inputs into transformed feature spaces.
Figure 2 shows the overall procedure of our meta learning framework. We take the HFMD forecasting in China as an example to illustrate the framework. Let $\boldsymbol{Y} = (y_1, \ldots, y_{48})^{\top}$ represent the outputs, where $y_i\ (i = 1,\ldots,48)$ denotes the monthly HFMD incidences in China from July 2012 to June 2016. Let $\boldsymbol{X}=(\boldsymbol{x}_1, \ldots, \boldsymbol{x}_{48})^{\top}$ represent the covariates set, where $\boldsymbol{x}_i = (1, y_{i-1}, \boldsymbol{b}_i)$ denotes the $i$th input, and $\boldsymbol{b}_i=(b_{i1},\ldots,b_{ik})$ denotes the Baidu index (search frequency) of $k$ ($k=46$) search keywords related to HFMD activity in the $i$th month. The procedure of the meta learning method mainly consists of the following steps:
Meta learning framework
Step 1: The dataset is divided into a training set $T^{(0)}$ and a testing set $T^{(1)}$. For the training set $T^{(0)}$, $\boldsymbol{t}_{j}^{(0)}=\left(y_{j},\boldsymbol{x}_{j}\right)\ (j=1,\ldots,26)$ is the $j$th point, where $\boldsymbol{x}_j=(y_{j-1},\boldsymbol{b}_j)$. For the testing set $T^{(1)}$, $\boldsymbol{t}_{s}^{(1)}=\left(y_{s},\boldsymbol{x}_{s}\right)\ (s=1,\ldots,22)$ is the $s$th point, where $\boldsymbol{x}_s=(y_{s-1},\boldsymbol{b}_s)$.
Step 2: A set of predictive method candidates $\{f^{(1)},\ldots,f^{(L)}\}$ for fitting the relationship between $\boldsymbol{Y}$ and $\boldsymbol{X}$ is constructed. For each method, we have the fitted model $y_i=f^{(l)}(\boldsymbol{x}_i;\boldsymbol{\theta}^{(l)})$, where $f^{(l)}\in\{f^{(1)},\ldots,f^{(L)}\}$ and $\boldsymbol{\theta}^{(l)}$ is the parameter set of this method. For each data point in the testing set, all the predictive methods are applied for HFMD prediction and an adaptive approach (models are dynamically trained with a 2-year time window) is adopted.
Step 3: The MAPE of each predictive method at the first $n-1$ testing data points is calculated, and the optimal method is selected by minimizing the MAPE value, i.e., $l_{s}^{*}=\arg\min\limits_{l\in \{1,\ldots,L\}}{MAPE}_{s}=\arg\min\limits_{l\in\{1,\ldots,L\}}\left|\hat{y}_{s}^{(l)}-y_{s}^{(l)}\right|/y_{s}^{(l)}$;
Step 4: For each case in the first $n-1$ testing data points, 11 statistical, time series, and physical features characterizing its training set are extracted based on previous studies [46–48, 50]. Let $\boldsymbol{F}_{s}=\left(F_{s}^{1},\ldots,F_{s}^{m}\right)$ denote the set of features. The description of the features is shown in Table 2.
Table 2 Meta features description
Step 5: SVM is employed as the meta learner to train on the data set $\left(l_{s}^{*},\boldsymbol{F}_{s}\right)\ (s=1,\ldots,n-1)$, where the response variable $l_{s}^{*}$ is the index of the optimal method for the $s$th point, and the 11 features $\boldsymbol{F}_{s}$ extracted from the corresponding training set are the covariates. Leave-one-out cross validation is applied for model parameter tuning. The fitted model is then sent to the recommendation system for selecting the optimal method on a given data set.
Step 6: To predict the new HFMD cases in the $n$th month, the 11 features associated with its training set are input to the recommendation system, and the meta learner returns an appropriate method for forecasting the HFMD incidences in the $n$th month. The new HFMD cases are then predicted via the recommended model.
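A minimal sketch of Steps 4–6 follows (ours; the original study was implemented in R, and the helper names and the particular meta-features below are illustrative assumptions, since the contents of Table 2 are not reproduced here). scikit-learn's SVC stands in for the SVM meta learner:

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import SVC

def meta_features(y_window):
    """Step 4 (illustrative only): a few statistical/time-series features of
    a training window; the paper's exact 11 features are listed in Table 2."""
    y = np.asarray(y_window, dtype=float)
    d = np.diff(y)
    return np.array([
        y.mean(), y.std(ddof=1), skew(y), kurtosis(y),
        np.corrcoef(y[:-1], y[1:])[0, 1],           # lag-1 autocorrelation
        np.polyfit(np.arange(len(y)), y, 1)[0],     # linear trend slope
        np.sum(np.sign(d[:-1]) != np.sign(d[1:])),  # number of turning points
    ])

def fit_meta_learner(F, best_model_idx):
    """Steps 3 and 5: best_model_idx[s] is the MAPE-minimizing model index
    for test month s; the SVM is tuned by leave-one-out cross validation."""
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]},
                        cv=LeaveOneOut())
    return grid.fit(F, best_model_idx).best_estimator_

def recommend(meta_learner, f_new):
    """Step 6: index of the model recommended for the new month."""
    return int(meta_learner.predict(f_new.reshape(1, -1))[0])
```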
Linear regression (LR) with principal component analysis (PCA)
Linear regression (LR) was the first regression method to have a complete theoretical framework and to be applied widely in practice. In this study, the linear regression model is formulated as:
$$\begin{array}{@{}rcl@{}} y_{i}=\alpha+\beta_{0}y_{i-1}+\sum_{k=1}^{46}{\beta_{k}b_{ik}}+\varepsilon_{i}, \qquad \varepsilon_{i}\overset{iid}{\sim}N\left(0, \sigma^{2}\right) \end{array} $$
where $\hat{\boldsymbol{\beta}}=\left(\hat{\beta}_{0},\ldots,\hat{\beta}_{46}\right)$ denotes the vector of estimated coefficients and the $\boldsymbol{b}_{i}$ are the exogenous variables.
However, as mentioned earlier, LR might be ill-posed when the number of covariates exceeds the number of cases, due to the limitation of least squares estimation. To tackle this problem, we introduce principal component analysis (PCA) to reduce the dimensionality of the covariates. PCA works by first computing linear combinations of variables that contribute to variation in the sample, and then ranking those combinations according to the amount of variation they account for. The top-ranked combinations are then used as the new covariates for regression. More details on the application of PCA can be found in [52–56]. In this study, we apply PCA to the observed Baidu index matrix of the training set to obtain the principal components, and select a subset of the top principal components that explain at least 95% of the variance.
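A sketch of this PCA-plus-regression step, assuming scikit-learn in Python (the study itself used R packages; function and variable names are ours):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_pca_lr(B_train, y_lag_train, y_train):
    """PCA on the training-window Baidu index matrix, keeping the top
    components that explain at least 95% of the variance, followed by OLS
    on those components together with the lag-1 HFMD term."""
    pca = PCA(n_components=0.95, svd_solver="full")
    pcs = pca.fit_transform(B_train)
    Z = np.column_stack([y_lag_train, pcs])
    lr = LinearRegression().fit(Z, y_train)
    return pca, lr
```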
Least absolute shrinkage and selection operator (LASSO)
LASSO, also referred to as the L1 regularization method, is able to achieve both covariate selection and regression. It works by setting a constraint on the sum of the absolute values of the regression coefficients, forcing certain coefficients to be zero. In this way, LASSO enables efficient selection of a simpler model without the insignificant features, which can enhance prediction accuracy. More technical details of LASSO and some of its generalizations and variants can be found in [57, 58]. In this study, the LASSO estimate $(\hat {\alpha },\hat {\boldsymbol {\beta }})_{lasso}$ can be obtained by solving
$$\begin{array}{rcl} \left(\hat{\alpha},\hat{\boldsymbol{\beta}}\right)_{lasso}&=& \arg\min{\displaystyle\sum_{i}\left(y_{i}-\alpha-\beta_{0}y_{i-1}-\sum_{k=1}^{46}\beta_{k}b_{ik}\right)^{2}}\\ &&\text{subject to } \displaystyle\sum_{k=0}^{46}\left|\beta_{k}\right|\le g, \end{array}$$
where $g\geq 0$ is a tuning parameter.
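In practice the constrained problem above is usually solved in its equivalent penalized form, where each bound $g$ corresponds to some penalty weight; a toy Python sketch (ours, with synthetic data and a hypothetical penalty weight) is:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
Z = rng.normal(size=(24, 47))           # lag-1 term plus 46 Baidu indices
y = Z[:, 0] * 3 + rng.normal(size=24)   # toy response for illustration
lasso = Lasso(alpha=0.5, max_iter=10000).fit(Z, y)
kept = np.flatnonzero(lasso.coef_)      # covariates surviving the L1 penalty
print(kept)
```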
Ridge regression (RR)
Ridge regression, also referred to as the L2 regularization method, is likewise applied for HFMD nowcasting in this study. Ridge regression conducts the least squares estimation by adding a small constant value $\lambda$ to the diagonal entries of the matrix $\boldsymbol{X}^{T}\boldsymbol{X}$ before taking its inverse. The ridge regression estimate $\left(\hat{\alpha},\hat{\boldsymbol{\beta}}\right)_{ridge}$ can be obtained by solving
$$\left(\hat{\alpha},\hat{\boldsymbol{\beta}}\right)_{ridge}=\arg\min{\sum_{i}\left(y_{i}-\alpha-\beta_{0}y_{i-1}-\sum_{k=1}^{46}\beta_{k}b_{ik}\right)^{2}}+\lambda\sum_{k=0}^{46}\beta_{k}^{2}$$
The analytical solution of the ridge regression estimator is given by
$$\begin{array}{@{}rcl@{}} \left(\hat{\alpha},\hat{\boldsymbol{\beta}}\right)_{ridge}=\left(\boldsymbol{X}^{T}\boldsymbol{X}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^{T}\boldsymbol{y}, \end{array} $$
where $\boldsymbol{I}$ is an identity matrix.
Different from LASSO, ridge regression is more commonly used to deal with collinearity among variables. More details of ridge regression and its applications can be found in [59–61].
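The analytical solution lends itself to a one-line implementation; a Python sketch of exactly the formula above (ours; note that this plain form penalizes the intercept column along with the other coefficients):

```python
import numpy as np

def ridge_closed_form(X, y, lam):
    """Analytical ridge estimator from the formula above; X is assumed to
    include a leading column of ones for the intercept."""
    p = X.shape[1]
    # solve (X'X + lam*I) beta = X'y instead of forming the inverse explicitly
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```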
Autoregressive integrated moving average (ARIMA)
Besides regression-based approaches, we also consider the autoregressive integrated moving average ARIMA($p,d,q$) model, where $p$ is the number of autoregressive (AR) terms, $q$ is the order of the non-seasonal moving average (MA) lags, and $d$ is the number of non-seasonal differences [62–64]. The ARIMA model can be formulated as:
$$y_{t}=\vartheta_{0}+\sum_{i=1}^{p}{\varphi_{i}y_{t-i}}+\sum_{j=1}^{q}{\vartheta_{j}\varepsilon_{t-j}^{arima}}+\varepsilon_{t}^{arima},$$
where $y_t$ is the number of HFMD cases at time $t$ and $\varepsilon_{t}^{arima}$ is white-noise random error; $\varphi_i$ $(i = 1,2,\ldots,p)$ and $\vartheta_j$ $(j = 0,1,2,\ldots,q)$ are parameters to be estimated via least squares or maximum likelihood estimation. The parameters $p$, $q$, and $d$ are selected from a search over all the possible model candidates by minimizing the corrected Akaike Information Criterion (AIC) [65].
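A sketch of this AIC-driven order search, assuming statsmodels in Python (the study used the R 'forecast' package and the corrected AIC; plain AIC is used here for brevity):

```python
import itertools
import warnings
from statsmodels.tsa.arima.model import ARIMA

def select_arima(y_train, max_p=3, max_d=2, max_q=3):
    """Grid search over (p, d, q) minimizing AIC, as described above."""
    best_fit, best_aic = None, float("inf")
    for p, d, q in itertools.product(range(max_p + 1),
                                     range(max_d + 1),
                                     range(max_q + 1)):
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")  # silence convergence noise
                fit = ARIMA(y_train, order=(p, d, q)).fit()
            if fit.aic < best_aic:
                best_fit, best_aic = fit, fit.aic
        except Exception:
            continue  # skip orders that fail to estimate
    return best_fit
```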
Time series models can provide satisfactory forecasting performance when the time series data have a clear trend and seasonality. However, the strong assumptions about the statistical properties of the time series data might limit the reliability of the forecasts.
All of the experiments are implemented on the R v3.4.1 (64-bit) platform using the "MASS", "penalized", "hydroGOF", "forecast", "glmnet", "moments", "e1071", and "kernlab" packages [66].
We evaluate and compare the forecasting performance of each method. For the time period from July 2015 to June 2016, the meta learning approach reduces the RMSE of the best competing method by 37%, 20%, 20%, and 30% for the four regions, i.e., China, Guangxi, Zhejiang, and Henan, respectively. Comparing the correlation between the nowcasting results and the observations, the predictions of the meta learning approach have the highest correlation coefficient with the ground truth.
Figures 3 and 4 show the RMSE and correlation coefficient of the compared predictive methods in the different regions, respectively. As can be seen from the figures, the results confirm that no single model outperforms the others in all four regions. PCA shows inconsistent forecasting performance: it performs worst in China but is comparable with RR and LASSO in the three provinces. The two regularization methods, LASSO and RR, are competitive in most of the cases, except in Henan, where PCA outperforms LASSO. ARIMA does not perform well in any of the four regions compared with the models incorporating the Baidu index, especially in the three provinces, where it is always the worst among the four individual models, validating the predictive utility of Baidu search queries. Comparing the proposed meta learning approach with each individual model, it performs best in China, Guangxi, and Zhejiang, and is as good as RR in Henan, indicating the effectiveness of meta learning in selecting befitting models.
Evaluation metric: RMSE. Dark blue: the RMSE of PCA; Red: the RMSE of LASSO; Green: the RMSE of RR; Purple: the RMSE of ARIMA; Light blue: the RMSE of ML
Evaluation metric: correlation coefficient. Dark blue: the correlation coefficient of PCA; Red: the correlation coefficient of LASSO; Green: the correlation coefficient of RR; Purple: the correlation coefficient of ARIMA; Light blue: the correlation coefficient of ML
The comparison of the prediction results of all the methods over the entire forecasting period is displayed in Fig. 5 (the numerical results can be found in Additional file 2: Table S2). Clearly, the ARIMA model shows delayed (or "off") predictions in all the regions, as it relies only on historical time series data and cannot capture irregular turning points, which leads to delayed predictions at those points. PCA, LASSO, and RR can capture the seasonal pattern of HFMD epidemics more accurately, but PCA greatly overestimates the HFMD cases at some time points. At most of the forecasting points, meta learning matches the best or one of the best two models, and there is little significant over- or underestimation throughout the forecasting period.
Forecasting results. Black: the true value; Orangered: the nowcasting results of ARIMA; Gray: the nowcasting results of PCA; Orange: the nowcasting results of RR; Dark blue: the nowcasting results of LASSO; Green: the nowcasting results of Meta learning
Furthermore, in order to further demonstrate the predictive utility of models incorporating Baidu search queries, we compare regression-based models with and without Baidu index data. For the models without Baidu index data, the three models PCA+LR, LASSO, and RR degrade into classical linear regression (LR), as only the HFMD cases at lag 1 ($y_{t-1}$) are left as the covariate. Tables 3 and 4 show the RMSE and Corr of the four compared forecasting models (LR, PCA+LR, LASSO, RR) in the different regions. As can be seen from the results, PCA+LR, LASSO, and RR (the models with Baidu data) show better predictive performance than LR (the model without Baidu data), indicating the utility of Baidu search queries.
Table 3 RMSE of different forecasting methods
Table 4 Corr of different forecasting methods
In this paper, we evaluated the predictive utility of Baidu search data in nowcasting HFMD cases in China. The conventional linear regression is not appropriate for this problem due to the relatively large number of covariates in the model. Therefore, we employ four parametric models, including PCA, RR, LASSO, and ARIMA, to nowcast monthly HFMD cases in China, Guangxi province, Zhejiang province, and Henan province.
The results suggest that the time series model, ARIMA, underperforms due to its delayed predictions. PCA, LASSO, and RR have competitive performance in most of the regions and produce more accurate predictions than ARIMA. Among the compared methods, PCA overestimates or underestimates the HFMD epidemics at some forecasting points and performs slightly worse than LASSO and RR. The performances of LASSO and RR are similar.
In general, PCA, LASSO, and RR are feasible single models for nowcasting HFMD cases at the province or country scale using Baidu search data when there are limited observations and a relatively large number of search terms. However, they cannot produce consistently accurate HFMD nowcasts, because each model on its own is not sufficiently robust. No single predictive method proves to be universally best in the four cases.
This result motivates us to develop a novel model selection approach that chooses an appropriate model in different situations. The meta learning approach is developed to fulfill this requirement. Specifically, the meta learning framework consists of a two-stage learning process: in Stage 1, features characterizing the problem are extracted from historical data; in Stage 2, a meta learner module is built to learn the interrelation between the features and model performances from known facts, and to deduce new knowledge and rules. This meta learning approach, with its automatic model recommendation, is superior to the compared individual methods for the problem of HFMD nowcasting.
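Because the exact features and learner are not spelled out in this excerpt, the following is only a hedged sketch of the two-stage idea: Stage 1 extracts simple statistical features from a rolling window of the case series, and Stage 2 trains a classifier (a random forest here, purely as a stand-in) that maps those features to the label of the individual model that performed best. The features and the "best model" labels below are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    # Stage 1: illustrative features only (level, spread, trend, persistence).
    t = np.arange(len(window))
    trend = np.polyfit(t, window, 1)[0]
    ac1 = np.corrcoef(window[:-1], window[1:])[0, 1]
    return [window.mean(), window.std(), trend, ac1]

rng = np.random.default_rng(1)
series = rng.poisson(5000, 120).astype(float)   # synthetic monthly counts

# One training example per historical month: window features -> id of the
# model (0=PCA+LR, 1=LASSO, 2=RR, 3=ARIMA) that won there (placeholders).
X_meta = np.array([extract_features(series[i:i + 12]) for i in range(100)])
y_meta = rng.integers(0, 4, size=100)

meta_learner = RandomForestClassifier(n_estimators=100, random_state=0)
meta_learner.fit(X_meta, y_meta)

# Stage 2 at forecast time: recommend a model for the newest window.
recommended = meta_learner.predict([extract_features(series[-12:])])[0]
print("recommended model id:", recommended)
```

In the real framework the labels would come from back-testing the four individual models on historical windows rather than from random placeholders.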
In this paper, we focus on HFMD nowcasting with 1-month-lag data. It should be noted that the prediction power of a forecasting method may degrade as the time lag increases. In the following, we take HFMD nowcasting for the whole of China as an example for further illustration. As for the 1-month nowcasting, RMSE is used to evaluate the prediction performance of the nowcasting with varied time lag. Figure 6 shows the evaluation results in terms of RMSE for the five compared forecasting methods: meta learning, ARIMA, PCA+LR, LASSO, and RR. As can be seen from Fig. 6, the prediction accuracy of the various methods declines as the time lag increases (from 1 month to 4 months), which is consistent with our finding in the preliminary analysis that more recent HFMD activity is more strongly associated with current HFMD incidence in terms of Corr. Despite the varied time lag, the proposed ML framework still outperforms the other methods, indicating its robustness and effectiveness; as the time lag increases, the differences between the predictive models' performances become smaller. It is worth noting that the proposed meta learning approach is not restricted by data resolution, although monthly data is used here to illustrate its effectiveness.
Evaluation metric of different lag time: RMSE. Blue: the RMSE of ARIMA; Orange: the RMSE of PCA+LR; Yellow: the RMSE of LASSO; Orangered: the RMSE of RR; Brown: the RMSE of ML
The proposed meta learning framework provides practical guidelines for the design, development, implementation, and testing of a forecasting recommendation system for health forecasting problems. In particular, it can help non-experts with predictive method selection. There is still room to improve the framework. One direction is to further examine the features used by the meta learner: there could be features more effective than those used in our model. Another is the choice of machine learning method for training the meta learner; the framework can incorporate various predictive methods and machine learning algorithms, such as deep learning. These will be investigated in our future work.
The results of this study demonstrate that the accuracy of HFMD nowcasting can be significantly improved by incorporating Baidu index data into the predictive model. In addition, the developed meta learning approach for model selection, together with Baidu index data, enables credible forecasts and provides helpful information for predicting HFMD incidence. Compared with the four individual predictive methods used in this study, the performance of meta learning is more robust across forecasting scales. Of course, there is still room for improvement. For example, we will refine the meta learner by examining various learning algorithms in future work, and we will evaluate the utility of the developed approach in other forecasting applications.
ARIMA:
Autoregressive integrated moving average
Corr:
Correlation coefficient
HFMD:
Hand, foot, and mouth disease
LASSO:
Least absolute shrinkage and selection operator
MAPE:
Mean absolute percent error
ML:
Meta learning
PCA:
Principal component analysis
RMSE:
Root mean squared error
We acknowledge all the participants in the study. We would also like to thank the editors and reviewers for their valuable comments and suggestions on improving this paper.
This project has been funded in part with the RGC Theme-Based Research Scheme (TBRS) No. T32-102/14-N, and the National Natural Science Foundation of China (NSFC) No. 71420107023.
Baidu index is publicly available at https://index.baidu.com. CDC data is publicly available at http://www.nhfpc.gov.cn/jkj/s3578/new_list.shtml.
Centre for System Informatics Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, People's Republic of China
Yang Zhao
& Kwok Leung Tsui
Department of Systems Engineering and Engineering Management, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, People's Republic of China
Qinneng Xu
, Yupeng Chen
YZ, QX, YC and KLT conceived the study, undertook statistical analysis and drafted the manuscript. YZ and QX analyzed and interpreted the results. QX and YC assisted with data collection. All authors read and approved the final manuscript.
Correspondence to Yang Zhao.
Additional file 1
Table S1. Contains the selected 46 Baidu keywords used in the predictive models. (PDF 87 kb)
Additional file 2
Table S2. Provides the nowcasting results of the monthly HFMD incidences in China, Guangxi, Zhejiang and Henan. (PDF 46 kb)
Baidu index
Predictive model
Meta-learning
Normalization to non-degenerate distribution
I am reading de Haan's Extreme Value Theory (2006). In the discussion of the distribution of the sample maximum, he says "in order to obtain a non-degenerate limit distribution, a normalization is necessary". He then gives the following example. "Suppose that there exists a sequence of constants $(a_n)>0$ and $(b_n)$ such that
\begin{equation} \frac{\max \{X_1, \cdots, X_n\} - b_n}{a_n} \tag{1} \end{equation}
has a non-degenerate limit distribution as $n \to \infty$, i.e., $$\lim_{n\to\infty} F^n(a_nx + b_n)=G(x), \tag{2}$$ for every continuity point $x$ of $G$, where $G$ is a non-degenerate distribution function." He also comments that this is a linear normalization.
I have three questions here.
What does it mean to normalize to a non-degenerate distribution function? In my past studies, normalization meant finding a constant such as $c=\frac{1}{\sqrt{2\pi}}$ so that $\int_{\mathbb R} c e^{-\frac{x^2}{2}}\,dx = 1$. It appears that normalization means something different in de Haan's book.
What do the two sequences $(a_n)$ and $(b_n)$ mean here? What role do they play? And why is $(1)$ equivalent to $(2)$?
What are common non-linear normalizations? Thank you!
distributions mathematical-statistics estimation inference
Glen_b -Reinstate Monica
LaTeXFan
$\begingroup$ "What does it mean to normalized a degenerate distribution function, please?" -- where in the text you quoted is anyone attempting to do that? Please highlight the part where that is. All I see is discussion of normalization to achieve non-degenerate G $\endgroup$ – Glen_b -Reinstate Monica Aug 17 '14 at 8:21
$\begingroup$ "All I see is discussion of normalization to achieve non-degenerate G." What do you mean by "normalization to achieve non-degenerate G"? I gave an example on normalization which I know of in my question. But I suppose there are other meaning attached to this word. $\endgroup$ – LaTeXFan Aug 17 '14 at 8:26
$\begingroup$ Can you edit your question to reflect this change from degenerate? $\endgroup$ – Glen_b -Reinstate Monica Aug 17 '14 at 9:28
$\begingroup$ @Glen_b I do not know what you mean. Where do I need to edit, please? $\endgroup$ – LaTeXFan Aug 17 '14 at 9:44
$\begingroup$ I've changed it. See the edit history for what was altered. $\endgroup$ – Glen_b -Reinstate Monica Aug 17 '14 at 11:54
Consider the most basic example, the sample mean from an i.i.d. sample of size $n$, $\bar X_n$.
We know that as $n \rightarrow \infty$, $\bar X_n \rightarrow \mu$, where $\mu$ is the common mean, the expected value, of the random variables from which the sample is generated.
So in the limit, $\bar X$ has a degenerate distribution, which is the formal way to say that it converges to a constant. Constant terms can be considered degenerate random variables. We usually say "constants do not have a distribution", but since issues of existence sometimes matter (meaning that the phrase "the distribution does not exist" properly means that the statistic we examine goes to infinity as the sample size goes to infinity), the correct way to distinguish the two cases is to say "the distribution of a constant is degenerate".
And what do we do in order to obtain a non-degenerate asymptotic distribution? We create a function of the sample mean that does not converge to a constant, but does not diverge either. In the case of the sample mean, this function is $\sqrt n(\bar X_n -\mu)$.
In an analogous spirit, in Extreme Value Theory, the extreme order statistics either diverge (if the distribution has unbounded support) or tend to a constant (if the distribution has bounded support on that side). In both cases, we don't get a limiting distribution. So we need to find a function of the extreme order statistic which will converge to a non-constant random variable and hence have a usable distribution. The deterministic sequences $\{a_n\}$ and $\{b_n\}$, together with the statistic, create this function. Finding these sequences is not that simple, see for example this post.
Regarding the example given by @Glen_b for the maximum order statistic from a Uniform $U(0,1)$ (a distribution with bounded support): intuitively, as the sample size increases, we will obtain realizations of the random variable arbitrarily close to its upper bound. This means that $X_{(n)} \rightarrow \max X$, which is a constant, and so it has a degenerate distribution. We therefore need a function of $X_{(n)}$ that does not diverge and does converge to a random variable. In this specific case, that function is indeed $Z = n(1-X_{(n)})$. To see this, use the change of variable formula to find that
$$Z =n(1-X_{(n)}) \Rightarrow X_{(n)} = 1-\frac Zn \Rightarrow \left|\frac {\partial X}{\partial Z} \right|= \frac 1n$$
and note that $Z \in [0,n]$. Therefore
$$f_Z(z) = \left|\frac {\partial X}{\partial Z} \right| f_{X_{(n)}}(1-z/n) = \frac 1n \left (nf_X(1-z/n)[F_X(1-z/n)]^{n-1}\right)$$
But $f_X(\cdot) =1$, and $F_X(x) =x$. So
$$f_Z(z) =\left(1-\frac zn\right)^{n-1}$$
$$F_Z(z) = \int_{0}^z\left(1-\frac tn\right)^{n-1}dt = 1-\left(1-\frac zn\right)^{n}$$
$$\lim_{n\rightarrow \infty}F_Z(z) = 1-\lim_{n\rightarrow \infty}\left(1-\frac zn\right)^{n} = 1-e^{-z}$$
which is the distribution function of a standard exponential (i.e. with mean value $1$).
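A quick Monte Carlo check of this limit (a sketch added here, not part of the original derivation): for $U(0,1)$ samples, $P(X_{(n)} \le x) = x^n$, so the maximum can be drawn directly by inverse transform and the empirical distribution of $Z=n(1-X_{(n)})$ compared with $1-e^{-z}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 200_000

# Inverse transform for the maximum: X_(n) has CDF x^n on (0,1),
# so X_(n) equals U**(1/n) in distribution, with U ~ U(0,1).
x_max = rng.random(reps) ** (1.0 / n)
z = n * (1.0 - x_max)

for q in (0.5, 1.0, 2.0):
    print(f"P(Z <= {q}): empirical {np.mean(z <= q):.4f}, "
          f"limit {1 - np.exp(-q):.4f}")
```

Already for $n=1000$ the empirical and limiting values agree closely.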
Alecos Papadopoulos
$\begingroup$ Thank you very much for such a detailed answer. It all sounds cool and fun. However, why do we want to do this kind of thing in the first place? In the normal case, the sample mean is also normal with parameters $\mu$ and $\sigma^2/n$. Isn't this enough? $\endgroup$ – LaTeXFan Aug 18 '14 at 7:45
$\begingroup$ Enough? 1) There are myriads of real-life phenomena that cannot be modeled using the normal distribution. 2) What does the sample maximum have to do with the sample mean? 3) But even in the case you mention, the variance $\sigma^2/n$ becomes negligibly small as the sample size increases... and in such a case, without "all this kind of stuff", Statistics would be able to provide us essentially only point estimates of the things we don't know, which in many cases is the least we are interested in. $\endgroup$ – Alecos Papadopoulos Aug 18 '14 at 10:03
$\begingroup$ very nice explanation. +1 $\endgroup$ – Aaron Hendrickson Nov 21 '18 at 16:49
Normalization is used to mean a variety of things - which usually relate to scaling in some way. In this case it's just a matter of finding constants to subtract and divide by such that the resulting sequence of random variables converges to a distribution that isn't degenerate.
Presumably in the situation under discussion,
\begin{equation} \max \{X_1, \cdots, X_n\} \end{equation}
is degenerate in the limit (that's typically the case).
Aside from some oddness in that they seem to be using one letter for two different things there, all they're talking about is choosing $a_n$ and $b_n$ so that
\begin{equation} \frac{\max \{X_1, \cdots, X_n\} - b_n}{a_n} \end{equation}
isn't degenerate in the limit.
If you can find $E(\max \{X_1, \cdots, X_n\})$ and $\text{Var}(\max \{X_1, \cdots, X_n\})$ as functions of $n$, for example, you might be able to set $b_n$ to the first and $a_n$ to the square root of the second, which would yield something that has constant mean and variance ($0$ and $1$ respectively). If the distribution converges in the limit, it should satisfy the conditions.
For example, consider $X_i$ being U(0,1). Then in the limit, the sample maximum $X_{(n)}$ is degenerate.
But I think $n(1-X_{(n)})$ is not degenerate in the limit - IIRC it goes to a standard exponential.
$\begingroup$ Thank you. Why is $X_{(n)}$ degenerate, please? And what is IIRC? $\endgroup$ – LaTeXFan Aug 17 '14 at 9:47
$\begingroup$ In the example I gave, consider the variance of $X_{(n)}$ in the limit (it's 0). In other cases, it's the mean you need to worry about. $\endgroup$ – Glen_b -Reinstate Monica Aug 17 '14 at 11:40
Nano Express
Photovoltaic Characteristics of GaSe/MoSe2 Heterojunction Devices
Ryousuke Ishikawa ORCID: orcid.org/0000-0002-3857-69401 na1,
Pil Ju Ko2 na1,
Ryoutaro Anzo3,
Chang Lim Woo2,
Gilgu Oh3 &
Nozomu Tsuboi3
Nanoscale Research Letters volume 16, Article number: 171 (2021)
Two-dimensional materials are only atomic layers thick and, owing to their distinctive properties, are expected to serve as alternative materials for future electronics and optoelectronics. Transition metal monochalcogenides and dichalcogenides in particular have recently attracted attention. Since these materials, unlike graphene, have a band gap and exhibit semiconducting properties even as a single layer, they are expected to enable new flexible optoelectronics. In this study, the photovoltaic characteristics of a GaSe/MoSe2 heterojunction device using the two-dimensional semiconductors p-type GaSe and n-type MoSe2 were investigated. The heterojunction device was prepared by transferring GaSe and MoSe2, by a mechanical peeling method, onto a substrate on which titanium electrodes had been fabricated. The current–voltage characteristics of the GaSe/MoSe2 heterojunction device were measured in the dark and under illumination from a solar simulator, with the irradiation intensity varied from 0.5 to 1.5 sun. Over this range, both the short-circuit current and the open-circuit voltage increased with increasing intensity. The open-circuit voltage and the energy conversion efficiency were 0.41 V and 0.46% under the 1.5 sun condition, respectively.
Two-dimensional (2D) materials have been found to have various unique characteristics that are not mere extensions of conventional materials science [1,2,3,4,5]. In particular, they are attracting attention as optoelectronic materials owing to notable physical properties such as their strong optical absorption in the solar spectral region [6], high internal radiative efficiencies [7], and tunable band gaps for both single- and multi-junction solar cells [8]. Solar cells can be made of 2D materials by forming in-plane or out-of-plane heterojunctions. The former allows a very clean heterojunction interface to be formed by continuously growing different types of 2D materials [9, 10]. In the latter case, the heterojunction area can be increased, and tandem solar cells can be fabricated by stacking several junctions; for these reasons, the solar cell characteristics of a GaSe/MoSe2 vertical heterojunction device were evaluated in this study.
Gallium selenide has long been considered a promising optical material for photodetectors and nonlinear optics, but its practical application has been limited by the difficulty of synthesizing single crystals [11,12,13]. Thanks to recent advances in two-dimensional materials science, however, this layered optical material has been attracting attention again [14,15,16,17,18,19,20,21]. MoSe2 is a typical transition metal dichalcogenide; each Mo ion is surrounded by six Se2− ions, in either octahedral or trigonal prismatic coordination. Monolayer MoSe2 exhibits semiconducting properties with a direct bandgap of about 1.6 eV and relatively high carrier mobility, on the order of hundreds of cm2 V−1 s−1 [22]. Therefore, MoSe2 is attracting attention not only for optoelectronics but also as an active-region material for transistors [23, 24].
These 2D-material heterojunctions have high potential as solar cell materials: thanks to the high external radiative efficiency already described, very high theoretical conversion efficiencies have been demonstrated for single and tandem junctions [8]. The conversion efficiencies reported so far, however, remain low due to inadequate material and interface quality and device design [25,26,27]. Furthermore, many aspects of the device physics of out-of-plane heterostructures of 2D materials remain unclear, especially the carrier separation process, which is crucial in solar cells.
In this paper, the current–voltage characteristics of a GaSe/MoSe2 heterojunction device fabricated by a mechanical peeling method were measured in the dark and under illumination from a solar simulator, with the irradiation intensity varied from 0.5 to 1.5 sun. Over this range, both the short-circuit current and the open-circuit voltage increased with increasing intensity. The open-circuit voltage and the energy conversion efficiency were 0.41 V and 0.46% under the 1.5 sun condition, respectively.
We fabricated four-terminal devices using 50-nm-thick titanium (Ti) electrodes deposited by electron-beam evaporation on p-type silicon substrates covered with 300 nm of thermally oxidized silicon dioxide (SiO2). We transferred flakes of natural GaSe and MoSe2 (HQ Graphene) onto the Ti electrodes sequentially, using polydimethylsiloxane (PDMS, Dow Toray) and mechanical exfoliation, as described in a previous report [23]. Finally, the Ti/GaSe/MoSe2 heterojunction device was annealed at 400 °C under a nitrogen atmosphere for two hours. Transmittance and reflectance spectra over areas a few tens of micrometers square were obtained from flakes transferred onto glass substrates, using a micro-UV–Vis spectrometer with a wide-band Cassegrain objective lens (JASCO MSV-5300). The thickness of each sample flake was determined from line profiles of atomic force microscopy (AFM) images (HITACHI Nano Navi Real). Micro-PL and Raman measurements were conducted at 25 °C with a continuous-wave excitation laser emitting at 532 nm coupled to a 100× microscope objective; the excitation intensities for the Raman and PL measurements were 1.5 and 0.3 mW, respectively. The solar cell performance was measured at a sample temperature of 25 °C using a solar simulator with intensity variable between 0.5 and 1.5 sun. The spectral response was evaluated by combining a monochromatic light source and a pico-ammeter. The heterojunction region, determined from the optical microscope image, was taken as the active area of the solar cell.
Figure 1a shows the transmittance (T) and reflectance (R) spectra of a GaSe flake on a glass substrate. The solid red and blue lines show the measured transmittance and reflectance spectra in the range 200–1600 nm, respectively. The absorbance spectrum (A), represented by the solid black line, was calculated from the following relation:
$$A = 1 - T - R$$
a Transmittance, reflectance, absorbance spectra and b absorption coefficient of GaSe flake. Inset: optical microscope image of GaSe flake
The absorption coefficient, shown in Fig. 1b, was calculated from the following equation:
$$\alpha = \frac{{\ln \left( {1 - R} \right) - \ln T}}{d}$$
where d is the sample thickness, estimated to be 638 ± 29 nm by AFM measurement. The absorption coefficient of GaSe increases gradually from around 2 eV, corresponding to the bandgap. Since the valence band maximum lies at the Γ-point, and the bottom of the conduction band at the Γ-point is only a few tens of meV above the conduction band minimum at the M-point, GaSe is considered a quasi-direct bandgap semiconductor [12]. Direct excitons are also known to exist at the Γ-point, with energies very close to those of the direct and indirect interband transitions [12, 19]. The inset of Fig. 1b shows an optical microscope (OM) image of the GaSe flake used for the measurement; the circle at its center indicates the measured area. Figure 2 shows the optical properties of a MoSe2 flake with a thickness of 99 ± 3 nm transferred onto a glass substrate. The absorption coefficient of MoSe2 is more than an order of magnitude higher than that of GaSe. The sharp increase from 1.5 eV and the two exciton-related peaks are consistent with previous reports [28, 29].
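As a minimal sketch of this step (the T and R values below are made-up placeholders at a single wavelength, not the measured spectra):

```python
import numpy as np

def absorption_coefficient(T, R, d_cm):
    """alpha = [ln(1 - R) - ln T] / d, from measured transmittance T,
    reflectance R, and the AFM-determined thickness d (in cm)."""
    return (np.log(1.0 - R) - np.log(T)) / d_cm

T, R = 0.40, 0.25          # hypothetical single-wavelength values
d_cm = 638e-7              # 638 nm expressed in cm
A = 1.0 - T - R            # absorbance, as defined above
alpha = absorption_coefficient(T, R, d_cm)
print(f"A = {A:.2f}, alpha = {alpha:.3e} cm^-1")
```

Applied pointwise across the 200–1600 nm grid, this reproduces spectra of the kind shown in Figs. 1 and 2.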
a Transmittance, reflectance, absorbance spectra and b absorption coefficient of MoSe2 flake. Inset: optical microscope image of MoSe2 flake
Next, the crystallinity and further optical properties of these two-dimensional materials were investigated by Raman and PL spectroscopy, measured on the fabricated GaSe/MoSe2 heterojunction devices. Raman peaks at 133, 214, and 309 cm−1 were observed, as shown in Fig. 3a; the peaks at 133 and 309 cm−1 correspond to the A11g and A21g vibrational modes, respectively, while the peak at 214 cm−1 comes from the vibration of the selenium atoms in the so-called E12g mode [15, 17]. These clear crystalline vibrations indicate the high crystallinity of the transferred GaSe flakes. Figure 3b shows the PL spectrum obtained from GaSe flakes on a Si substrate at 25 °C. The PL peaks around 626 and 655 nm correspond to the direct and indirect bandgaps, respectively; in GaSe the indirect bandgap lies only 25 meV below the direct one [18, 19]. The Raman spectrum of MoSe2 transferred onto a Si substrate shows two clear peaks at around 236 and 243 cm−1, corresponding to the A1g mode, as shown in Fig. 4a. The Raman and luminescence spectra (Fig. 4b) indicate the high quality of the transferred MoSe2 flakes on Si substrates.
a Raman and b PL spectra of GaSe flake
a Raman and b PL spectra of MoSe2 flake
Figure 5a shows an optical microscope image of the fabricated GaSe/MoSe2 heterojunction device contacted with Ti electrodes. The GaSe flake contacts the left and bottom electrodes, and the MoSe2 flake contacts the right and top electrodes. The heterojunction region, defined as the active area of the solar cell, was estimated from this image to be 490 μm2. The solar cell performance was measured between the bottom and top electrodes under simulated sunlight. The thicknesses of the GaSe and MoSe2 flakes were estimated by AFM to be 118 and 79 nm, respectively; both correspond to 120–130 layers. A schematic image and the band diagram of the GaSe/MoSe2 heterojunction device are shown in Fig. 5b, c, respectively.
a Optical microscopic image, b schematic image, and c band diagram of the fabricated GaSe/MoSe2 heterojunction device
The current–voltage characteristics of the fabricated GaSe/MoSe2 heterojunction device under 0.5–1.5 sun illumination are shown in Fig. 6a. The heterojunction device clearly exhibits rectification and a photovoltaic effect, and the I–V curve changes with the irradiation intensity. Figure 6b summarizes the intensity dependence of the short-circuit current (Isc) and the open-circuit voltage (Voc). Isc increases linearly with irradiation intensity in this range, whereas Voc increases logarithmically. Since the following relation holds for an ideal diode, the ideality factor was estimated to be 1.11 by fitting:
$$V_{{{\text{oc}}}} = \frac{{nk_{{\text{B}}} T}}{q}\ln \left( {\frac{{I_{{\text{L}}} }}{{I_{{{\text{dark}}}} }} + 1} \right)$$
where n is the ideality factor, kB is the Boltzmann constant, T is the device temperature, and q is the elementary charge, so that \(\frac{{k_{{\text{B}}} T}}{q} \approx\) 0.0258 V at room temperature. IL and Idark are the photo- and dark currents, respectively. An ideality factor close to 1 indicates that the GaSe/MoSe2 structure forms a nearly ideal heterojunction, with an internal electric field sufficient to dissociate excitons. The short-circuit current density (Jsc) was calculated to be 3.11 mA/cm2 from the active area defined by the optical image. The fill factor (FF) and conversion efficiency (η) were estimated to be 0.44 and 0.54% under the 1 sun condition, respectively. Since the FF decreased, owing to the series resistance, at irradiation of 1 sun and above, η remained almost the same as under 1 sun even though Jsc and Voc increased. To improve the FF, the device configuration needs improvement, for example by shortening the distance to the electrodes.
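The ideality-factor extraction can be sketched as a least-squares fit of the relation above to the Voc-versus-intensity data; the (suns, Voc) pairs below are placeholders roughly consistent with the reported range, not the actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

kT_q = 0.0258  # thermal voltage at room temperature, V

def voc_model(suns, n, ratio_1sun):
    # I_L scales linearly with intensity; ratio_1sun = I_L(1 sun) / I_dark.
    return n * kT_q * np.log(ratio_1sun * suns + 1.0)

suns = np.array([0.5, 0.75, 1.0, 1.25, 1.5])
voc = np.array([0.370, 0.385, 0.398, 0.406, 0.410])   # placeholder data

(n_fit, ratio_fit), _ = curve_fit(voc_model, suns, voc, p0=(1.0, 1e6))
print(f"fitted ideality factor n = {n_fit:.2f}")
```

Fitting the measured pairs in this way is presumably how the quoted value n = 1.11 was obtained.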
a I–V characteristics and b light irradiation intensity dependence of GaSe/MoSe2 heterojunction solar cell performance
Next, we estimated the external quantum efficiency of the GaSe/MoSe2 heterojunction using an optical simulator (e-ARC) [29]. Calculations were made for a completely flat structure in which GaSe and MoSe2 with the same thicknesses as in the fabricated device are laminated on a flat Si substrate. The optical constants of GaSe and MoSe2 were taken from reported values [30, 31]. Carrier losses induced by recombination at the material interfaces and in the bulk regions are fully incorporated. The simulated absorbance spectra are shown in Fig. 7. The green region shows the absorption of the GaSe/MoSe2 heterojunction, which is the sum of the absorption of GaSe (blue dashed line) and that of MoSe2 (red dashed line). The yellow region is transmitted into and absorbed by the Si substrate, and the remaining regions represent reflection. The maximum Jsc over the wavelength range 300–950 nm was estimated to be 19.29 mA/cm2, assuming the generated photocarriers could be completely collected from the fabricated device. Our simulation results predict that Jsc would increase, reaching 23 mA/cm2, when the GaSe film thickness is about 60 nm. The large discrepancy between the calculated and experimental current values may be due to insufficient built-in potential in the fabricated device. If this hypothesis is correct, optimizing the thickness of the absorber layers and the work function of the contact material could significantly improve Jsc. Furthermore, since the simulation shows that the reflection component is also large, light confinement on the incident and back surfaces of the GaSe/MoSe2 heterojunction solar cell remains an important issue for future work. Surface plasmon technology is considered very effective for light confinement in two-dimensional-material-based solar cells [32].
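The upper-bound Jsc estimate amounts to integrating the simulated absorbance against the AM1.5G photon flux over 300–950 nm. A rough sketch follows; the spectrum file name and the constant absorbance are assumptions for illustration, and this is not the e-ARC calculation itself:

```python
import numpy as np

q = 1.602176634e-19                  # elementary charge, C
h, c = 6.62607015e-34, 2.99792458e8  # Planck constant (J s), speed of light (m/s)

# Hypothetical two-column file: wavelength (nm), spectral irradiance
# (W m^-2 nm^-1), e.g. the ASTM G-173 AM1.5G table distributed by NREL.
wl, irr = np.loadtxt("am15g_spectrum.txt", unpack=True)
mask = (wl >= 300.0) & (wl <= 950.0)
wl, irr = wl[mask], irr[mask]

# Placeholder: constant absorbance of the stack; in practice this would be
# the simulated green-region absorbance spectrum of Fig. 7.
absorbance = np.full_like(wl, 0.6)

flux = irr * (wl * 1e-9) / (h * c)             # photons m^-2 s^-1 nm^-1
jsc = q * np.trapz(absorbance * flux, wl)      # A m^-2
print(f"J_sc upper bound ~ {jsc / 10.0:.1f} mA/cm^2")  # 10 A/m^2 = 1 mA/cm^2
```

With the actual simulated absorbance in place of the constant, this integral is what yields the 19.29 mA/cm2 figure quoted above.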
The simulated absorbance spectra of GaSe/MoSe2 heterojunction
In conclusion, we fabricated GaSe/MoSe2 heterojunction devices by a mechanical peeling method and analyzed their photovoltaic performance. The absorption coefficient obtained from the transmittance and reflectance spectra of MoSe2 is more than an order of magnitude higher than that of GaSe. The Raman and luminescence spectra of GaSe and MoSe2 indicate that high crystallinity is maintained after device fabrication. Both the short-circuit current and the open-circuit voltage increased as the light intensity was raised from 0.5 to 1.5 sun. The open-circuit voltage and the energy conversion efficiency were 0.41 V and 0.46% under the 1.5 sun condition, respectively. From the optical simulation study, the maximum Jsc over the wavelength range 300–950 nm was estimated to be 19.29 mA/cm2, assuming the generated photocarriers could be completely collected. Optimizing the thickness of the absorber layers and the work function of the contact material could significantly improve Jsc. Furthermore, light confinement on the incident and back surfaces of the GaSe/MoSe2 heterojunction solar cell remains an important issue for future work.
The datasets supporting the conclusions of this article are included within the article.
2D materials:
Two-dimensional materials
AFM:
Atomic force microscopy
I sc :
Short-circuit current
V oc :
Open-circuit voltage
J sc :
Short-circuit current density
FF :
Fill factor
Geim AK, Grigorieva IV (2013) Van der Waals heterostructures. Nature 499:419–425
Zeng H, Dai J, Yao W et al (2012) Valley polarization in MoS2 monolayers by optical pumping. Nat Nanotechnol 7:490–493
Cao T, Feng J, Shi J et al (2011) Valley-selective circular dichroism of monolayer molybdenum disulphide. Nat Commun 3:1–5
Cao Y, Fatemi V, Fang S et al (2018) Unconventional superconductivity in magic-angle graphene superlattices. Nature 556:43–50
Tran K, Moody G, Wu F et al (2019) Evidence for moiré excitons in van der Waals heterostructures. Nature 567:71–75
Mak KF, Lee C, Hone J et al (2010) Atomically thin MoS2: a new direct-gap semiconductor. Phys Rev Lett 105:136805
Amani M, Lien DH, Kiriya D et al (2015) Near-unity photoluminescence quantum yield in MoS2. Science 350:1065–1068
Jariwala D, Davoyan AR, Wong J, Atwater HA (2017) Van der Waals materials for atomically-thin photovoltaics: promise and outlook. ACS Photonics 4:2962–2970
Gong Y, Lin J, Wang X et al (2014) Vertical and in-plane heterostructures from WS2 /MoS2 monolayers. Nat Mater 13:1135–1142
Kobayashi Y, Yoshida S, Maruyama M et al (2019) Continuous heteroepitaxy of two-dimensional heterostructures based on layered chalcogenides. ACS Nano 13:7527–7535
Bube RH, Lind EL (1959) Photoconductivity of gallium selenide crystals. Phys Rev 115:1159–1164
Mooser E, Schlüter M (1973) The band-gap excitons in gallium selenide. Il Nuovo Cimento 18:164–208
Singh NB, Suhre DR, Balakrishna V et al (1998) Far-infrared conversion materials: gallium selenide for far-infrared conversion applications. Prog Cryst Growth Charact Mater 37:47–102
Jappor HR, Habeeb MA (2018) Optical properties of two-dimensional GaS and GaSe monolayers. Physica E 101:251–255
Hu P, Wen Z, Wang L et al (2012) Synthesis of few-layer GaSe nanosheets for high performance photodetectors. ACS Nano 6:5988–5994
Ko PJ, Abderrahmane A, Takamura T et al (2016) Thickness dependence on the optoelectronic properties of multilayered GaSe based photodetector. Nanotechnology 27:325202
Li X, Lin MW, Lin J et al (2016) Two-dimensional GaSe/MoSe2 misfit bilayer heterojunctions by van der Waals epitaxy. Sci Adv 2:e1501882
Pham KD, Phuc HV, Hieu NN et al (2018) Electronic properties of GaSe/MoS2 and GaS/MoSe2 heterojunctions from first principles calculations. AIP Adv 8:075207
Budweg A, Yadav D, Grupp A et al (2019) Control of excitonic absorption by thickness variation in few-layer GaSe. Phys Rev B 100:045404
Pham KD, Nguyen CV, Phung HTT et al (2019) Strain and electric field tunable electronic properties of type-II band alignment in van der Waals GaSe/MoSe2 heterostructure. Chem Phys 521:92–99
Ning J, Zhou Y, Zhang J et al (2020) Self-driven photodetector based on a GaSe/MoSe2 selenide van der Waals heterojunction with the hybrid contact. Appl Phys Lett 117:163104
Tongay S, Zhou J, Ataca C et al (2012) Thermally driven crossover from indirect toward direct bandgap in 2D Semiconductors: MoSe2 versus MoS2. Nano Lett 12:5576–5580
Abderrahmane A, Ko PJ, Thu TV et al (2014) High photosensitivity few-layered MoSe2 back-gated field-effect phototransistors. Nanotechnology 25:365202
Larentis S, Fallahazad B, Tutuc E (2012) Field-effect transistors and intrinsic mobility in ultra-thin MoSe2 layers. Appl Phys Lett 101:1–4
Jariwala D, Sangwan VK, Lauhon LJ et al (2014) Emerging device applications for semiconducting two-dimensional transition metal dichalcogenides. ACS Nano 8:1102–1120
Jariwala D, Davoyan AR, Tagliabue G et al (2016) Near-unity absorption in van der Waals semiconductors for ultrathin optoelectronics. Nano Lett 16:5482–5487
Wong J, Jariwala D, Tagliabue G et al (2017) High photovoltaic quantum efficiency in ultrathin van der Waals heterostructures. ACS Nano 11:7230–7240
Beal AR, Knights JC, Liang WY (1972) Transmission spectra of some transition metal dichalcogenides. II. Group VIA: trigonal prismatic coordination. J Phys C Solid State Phys 5:3540–3551
Arora A, Nogajewski K, Molas M et al (2015) Exciton band structure in layered MoSe2: from a monolayer to the bulk limit. Nanoscale 7:20769–20775
Nakane A, Tampo H, Tamakoshi M et al (2016) Quantitative determination of optical and recombination losses in thin-film photovoltaic devices based on external quantum efficiency analysis. J Appl Phys 120:064505
Hsu C, Frisenda R, Schmidt R et al (2019) Thickness-dependent refractive index of 1L, 2L, and 3L MoS2, MoSe2, WS2, and WSe2. Adv Opt Mater 7:1900239
Nootchanat S, Pangdam A, Ishikawa R et al (2017) Grating-coupled surface plasmon resonance enhanced organic photovoltaic devices induced by Blu-ray disc recordable and Blu-ray disc grating structures. Nanoscale 9:4963–4971
This study was supported by the Japan Society for the Promotion of Science KAKENHI (Grant No. JP20H02851).
Ryousuke Ishikawa and Pil Ju Ko have contributed equally to this work.
Advanced Research Laboratories, Tokyo City University, Tokyo, Japan
Ryousuke Ishikawa
Department of Electrical Engineering, Chosun University, Gwangju, Republic of Korea
Pil Ju Ko & Chang Lim Woo
Department of Materials Science and Technology, University of Niigata, Niigata, Japan
Ryoutaro Anzo, Gilgu Oh & Nozomu Tsuboi
Pil Ju Ko
Ryoutaro Anzo
Chang Lim Woo
Gilgu Oh
Nozomu Tsuboi
CLW and PJK fabricated the devices. RA and GO characterized the devices. RI, NT, and PJK provided the idea and experimental guidance for the whole process and drafted the manuscript. All authors read and approved the final manuscript.
Correspondence to Ryousuke Ishikawa.
Ishikawa, R., Ko, P.J., Anzo, R. et al. Photovoltaic Characteristics of GaSe/MoSe2 Heterojunction Devices. Nanoscale Res Lett 16, 171 (2021). https://doi.org/10.1186/s11671-021-03630-y
Heterojunction
MoSe2
September 2019, 24(9): 5003-5039. doi: 10.3934/dcdsb.2019042
Dynamics of a prey-predator system with modified Leslie-Gower and Holling type Ⅱ schemes incorporating a prey refuge
Safia Slimani 1, Paul Raynaud de Fitte 1,2, and Islam Boussaada 1,2
Normandie Univ, Laboratoire Raphaël Salem, UMR CNRS 6085, Rouen, France
PSA & Inria DISCO & Laboratoire des Signaux et Systèmes, Université Paris Saclay, CNRS-CentraleSupélec-Université Paris Sud, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette cedex, France
* Corresponding author: P. Raynaud de Fitte
Received June 2018; revised September 2018; published February 2019
Fund Project: The first author is supported by TASSILI research program 16MDU972 between the University of Annaba (Algeria) and the University of Rouen (France)
We study a modified version of a prey-predator system with modified Leslie-Gower and Holling type Ⅱ functional responses studied by M.A. Aziz-Alaoui and M. Daher-Okiye. The modification consists in incorporating a refuge for the prey, and it substantially complicates the dynamics of the system. We study the local and global dynamics and the existence of cycles. We also investigate conditions for extinction or for the existence of a stationary distribution in the case of a stochastic perturbation of the system.
Keywords: Prey-predator, Leslie-Gower, Holling type Ⅱ, refuge, Poincaré index theorem, stochastic differential, persistence, stationary distribution, ergodic.
Mathematics Subject Classification: Primary: 92D25, 34D23, Secondary: 60H10.
Citation: Safia Slimani, Paul Raynaud de Fitte, Islam Boussaada. Dynamics of a prey-predator system with modified Leslie-Gower and Holling type Ⅱ schemes incorporating a prey refuge. Discrete & Continuous Dynamical Systems - B, 2019, 24 (9) : 5003-5039. doi: 10.3934/dcdsb.2019042
W. Abid, R. Yafia, M. A. Aziz-Alaoui and A. Aghriche, Turing Instability and Hopf Bifurcation in a Modified Leslie–Gower Predator–Prey Model with Cross-Diffusion, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 28 (2018), 1850089, 17pp. doi: 10.1142/S021812741850089X.
W. Abid, R. Yafia, M. A. Aziz-Alaoui, H. Bouhafa and A. Abichou, Diffusion driven instability and Hopf bifurcation in spatial predator-prey model on a circular domain, Appl. Math. Comput., 260 (2015), 292-313. doi: 10.1016/j.amc.2015.03.070.
M. A. Aziz-Alaoui and M. Daher Okiye, Boundedness and global stability for a predator-prey model with modified Leslie-Gower and Holling-type Ⅱ schemes, Appl. Math. Lett., 16 (2003), 1069-1075. doi: 10.1016/S0893-9659(03)90096-6.
M. Bandyopadhyay and J. Chattopadhyay, Ratio-dependent predator-prey model: Effect of environmental fluctuation and stability, Nonlinearity, 18 (2005), 913-936. doi: 10.1088/0951-7715/18/2/022.
N. P. Bhatia and G. P. Szegö, Stability Theory of Dynamical Systems, Die Grundlehren der mathematischen Wissenschaften, Band 161, Springer-Verlag, New York-Berlin, 1970.
B. I. Camara, Waves analysis and spatiotemporal pattern formation of an ecosystem model, Nonlinear Anal. Real World Appl., 12 (2011), 2511-2528. doi: 10.1016/j.nonrwa.2011.02.020.
F. Chen, L. Chen and X. Xie, On a Leslie-Gower predator-prey model incorporating a prey refuge, Nonlinear Anal. Real World Appl., 10 (2009), 2905-2908. doi: 10.1016/j.nonrwa.2008.09.009.
G. Da Prato and H. Frankowska, Stochastic viability of convex sets, J. Math. Anal. Appl., 333 (2007), 151-163. doi: 10.1016/j.jmaa.2006.08.057.
M. Daher Okiye and M. A. Aziz-Alaoui, On the dynamics of a predator-prey model with the Holling-Tanner functional response, in Mathematical modelling & computing in biology and medicine, vol. 1 of Milan Res. Cent. Ind. Appl. Math. MIRIAM Proj., Esculapio, Bologna, 2003, 270–278.
N. Dalal, D. Greenhalgh and X. Mao, A stochastic model for internal HIV dynamics, J. Math. Anal. Appl., 341 (2008), 1084-1101. doi: 10.1016/j.jmaa.2007.11.005.
F. Dumortier, J. Llibre and J. C. Artés, Qualitative Theory of Planar Differential Systems, Universitext, Springer-Verlag, Berlin, 2006.
G. Ferreyra and P. Sundar, Comparison of solutions of stochastic equations and applications, Stochastic Anal. Appl., 18 (2000), 211-229. doi: 10.1080/07362990008809665.
J. Fu, D. Jiang, N. Shi, T. Hayat and A. Alsaedi, Qualitative analysis of a stochastic ratio-dependent Holling-Tanner system, Acta Math. Sci. Ser. B (Engl. Ed.), 38 (2018), 429-440. doi: 10.1016/S0252-9602(18)30758-6.
F. R. Gantmacher, The Theory of Matrices. Vols. 1, 2, Translated by K. A. Hirsch, Chelsea Publishing Co., New York, 1959.
D. H. Gottlieb, A de Moivre like formula for fixed point theory, in Fixed Point Theory and Its Applications (Berkeley, CA, 1986), vol. 72 of Contemp. Math., Amer. Math. Soc., Providence, RI, 1988, 99–105. doi: 10.1090/conm/072/956481.
J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, vol. 42 of Applied Mathematical Sciences, Springer-Verlag, New York, 1983. doi: 10.1007/978-1-4612-1140-2.
C. Ji, D. Jiang and N. Shi, Analysis of a predator-prey model with modified Leslie-Gower and Holling-type Ⅱ schemes with stochastic perturbation, J. Math. Anal. Appl., 359 (2009), 482-498. doi: 10.1016/j.jmaa.2009.05.039.
R. Khasminskii, Stochastic Stability of Differential Equations, vol. 66 of Stochastic Modelling and Applied Probability, 2nd edition, Springer, Heidelberg, 2012. With contributions by G. N. Milstein and M. B. Nevelson. doi: 10.1007/978-3-642-23280-0.
P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, vol. 23 of Applications of Mathematics (New York), Springer-Verlag, Berlin, 1992. doi: 10.1007/978-3-662-12616-5.
P. H. Leslie and J. C. Gower, The properties of a stochastic model for the predator-prey type of interaction between two species, Biometrika, 47 (1960), 219-234. doi: 10.1093/biomet/47.3-4.219.
L. Liu and Y. Shen, Sufficient and necessary conditions on the existence of stationary distribution and extinction for stochastic generalized logistic system, Adv. Difference Equ., 2015 (2015), 13pp. doi: 10.1186/s13662-014-0345-y.
Z. Liu, Stochastic dynamics for the solutions of a modified Holling-Tanner model with random perturbation, Internat. J. Math., 25 (2014), 1450105, 23pp. doi: 10.1142/S0129167X14501055.
J. Llibre and J. Villadelprat, A Poincaré index formula for surfaces with boundary, Differential Integral Equations, 11 (1998), 191-199.
J. Lv and K. Wang, Analysis on a stochastic predator-prey model with modified Leslie-Gower response, Abstr. Appl. Anal., 2011 (2011), Art. ID 518719, 16pp. doi: 10.1155/2011/518719.
J. Lv and K. Wang, Asymptotic properties of a stochastic predator-prey system with Holling Ⅱ functional response, Commun. Nonlinear Sci. Numer. Simul., 16 (2011), 4037-4048. doi: 10.1016/j.cnsns.2011.01.015.
T. Ma and S. Wang, A generalized Poincaré-Hopf index formula and its applications to 2-D incompressible flows, Nonlinear Anal. Real World Appl., 2 (2001), 467-482. doi: 10.1016/S1468-1218(01)00004-9.
P. S. Mandal and M. Banerjee, Stochastic persistence and stability analysis of a modified Holling-Tanner model, Math. Methods Appl. Sci., 36 (2013), 1263-1280. doi: 10.1002/mma.2680.
R. M. May, Stability and Complexity in Model Ecosystems, Princeton University Press, Princeton, New Jersey, 1973.
A. F. Nindjin, M. A. Aziz-Alaoui and M. Cadivel, Analysis of a predator-prey model with modified Leslie-Gower and Holling-type Ⅱ schemes with time delay, Nonlinear Anal. Real World Appl., 7 (2006), 1104-1118. doi: 10.1016/j.nonrwa.2005.10.003.
E. C. Pielou, Mathematical Ecology, 2nd edition, Wiley-Interscience [John Wiley & Sons], New York-London-Sydney, 1977.
C. C. Pugh, A generalized Poincaré index formula, Topology, 7 (1968), 217-226. doi: 10.1016/0040-9383(68)90002-5.
J. Tong, $b^2-4ac$ and $b^2-3ac$, Math. Gaz., 88 (2004), 511-513.
R. Yafia and M. A. Aziz-Alaoui, Existence of periodic travelling waves solutions in predator prey model with diffusion, Appl. Math. Model., 37 (2013), 3635-3644. doi: 10.1016/j.apm.2012.08.003.
R. Yafia, F. El Adnani and H. T. Alaoui, Limit cycle and numerical simulations for small and large delays in a predator-prey model with modified Leslie-Gower and Holling-type Ⅱ schemes, Nonlinear Anal. Real World Appl., 9 (2008), 2055-2067. doi: 10.1016/j.nonrwa.2006.12.017.
R. Yafia, F. El Adnani and H. Talibi Alaoui, Stability of limit cycle in a predator-prey model with modified Leslie-Gower and Holling-type Ⅱ schemes with time delay, Appl. Math. Sci., Ruse, 1 (2007), 119-131.
Figure 1. A phase portrait of (1.2) with three equilibrium points and a cycle in the interior of $ \mathcal{A} $. The dashed lines are isoclines $ y = \frac{x(1-x)(k_1+x-m)}{a(x-m)} $ and $ y = k_2+x-m $. The grey region is the invariant attracting domain $ \mathcal{A} $
Figure 2. A phase portrait of (1.2) with an unstable equilibrium and a stable limit cycle
Figure 3. Hopf bifurcation of the system (1.2)
Figure 4. Solutions to the stochastic system (1.3) and the corresponding deterministic system, represented respectively by the blue line and the red line
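Since system (1.2) itself is not reproduced on this page, the sketch below reconstructs a plausible right-hand side from the isoclines quoted in the Figure 1 caption, $y = \frac{x(1-x)(k_1+x-m)}{a(x-m)}$ and $y = k_2+x-m$; the vector field and all parameter values are therefore assumptions for illustration, not necessarily the authors' exact model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Ad hoc parameters; trajectories are assumed to stay in the region x > m.
a, b, k1, k2, m = 1.0, 0.2, 0.5, 0.3, 0.1

def rhs(t, z):
    x, y = z
    dx = x * (1.0 - x) - a * (x - m) * y / (k1 + x - m)  # prey, refuge size m
    dy = b * y * (1.0 - y / (k2 + x - m))                # modified Leslie-Gower
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 500.0), [0.4, 0.3], max_step=0.1)
x, y = sol.y
print("final state:", x[-1], y[-1])  # settles on an equilibrium or a cycle
```

Plotting (x(t), y(t)) for several initial conditions reproduces phase portraits of the kind shown in Figures 1 and 2, and an Euler–Maruyama discretization of the same right-hand side with multiplicative noise would give trajectories like those of the stochastic system (1.3) in Figure 4.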
Domain and Range
Find the domain of a function defined by an equation
In Functions and Function Notation, we were introduced to the concepts of domain and range. In this section, we will practice determining domains and ranges for specific functions. Keep in mind that, in determining domains and ranges, we need to consider what is physically possible or meaningful in real-world examples, such as ticket sales and the year in the horror movie example above. We also need to consider what is mathematically permitted. For example, we cannot include any input value that leads us to take an even root of a negative number if the domain and range consist of real numbers. Nor, in a function expressed as a formula, can we include any input value in the domain that would lead us to divide by 0.
We can visualize the domain as a "holding area" that contains "raw materials" for a "function machine" and the range as another "holding area" for the machine's products.
We can write the domain and range in interval notation, which uses values within brackets to describe a set of numbers. In interval notation, we use a square bracket [ when the set includes the endpoint and a parenthesis ( to indicate that the endpoint is either not included or the interval is unbounded. For example, if a person has $100 to spend, he or she would need to express the interval that is more than 0 and less than or equal to 100 and write [latex]\left(0,\text{ }100\right][/latex]. We will discuss interval notation in greater detail later.
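For readers following along in code, interval membership can also be checked programmatically. A minimal Python sketch using sympy (the library choice and the variable name spending are ours, not part of the original text):

from sympy import Interval

# (0, 100]: strictly more than 0, at most 100 -- the spending example above
spending = Interval.Lopen(0, 100)
print(100 in spending)  # True
print(0 in spending)    # False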
Let's turn our attention to finding the domain of a function whose equation is provided. Oftentimes, finding the domain of such functions involves remembering three different forms. First, if the function has no denominator or an even root, consider whether the domain could be all real numbers. Second, if there is a denominator in the function's equation, exclude values in the domain that force the denominator to be zero. Third, if there is an even root, consider excluding values that would make the radicand negative.
Before we begin, let us review the conventions of interval notation:
The smallest term from the interval is written first.
The largest term in the interval is written second, following a comma.
Parentheses, ( or ), are used to signify that an endpoint is not included, called exclusive.
Brackets, [ or ], are used to indicate that an endpoint is included, called inclusive.
The following summarizes interval notation for the common cases:
All real numbers: [latex]\left(-\infty ,\infty \right)[/latex]
[latex]x>a[/latex]: [latex]\left(a,\infty \right)[/latex]
[latex]x\ge a[/latex]: [latex]\left[a,\infty \right)[/latex]
[latex]x<a[/latex]: [latex]\left(-\infty ,a\right)[/latex]
[latex]x\le a[/latex]: [latex]\left(-\infty ,a\right][/latex]
[latex]a<x<b[/latex]: [latex]\left(a,b\right)[/latex]
[latex]a\le x\le b[/latex]: [latex]\left[a,b\right][/latex]
Example 1: Finding the Domain of a Function as a Set of Ordered Pairs
Find the domain of the following function: [latex]\left\{\left(2,\text{ }10\right),\left(3,\text{ }10\right),\left(4,\text{ }20\right),\left(5,\text{ }30\right),\left(6,\text{ }40\right)\right\}[/latex] .
First identify the input values. The input value is the first coordinate in an ordered pair. There are no restrictions, as the ordered pairs are simply listed. The domain is the set of the first coordinates of the ordered pairs.
[latex]\left\{2,3,4,5,6\right\}[/latex]
Find the domain of the function:
[latex]\left\{\left(-5,4\right),\left(0,0\right),\left(5,-4\right),\left(10,-8\right),\left(15,-12\right)\right\}[/latex]
How To: Given a function written in equation form, find the domain.
Identify the input values.
Identify any restrictions on the input and exclude those values from the domain.
Write the domain in interval form, if possible.
Example 2: Finding the Domain of a Function
Find the domain of the function [latex]f\left(x\right)={x}^{2}-1[/latex].
The input value, shown by the variable [latex]x[/latex] in the equation, is squared and then the result is lowered by one. Any real number may be squared and then be lowered by one, so there are no restrictions on the domain of this function. The domain is the set of real numbers.
In interval form, the domain of [latex]f[/latex] is [latex]\left(-\infty ,\infty \right)[/latex].
Find the domain of the function: [latex]f\left(x\right)=5-x+{x}^{3}[/latex].
How To: Given a function written in an equation form that includes a fraction, find the domain.
Identify any restrictions on the input. If there is a denominator in the function's formula, set the denominator equal to zero and solve for [latex]x[/latex] . If the function's formula contains an even root, set the radicand greater than or equal to 0, and then solve.
Write the domain in interval form, making sure to exclude any restricted values from the domain.
Example 3: Finding the Domain of a Function Involving a Denominator (Rational Function)
Find the domain of the function [latex]f\left(x\right)=\frac{x+1}{2-x}[/latex].
When there is a denominator, we want to include only values of the input that do not force the denominator to be zero. So, we will set the denominator equal to 0 and solve for [latex]x[/latex].
[latex]\begin{cases}2-x=0\hfill \\ -x=-2\hfill \\ x=2\hfill \end{cases}[/latex]
Now, we will exclude 2 from the domain. The answers are all real numbers where [latex]x<2[/latex] or [latex]x>2[/latex]. We can use a symbol known as the union, [latex]\cup [/latex], to combine the two sets. In interval notation, we write the solution: [latex]\left(\mathrm{-\infty },2\right)\cup \left(2,\infty \right)[/latex].
In interval form, the domain of [latex]f[/latex] is [latex]\left(-\infty ,2\right)\cup \left(2,\infty \right)[/latex].
Find the domain of the function: [latex]f\left(x\right)=\frac{1+4x}{2x - 1}[/latex].
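As a quick programmatic check of Example 3 (and of the practice problem just above), sympy's continuous_domain utility computes such domains directly. A sketch, assuming sympy is installed:

from sympy import symbols, S
from sympy.calculus.util import continuous_domain

x = symbols('x')
print(continuous_domain((x + 1)/(2 - x), x, S.Reals))
# expect: the reals with x = 2 removed, i.e. (-oo, 2) U (2, oo)
print(continuous_domain((1 + 4*x)/(2*x - 1), x, S.Reals))
# expect: the reals with x = 1/2 removed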
How To: Given a function written in equation form including an even root, find the domain.
Since there is an even root, exclude any real numbers that result in a negative number in the radicand. Set the radicand greater than or equal to zero and solve for [latex]x[/latex].
The solution(s) are the domain of the function. If possible, write the answer in interval form.
Example 4: Finding the Domain of a Function with an Even Root
Find the domain of the function [latex]f\left(x\right)=\sqrt{7-x}[/latex].
When there is an even root in the formula, we exclude any real numbers that result in a negative number in the radicand.
Set the radicand greater than or equal to zero and solve for [latex]x[/latex].
[latex]\begin{cases}7-x\ge 0\hfill \\ -x\ge -7\hfill \\ x\le 7\hfill \end{cases}[/latex]
Now, we will exclude any number greater than 7 from the domain. The answers are all real numbers less than or equal to [latex]7[/latex], or [latex]\left(-\infty ,7\right][/latex].
Find the domain of the function [latex]f\left(x\right)=\sqrt{5+2x}[/latex].
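The radicand inequality can likewise be solved mechanically. A sympy sketch for Example 4 and the practice problem above:

from sympy import symbols, sqrt, S, solveset
from sympy.calculus.util import continuous_domain

x = symbols('x')
print(solveset(7 - x >= 0, x, S.Reals))              # expect: (-oo, 7]
print(continuous_domain(sqrt(5 + 2*x), x, S.Reals))  # expect: [-5/2, oo)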
Can there be functions in which the domain and range do not intersect at all?
Yes. For example, the function [latex]f\left(x\right)=-\frac{1}{\sqrt{x}}[/latex] has the set of all positive real numbers as its domain but the set of all negative real numbers as its range. As a more extreme example, a function's inputs and outputs can be completely different categories (for example, names of weekdays as inputs and numbers as outputs, as on an attendance chart). In such cases, the domain and range have no elements in common.
All rights reserved content
Ex: The Domain of Rational Functions . Authored by: Mathispower4u. Located at: https://www.youtube.com/watch?v=v0IhvIzCc_I&feature=youtu.be. License: All Rights Reserved. License Terms: Standard YouTube License
Ex: Domain and Range of Square Root Functions. Authored by: Mathispower4u. Located at: https://www.youtube.com/watch?v=lj_JB8sfyIM&feature=youtu.be. License: All Rights Reserved. License Terms: Standard YouTube License
Modes of clustered star formation [PDF]
S. Pfalzner,T. Kaczmarek,C. Olczak
Physics , 2012, DOI: 10.1051/0004-6361/201219881
Abstract: The realization that most stars form in clusters raises the question of whether star/planet formation is influenced by the cluster environment. The stellar density in the most prevalent clusters is the key factor here. Whether dominant modes of clustered star formation exist is a fundamental question. Using near-neighbour searches in young clusters, Bressert et al. (2010) claim this not to be the case and conclude that star formation is continuous from isolated to densely clustered. We investigate under which conditions near-neighbour searches can distinguish between different modes of clustered star formation. Near-neighbour searches are performed for model star clusters, investigating the influence of the combination of different cluster modes, observational biases, and types of diagnostic. We find that the cluster density profile, the relative sample sizes, limitations in observations and the choice of diagnostic method decide whether modelled modes of clustered star formation are detected. For centrally concentrated density distributions spanning a wide density range (King profiles), separate cluster modes are only detectable if the mean density of the individual clusters differs by at least a factor of ~65. Introducing a central cut-off can lead to underestimating the mean density by more than a factor of ten. The environmental effect on star and planet formation is underestimated for half of the population in dense systems. An analysis of a sample of cluster environments involves effects of superposition that suppress characteristic features and promote erroneous conclusions. While multiple peaks in the distribution of the local surface density imply the existence of different modes, the reverse conclusion is not possible. Equally, a smooth distribution is not proof of continuous star formation, because such a shape can easily hide modes of clustered star formation (abridged)
Clustered vs. Isolated Star Formation [PDF]
Mordecai-Mark Mac Low
Abstract: I argue that star formation is controlled by supersonic turbulence, drawing for support on a number of 3D hydrodynamical and MHD simulations as well as theoretical arguments. Clustered star formation appears to be a natural result of a lack of turbulent support, while isolated star formation is a signpost of global turbulent support.
The efficiency of star formation in clustered and distributed regions [PDF]
Ian A. Bonnell,Rowan J. Smith,Paul C. Clark,Matthew R. Bate
Physics , 2010, DOI: 10.1111/j.1365-2966.2010.17603.x
Abstract: We investigate the formation of both clustered and distributed populations of young stars in a single molecular cloud. We present a numerical simulation of a 10,000 solar mass elongated, turbulent, molecular cloud and the formation of over 2500 stars. The stars form both in stellar clusters and in a distributed mode which is determined by the local gravitational binding of the cloud. A density gradient along the major axis of the cloud produces bound regions that form stellar clusters and unbound regions that form a more distributed population. The initial mass function also depends on the local gravitational binding of the cloud with bound regions forming full IMFs whereas in the unbound, distributed regions the stellar masses cluster around the local Jeans mass and lack both the high-mass and the low-mass stars. The overall efficiency of star formation is ~ 15 % in the cloud when the calculation is terminated, but varies from less than 1 % in the regions of distributed star formation to ~ 40 % in regions containing large stellar clusters. Considering that large scale surveys are likely to catch clouds at all evolutionary stages, the estimate of the (time-averaged) star formation efficiency for the giant molecular cloud reported here is only ~ 4 %. This would lead to the erroneous conclusion of 'slow' star formation when in fact it is occurring on a dynamical timescale.
Thickening of galactic disks through clustered star formation [PDF]
Pavel Kroupa
Abstract: (Abridged) The building blocks of galaxies are star clusters. These form with low star-formation efficiencies and, consequently, lose a large part of their stars, which expand outwards once the residual gas is expelled by the action of the massive stars. Massive star clusters may thus add kinematically hot components to galactic field populations. This kinematical imprint on the stellar distribution function is estimated here by calculating the velocity distribution function for ensembles of star clusters distributed as power-law or log-normal initial cluster mass functions (ICMFs). The resulting stellar velocity distribution function is non-Gaussian and may be interpreted as being composed of multiple kinematical sub-populations. The notion that the formation of star clusters may add hot kinematical components to a galaxy is applied to the age--velocity-dispersion relation of the Milky Way disk to study the implied history of clustered star formation, with an emphasis on the possible origin of the thick disk.
Galactic consequences of clustered star formation [PDF]
M. R. Haas,P. Anders
Physics , 2009, DOI: 10.1017/S1743921309991566
Abstract: If all stars form in clusters and both the stars and the clusters follow a power law distribution which favours the creation of low mass objects, then the numerous low mass clusters will be deficient in high mass stars. Therefore, the mass function of stars, integrated over the whole galaxy (the Integrated Galactic Initial Mass Function, IGIMF) will be steeper at the high mass end than the underlying IMF of the stars. We show how the steepness of the IGIMF depends on the sampling method and on the assumptions made for the star cluster mass function. We also investigate the O-star content, integrated photometry and chemical enrichment of galaxies that result from several IGIMFs, as compared to more standard IMFs.
Present-Day Star Formation: Protostellar Outflows and Clustered Star Formation [PDF]
Fumitaka Nakamura,Zhi-Yun Li
Abstract: Stars form predominantly in clusters inside dense clumps of turbulent, magnetized molecular clouds. The typical size and mass of the cluster-forming clumps are \sim 1 pc and \sim 10^2 - 10^3 M_\odot, respectively. Here, we discuss some recent progress on theoretical and observational studies of clustered star formation in such parsec-scale clumps with emphasis on the role of protostellar outflow feedback. Recent simulations indicate that protostellar outflow feedback can maintain supersonic turbulence in a cluster-forming clump, and the clump can keep a virial equilibrium long after the initial turbulence has decayed away. In the clumps, star formation proceeds relatively slowly; it continues for at least several global free-fall times of the parent dense clump (t_{ff}\sim a few x 10^5 yr). The most massive star in the clump is formed at the bottom of the clump gravitational potential well at later times through the filamentary mass accretion streams that are broken up by the outflows from low-mass cluster members. Observations of molecular outflows in nearby cluster-forming clumps appear to support the outflow-regulated cluster formation model.
Observations of Protostellar Outflow Feedback in Clustered Star Formation [PDF]
Fumitaka Nakamura
Abstract: We discuss the role of protostellar outflow feedback in clustered star formation using the observational data of recent molecular outflow surveys toward nearby cluster-forming clumps. We found that for almost all clumps, the outflow momentum injection rate is significantly larger than the turbulence dissipation rate. Therefore, the outflow feedback is likely to maintain supersonic turbulence in the clumps. For less massive clumps such as B59, L1551, and L1641N, the outflow kinetic energy is comparable to the clump gravitational energy. In such clumps, the outflow feedback probably affects significantly the clump dynamics. On the other hand, for clumps with masses larger than about 200 M$_\odot$, the outflow kinetic energy is significantly smaller than the clump gravitational energy. Since the majority of stars form in such clumps, we conclude that outflow feedback cannot destroy the whole parent clump. These characteristics of the outflow feedback support the scenario of slow star formation.
Nuclear Star Clusters from Clustered Star Formation [PDF]
Meghann Agarwal,Milos Milosavljevic
Physics , 2010, DOI: 10.1088/0004-637X/729/1/35
Abstract: Photometrically distinct nuclear star clusters (NSCs) are common in late-type-disk and spheroidal galaxies. The formation of NSCs is inevitable in the context of normal star formation in which a majority of stars form in clusters. A young, mass-losing cluster embedded in an isolated star-forming galaxy remains gravitationally bound over a period determined by its initial mass and the galactic tidal field. The cluster migrates radially toward the center of the galaxy and becomes integrated in the NSC if it reaches the center. The rate at which the NSC grows by accreting young clusters can be estimated from empirical cluster formation rates and dissolution times. We model cluster migration and dissolution and find that the NSCs in late-type disks and in spheroidals could have assembled from migrating clusters. The resulting stellar nucleus contains a small fraction of the stellar mass of the galaxy; this fraction is sensitive to the high-mass truncation of the initial cluster mass function (ICMF). The resulting NSC masses are consistent with the observed values, but generically, the final NSCs are surrounded by a spatially more extended excess over the inward-extrapolated exponential (or Sersic) law of the outer galaxy. We suggest that the excess can be related to the pseudobulge phenomenon in disks, though not all of the pseudobulge mass assembles this way. Comparison with observed NSC masses can be used to constrain the truncation mass scale of the ICMF and the fraction of clusters suffering prompt dissolution. We infer truncation mass scales of <~ 10^6 M_sun (>~ 10^5 M_sun) without (with 90%) prompt dissolution.
Clustered star formation as a natural explanation of the Halpha cutoff in disc galaxies [PDF]
Jan Pflamm-Altenburg,Pavel Kroupa
Physics , 2009, DOI: 10.1038/nature07266
Abstract: Star formation is mainly determined by the observation of H$\alpha$ radiation, which is related to the presence of short-lived massive stars. Disc galaxies show a strong cutoff in H$\alpha$ radiation at a certain galactocentric distance which has led to the conclusion that star formation is suppressed in the outer regions of disc galaxies. This is seemingly in contradiction to recent UV observations (Boissier et al., 2007) that imply that disc galaxies have star formation beyond the H$\alpha$ cutoff and that the star-formation-surface density is linearly related to the underlying gas surface density, being shallower than derived from H$\alpha$ luminosities (Kennicutt, 1998). In a galaxy-wide formulation the clustered nature of star formation has recently led to the insight that the total galactic H$\alpha$ luminosity is non-linearly related to the galaxy-wide star formation rate (Pflamm-Altenburg et al., 2007d). Here we show that a local formulation of the concept of clustered star formation naturally leads to a steeper radial decrease of the H$\alpha$ surface luminosity than the star-formation-rate surface density, in quantitative agreement with the observations, and that the observed H$\alpha$ cutoff arises naturally.
Drama of HII regions: Clustered and Triggered Star Formation [PDF]
Jin-Zeng Li,Jinghua Yuan,Hong-Li Liu,Yuefang Wu,Ya-Fang Huang
Abstract: In order to understand the star formation process under the influence of HII regions, we have carried out extensive investigations of well-selected star-forming regions, all of which have been profoundly affected by existing massive O type stars. On the basis of multi-wavelength data from mid-infrared to millimeter collected using $Spitzer$, $Herschel$, and ground based radio telescopes, the physical status of the interstellar medium and star formation in these regions has been revealed. In a relatively large infrared dust bubble, active star formation is ongoing and the shell is still expanding. Signs of compressed gas and triggered star formation have been tentatively detected in a relatively small bubble. The dense cores in the Rosette Molecular Complex detected at 1.1 mm using SMA have been speculated to have a likely triggered origin according to their spatial distribution. Although some observational results have been obtained, more efforts are necessary to reach trustworthy conclusions.
SSC CHSL 7 March 2018 Evening Shift
For the following questions answer them individually
Find the number which is NOT a prime number.
Which of the following is the largest number among $$\sqrt{2},\sqrt[3]{2},\sqrt{4},\sqrt[3]{5}$$?
$$\sqrt{2}$$
$$\sqrt[3]{3}$$
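Added worked check (the answer options above appear incomplete in this extract): raising each candidate to the 6th power gives $$(\sqrt{2})^6=8$$, $$(\sqrt[3]{2})^6=4$$, $$(\sqrt{4})^6=64$$, $$(\sqrt[3]{5})^6=25$$, so $$\sqrt{4}$$ is the largest.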
What is the value of $$13 \times 49^{\frac{3}{2}}$$
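Worked solution (added for illustration): $$13 \times 49^{\frac{3}{2}} = 13 \times (7^{2})^{\frac{3}{2}} = 13 \times 7^{3} = 13 \times 343 = 4459$$.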
What is the value of $$\frac{a^2+b^2}{a^3-b^3}$$ when $$a+b=8$$ and $$a-b=2$$
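Worked solution: $$a+b=8$$ and $$a-b=2$$ give $$a=5$$, $$b=3$$, so $$\frac{a^2+b^2}{a^3-b^3}=\frac{25+9}{125-27}=\frac{34}{98}=\frac{17}{49}$$.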
If DE is parallel to BC and cuts the other two sides of the triangle ABC such that the ratio $$\frac{AD}{DB}=\frac{5}{13}$$ and the length of the part EC is 26 cm, then determine the length of AE (in cm).
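Solution sketch: since DE is parallel to BC, the basic proportionality theorem gives $$\frac{AE}{EC}=\frac{AD}{DB}=\frac{5}{13}$$, so $$AE=\frac{5}{13}\times 26=10$$ cm.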
The diameter of the driving wheel of a cart is 154 cm. Calculate the revolutions per minute (RPM) of the wheel needed to maintain a speed of 33 kilometres per hour.
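One way to compute this (added): circumference $$=\frac{22}{7}\times 154=484$$ cm; $$33$$ km/h $$=\frac{33\times 100000}{60}=55000$$ cm/min; so RPM $$=\frac{55000}{484}\approx 113.6$$.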
A number is divided into two parts in such a way that 30% of the first part is 25 more than 20% of the second part, and 50% of the second part is 33.5 more than 60% of the first part. What is the number?
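Worked solution: with parts $$A$$ and $$B$$, $$0.3A=0.2B+25$$ and $$0.5B=0.6A+33.5$$. Doubling the first equation gives $$0.6A=0.4B+50$$; substituting into the second gives $$0.1B=83.5$$, so $$B=835$$, $$A=640$$, and the number is $$640+835=1475$$.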
Two vessels of equal capacity contain juice and water in the ratios of 5 : 1 and 5 : 7 respectively. The mixtures from both vessels are combined and transferred into a bigger vessel. What is the ratio of juice to water in the new mixture?
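Worked solution: per unit volume the vessels contain juice fractions $$\frac{5}{6}$$ and $$\frac{5}{12}$$, so juice $$=\frac{5}{6}+\frac{5}{12}=\frac{15}{12}$$ and water $$=\frac{1}{6}+\frac{7}{12}=\frac{9}{12}$$, giving a ratio of $$15:9=5:3$$.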
In an alloy, aluminium and tin are in the ratio of 4 : 5. In a second alloy, the ratio of the same elements is 4 : 7. If equal quantities of these two alloys are mixed to form a new alloy, what will be the ratio of these two elements in the new alloy?
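Solution sketch: aluminium $$=\frac{4}{9}+\frac{4}{11}=\frac{80}{99}$$ and tin $$=\frac{5}{9}+\frac{7}{11}=\frac{118}{99}$$, so the new alloy has aluminium to tin in the ratio $$80:118=40:59$$.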
Among four bags, the average weight of the last three bags is 18 kg and the average weight of the first three bags is 19 kg. If the weight of the last bag is 22 kg, then what is the weight (in kg) of the first bag?
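Worked solution: $$b_2+b_3+b_4=54$$ and $$b_1+b_2+b_3=57$$; with $$b_4=22$$, $$b_2+b_3=32$$, so the first bag weighs $$57-32=25$$ kg.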
Inferring extrinsic noise from single-cell gene expression data using approximate Bayesian computation
Oleg Lenive, Paul D. W. Kirk & Michael P. H. Stumpf
BMC Systems Biology, volume 10, Article number: 81 (2016)
Gene expression is known to be an intrinsically stochastic process which can involve single-digit numbers of mRNA molecules in a cell at any given time. The modelling of such processes calls for the use of exact stochastic simulation methods, most notably the Gillespie algorithm. However, this stochasticity, also termed "intrinsic noise", does not account for all the variability between genetically identical cells growing in a homogeneous environment.
Despite substantial experimental efforts, determining appropriate model parameters continues to be a challenge. Methods based on approximate Bayesian computation can be used to obtain posterior parameter distributions given the observed data. However, such inference procedures require large numbers of simulations of the model and exact stochastic simulation is computationally costly.
In this work we focus on the specific case of trying to infer model parameters describing reaction rates and extrinsic noise on the basis of measurements of molecule numbers in individual cells at a given time point.
To make the problem computationally tractable we develop an exact, model-specific, stochastic simulation algorithm for the commonly used two-state model of gene expression. This algorithm relies on certain assumptions and favourable properties of the model to forgo the simulation of the whole temporal trajectory of protein numbers in the system, instead returning only the number of protein and mRNA molecules present in the system at a specified time point. The computational gain is proportional to the number of protein molecules created in the system and becomes significant for systems involving hundreds or thousands of protein molecules.
We employ this simulation algorithm with approximate Bayesian computation to jointly infer the model's rate and noise parameters from published gene expression data. Our analysis indicates that for most genes the extrinsic contributions to noise will be small to moderate but certainly are non-negligible.
Experiments have demonstrated the presence of considerable cell-to-cell variability in mRNA and protein numbers [1–5] and slow fluctuations on timescales similar to the cell cycle [6, 7]. Broadly speaking, there are two plausible causes of such variability. One is the inherent stochasticity of biochemical processes which are dependent on small numbers of molecules. The other relates to differences in numbers of protein, mRNA, metabolites and other molecules available for each reaction or process within a cell, as well as any heterogeneity in the physical environment of the cell population. These sources of variability have been dubbed as "intrinsic noise" and "extrinsic noise", respectively.
One of the earliest investigations into the relationship between intrinsic and extrinsic noise employed two copies of a protein with different fluorescent tags, expressed from identical promoters equidistant from the replication origin in E. coli [8]. By quantifying fluorescence for a range of expression levels and genetic backgrounds the authors concluded that intrinsic noise decreases monotonically as transcription rate increases while extrinsic noise attains a maximum at intermediate expression levels. Other studies have considered extrinsic noise in the context of a range of cellular processes including the induction of apoptosis [9]; the distribution of mitochondria within cells [10]; and progression through the cell cycle [11]. From a computational perspective, extrinsic variability has been modelled by linking the perturbation of model parameters to the perturbation of the model output using a range of methods, including the Unscented Transform [12], the method of moment closure [13], and density estimation [14].
Taniguchi et al. [7] carried out a high-throughput quantitative survey of gene expression in E. coli. By analysing images from fluorescent microscopy they obtained discrete counts of protein and mRNA molecules in individual E. coli cells. They provided both the measurements of average numbers of protein and mRNA molecules in a given cell, as well as measurements of cell-to-cell variability of molecule numbers. The depth and scale of their study revealed the influence of extrinsic noise on gene expression levels. The authors demonstrated that the measured protein number distributions can be described by Gamma distributions, the parameters of which can be related to the transcription rate and protein burst size [15]. To quantify extrinsic noise they consider the relationship between the means and the Fano factors of the observed protein distributions. They also illustrate how extrinsic noise in protein numbers may be attributed to fluctuations occurring on a timescale much longer than the cell cycle.
Here we aim to describe extrinsic noise at a more detailed, mechanistic, level using a stochastic model of gene expression. A relatively simple mechanistic model of gene expression may represent mRNA production as a zero order reaction with protein being produced from each mRNA via first order reactions. This can be described as the one-state model since the promoter is modelled as being constitutively active (Fig. 1). In the one-state model, mRNA production is represented by a homogeneous Poisson process and the Fano factor of the mRNA distribution at any time point will be one. However, experimental counts of mRNA molecules in single cells indicate that the Fano factor is often considerably higher than one [7].
Schematic representations of the one- and two-state models. In the one-state model (top), mRNA is produced at a constant rate (k 1). Protein is produced from mRNA with first order kinetics at a rate k 2. Both mRNA and protein molecules are degraded according to first order kinetics with rates d 1 and d 2 respectively. The two-state model (bottom) has the added feature of two promoter states. In the inactive state the promoter produces mRNA at a fraction (k 0) of that of the active state. Switching between the two states corresponds to a telegraph process characterised by two rate constants (k on and k off)
Such a description calls for quantitative inference of the model's parameters. We achieve this by relying on the data made available by Taniguchi et al. and employing approximate Bayesian computation (ABC) [16, 17]. One difficulty that arises when trying to investigate the extent and effect of extrinsic noise is that it is difficult to separate it from intrinsic noise. To overcome this confounding effect, the parameters of our model come in two varieties. Firstly, reaction rate parameters describe the probability of events occurring per unit of time. These correspond to the reaction rate parameters of a typical stochastic model which accounts for intrinsic noise. Secondly, noise parameters describe the variability in reaction rate parameters caused by the existence of extrinsic noise. In this model, extrinsic noise is represented by a perturbation of the model's rate parameters using a truncated Gaussian distribution. The magnitude of the perturbation of each rate parameter depends on the corresponding noise parameter, which is closely related to the standard deviation of the relevant Gaussian (see "Methods"). This approach allows us to simultaneously infer the rate parameters and the magnitude of extrinsic noise and may be thought of as an application of mixed effect modelling [18] in the context of exact stochastic simulation.
Stochastic simulation and ABC inference methods are both computationally costly endeavours. In this particular case, the experimental data corresponds to snapshots of the system at a single time point. The data are made available in the form of summary statistics, measures of central tendency (e.g. mean) and statistical dispersion (e.g. variance).
Thus, a complete temporal trajectory of the system is not necessary to carry out comparisons with the data. This allows us to make the problem computationally tractable. To this end, we develop a model-specific simulation method which takes advantage of the Poissonian relationship between the number of surviving protein molecules produced from a given mRNA molecule and its lifetime, under certain assumptions.
Posterior distributions of parameters
We begin our analysis by examining the posterior distributions of parameters obtained for each gene using the ABC Sequential Monte Carlo (ABC-SMC) inference procedure [16]. A selection of distributions is shown in Fig. 3 and in the supplementary figures (Additional files 2, 3, 4 and 5). The simulated summary statistics converged to within the desired threshold of the experimental measurements for 86 out of 87 genes. The inferred posterior for the one remaining gene converged relatively slowly and we chose to terminate the process after 30 days of CPU time.
Figure 2 shows a contour plot of the distribution of summary statistics and the mRNA degradation rate, obtained from particles in the final ABC-SMC population for a typical gene (dnaK).
Posterior distribution of summary statistics and the mRNA degradation rate for the gene dnaK. Contour plots indicating the density of points with the corresponding summary statistic for each particle in the final population. The summary statistics for each particle are calculated from 1000 simulation runs. The posterior distribution consists of 1000 particles
Posterior distribution of model parameters for the gene dnaK. Contour plots indicating the density of points with the corresponding parameter values for each particle in the final population. The posterior distribution consists of 1000 particles
We begin with a discussion of features of the posterior parameter distributions, that are common to most genes. Next, we examine the relationships between model parameters and summary statistics of the model outputs. Lastly, we carry out a sensitivity analysis on the inferred posteriors to assess the importance of each parameter in setting the overall levels of extrinsic noise.
In the two-state model, the switching of the promoter between active and inactive states is described by a telegraph process that can be parametrised either in terms of the switching reaction rates (k on and k off) or in terms of the on/off bias (k r ) and frequency of switching events (k f ) (Fig. 1). The simulation algorithm takes parameters in the form of k on and k off. However, the effects of k r and k f on the observed mRNA distribution may be interpreted more directly and intuitively.
For the majority of genes the k 0 and k r parameters are relatively small. This appears to be a prerequisite for a high Fano factor of the mRNA distribution and the mean marginal inferred values of these parameters are negatively correlated with Fano factors across all 86 genes as discussed below. A low switching rate combined with a low basal expression rate ensures that there are two distinct mRNA expression levels. This in turn produces a larger variance in measured mRNA counts and results in Fano factor values well above one. Conversely, genes for which mRNA production appears to be more Poissonian were inferred to have basal mRNA production rates close to one, i.e. similar to the active mRNA production rates. In other words, these genes appear to be constitutively active. Here again, we point out that the two-state promoter model provides a convenient abstraction and a hypothesis for explaining the super-Poissonian variance in mRNA copy number [5, 19]. However, based on these observations it is difficult to determine whether a model with more states or some other more elaborate regulatory model, would not be more appropriate. Our attempts at carrying out the inference procedure with a one-state model indicate that extrinsic noise alone does not explain the observed mRNA distributions without also producing unacceptably high variability in protein numbers.
Our initial inference attempts used only the summary statistics from the data. We observed that the production and degradation rate parameters for mRNA (k 1 and d 1) and protein (k 2 and d 2) tended to be positively correlated in the posterior parameter distributions of many genes. This is due to limited identifiability of model parameters since different combinations of rates may produce similar steady state expression levels. We included the mRNA degradation rate in the inference procedure with the aim of overcoming the problem of unidentifiable parameters. However, this did not alleviate the problem entirely and there is still considerable uncertainty, or sloppiness, in the posterior with regard to some directions in parameter space. While this does make it difficult to pick precise parameter values it also illustrates how using ABC provides us with a way of measuring the model's sensitivity to changes in parameters. Our approach provides an indication of the possible range of extrinsic noise values that can account for the observed variability in mRNA and protein numbers (Fig. 3).
Although the posterior summary statistics (and mRNA degradation rate) are reasonably well constrained and distinct for each gene, the distributions of model parameters can still be relatively broad (Fig. 3). There are a number of reasons for this. Firstly, changes in parameters associated with active transcription and translation, as well as degradation rates, are more easily inferred than parameters describing switching between promoter states, basal transcription or extrinsic noise. In particular, when the production and degradation rates for the same species are subjected to different extrinsic noise parameters, the inference procedure struggles to resolve between the different sources of extrinsic noise. This explains the correlation between the means of inferred extrinsic noise parameters (Fig. 4). Such correlations between extrinsic noise parameters are not observed in the posterior of each gene or when taking the single particle with the highest weight from the final population of each gene as in Fig. 5.
Relationships between means of the marginal parameter posteriors. Scatter plots of the means of the marginal distributions of parameter posteriors are shown for all pairs of parameters. Each point corresponds to a gene. Warmer hues are used to indicate a higher density of data points
Relationships between the heaviest particles. Scatter plots of the particles with the highest weight in the final ABC-SMC population, shown for all pairs of parameters. Each point corresponds to one particle from the inferred posterior of one gene. Warmer hues are used to indicate a higher density of data points
A comparison of Figs. 4 and 5 suggests that a certain level of extrinsic noise is expected for all genes. However, the extrinsic noise may affect various combinations of rate parameters and it may not be possible to discern if, for example, the production rate or the degradation rate is more affected by extrinsic variability. While our inference procedure does not indicate a distinctive lower boundary for the amount of extrinsic noise affecting each reaction rate, there is usually an upper limit to the inferred noise parameter ranges. The extrinsic noise parameters for most genes are below 0.2 in the units set here (Fig. 5); however, for some genes, \(\eta _{k_{\text {on}}}\) and \(\eta _{k_{\text {off}}}\) have relatively broad posterior marginal distributions.
To better understand the relationship between model parameters and observed patterns of gene expression, we look for correlations between means and variances of the inferred marginal parameters of each gene and the summary statistics used in the inference procedure (Fig. 6). As expected, the correlation between the measured mRNA degradation rate, calculated form mRNA lifetime, and the inferred mRNA degradation rate parameter of the model, is close to one.
Heat maps of correlation coefficients between parameters and summary statistics. Heat maps are of the correlation coefficients calculated between experimentally obtained summary statistics and the mean (top) or the variance (bottom) of the marginal posterior for each model parameter. Correlation coefficients for which the associated p-values are greater than 0.05, after correcting for multiple testing using the Benjamini-Hochberg method [43], are treated as zero for plotting purposes
The promoter switching rate parameters, k on and k off, display positive and negative correlation with the mean mRNA number, respectively (as may be expected). They have the opposite relationship with the Fano factor associated with the mRNA distribution. This is consistent with the idea that distinct levels of transcription are required to account for the observed mRNA Fano factors. The corresponding extrinsic noise parameters \(\eta _{k_{\text {on}}}\) and \(\eta _{k_{\text {off}}}\) are positively correlated with mRNA abundance. However, the means and variances of the marginal distributions of these parameters are negatively correlated with the Fano factor of the mRNA distribution. This indicates that when promoter switching is affected by higher extrinsic noise, the mRNA distribution becomes more Poissonian as the effect of the two distinct promoter states is averaged out.
Curiously, the mean and variance of the protein degradation rate (d 2) are positively correlated with mean mRNA number and negatively correlated with the mRNA Fano factor. Unlike the translation rate (k 2), it shows no significant correlation with the mean or variance of the protein number.
Parameter sensitivity
There are two complementary approaches to investigating the sensitivity of a modelled system to its parameters or inputs [20]. One approach is to consider a single point in parameter space and study how the model responds to infinitesimal changes in parameters. This local approach usually involves calculating the partial derivatives of the model output with respect to the parameters of interest. Alternatively, one may consider how the model behaviour varies within a region of parameter space by sampling parameters and observing model behaviour. Regardless of the method used, different linear combinations of parameters will affect the model output to varying degrees [21]. Gutenkunst et al. [22] coined the terms "stiff" and "sloppy" to describe these differences. They defined a Hessian matrix,
$$H_{i,j}^{\chi^{2}} \equiv \frac{d^{2} \chi^{2}}{ d\log \theta_{i} d \log \theta_{j}}, $$
where χ 2 provides a measure of model behaviour, such as the average squared change in the species time course. By considering the eigenvalues of this Hessian, λ i , the authors were able to quantify the (local) responsiveness of the system to a given change in parameters. Conceptually, moving along a stiff direction in parameter space causes a large change in model behaviour; conversely moving along a sloppy direction results in comparatively little effect on the output of the system.
Secrier et al. [23] later demonstrated how these ideas can be applied to the analysis of posterior distributions obtained by ABC methods [24]. Principal component analysis (PCA) may be used to approximate the log posterior density using a multivariate normal (MVN) distribution. They showed that the eigenvalues of the covariance matrix, s i , of this MVN distribution are related to the eigenvalues of the Hessian as λ i =1/s i .
To assess the stiffness/sloppiness of the inferred parameters we carry out PCA of the covariance matrices of log posterior distributions for each gene. In interpreting the results of the PCA we assume that the posterior distribution is, in practice, unimodal. The principal components (eigenvectors), ν, and the corresponding loadings (eigenvalues), s, provided by the PCA are then used to obtain the eigen-parameters, q, as
$$q_{i} = s_{i}\nu_{i}. $$
We calculate the projections of each parameter, θ i , onto each eigen-parameter, q j , as
$$c_{ij} = \theta_{i} \cdot q_{j}. $$
As a measure of the overall sloppiness of each parameter, l, we use the sum of the contributions of each parameter to the eigen-parameters, \(l_{i} = \sum _{j} c_{ij}\). This can also be thought of as the sum of the projections of each principal component onto the parameter, weighted by the fraction of total variance explained by each of the principal components.
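In code, this recipe amounts to a few lines of numpy. A sketch (our own illustration; samples is assumed to be a particles-by-parameters array of log-scale posterior draws, and since eigenvector signs are arbitrary, absolute values may be preferable in practice):

import numpy as np

def parameter_sloppiness(samples):
    # samples: (n_particles, n_params) array of log-parameter posterior draws
    C = np.cov(samples, rowvar=False)  # covariance of the log posterior
    s, V = np.linalg.eigh(C)           # eigenvalues s_j, eigenvectors nu_j (columns)
    Q = V * s                          # eigen-parameters q_j = s_j * nu_j
    return Q.sum(axis=1)               # l_i = sum_j c_ij, with c_ij = theta_i . q_j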
Having obtained a measure of the sloppiness of each parameter, for each gene, we carry out hierarchical clustering [25] of genes and parameters using a Euclidean distance metric for both (Fig. 7).
Clustering of genes and inferred posteriors according to parameter sloppiness. The clustergram shows a heat map of parameter (columns) sloppiness for each gene (rows). Warmer hues indicate more sloppy parameters. Dendograms above and to the left of the heat map display the hierarchical tree obtained when clustering either the model parameters or the genes using a Euclidean distance metric
The majority of genes show a similar pattern of parameter stiffness/sloppiness. The most distinctive and the second most distinctive clusters consist of just two genes each, yiiU with aceE and cspE with map, respectively. These four genes are distinguished by unusually sloppy promoter activity ratio, k r , and promoter switching frequency, k f , parameters. The pair yiiU and aceE display a high ratio of protein variance to protein mean (Fano factor) and are stiff with regard to the protein degradation rate noise parameter \(\eta _{d_{1}}\). cspE also has a high Fano factor of the protein distribution while map has an unusually low mRNA Fano factor. What these four genes appear to have in common is that the variability in their protein numbers is difficult to explain based solely on the mRNA variability. Thus, a higher level of extrinsic noise is inferred to account for the observed variability. Since these genes comprise a small minority, it may be that their expression is subject to regulatory mechanisms that are not well approximated by the two state model. The remaining majority of genes are broadly divided into two similar groups which differ mostly in the sloppiness of k 0.
The noise and rate parameters segregate into two clusters with the noise parameters generally being sloppier than the rate parameters (Fig. 7). The least sloppy parameter is the mRNA degradation rate (d 1). This is not surprising since it was used, together with the molecule number summary statistics, to infer the posterior distribution. Of the rate parameters, the basal transcription rate (k 0) is the sloppiest and often approaches the noise parameters in its sloppiness. Since this parameter is defined as a fraction of the active transcription rate (k 1), its relative sloppiness should not be equated to a lack of importance. For most genes the marginal posterior of k 0 is largely constrained to the lower half of its prior distribution, U(0,1). The only exception being the gene map for which the measured mRNA Fano factor was close to one and the marginal posterior of k 0 is in the top half of the prior range. The mean of the marginal posterior of k 0 is negatively correlated with the mRNA Fano factor across all genes (Fig. 6). The two other parameters that influence the mRNA Fano factor, k r and k f , are the next sloppiest rate parameters.
Cell-to-cell variability in genetically homogeneous populations of cells is a ubiquitous phenomenon [26–28]. Attempts to quantify it are complicated by the difficulty of assigning it to a single cellular process or any one experimentally measurable variable. It can also be difficult, for example, to distinguish between the intrinsic stochasticity of biochemical processes in the short term and longer term variations which may have been inherited from previous cell generations.
By including a representation of extrinsic noise in our model of gene expression we infer the extent to which the rates of biochemical processes can vary between cells while still producing the experimentally measured mRNA and protein variability. We demonstrate the usefulness of an efficient method for exact stochastic simulation of the two-state model of gene expression. The two-state model is necessary to explain the experimentally measured mRNA variation (Fano factor), and is capable of describing the majority of the observed data. The corresponding single-state model, with constant promoter activity and extrinsic noise, does not produce mRNA Fano factors as high as those measured experimentally without leading to unacceptably high variability in the protein numbers. We show that the amount of extrinsic noise affecting most genes appears to be limited, but non-negligible.
The exact simulation method described here occupies a niche between those cases when only samples from the steady state mRNA distribution of the two-state model [3, 29, 30] are required, and cases when an approximation to the protein distribution [15, 31] is sufficient. The computational advantages of the simulation method described here are limited to specific conditions, such as low numbers of mRNA molecules and higher numbers of protein molecules. The most limiting factor of this simulation method is that it is not applicable to models in which the protein products affect upstream processes such as promoter activity, transcription or translation. The addition of such interactions would mean that the assumptions used in deriving the Poissonian relationship between the number of surviving protein molecules produced from a given mRNA molecule and the mRNA's lifetime would no longer be satisfied. Perhaps an approximate algorithm could be developed on the basis of Algorithm (1) to handle such situations. Alternatively, the tau-leaping algorithm [32], or moment expansion [33, 34], may be more appropriate for models involving these kinds of feedback interactions. Algorithm (1) could, however, be naturally extended to models involving regulatory interactions between non-coding RNAs as the simulation of that part of the model is equivalent to Gillespie's exact algorithm. Although here we use summary statistics of mRNA and protein number measurements, the simulation method is also applicable to cases where a direct comparison between sample distributions, for example using the Hellinger distance, is required.
Here we have worked under the assumption that experimental measurement error associated with individual mRNA or protein counts obtained by fluorescence microscopy are small relative to the combined effects of extrinsic and intrinsic noise. We deem this to be justifiable given the experimental method used by Tanaguchi et al. [7] and the results presented in their publication. More generally, such measurement errors would inflate estimates of the variances in molecule numbers and may skew the inferred extrinsic noise parameters. Other studies, which look directly at the interplay between intrinsic and extrinsic noise in single cells [35] — using time-resolved proteomics data — do also bear this out.
The inferred extrinsic noise parameters will also include the effects of regulatory mechanisms that are not well described by the two-state model. In this sense, our definition of noise becomes blurred with our ignorance about the regulatory interactions involved in the expression of each gene. Nonetheless, the biochemical mechanisms governing gene expression in a given species are shared between many genes. This is in agreement with our observation that, for most genes, inferred model parameters show similar patterns of sloppiness. If we are able to refine our understanding of the shared aspects of gene expression, we may be able to improve our understanding of both the nature of the noise affecting it, and the regulatory mechanisms controlling it. In practice this may mean finding a mechanistic explanation for the two-state model or further refining it to achieve a better agreement between simulations and experimental results.
The in silico approach used here not only relied on, but was inspired by the experimental work of Tanaguchi et al. [7]. As the resolution of high throughput experimental techniques and the quantity of data they generate continues to increase, more complete observations of cellular processes may begin to yield data amenable to statistical analysis and inference of extrinsic noise. These may in turn require other modelling, computational and theoretical approaches which would not rely on the assumptions and simplifications that we make in this work [36].
Modelling gene expression
A simple model of gene expression may represent the processes of transcription and translation using mass-action kinetics to describe production and degradation of various species as pseudo-first order reactions. Such a model may be simulated stochastically to take into account the intrinsic variability of processes involving low numbers of molecules. In the simplest version of this model, mRNA is produced from the promoter at a constant rate. However, such Poissonian mRNA production is often not sufficient to account for the variability in mRNA numbers measured experimentally in both prokaryotic and eukaryotic cells. In addition to this, for many genes, transcription appears to occur in bursts rather than at a constant rate. These characteristics of gene expression have been observed in organisms as diverse as bacteria [7], yeast [4], amoeba [2] and mammals [3]. One model of gene expression that takes this into account is the, so called, two-state model.
The two-state promoter model
In the two-state model of gene expression, a gene's promoter is represented as either active or inactive [5, 19]. Here we use a variant of the two-state model with the inactive state corresponding to a lower transcription rate rather than no transcription at all. For each state of the promoter, transcription events at that promoter are represented by a Poisson process with rate parameter corresponding to the transcription rate. Biochemical processes such as transcription factor binding or reorganisation of chromatin structure may account for the existence of several distinct levels of promoter activity. However, which factors play a dominant role in the apparent switching, remains an unanswered question.
The Gillespie algorithm [37] may be used to simulate all the reactions represented by this model and obtain a complete trajectory of the system through time. However, in this case we are only interested in the number of molecules present at the time of measurement. We use a model-specific stochastic algorithm (Algorithm 1) which allows us to reduce the number of computational steps required to obtain a single realisation from the model.
The following reactions, represented using mass-action kinetics, comprise the two-state model:
$$\begin{aligned} \texttt{inactive-promoter} &\xrightarrow{k_{\text{on}}} \texttt{active-promoter} \\ \texttt{active-promoter} &\xrightarrow{k_{\text{off}}} \texttt{inactive-promoter} \\ \texttt{inactive-promoter} &\xrightarrow{k_{0}} \texttt{inactive-promoter} + \texttt{mRNA} \\ \texttt{active-promoter} &\xrightarrow{k_{1}} \texttt{active-promoter} + \texttt{mRNA} \\ \texttt{mRNA} &\xrightarrow{k_{2}} \texttt{mRNA} + \texttt{Protein} \\ \texttt{mRNA} &\xrightarrow{d_{1}} \varnothing \\ \texttt{Protein} &\xrightarrow{d_{2}} \varnothing \end{aligned}$$
The propensity functions (hazards) for each of the above reactions are listed below:
$$\begin{array}{@{}rcl@{}} h_{0} &=& k_{\text{on}}[\texttt{inactive-promoter}]\\ h_{1} &=& k_{\text{off}}[\texttt{active-promoter}]\\ h_{2} &=& k_{0}[\texttt{inactive-promoter}]\\ h_{3} &=& k_{1}[\texttt{active-promoter}]\\ h_{4} &=& k_{2}[\texttt{mRNA}]\\ h_{5} &=& d_{1}[\texttt{mRNA}]\\ h_{6} &=& d_{2}[\texttt{Protein}] \end{array} $$
Here the square brackets refer to the number of molecules of a species rather than its concentration.
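To make the reaction scheme above concrete, the following is a minimal Python sketch of Gillespie's direct method [37] applied to these seven reactions. It is an illustration only, not the model-specific Algorithm 1 used here; the function name is ours and any parameter values a caller supplies are placeholders.

```python
# Minimal sketch of Gillespie's direct method for the two-state model.
# Illustrative only; not the paper's model-specific Algorithm 1.
import numpy as np

def gillespie_two_state(k_on, k_off, k0, k1, k2, d1, d2, t_end, rng):
    active, mrna, prot = 0, 0, 0   # start with an inactive promoter
    t = 0.0
    while True:
        h = np.array([
            k_on * (1 - active),   # promoter activation
            k_off * active,        # promoter deactivation
            k0 * (1 - active),     # basal transcription
            k1 * active,           # active transcription
            k2 * mrna,             # translation
            d1 * mrna,             # mRNA degradation
            d2 * prot,             # protein decay / dilution
        ])
        h_tot = h.sum()
        t += rng.exponential(1.0 / h_tot)   # time to next reaction
        if t > t_end:
            return mrna, prot               # molecule numbers at t_end
        r = rng.choice(7, p=h / h_tot)      # which reaction fired
        if r == 0:
            active = 1
        elif r == 1:
            active = 0
        elif r in (2, 3):
            mrna += 1
        elif r == 4:
            prot += 1
        elif r == 5:
            mrna -= 1
        else:
            prot -= 1
```

A call such as `gillespie_two_state(0.02, 0.02, 0.01, 1.0, 0.1, 0.01, 0.001, 1000.0, np.random.default_rng(0))`, with purely illustrative rates, returns one (mRNA, protein) pair.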
The model presented here relies on a number of assumptions about the process of gene expression. Firstly, that the production of mRNA and protein can be described sufficiently well by pseudo-first order reactions. Secondly, that degradation of mRNA and protein can be described as an exponential decay. In a bacterial cell, mRNA molecules are degraded enzymatically and typically have a half-life on the scale of several minutes. The half-life of protein molecules usually exceeds the time required for cell growth and division during the exponential growth phase. Thus, dilution due to partitioning of protein molecules between daughter cells tends to be the dominant factor in decreasing the number of protein molecules. Here we do not build an explicit model of cell division; instead, the decrease in protein numbers is approximated by an exponential decay. Finally, it is assumed that there is no feedback mechanism by which the number of mRNA or protein molecules produced by the gene affects its promoter switching, transcription or translation rates.
Representing extrinsic noise
We model extrinsic noise by perturbing the reaction rate parameters, using a Gaussian kernel, before each simulation of the model [35, 38]. The effect of extrinsic noise on each reaction is assumed to be independent. The reaction rates associated with a particular gene are termed nominal parameters (\(\theta_n\)).
$${\theta_{n}} = [k_{\text{on}}, k_{\text{off}}, k_{0}, k_{1}, k_{2}, d_{1}, d_{2} ] $$
The values determining the magnitude of the perturbation are termed the noise parameters (η).
$${\eta} = [\eta_{k_{\text{on}}}, \eta_{k_{\text{off}}}, \eta_{k_{1}}, \eta_{k_{2}}, \eta_{d_{1}}, \eta_{d_{2}} ] $$
Together they comprise the full parameter set for the model, \(\theta = [\theta_n, \eta]\).
In the case of the two-state model of a single gene, each element of \(\theta_n\) has a corresponding extrinsic noise parameter, with the exception that the basal transcription rate (\(k_0^{\prime}\)) is defined as a fraction of the active transcription rate (\(k_{1}^{\prime}\)), so the two reaction rates are subject to the same perturbation (\(\eta_{k_{1}}\)) before each simulation. This is motivated by the idea that extrinsic factors affecting the transcription rate do not depend on the state of the promoter. The parameters used to generate a single realisation from the two-state model are obtained by sampling from \(f(\mu,\sigma)\), where \(f\) is a truncated normal distribution, restricted to non-negative values by rejection sampling, and \(\mu\) and \(\sigma\) are the mean and standard deviation of the corresponding normal distribution.
$$\begin{array}{*{20}l} k^{\prime}_{\text{on}} &\sim f\left(k_{\text{on}}, k_{\text{on}}\eta_{k_{\text{on}}}\right) \\ k^{\prime}_{\text{off}} &\sim f\left(k_{\text{off}}, k_{\text{off}}\eta_{k_{\text{off}}}\right) \\ k^{\prime}_{1} &\sim f\left(k_{1}, k_{1}\eta_{k_{1}}\right)\\ k^{\prime}_{0} &= k_{0}k^{\prime}_{1}\\ k^{\prime}_{2} &\sim f\left(k_{2}, k_{2}\eta_{k_{2}}\right)\\ d^{\prime}_{1} &\sim f\left(d_{1}, d_{1}\eta_{d_{1}}\right)\\ d^{\prime}_{2} &\sim f\left(d_{2}, d_{2}\eta_{d_{2}}\right)\\ \end{array} $$
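As an illustration of this perturbation step, a possible Python sketch is given below. The names are our own, and `f` implements the truncated normal by rejection sampling as described above.

```python
# Sketch of the extrinsic-noise perturbation step (names are our own).
import numpy as np

def f(mu, sigma, rng):
    """Truncated normal: rejection-sample until a non-negative value."""
    while True:
        x = rng.normal(mu, sigma)
        if x >= 0:
            return x

def perturb_rates(theta_n, eta, rng):
    k_on, k_off, k0, k1, k2, d1, d2 = theta_n
    k1p = f(k1, k1 * eta["k1"], rng)
    return {
        "k_on": f(k_on, k_on * eta["k_on"], rng),
        "k_off": f(k_off, k_off * eta["k_off"], rng),
        "k1": k1p,
        "k0": k0 * k1p,  # basal rate is a fixed fraction of the perturbed k1
        "k2": f(k2, k2 * eta["k2"], rng),
        "d1": f(d1, d1 * eta["d1"], rng),
        "d2": f(d2, d2 * eta["d2"], rng),
    }
```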
The final time point of each simulation represents the number of mRNA and protein molecules in a single cell at the time of measurement.
Simulation procedure
In order to reduce the computational cost of each simulation, rather than using Gillespie's direct method to simulate the entire trajectory of mRNA and protein numbers, we employed Algorithm 1 to obtain samples of the numbers of mRNA and protein molecules at the time of measurement (\(t_m\)). First, a realisation of the telegraph process is used to obtain the birth and decay times of mRNA molecules. These are then used to sample the number of protein molecules that were produced from each mRNA molecule and survived until \(t_m\). This procedure makes use of the Poisson relationship between the lifetime of an individual mRNA molecule and the number of surviving protein molecules that were produced from it. This relationship is derived in Additional file 1 and its use is illustrated in Fig. 8. The final result is the number of both mRNA (M) and protein (P) molecules present in the system at \(t_m\).
Illustration of the principle behind Algorithm 1. An illustration of how the birth and death times of an mRNA molecule are used to obtain the number of proteins that were produced from it and then survived until the time at which mRNA and protein numbers were measured. According to the two-state model used here, the number of protein molecules that were translated using a given mRNA template and have not yet been degraded can be found by sampling from the corresponding Poisson distribution with a parameter which depends on the lifetime of the mRNA template. If the mRNA is degraded before the measurement time point, the remaining protein molecules are assumed to decay exponentially. Thus the number of protein molecules can be obtained by first sampling the number present at the point of mRNA decay and then sampling from the corresponding binomial distribution to determine the number of surviving molecules at the measurement time point
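The per-mRNA sampling step can be sketched in a few lines of Python. The Poisson mean below follows from integrating translation at rate \(k_2\) against exponential protein decay at rate \(d_2\); the exact derivation is in Additional file 1, so the expression here should be read as our illustration of the principle rather than a transcription of Algorithm 1.

```python
# Sketch: proteins surviving at t_m from an mRNA born at b, degraded at d.
import numpy as np

def surviving_proteins(b, d, t_m, k2, d2, rng):
    end = min(d, t_m)  # translation stops at mRNA decay or at measurement
    # Expected survivors at `end`: k2 * integral_b^end exp(-d2*(end - s)) ds
    mean = (k2 / d2) * (1.0 - np.exp(-d2 * (end - b)))
    n = rng.poisson(mean)
    if d < t_m:
        # remaining proteins decay exponentially until the measurement time
        n = rng.binomial(n, np.exp(-d2 * (t_m - d)))
    return n
```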
Use of experimental data
Using an automated fluorescent imaging assay, Taniguchi et al. [7] were able to quantify the abundances of 1018 proteins from a yellow fluorescent protein fusion library. We focus on a subset of 87 genes from the published data set from [7]. These are all the genes for which, in addition to protein numbers, the experimental data include both fluorescence in situ hybridization measurements [39] of mRNA numbers and mRNA lifetime measurements obtained using RNA-seq [40]. We note that these genes are not a random sample from the set of all genes and exhibit higher than average expression levels.
To identify model parameters for which the two-state model, with extrinsic noise, is able to reproduce the experimental measurements, we carry out Bayesian inference using an ABC sequential Monte Carlo (SMC) algorithm that compares summary statistics from simulated and experimental data [41]. Specifically, we use the following summary statistics: (1) the mean numbers of mRNA molecules; (2) the Fano factors of mRNA molecule distributions; (3) the mean numbers of protein molecules; (4) the variances of protein molecule numbers; and (5) mRNA lifetimes converted to exponential decay rate parameters. The distributions of these summary statistics are shown in Fig. 9. We assume that the summary statistics correspond to steady state expression levels for each gene. While there is no guarantee that this is the case for every gene, the majority of genes are unlikely to be undergoing major changes in their expression level given that the cells are in a relatively constant environment.
Experimentally measured summary statistics. Each point on the scatter plots is an estimate of the corresponding summary statistic or mRNA degradation rate from experimental measurements. These data are taken from [7]. The mRNA degradation rates were taken to be the inverse of the mRNA lifetimes
Taniguchi et al. [7] used images of about a thousand cells to obtain estimates of mean mRNA numbers, mRNA Fano factors, mean protein numbers and protein number variances. For this reason, we use \(10^3\) simulation runs when calculating summary statistics. The experimental measurements of mRNA lifetimes are compared directly to the mRNA degradation rate parameter (\(d_1\)) in the model by assuming that lifetimes correspond to the inverse of the decay rate.
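For reference, computing the five summary statistics from a batch of simulated cells is straightforward; a sketch follows, where `M` and `P` are arrays of simulated mRNA and protein counts (one entry per run) and `d1` is the nominal degradation rate of the sampled particle. The function name is our own.

```python
import numpy as np

def summary_stats(M, P, d1):
    """The five summary statistics compared between data and simulation."""
    return np.array([
        M.mean(),            # mean mRNA number
        M.var() / M.mean(),  # mRNA Fano factor
        P.mean(),            # mean protein number
        P.var(),             # protein number variance
        d1,                  # matched to the inverse measured mRNA lifetime
    ])
```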
Inference procedure
We use an ABC-SMC algorithm to infer plausible parameter sets for the two-state model based on the experimental data. The inference procedure is similar to that employed by [24, 41, 42], as described in Algorithm 2.
For the distance metric, d, we take the Euclidean distance between the logarithms of each type of experimental measurement (\(D_i\)) and the corresponding simulation results (\(x_i\)):
$$d(D,x) = \sqrt{\sum\limits_{i=1}^{i=5}\left(\log{D_{i}}-\log{x_{i}}\right)^{2}} $$
$$D = \left[ \mu_{mRNA},\frac{\sigma^{2}_{mRNA}}{\mu_{mRNA}}, \mu_{prot}, \sigma^{2}_{prot}, \tau_{mRNA}^{-1} \right] $$
Where \(\mu_{mRNA}\) is the mean number of mRNA molecules; \({\sigma^{2}_{mRNA}}/{\mu_{mRNA}}\) is the Fano factor of the mRNA distribution; \(\mu_{prot}\) is the mean number of protein molecules; \(\sigma^{2}_{prot}\) is the variance of the protein distribution; and \(\tau_{mRNA}^{-1}\) gives the exponential decay rate constant for mRNA degradation based on the measured mRNA lifetime (\(\tau_{mRNA}\)).
$$x = \left[ \mu_{M},\frac{{\sigma^{2}_{M}}}{\mu_{M}}, \mu_{P}, {\sigma^{2}_{P}}, d_{1} \right] $$
Where \(\mu_M\) is the mean number of mRNA molecules; \({\sigma^{2}_{M}}/{\mu_{M}}\) is the Fano factor of the mRNA distribution; \(\mu_P\) is the mean number of protein molecules; \(\sigma^{2}_{P}\) is the variance of the protein distribution; and \(d_1\) corresponds to the nominal mRNA degradation rate. The first sampled population of particles (population zero in Algorithm 2) provides a benchmark for the choice of \(\varepsilon\) values in the next population. Since we have no knowledge of the distribution of distances until a set of particles is sampled, all particles are accepted in the first population. For subsequent populations, \(\varepsilon\) values are chosen such that the probability of acceptance with the new \(\varepsilon\) value is equal to \(q_t\). The vector \(q\) is chosen prior to the simulation. This allows for larger decreases in \(\varepsilon\) in the first few populations while keeping the actual \(\varepsilon\) values used a function of the distances (g) in the previous population. New populations are sampled until the final value \(\varepsilon_f = 0.1\) is reached. To obtain \(\theta^{*}\) from \(\theta\) we use a uniform perturbation kernel:
$$\theta^{*} \sim U(\theta-\mu_{t-1}, \theta+\mu_{t-1}) $$
where \(\mu_{t-1}\) is the vector of standard deviations of each parameter in the previous population.
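A sketch of these three ingredients of the ABC-SMC scheme (the log-space distance, the quantile-based \(\varepsilon\) schedule and the uniform kernel) is given below; the function names, `q_t` and the array shapes are assumptions made for illustration.

```python
import numpy as np

def distance(D, x):
    """Euclidean distance between the logs of the five summaries."""
    return np.sqrt(np.sum((np.log(D) - np.log(x)) ** 2))

def next_epsilon(prev_distances, q_t):
    """Epsilon such that the acceptance probability, judged against the
    previous population's distances, equals q_t."""
    return np.quantile(prev_distances, q_t)

def perturb_particle(theta, mu_prev, rng):
    """Uniform kernel U(theta - mu_prev, theta + mu_prev), componentwise."""
    return rng.uniform(theta - mu_prev, theta + mu_prev)
```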
Parameter prior
The telegraph process may be parametrized in terms of the ratio of probabilities of switching events (\(k_r\)) and the overall frequency with which events occur (\(k_f\)):
$$k_{r} = \frac{k_{\text{on}}}{k_{\text{on}}+k_{\text{off}}} $$
$$k_{f} = 2\frac{k_{\text{on}}k_{\text{off}}}{k_{\text{on}} + k_{\text{off}}} $$
To obtain \(\theta\), the vector of parameters used in the ABC-SMC inference procedure (Algorithm 2), the rate and noise parameters are sampled from the following uniform priors,
$$\begin{array}{*{20}l} k_{r} &\sim U(0, 1)\\ k_{f} &\sim U(0, 0.1)\\ k_{0} &\sim U(0, 1)\\ k_{1} &\sim U(0, 1)\\ k_{2} &\sim U(0, 10)\\ d_{1} &\sim U(0.01, 0.6)\\ d_{2} &\sim U(0.0005, 0.05)\\ \eta_{k_{\text{on}}} &\sim U(0, 0.5)\\ \eta_{k_{\text{off}}} &\sim U(0, 0.5)\\ \eta_{k_{1}} &\sim U(0, 0.4)\\ \eta_{k_{2}} &\sim U(0, 0.4)\\ \eta_{d_{1}} &\sim U(0, 0.4)\\ \eta_{d_{2}} &\sim U(0, 0.4). \end{array} $$
The parameters for the telegraph process, sampled from the prior as \(k_r\) and \(k_f\), are converted to \(k_{\text{on}}\) and \(k_{\text{off}}\) before being passed to the simulation algorithm (Algorithm 1) as follows,
$$\begin{array}{@{}rcl@{}} k_{\text{off}} &=& \frac{k_{f}}{2k_{r}}\\ k_{\text{on}} &=& \frac{k_{\text{off}} k_{r}}{1-k_{r}}. \end{array} $$
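In code, this reparametrisation is a two-line conversion; the sketch below assumes \(0 < k_r < 1\), which holds almost surely under the prior.

```python
def to_switching_rates(k_r, k_f):
    """Convert the (ratio, frequency) parametrisation to (k_on, k_off)."""
    k_off = k_f / (2.0 * k_r)          # requires 0 < k_r < 1
    k_on = k_off * k_r / (1.0 - k_r)
    return k_on, k_off
```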
The parameters \(k_r\) and \(k_0\), as well as the noise parameters (\(\eta\)), are unitless. The remaining parameters have units of \(\mathrm{s}^{-1}\).
To ensure that M and P are from a distribution close to equilibrium, the simulation duration is set depending on the nominal degradation rates for mRNA (\(d_1\)) and protein (\(d_2\)),
$$t_{m} = L\left(d_{1}^{-1}+d_{2}^{-1} \right) $$
where \(t_m\) is the final time point and L is a constant chosen arbitrarily to indicate the desired proximity to the steady state distribution. Here we use \(L=5\).
To confirm that our inference procedure is able to converge to the appropriate region of parameter space in an idealised case, we generate synthetic data by simulating 1000 times from the two-state model. We then calculate summary statistics from these data and carry out the inference procedure in the same manner as for the experimental data. Figures 10 and 11 show the resulting distributions of summary statistics and model parameters respectively.
Posterior distribution of summary statistics and the mRNA degradation rate for a test case where synthetic data were generated by simulating from a model with known parameters. Contour plots indicating the density of points with the corresponding summary statistic for each particle in the final population. The summary statistics for each particle are calculated from 1000 simulation runs. The posterior distribution consists of 1000 particles
Posterior distribution of model parameters for a test case where synthetic data were generated by simulating from a model with known parameters. Contour plots indicating the density of points with the corresponding parameter values for each particle in the final population. The posterior distribution consists of 1000 particles
To provide a comparison of the compute times required to simulate the two-state model using the Gillespie algorithm or our model-specific algorithm, we take the final population of parameters obtained for the gene dnaK and run simulations on the same CPU using both methods. The extent of the improvement depends on the model parameters. In this case, the mean improvement is 26-fold with a variance of 12. The total times taken to simulate 1000 perturbed parameter samples from each of 1000 particles were 147 s with the model-specific algorithm and 3786 s with the Gillespie algorithm.
ABC, approximate Bayesian computation; SMC, sequential Monte Carlo
Golding I, Paulsson J, Zawilski SM, Cox EC. Real-Time Kinetics of Gene Activity in Individual Bacteria. Cell. 2005; 123(6):1025–36.
Chubb JR, Trcek T, Shenoy SM, Singer RH. Transcriptional Pulsing of a Developmental Gene. Curr Biol. 2006; 16(10):1018–25.
Raj A, Peskin CS, Tranchina D, Vargas DY, Tyagi S. Stochastic mRNA Synthesis in Mammalian Cells. PLoS Biol. 2006; 4(10):e309.
Zenklusen D, Larson DR, Singer RH. Single-RNA counting reveals alternative modes of gene expression in yeast. Nat Struct Mol Biol. 2008; 15(12):1263–71.
Tan RZ, van Oudenaarden A. Transcript counting in single cells reveals dynamics of rDNA transcription. Mol Syst Biol. 2010; 6:358.
Rosenfeld N. Gene Regulation at the Single-Cell Level. Science. 2005; 307(5717):1962–65.
Taniguchi Y, Choi PJ, Li GW, Chen H, Babu M, Hearn J, Emili A, Xie XS. Quantifying E. coli proteome and transcriptome with single-molecule sensitivity in single cells. Science. 2010; 329(5991):533–8.
Elowitz MB, Levine AJ, Siggia ED, Swain PS. Stochastic gene expression in a single cell. Science. 2002; 297(5584):1183–6.
Spencer SL, Sorger PK, Gaudet S, Albeck JG, Burke JM. Non-genetic origins of cell-to-cell variability in TRAIL-induced apoptosis. Nature. 2009; 459(7245):428–32.
Johnston IG, Gaal B, Neves RPd, Enver T, Iborra FJ, Jones NS. Mitochondrial variability as a source of extrinsic cellular noise. PLoS Comput Biol. 2012; 8(3):1002416.
Kaufmann BB, Yang Q, Mettetal JT, van Oudenaarden A. Heritable stochastic switching revealed by single-cell genealogy. PLoS Biol. 2007; 5(9):239.
Toni T, Tidor B. Combined model of intrinsic and extrinsic variability for computational network design with application to synthetic biology. PLoS Comput Biol. 2013; 9(3):1002960.
Zechner C, Ruess J, Krenn P, Pelet S, Peter M, Lygeros J, Koeppl H. Moment-based inference predicts bimodality in transient gene expression. PNAS. 2012; 109(21):8340–8345.
Hasenauer J, Waldherr S, Doszczak M, Radde N, Scheurich P, Allgöwer F. Identification of models of heterogeneous cell populations from population snapshot data. BMC Bioinforma. 2011; 12(1):1–15.
Cai L, Friedman N, Xie XS. Stochastic protein expression in individual cells at the single molecule level. Nature. 2006; 440(7082):358–62.
Toni T, Welch D, Strelkowa N, Ipsen A, Stumpf MPH. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. J R Soc Interface. 2008; 6(31):187–202.
Liepe J, Kirk P, Filippi S, Toni T, Barnes CP, Stumpf MPH. A framework for parameter estimation and model selection from experimental data in systems biology using approximate Bayesian computation. Nat Protoc. 2014; 9(2):439–56.
Karlsson M, Janzen DLT, Durrieu L, Colman-Lerner A, Kjellsson MC, Cedersund G. Nonlinear mixed-effects modelling for single cell estimation: when, why, and how to use it. BMC Syst Biol. 2015; 9:52.
Raj A, van Oudenaarden A. Nature, Nurture, or Chance: Stochastic Gene Expression and Its Consequences. Cell. 2008; 135(2):216–26.
Nienałtowski K, Włodarczyk M, Lipniacki T, Komorowski M. Clustering reveals limits of parameter identifiability in multi-parameter models of biochemical dynamics. BMC Syst Biol. 2015; 9(1):65.
Erguler K, Stumpf MPH. Practical limits for reverse engineering of dynamical systems: a statistical analysis of sensitivity and parameter inferability in systems biology models. Mol BioSyst. 2011; 7(5):1593.
Gutenkunst RN, Waterfall JJ, Casey FP, Brown KS, Myers CR, Sethna JP. Universally sloppy parameter sensitivities in systems biology models. PLoS Comput Biol. 2007; 3(10):1871–878.
Secrier M, Toni T, Stumpf MPH. The ABC of reverse engineering biological signalling systems. Mol BioSyst. 2009; 5(12):1925.
Filippi S, Barnes CP, Cornebise J, Stumpf MPH. On optimality of kernels for approximate Bayesian computation using sequential Monte Carlo. Stat Appl Genet Mol Biol. 2013; 12(1):87–107.
Bar-Joseph Z, Gifford DK, Jaakkola TS. Fast optimal leaf ordering for hierarchical clustering. Bioinformatics. 2001; 17(Suppl 1):22–9.
Kacmar J, Zamamiri A, Carlson R, Abu-Absi NR, Srienc F. Single-cell variability in growing Saccharomyces cerevisiae cell populations measured with automated flow cytometry. J Biotechnol. 2004; 109(3):239–54.
Yuan TL, Wulf G, Burga L, Cantley LC. Cell-to-Cell Variability in PI3K Protein Level Regulates PI3K-AKT Pathway Activity in Cell Populations. Curr Biol. 2011; 21(3):173–83.
Li B, You L. Predictive power of cell-to-cell variability. Quant Biol. 2013; 17(1):41–50.
Peccoud J, Ycart B. Markovian Modeling of Gene-Product Synthesis. Theor Popul Biol. 1995; 48:222–34.
Stinchcombe AR, Peskin CS, Tranchina D. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression. Phys Rev E. 2012; 85(6):061919.
Shahrezaei V, Swain PS. Analytical distributions for stochastic gene expression. Proc Natl Acad Sci. 2008; 105(45):17256–17261.
Gillespie DT. Approximate accelerated stochastic simulation of chemically reacting systems. J Chem Phys. 2001; 115(4):1716–1733.
Ale A, Kirk P, Stumpf MPH. A general moment expansion method for stochastic kinetic models. J Chem Phys. 2013; 138(17):174101.
Lakatos E, Ale A, Kirk P, Stumpf MPH. Multivariate moment closure techniques for stochastic kinetic models. J Chem Phys. 2015; 143(9):094107.
Filippi S, Barnes CP, Kirk PDW, Kudo T, Kunida K, McMahon S, Tsuchiya T, Wada T, Kuroda S, Stumpf MPH. Robustness of the MEK-ERK core dynamics and origins of cell-to-cell variability. Cell Rep. 2016; 15:2524–535.
Lillacci G, Khammash M. The signal within the noise: efficient inference of stochastic gene regulation models using fluorescence histograms and stochastic simulations. Bioinformatics. 2013; 29(18):2311–319.
Gillespie DT. A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J Comput Phys. 1976; 22:403–34.
Mc Mahon SS, Lenive O, Filippi S, Stumpf MPH. Information processing by simple molecular motifs and susceptibility to noise. J R Soc Interface. 2015; 12(110):20150597.
Levsky JM, Singer RH. Fluorescence in situ hybridization: past, present and future. J Cell Sci. 2003; 116(Pt 14):2833–838.
Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009; 10(1):57–63.
Barnes CP, Filippi S, Stumpf M, Thorne T. Considerate approaches to constructing summary statistics for ABC model selection. Stat Comput. 2012; 22(6):1181–1197.
Toni T, Welch D, Strelkowa N, Ipsen A, Stumpf MPH. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. J R Soc Interf / R Soc. 2009; 6(31):187–202.
Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: a Practical and Powerful Approach to Multiple Testing. J R Stat Soc Ser B Methodol. 1995; 57(1):289–300.
We thank the members of the Theoretical Systems Biology Group at Imperial College London for helpful discussions and feedback.
The work was supported by a BBSRC Bioprocessing PhD studentship to O.L. and M.P.H.S.; P.D.W.K. was supported by the MRC (project reference MC_UP_0801/1).
Previously published data were used [15].
OL, PK and MPHS designed the study. OL carried out the computational work. OL and MPHS wrote the paper. All authors read and reviewed the final paper.
ICR, Sutton, SM2 5NG, UK
Oleg Lenive
MRC Biostatistics Unit, Cambridge Institute of Public Health, Cambridge, UK
Paul D. W. Kirk
Imperial College, London, Centre for Integrative Systems Biology and Bioinformatics, London, SW7 2AZ, UK
Michael P. H. Stumpf
Correspondence to Michael P. H. Stumpf.
Additional file 1
Derivation of the Poissonian relationship between the number of surviving protein molecules and mRNA lifetime. (PDF 146 kb)
Parameter posteriors for the expression model of the rcsB gene. (PDF 180 kb)
Parameter posteriors for the expression model of the yiiU gene. (PDF 179 kb)
Parameter posteriors for the expression model of the yebC gene. (PDF 160 kb)
Parameter posteriors for the expression model of the eno gene. (PDF 138 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Lenive, O., Kirk, P.D.W. & Stumpf, M.P.H. Inferring extrinsic noise from single-cell gene expression data using approximate Bayesian computation. BMC Syst Biol 10, 81 (2016). https://doi.org/10.1186/s12918-016-0324-x
Stochastic simulation
Extrinsic noise
Approximate Bayesian computation | CommonCrawl |
Forecasting elections results via the voter model with stubborn nodes
Antoine Vendeville (ORCID: 0000-0002-9044-8348),
Benjamin Guedj &
Shi Zhou
In this paper we propose a novel method to forecast the result of elections using only official results of previous ones. It is based on the voter model with stubborn nodes and uses theoretical results developed in a previous work of ours. We look at popular vote shares for the Conservative and Labour parties in the UK and the Republican and Democrat parties in the US. We are able to perform time-evolving estimates of the model parameters and use these to forecast the vote shares for each party in any election. We obtain a mean absolute error of 4.74%. As a side product, our parameter estimates provide meaningful insight on the political landscape, informing us on the proportion of voters that are strong supporters of each of the considered parties.
For decades, modern democratic societies have been polling populations to try and track the popularity of election candidates and members of governments. These polls are often conducted by means of phone, online or even in-person surveys, which can be very time-consuming and usually suffer from limited sample sizes and bias; for example, respondents with controversial views might be reluctant to share them. This is why different methods are being investigated nowadays. With the rapid growth of online social platforms such as Facebook or Twitter, any individual can now publicly express their views and opinions, adding to an ever-growing pool of directly accessible data. This has opened the door for a new avenue of research, which seeks to use this precious resource to forecast polls and election results without having to survey the population.
As of today, most efforts have focused on applying machine learning methods such as sentiment analysis to evaluate public opinion through samples of Twitter data and to predict the outcome of democratic processes around the globe (Saleiro et al. 2016; Garcia et al. 2018; Grimaldi et al. 2020). The quality of predictions spans a rather wide range and numerous voices have expressed concerns over these methods, arguing that there are multiple factors at play that may alter their reliability (Gayo-Avello 2012; Jungherr et al. 2017). This is why in this work we propose a novel method that does not rely on such data analysis but rather uses the official results of previous elections to perform estimation for future ones.
More precisely, we consider the well-known voter model for opinion dynamics. A population of connected nodes forms a graph where some of them are in state 0 and some others in state 1. Nodes can then randomly change state over time following the distribution of others' states. Nodes are usually meant to represent users on a social network and states their opinions or views. This model thus allows us to describe in a simple and intuitive manner social dynamics where people are divided between two parties and form their opinion by observing that of others around them. A previous work of ours was dedicated to the theoretical study of this model in the specific case where everyone is influenced by everyone else and some users are stubborn and never change opinion (Vendeville et al. 2020). Notably, we provided closed-form expressions for the distribution of opinions at any point in the process and the convergence time to equilibrium.
This paper is a follow-up of that work, as we apply our previous findings to develop a novel method that can be used to forecast the results of any election. We look at both general elections in the United Kingdom and presidential elections in the United States. In each case, we consider the evolution of the share of popular votes for each of the two major parties (see Footnote 1) as a realisation of the voter model and perform time-evolving estimation of optimal parameters. This allows us to obtain a theoretical distribution for the number of seats or votes, from which we draw the expected result of future elections. We compare with real-life outcomes to assess the viability of our approach.
A number of research projects have focused on applying machine learning algorithms to Twitter data in order to forecast opinion poll results or election outcomes. We discuss some of them here and refer the interested reader to Gayo-Avello (2013) and Phillips et al. (2017) for more in-depth reviews of the literature. A pioneering work in this area was that of Tumasjan et al. (2011), whose model achieved a mean absolute error (MAE) of 1.65% when predicting results of the 2009 German federal election. The authors used Twitter mention counts as a direct indicator of a candidate's popularity, a method that has been considered by several other works as well, often in combination with a sentiment analysis of tweet content (O'Connor et al. 2010; Saleiro et al. 2016; Garcia et al. 2018; Grimaldi et al. 2020; Fink et al. 2013; Huberty 2013; Caldarelli et al. 2014; Thapen and Ghanem 2013). In particular, Garcia et al. (2018) achieved 90% accuracy in predicting the top two candidates in various municipalities during Brazilian municipal elections, and Saleiro et al. (2016) achieved an MAE of 0.63% when trying to predict opinion poll results during the Portuguese bailout (2011–2014).
The relevance of such approaches has however been questioned by a number of authors (Fink et al. 2013; Huberty 2013; Caldarelli et al. 2014; Thapen and Ghanem 2013; Jungherr et al. 2012, 2017; Gayo-Avello 2012). Jungherr et al. (2012) showed that merely changing the timeframe of forecast in the work of Tumasjan et al. (2011) would invalidate the results. Fink et al. (2013) found that the use of Twitter mentions mirrored the actual popularity of only some of the candidates but not all of them. Jungherr et al. (2017) argued that mention counts, used in most of the works cited above, show evidence of attention to politics rather than support for the actual candidates. This is why researchers often combine mention counts with sentiment analysis algorithms, but even these can have trouble detecting and correctly interpreting all subtleties of the human language. This particular concern has been raised by several authors (Huberty 2013; Caldarelli et al. 2014; Gayo-Avello 2012). Self-selection, i.e. the fact that people choose whether to express their views online or not, may also bias results. Add to this the rife presence of bots on the Twitter platform, which makes it delicate to assess whether the online population is an accurate representation of the real one.
Some researchers have thus considered different avenues, drawing features from the Twitter user graph topology (Dokoohaki et al. 2015), hashtag co-occurrences (Bovet et al. 2018) or even discarding the social platform entirely and using fluctuations of the Pound to forecast the popularity of the Conservative party in the UK (Usher and Dondio 2020). Continuing this line of work, we build a model that does not rely on Twitter but rather uses official results of previous elections to guess the outcome of future ones. Our model is a variant of the celebrated voter model, where nodes on a graph are in one of two possible states and repeatedly update their beliefs to agree with other nodes chosen at random. It was introduced independently by Holley and Liggett (1975) and Clifford and Sudbury (1973) in the context of particle interactions. They proved that consensus is reached, i.e. that every node is eventually in the same state, on the infinite \({\mathbb {Z}}^d\) lattice. Several works have since looked at different network topologies: complete graphs (Hassin and Peleg 2002; Sood et al. 2008; Perron et al. 2009; Yildiz et al. 2010), Erdös-Rényi random graphs (Sood et al. 2008; Yildiz et al. 2010), scale-free random graphs (Sood et al. 2008; Fernley and Ortgiese 2019), and various other structures (Yildiz et al. 2010; Sood et al. 2008). Variants where nodes deterministically update to the most common state amongst their neighbours have also been studied (Chen and Redner 2005; Mossel et al. 2014).
In this paper we consider the specific case where stubborn nodes, which never switch state, are present in the graph. Such nodes may for example represent lobbyists, politicians or activists, i.e. entities looking to lead rather than follow and who will not easily change side. A single one of those placed within the network can change the outcome of the process (Mobilia 2003; Sood et al. 2008). If several of them are present on both sides, consensus is usually not reachable and instead the distribution of states converges to an equilibrium in which it fluctuates indefinitely (Mobilia et al. 2007; Yildiz et al. 2013). Recently, Mukhopadhyay et al. (2020) considered nodes with different degrees of stubbornness and showed that the time to reach consensus grows linearly with their number. Klamser et al. (2017) studied the effect of stubborn nodes on a dynamically evolving graph, and showed that the two main factors shaping their influence are their degrees and the dynamical rewiring probabilities. Finally, in our previous work we developed closed-form formulas for the distribution of opinions at any step and the convergence time to equilibrium in the case where stubborn nodes are present in a strongly connected network (Vendeville et al. 2020).
Our contributions In this paper we propose a new model for the forecast of election outcomes, based on official results of previous elections. Our method is based on the voter model with stubborn nodes and uses theoretical results developed in a previous work of ours (Vendeville et al. 2020). We apply it to United Kingdom general elections and United States presidential elections and achieve an MAE of 4.74%. To the best of our knowledge this is the first time such a study has been conducted. All code used is available online.
Here we present the mathematical framework behind our forecasting method. In the traditional voter model, we consider a group of n nodes labelled \(1, \ldots , n\) who are each in state 0 or 1. These states are prone to change over time and we let \(x_i(t)\) denote the state of node i at time t. Each node has access to the state of some of the others, called its neighbours. Nodes can then be seen as forming a graph of size n, with an edge from j to i if and only if i has access to the state of j. Here we consider this graph to be a clique with unweighted edges and no self-loops. Thus each node accounts for the state of every other, except their own, with no particular preference. The process then unfolds as follows. Starting with a given initial distribution of states, an independent exponential clock of parameter 1 is associated to each node. Whenever a clock rings, the concerned node changes its state to that of one of its neighbours selected uniformly at random—or equivalently, chooses its new state by sampling the distribution of its neighbours' states.
We let \(N_1(t)\) denote the number of state-1 holders at time t; it will be our quantity of interest. Note that the number of state-0 nodes at time t is given by \(n-N_1(t)\). We assume \(N_1(0)\) is fixed and let \(n_1\) denote its value. We are interested in the particular situation where some of the nodes are stubborn, that is never change state, and we describe the evolution of \(N_1(t)\) over time. We denote by \(s_0\) and \(s_1\) the numbers of stubborn state-0 and state-1 nodes respectively and require at least one of them to be strictly positive. To this end we define
$$S_n=\{(a,b) \in \{0,\ldots ,n\}^2: 0<a+b\leqslant n\} \tag{1}$$
and require \((s_0,s_1)\in S_n\). We write \([m_{ij}]_{i,j}\) to denote the matrix with entry \(m_{ij}\) in the i-th row and j-th column and let \(e^M\) denote the exponential of any matrix M.
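Before turning to the analysis, a direct simulation sketch of the process just described may help fix ideas. The following Python fragment simulates one realisation on the clique; it assumes \(n_1 \geqslant s_1\) and \(n - n_1 \geqslant s_0\), and all names are our own.

```python
import numpy as np

def simulate_voter(n, s0, s1, n1, t_end, rng):
    """One realisation of the voter model with stubborn nodes on a clique."""
    x = np.zeros(n, dtype=int)
    x[:n1] = 1                     # n1 nodes start in state 1
    stubborn = np.zeros(n, dtype=bool)
    stubborn[:s1] = True           # s1 stubborn state-1 nodes
    stubborn[n - s0:] = True       # s0 stubborn state-0 nodes
    t = 0.0
    while True:
        t += rng.exponential(1.0 / n)  # first of n unit-rate clocks
        if t > t_end:
            return int(x.sum())        # N_1(t_end)
        i = rng.integers(n)            # node whose clock rang
        if stubborn[i]:
            continue
        j = rng.integers(n - 1)        # uniform neighbour, skipping i
        if j >= i:
            j += 1
        x[i] = x[j]
```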
Because the \(s_0\) and \(s_1\) stubborn nodes always remain in states 0 and 1 respectively, \(N_1(t)\) is comprised between \(s_1\) and \(n-s_0\) for all t. The idea behind our analysis is that \(N_1(t)\) describes a birth-and-death process over the state-space \(\{s_1, \ldots , n-s_0\}\) with transition rates, for all \(s_1\leqslant k \leqslant n-s_0\),
$$\left\{ \begin{array}{ll} q_{k,k-1} = (k-s_1)(n-k)/(n-1) \\ q_{k,k+1} = k(n-k-s_0)/(n-1) \\ q_{k,k} = -q_{k,k-1} - q_{k,k+1} \end{array}\right. \tag{2}$$
Indeed, to move from state k to \(k-1\) we need a non-stubborn state-1 node to adopt the state of a state-0 node. There are \(k-s_1\) non-stubborn state-1 nodes and, for each of these, a proportion \((n-k)/(n-1)\) of the others is in state 0, hence \(q_{k,k-1} = (k-s_1)(n-k)/(n-1)\). We obtain \(q_{k,k+1}\) via an analogous reasoning and define \(q_{k,k}=-q_{k,k+1}-q_{k,k-1}\). Since the process only evolves by unit increments or decrements, \(q_{k,j}=0\) if \(j \notin \{k-1,k,k+1\}\). As expected we have \(q_{s_1,s_1-1}=0\) and \(q_{n-s_0,n-s_0+1}=0\). Finally we let \(Q=[q_{ij}]_{i,j}\) denote the transition rate matrix and \(e^{tQ}\) the exponential of tQ defined by \(e^{tQ} = \sum _{k=0}^{\infty } (tQ)^k / k!\) for any \(t>0\). From there we are able to compute the distribution of \(N_1(t)\) and its expected value at any point in time.
Theorem 1
Let Q be the matrix with entries described in (2) and let \(N_1(0)=n_1\) be given. Assuming \((s_0,s_1)\in S_n\) gives the numbers of stubborn nodes, the probability for \(N_1\) to equal k at time t is
$$p_{n_1,k}(t) := [e^{tQ}]_{n_1,k} \tag{3}$$
Hence,
$${\mathbb {E}}N_1(t) = \sum _{k=s_1}^{n-s_0} k \, p_{n_1,k}(t) \tag{4}$$
is the expected number of state-1 nodes at time t.
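Numerically, Theorem 1 reduces to one matrix exponential. A sketch using SciPy follows; the state space is indexed from \(s_1\) so that row \(n_1 - s_1\) of \(e^{tQ}\) corresponds to the initial value \(n_1\), and the function names are our own.

```python
import numpy as np
from scipy.linalg import expm

def rate_matrix(n, s0, s1):
    """Tridiagonal generator Q of Eq. (2) on states s1, ..., n - s0."""
    states = np.arange(s1, n - s0 + 1)
    m = len(states)
    Q = np.zeros((m, m))
    for i, k in enumerate(states):
        down = (k - s1) * (n - k) / (n - 1)
        up = k * (n - k - s0) / (n - 1)
        if i > 0:
            Q[i, i - 1] = down
        if i < m - 1:
            Q[i, i + 1] = up
        Q[i, i] = -(down + up)
    return Q, states

def expected_N1(n, s0, s1, n1, t):
    """Expectation of N_1(t) as in Eq. (4) of Theorem 1."""
    Q, states = rate_matrix(n, s0, s1)
    p = expm(t * Q)[n1 - s1]   # distribution of N_1(t) given N_1(0) = n1
    return float(states @ p)
```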
Because there are stubborn agents in both camps, consensus is never reached and instead the system fluctuates indefinitely within a state of equilibrium. More precisely, \(N_1(t)\) converges in distribution as \(t \rightarrow \infty\). The limiting distribution is called stationary and we denote it by \(\pi =(\pi _{s_1}, \ldots , \pi _{n-s_0})\). We would like to know whether the political systems we consider can be regarded as being in such a state. To this end, the long-term expectation of \(N_1(t)\) is given by the following theorem.
Theorem 2
Assuming \((s_0,s_1)\in S_n\) gives the numbers of stubborn agents, \(N_1(t)\) has a unique stationary distribution \(\pi =(\pi _{s_1}, \ldots , \pi _{n-s_0})\) and thus the expected number of opinion-1 holders converges to
$${\mathbb {E}}\pi = n\frac{s_1}{s_0+s_1} \tag{5}$$
The theory has been developed in our previous work (Vendeville et al. 2020), to which we refer the interested reader for more details and proofs of Theorems 1 and 2. We also provide there a closed-form formula for the computation of the convergence time.
We use the official database of the United Kingdom general election results from 1922 onwards, published by the House of Commons (Audickas et al. 2020), as well as results for presidential elections in the United States from 1912 onwards, manually collected from Wikipedia (see Footnote 2). Each time we are interested in the percentage of popular votes won by each of the two major parties (Conservative and Labour in the UK, Republicans and Democrats in the US). We assume these quantities correspond to pointwise observations of independent realisations of the voter model. The result of each election can then be forecast via Theorem 1, provided we have an estimate of the quantity of stubborn nodes \((s_0,s_1)\). Thus, our analysis is done in two steps: first we make for each election an estimate of \((s_0,s_1)\) based on previous results, then Eq. (4) gives us the expected value for the coming election that we use as a predictor.
For the sake of clarity we present our method in the UK case, but note that it directly translates to the US case. Different parties are present, the two major ones being Conservative (see Footnote 3) and Labour, the rest including the Liberal Democrats or the Scottish Nationalists amongst others. Because our model applies to a two-sided situation only, we cannot consider all of them at once. Thus, we aggregate all non-Conservative parties under the label 0 while Conservatives are attributed label 1. We let \(x_i\) denote the percentage of votes won by the Conservatives in the ith election, rounded to the nearest integer because our model cannot account for decimal values. In addition we let \(t_i\) denote the elapsed time, in years, since the starting point 1922. There have been \(m=27\) elections in total, with the last one taking place in 2019. Thus \(t_1=0\) and \(t_m=2019-1922=97\). We let \(x_m\) denote the percentage of votes won by the Conservatives in 2019. To concur with our theoretical framework we consider one percentage point won by the Conservatives (resp. non-Conservatives) as the observation of a node being in state 1 (resp. 0) amongst \(n=100\) of them. The \(x_i\)'s then correspond to pointwise observations at times \(t_i\) of a realisation of the process \(N_1(t)\) described in Sect. 3. All the reasoning described here and in the following will also be applied independently to the cases Labour versus non-Labour, Republican versus non-Republican (US) and Democrat versus non-Democrat (US).
To be able to use Theorem 1 to make predictions, we first need to estimate the proportion of stubborn nodes in the population, that is the percentage of votes which is guaranteed either for the Conservatives or for other parties. Let \(s_0\) denote the number of stubborn state-0 (non-Conservative) nodes and \(s_1\) that of state-1 (Conservative) ones. We look for the values \((s_0^\star ,s_1^\star )\) that maximise the log-likelihood of the observed data \((x_1, \ldots , x_m)\) under the assumption that those were generated via a realisation of the voter model. Say we want to predict results for the ith election. Because we need at least two datapoints to make an estimation, we require \(3\leqslant i \leqslant m+1\). Following the notations introduced in Sect. 3 we let \(p_{k,l}^{(s_0,s_1)}(t)\) denote the theoretical probability for \(N_1(t)\) to go from k to l in t units of time when there are respectively \(s_0\) and \(s_1\) state-0 and state-1 stubborn nodes. We seek to solve
$$\mathop{\arg\max}\limits_{s_0, s_1} \; \sum _{j=1}^{i-2} \log \left( p_{x_j,x_{j+1}}^{(s_0,s_1)}(t_{j+1}-t_j) \right) \tag{6}$$
Indeed, \(p_{x_j,x_{j+1}}^{(s_0,s_1)}(t_{j+1}-t_j)\) is by definition the probability for Conservatives to win \(x_{j+1}\) percent of the votes in the \((j+1)\)th election knowing they won \(x_j\) percent in the jth one. Thus we seek to simultaneously maximise the likelihood of all past elections results. Let \(Q^{(s_0,s_1)}\) be the matrix with entries calculated via (2). By Theorem 1, we have that (6) is equivalent to
$$\mathop{\arg\max}\limits_{s_0, s_1} \; \sum _{j=1}^{i-2} \log \left[ e^{(t_{j+1}-t_j)Q^{(s_0,s_1)}} \right]_{x_j,x_{j+1}} \tag{7}$$
The computation of a matrix exponential is typically done in cubic time and quickly becomes intractable as the size of the matrix increases. Here however, because we have \(n=100\), the number of possible couples \((s_0,s_1)\) is small enough that (7) can be solved by directly computing the sum for each of these couples individually. The optimal value \(s_1^\star\) for \(s_1\) then gives us an estimate of the percentage of votes "locked" by the Conservative party, i.e. the proportion of the population that will always root for them. The optimal value \(s_0^\star\) for \(s_0\) is an estimate of the quantity of such votes for all other parties aggregated.
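A brute-force sketch of this estimation step, reusing `rate_matrix` from the earlier sketch, is given below; `x` holds the rounded integer vote shares and `t` the corresponding elapsed years. It recomputes a matrix exponential for every election gap and candidate couple, which is slow but matches the direct enumeration described here; caching exponentials per distinct time gap would speed it up.

```python
import numpy as np
from scipy.linalg import expm

def estimate_stubborn(x, t, n=100):
    """Maximise (7) by enumerating all admissible couples (s0, s1)."""
    best, best_ll = None, -np.inf
    for s0 in range(n + 1):
        for s1 in range(n + 1 - s0):
            if s0 + s1 == 0:
                continue                      # (s0, s1) must lie in S_n
            if min(x) < s1 or max(x) > n - s0:
                continue                      # data outside the state space
            Q, _ = rate_matrix(n, s0, s1)
            ll = 0.0
            for j in range(len(x) - 1):
                P = expm((t[j + 1] - t[j]) * Q)
                # log(0) = -inf simply disqualifies this couple
                ll += np.log(P[x[j] - s1, x[j + 1] - s1])
            if ll > best_ll:
                best, best_ll = (s0, s1), ll
    return best
```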
To make a forecast for the ith election, we just have to apply Theorem 1 with \(Q=Q^{(s_0^\star ,s_1^\star )}\), \(n_1=x_{i-1}\) and \(t=t_i-t_{i-1}\). Equation 4 then gives us the expected percentage \({\tilde{x}}_i\) of votes gathered by Conservatives on that occasion. This can then be compared to the actual value \(x_i\) to assess the efficacy of our approach.
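The forecasting step then amounts to one call to the expectation of Theorem 1, for instance as sketched below (values passed in would be the estimated couple and the previous observed share):

```python
def forecast(x_prev, gap_years, s0, s1, n=100):
    """Predicted vote share: E[N_1(t)] started from the previous result."""
    return expected_N1(n, s0, s1, n1=x_prev, t=gap_years)
```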
Results for the UK
We show in Table 1 (left) the estimated values for \((s_0^\star ,s_1^\star )\), updated with each new election. They seem to globally stabilise between 15 and 25 for both parties. Look at the last value in the Labour case for example, which is (24, 15). According to our model, this means there is an estimated proportion of 15% of voters that will always vote Labour and 24% that will never do so. Note that these estimates fluctuate according to the variability of the data. For example, in 1922 and 1923 the Conservatives won 38% of the votes twice in a row (see Footnote 4), and as a result it was estimated that 38% of individuals will always vote Conservative and the remaining 62% never will. This is indeed what maximises the likelihood, with this configuration yielding a probability of 1 for the observed values. On the other hand, with pro-Conservative votes jumping from 38 to 61% in 1935, the estimated values of \(s_0\) and \(s_1\) dropped significantly to account for the wide range covered by the data.
Table 1 Evolution of the estimates for the proportion of stubborn agents \((s_0^\star , s_1^\star )\) over time
Percentage of popular votes for Conservatives in the UK, prediction and reality. The shaded area covers a \(\pm 5\%\) deviation away from the predictions
Percentage of popular votes for Labour in the UK, prediction and reality. The shaded area covers a \(\pm 5\%\) deviation away from the predictions
In Figs. 1 and 2 we compare our predictions, that is the expectations \({\tilde{x}}_i\), with the real outcomes \(x_i\). We plot both values for each election starting with the third one, which took place in 1924, because the optimisation problem (7) requires \(i\geqslant 3\). For both parties, most values seem to fluctuate around the 40% mark. The predictions follow the global tendency of the real outcomes, albeit with less variability. Also note that most predictions appear to be within a \(\pm 5\%\) vicinity of the real values.
Absolute error between prediction and reality for the UK elections, running average over the last 5 elections
To get a better insight we look at the absolute errors \(|{\tilde{x}}_i - x_i|\) of our predictions. We plot running averages over the last 5 elections in Fig. 3. After a few erratic first years they seem to stabilise between 2 and 8%. More precisely, if we discard the first few years up until 1960, where the model lacks a sufficient amount of data to properly calibrate, we get MAEs of 4.63% and 5.23% for Conservative and Labour respectively. Minimal values of 0.06% for Conservatives in 1979 and 0.40% for Labour in 2001 are observed, showing that our method was able to make very accurate predictions in these cases. Surprisingly however, the errors do not seem to decrease monotonically over time, but rather fluctuate. In fact, peak absolute errors were observed in 1983 (Labour, 13.0%) and 1997 (Conservative, 13.6%).
Results for the US
We apply the exact same method described above to the case of presidential elections in the United States. As before, we independently consider two cases, Republicans versus non-Republicans and Democrats versus non-Democrats. Presidential elections in the US take place every 4 years and we start with the year 1912, then 1916, 1920, and so on. Here again, keep in mind that due to how the American system works, the party with the most popular votes does not necessarily win the election. The first estimation we are able to make is based on the first two elections and thus our first prediction is for 1920.
Percentage of popular votes for Republicans in the US, prediction and reality. The shaded area covers a \(\pm 5\%\) deviation away from the predictions
Percentage of popular votes for Democrats in the US, prediction and reality. The shaded area covers a \(\pm 5\%\) deviation away from the predictions
We observe similar results as in the UK case. The estimated numbers of stubborn nodes \((s_0^\star , s_1^\star )\) (Table 1, right) are close, albeit a little lower, stabilising at (18, 17) for Republicans and (16, 14) for Democrats. Regarding the predictions (Figs. 4, 5), we again see a majority of them within a 5% margin from the actual outcomes, and a prediction curve that looks more stable than the slightly spiky ones of the real values. Note that because of the two-party system in place in the United States, both Republicans and Democrats see their share of popular votes fluctuate around the 50% mark. In the previous case, it was rather around 40% because of the space occupied by smaller parties such as the Liberal Democrats or the Scottish National Party amongst others. The two-sided aspect of our model (always one party, labelled 0, versus another, labelled 1) may thus be better suited to the study of the US system.
As for the errors, running averages over the last 5 elections are shown in Fig. 6. Here again, after a few erratic first years, values appear to be comprised between 2 and 8%. However, where errors in the UK case seemed to increase in the last few years, here they are dropping. In fact, our most accurate forecast regarding Democrat votes is for 2016, with only 0.04% error. For Republicans it is in 1940 with 0.10%. Peak errors were again of a similar magnitude for both parties, in 1972 (Republicans, 14.0%) and 1964 (Democrats, 12.3%). The MAE over all elections, starting in 1940 when forecasts start to stabilise, is 4.27% for Republicans and 4.83% for Democrats. This is slightly better than in the UK case (4.63% and 5.23%). The MAE over both cases is then 4.74%.
Absolute error between prediction and reality for the US elections, running average over the last 5 elections
Conclusion and future work
In this paper we proposed a new method for the forecast of election results. Many published works have used Twitter data for this purpose, usually applying machine learning algorithms to extract sentiment from tweets and estimate a candidate's popularity this way. Despite promising results, such methods have been criticised in the past few years, with problems ranging from bot presence to the limits of text mining casting doubt over their reliability. As such, our model does not rely on Twitter data at all. Instead, we used official results of past elections in the United Kingdom and in the United States to try and predict the outcomes of future ones.
Our method is based on findings from a previous work of ours, where we conducted a theoretical analysis of the voter model with stubborn nodes on strongly connected graphs. Here we applied those results to predict the percentage of popular votes won by the Conservative and Labour parties in the United Kingdom, and the percentage of popular votes collected by the Republican and Democratic parties in the United States. To do so, we considered official results of past elections as observations of independent realisations of the voter model. From there we were able to perform time-evolving estimates of the model parameters and use them to forecast an outcome.
Our model yielded an MAE of 4.74%, reaching absolute errors as low as 0.04% and as high as 14%. In their review, Gayo-Avello (2013) suggests that any model used to predict election outcomes should not have an MAE higher than 1 or 2%. This is because the result of an election is more often than not a matter of just a few percentage points. According to this standard, our MAE is not low enough to reliably predict the outcome of an election. Some previous works reached error averages as low as 0.63% (Saleiro et al. 2016) and 1.65% (Tumasjan et al. 2011). Additionally, we tested our method against the baseline of systematically predicting the exact result of the previous election. This simple method returned an overall 5.03% average error, which is not much worse than the 4.74% obtained via our method. Moreover, the first few election results were discarded in both cases, as it was deemed that the model did not have enough data at that point to make predictions with a high enough confidence. The choice of such a cutoff, though, is based on our observation of the model's behaviour and is purely subjective; changing it would in turn make for different results that might be better or worse.
Although our method did not yield significant enough results here, we believe it is an interesting step in a novel direction. The use of results from previous elections provides a new take on the matter, which only relies on official data. Moreover, our model does not only forecast election results: it also gives us estimates of the proportion of stubborn voters, that is the proportion of individuals who will always (or never) vote for the considered parties. This provides meaningful insight on the political landscape of the considered areas.
Several extensions of the model could be considered to improve its accuracy. First of all, adding in-between election polls to the data would go a long way in improving the estimates. With a gap of a few years from one election to another, there is too wide a range of possibilities for the model to account for. Second, one could take a deeper look into the past of a country's results and try to detect tendencies regarding landslide victories, incumbency re-election and so forth. We believe that having a deeper understanding of the specific country one is working with could substantially improve the model calibration process. Finally, combining our method with Twitter data-based estimations may lead to higher accuracy.
All code used is available online at https://github.com/AntoineVendeville/HowOpinionsCrystallise. Data for the United Kingdoms elections is available online (Audickas et al. 2020). Data for the United States elections has been crawled from Wikipedia (https://en.wikipedia.org/wiki/United_States_presidential_election#Popular_vote_results).
1. Conservative and Labour in the UK, Republican and Democrat in the US.
2. https://en.wikipedia.org/wiki/United_States_presidential_election#Popular_vote_results.
3. The dataset also includes in Conservative results: National, National Liberal and National Labour candidates for 1931–1935; National and National Liberal candidates for 1945; National Liberal candidates from 1945 to 1970.
4. Remember that those values are rounded to the nearest integer to fit the needs of our model; the actual results were 38.5% and 38%.
MAE: mean absolute error
Audickas L, Cracknell R, Loft P (2020) UK election statistics: 1918-2019—a century of elections. https://commonslibrary.parliament.uk/research-briefings/cbp-7529/
Bovet A, Morone F, Makse HA (2018) Validation of Twitter opinion trends with national polling aggregates: Hillary Clinton vs Donald Trump. Sci Rep 8:8673. https://doi.org/10.1038/s41598-018-26951-y
Caldarelli G, Chessa A, Pammolli F, Pompa G, Puliga M, Riccaboni M, Riotta G (2014) A multi-level geographical study of italian political elections from twitter data. PLoS ONE 9(5):1–11. https://doi.org/10.1371/journal.pone.0095809
Chen P, Redner S (2005) Majority rule dynamics in finite dimensions. Phys Rev E. https://doi.org/10.1103/PhysRevE.71.036101
Clifford P, Sudbury A (1973) A model for spatial conflict. Biometrika 60(3):581–588. https://doi.org/10.1093/biomet/60.3.581
Dokoohaki N, Zikou F, Gillblad D, Matskin M (2015) Predicting Swedish elections with twitter: a case for stochastic link structure analysis. In: IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM), pp 1269–1276. https://doi.org/10.1145/2808797.2808915
Fernley J, Ortgiese M (2019) Voter models on subcritical inhomogeneous random graphs. arXiv:1911.13187
Fink C, Bos N, Perrone A, Liu E, Kopecky J (2013) Twitter, public opinion, and the 2011 Nigerian presidential election. In: International conference on social computing, pp 311–320. https://doi.org/10.1109/SocialCom.2013.50
Garcia ACB, Silva W, Correia L (2018) The PredNews forecasting model. In: Proceedings of the 19th annual international conference on digital government research: governance in the data age. Association for Computing Machinery, New York. https://doi.org/10.1145/3209281.3209295
Gayo-Avello D (2012) No, you cannot predict elections with twitter. IEEE Internet Comput 16(6):91–94. https://doi.org/10.1109/MIC.2012.137
Gayo-Avello D (2013) A meta-analysis of state-of-the-art electoral prediction from twitter data. Soc Sci Comput Rev 31(6):649–679. https://doi.org/10.1177/0894439313493979
Grimaldi D, Cely JD, Arboleda H (2020) Inferring the votes in a new political landscape: the case of the 2019 Spanish presidential elections. J Big Data. https://doi.org/10.1186/s40537-020-00334-5
Hassin Y, Peleg D (2002) Distributed probabilistic polling and applications to proportionate agreement. Inf Comput 171(2):248–268. https://doi.org/10.1006/inco.2001.3088
Holley RA, Liggett TM (1975) Ergodic theorems for weakly interacting infinite systems and the voter model. Ann Probab 3(4):643–663. https://doi.org/10.1214/aop/1176996306
Huberty ME (2013) Multi-cycle forecasting of congressional elections with social media. In: Proceedings of the 2nd workshop on politics, elections and data. PLEAD '13. Association for Computing Machinery, New York, pp 23–30. https://doi.org/10.1145/2508436.2508439
Jungherr A, Jürgens P, Schoen H (2012) Why the pirate party won the German election of 2009 or the trouble with predictions: a response to Tumasjan, A., Sprenger, T. O., Sander, P. G., & Welpe, I. M. "Predicting elections with twitter: What 140 characters reveal about political sentiment". Soc Sci Comput Rev 30(2):229–234. https://doi.org/10.1177/0894439311404119
Jungherr A, Schoen H, Posegga O, Jürgens P (2017) Digital trace data in the study of public opinion: an indicator of attention toward politics rather than political support. Soc Sci Comput Rev 35(3):336–356. https://doi.org/10.1177/0894439316631043
Klamser PP, Wiedermann M, Donges JF, Donner RV (2017) Zealotry effects on opinion dynamics in the adaptive voter model. Phys Rev E. https://doi.org/10.1103/PhysRevE.96.052315
Mobilia M (2003) Does a single zealot affect an infinite group of voters? Phys Rev Lett 91:028701. https://doi.org/10.1103/PhysRevLett.91.028701
Mobilia M, Petersen A, Redner S (2007) On the role of zealotry in the voter model. J Stat Mech Theory Exp 2007(08):08029–08029. https://doi.org/10.1088/1742-5468/2007/08/p08029
Mossel E, Neeman J, Tamuz O (2014) Majority dynamics and aggregation of information in social networks. Auton Agent Multi Agent Syst 28(3):408–429. https://doi.org/10.1007/s10458-013-9230-4
Mukhopadhyay A, Mazumdar RR, Roy R (2020) Voter and majority dynamics with biased and stubborn agents. J Stat Phys. https://doi.org/10.1007/s10955-020-02625-w
O'Connor BT, Balasubramanyan R, Routledge BR, Smith NA (2010) From tweets to polls: Linking text sentiment to public opinion time series. In: ICWSM
Perron E, Vasudevan D, Vojnovic M (2009) Using three states for binary consensus on complete graphs. IEEE INFOCOM 2009:2527–2535. https://doi.org/10.1109/INFCOM.2009.5062181
Phillips L, Dowling C, Shaffer K, Hodas NO, Volkova S (2017) Using social media to predict the future: a systematic literature review. CoRR abs/1706.06134
Saleiro P, Gomes L, Soares C (2016) Sentiment aggregate functions for political opinion polling using microblog streams. In: Proceedings of the ninth international conference on computer science and software engineering. C3S2E '16. Association for Computing Machinery, New York, pp 44–50. https://doi.org/10.1145/2948992.2949022
Sood V, Tibor A, Redner S (2008) Voter models on heterogeneous networks. Phys Rev E 77:041121. https://doi.org/10.1103/PhysRevE.77.041121
Thapen NA, Ghanem MM (2013) Towards passive political opinion polling using twitter. CEUR Workshop Proc 1110:19–34
Tumasjan A, Sprenger TO, Sandner PG, Welpe IM (2011) Election forecasts with twitter: how 140 characters reflect the political landscape. Soc Sci Comput Rev 29(4):402–418. https://doi.org/10.1177/0894439310386557
Usher J, Dondio P (2020) Brexit election: forecasting a conservative party victory through the pound using arima and facebook's prophet. In: Proceedings of the 10th international conference on web intelligence, mining and semantics. WIMS 2020. Association for Computing Machinery, New York, pp 123–128. https://doi.org/10.1145/3405962.3405967
Vendeville A, Guedj B, Zhou S (2020) Voter model with stubborn agents on strongly connected social networks. arXiv:2006.07265
Yildiz ME, Pagliari R, Ozdaglar A, Scaglione A (2010) Voting models in random networks. In: Information theory and applications workshop (ITA), pp 1–7. https://doi.org/10.1109/ITA.2010.5454090
Yildiz ME, Ozdaglar A, Acemoglu D, Saberi A, Scaglione A (2013) Binary opinion dynamics with stubborn agents. ACM Trans Econ Comput. https://doi.org/10.1145/2538508
Antoine Vendeville is a PhD student at University College London (United Kingdom) in the Computer Science department. Benjamin Guedj is a Principal Research Fellow in machine learning at University College London and a tenured research scientist at Inria, France. Shi Zhou is an Associate Professor at the Department of Computer Science, University College London. All three authors are affiliated with UCL's Centre for Doctoral Training in Cybersecurity and with UCL's Centre for Artificial Intelligence.
This project was funded by the UK EPSRC grant EP/S022503/1 that supports the Centre for Doctoral Training in Cybersecurity delivered by UCL's Departments of Computer Science, Security and Crime Science, and Science, Technology, Engineering and Public Policy.
Department of Computer Science, University College London, London, UK
Antoine Vendeville, Benjamin Guedj & Shi Zhou
Centre for Doctoral Training in Cybersecurity, University College London, London, UK
Centre for Artificial Intelligence, University College London, London, UK
Inria Lille - Nord Europe Research Centre, Lille, France
Benjamin Guedj
Antoine Vendeville
Shi Zhou
This is a joint work by the three authors, with A.V. being the leader of the project. All authors read and approved the final manuscript.
Correspondence to Antoine Vendeville.
Vendeville, A., Guedj, B. & Zhou, S. Forecasting elections results via the voter model with stubborn nodes. Appl Netw Sci 6, 1 (2021). https://doi.org/10.1007/s41109-020-00342-7
Voter model
Opinion dynamics
Special issue on Epidemics Dynamics & Control on Networks
Filters: First Letter Of Last Name is B
Berndt, R., Gimzewski, J. K. & Schlittler, R. R. Tunneling characteristics at atomic resolution on close-packed metal surfaces. Ultramicroscopy 42, 528–532 (1992).
Fritz, J. et al. Translating biomolecular recognition into nanomechanics. Science 288, 316–318 (2000).
Berger, R. et al. Sensor Technology in the Netherlands: State of the Art 33–42 (Springer Netherlands, 1998).
McKinnon, A. W. et al. Tip geometry effects in photon emission from the scanning tunnelling microscope. (1992).
Berger, R., Gerber, C., Gimzewski, J. K., Meyer, E. & Güntherodt, H. J. Thermal analysis using a micromechanical calorimeter. Applied Physics Letters 69, 40–42 (1996).
Joachim, C., Bergaud, C., Pinna, H., Tang, H. & Gimzewski, J. K. Is There A Minimum Size and a Maximum Speed for a Nanoscale Amplifier?. Annals of the New York Academy of Sciences 852, 243–256 (1998).
Sass, J. K., Gimzewski, J. K., Haiss, W., Besocke, K. H. & Lackey, D. Theoretical aspects and experimental results of STM studies in polar liquids. Journal of Physics: Condensed Matter 3, S121 (1991).
Berger, R. et al. Surface stress in the self-assembly of alkanethiols on gold probed by a force microscopy technique. Applied Physics A: Materials Science & Processing 66, S55–S59 (1998).
Berger, R. et al. Surface stress in the self-assembly of alkanethiols on gold. Science 276, 2021–2024 (1997).
Fritz, J. et al. Stress at the solid-liquid interface of self-assembled monolayers on gold investigated with a nanomechanical sensor. Langmuir 16, 9694–9696 (2000).
Schaffner, M. - H. et al. Size-dependent light emission from mass-selected clusters. The European Physical Journal D-Atomic, Molecular, Optical and Plasma Physics 2, 79–82 (1998).
Gimzewski, J. K., Humbert, A., Bednorz, J. G. & Reihl, B. Silver films condensed at 300 and 90 K: scanning tunneling microscopy of their surface topography. Physical review letters 55, 951 (1985).
Lang, H. P. et al. Sequential position readout from arrays of micromechanical cantilever sensors. Applied Physics Letters 72, 383–385 (1998).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Scanning-tunneling-microscope study of antiphase domain boundaries, dislocations, and local mass transport on Au (110) surfaces. Physical Review B 45, 6844 (1992).
Berndt, R., Baratoff, A. & Gimzewski, J. K. Scanning Tunneling Microscopy and Related Methods 269–280 (Springer Netherlands, 1990).
Berndt, R. & Gimzewski, J. K. The role of proximity plasmon modes on noble metal surfaces in scanning tunneling microscopy. Surface science 269, 556–559 (1992).
Azov, V. A. et al. Resorcin[4]arene Cavitand-Based Molecular Switches. Advanced Functional Materials 16, 147–156 (2006).
Gimzewski, J. K. et al. Plasma surface interactions in the TCA tokamak: a preliminary study using deposition probes. (1982).
Berndt, R., Schlittler, R. R. & Gimzewski, J. K. Photon emission scanning tunneling microscope. Journal of Vacuum Science & Technology B 9, 573–577 (1991).
Berndt, R., Schlittler, R. R. & Gimzewski, J. K. Photon emission processes in STM. AIP Conf Proceedings 241, 328–336 (1992).
Berndt, R. & Gimzewski, J. K. Photon emission in scanning tunneling microscopy: interpretation of photon maps of metallic systems. SPIE MILESTONE SERIES MS 107, 376–376 (1995).
Berndt, R. & Gimzewski, J. K. Photon emission in scanning tunneling microscopy: Interpretation of photon maps of metallic systems. Physical Review B 48, 4746 (1993).
Berndt, R. & Gimzewski, J. K. Photon emission from transition metal surfaces in scanning tunneling microscopy. International Journal of Modern Physics B 7, 516–519 (1993).
Berndt, R., Gimzewski, J. K. & Schlittler, R. R. Photon emission from small particles in an STM. Zeitschrift für Physik D Atoms, Molecules and Clusters 26, 87–88 (1993).
Berndt, R., Gimzewski, J. K. & Schlittler, R. R. Photon emission from nanostructures in an STM. Nanostructured Materials 3, 345–348 (1993).
Berndt, R. & Gimzewski, J. K. Photon Emission from C60 in a Nanoscopic Cavity. Proceedings of the NATO Advanced Research Workshop: (Humboldt-Universität zu Berlin, 1994).
Berndt, R. et al. Photon emission from adsorbed C60 molecules with sub-nanometer lateral resolution. Applied Physics A 57, 513–516 (1993).
Berndt, R. et al. Photon emission at molecular resolution induced by a scanning tunneling microscope. Science 262, 1425–1427 (1993).
Lüthi, R. et al. Parallel nanodevice fabrication using a combination of shadow mask and scanning probe methods. Applied physics letters 75, 1314–1316 (1999).
Zhang, W. et al. ORGN 414-Folding polyrotaxanes using secondary noncovalent bonding interactions. ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY 235, (AMER CHEMICAL SOC 1155 16TH ST, NW, WASHINGTON, DC 20036 USA, 2008).
Gimzewski, J. K. et al. Near Field Optics 333–340 (Springer Netherlands, 1993).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of the temporal evolution of the (1×2) reconstructed Au (110) surface using scanning tunneling microscopy. Journal of Vacuum Science & Technology B 9, 897–901 (1991).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of mass transport on Au (110)-(1×2) reconstructed surfaces using scanning tunneling microscopy. Surface Science Letters 247, A213 (1990).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of mass transport on Au (110)-(1×2) reconstructed surfaces using scanning tunneling microscopy. Surface Science 247, 327–332 (1991).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of local photoemission using a scanning tunneling microscope. Ultramicroscopy 42, 366–370 (1992).
Meyer, E. et al. Impact of Electron and Scanning Probe Microscopy on Materials Research 339–357 (Springer Netherlands, 1999).
Mat. Sb. (N.S.), 1972, Volume 88(130), Number 4(8), Pages 504–521 (Mi msb3193)
This article is cited in 10 scientific papers
Hypoelliptic differential equations and pseudodifferential operators with operator-valued symbols
V. V. Grushin
Abstract: Sufficient conditions are established for the hypoellipticity of pseudodifferential operators with operator-valued symbols. These results are used to prove the hypoellipticity of new classes of linear partial differential equations.
Bibliography: 43 titles.
Mathematics of the USSR-Sbornik, 1972, 17:4, 497–514
UDC: 517.9
MSC: Primary 35H05, 35S05; Secondary 47G05
Received: 11.06.1971
Citation: V. V. Grushin, "Hypoelliptic differential equations and pseudodifferential operators with operator-valued symbols", Mat. Sb. (N.S.), 88(130):4(8) (1972), 504–521; Math. USSR-Sb., 17:4 (1972), 497–514
\Bibitem{Gru72}
\by V.~V.~Grushin
\paper Hypoelliptic differential equations and pseudodifferential operators with operator-valued symbols
\jour Mat. Sb. (N.S.)
\yr 1972
\vol 88(130)
\issue 4(8)
\pages 504--521
\mathnet{http://mi.mathnet.ru/msb3193}
\mathscinet{http://www.ams.org/mathscinet-getitem?mr=316879}
\zmath{https://zbmath.org/?q=an:0243.35020}
\transl
\jour Math. USSR-Sb.
\yr 1972
\vol 17
\issue 4
\pages 497--514
\crossref{https://doi.org/10.1070/SM1972v017n04ABEH001599}
http://mi.mathnet.ru/eng/msb3193
http://mi.mathnet.ru/eng/msb/v130/i4/p504
This publication is cited in the following articles:
A. V. Fursikov, "On a class of globally hypoelliptic operators", Math. USSR-Sb., 20:3 (1973), 383–405
Yu. V. Egorov, "Subelliptic operators", Russian Math. Surveys, 30:2 (1975), 59–118
S. A. Smagin, "Meromorphicity of $P^z$, where $P$ is a matrix", Funct. Anal. Appl., 9:1 (1975), 83–84
S. A. Smagin, "Complex powers of hypoelliptic systems in $\mathbf R^n$", Math. USSR-Sb., 28:3 (1976), 291–300
V. V. Grushin, "Construction of a parametrix for degenerate elliptic operators by the method of double-scale asymptotic expansions", Funct. Anal. Appl., 11:2 (1977), 143–144
A. I. Karol', "Operator-valued pseudodifferential operators and the resolvent of a degenerate elliptic operator", Math. USSR-Sb., 49:2 (1984), 553–567
V. P. Maslov, "Non-standard characteristics in asymptotic problems", Russian Math. Surveys, 38:6 (1983), 1–42
G. G. Kazaryan, "On a functional index of hypoellipticity", Math. USSR-Sb., 56:2 (1987), 333–347
Toshihiko Hoshiro, "On Levi-type conditions for hypoellipticity of certain differential operators", Communications in Partial Differential Equations, 17:5-6 (1992), 905
T. Matsuzawa, "Gevrey hypoellipticity of the Grushin operators and the heat kernel method", Integral Transforms and Special Functions, 6:1-4 (1998), 63
Use a circuit to multiply two resistances
Given two resistors of unknown resistances and an infinite supply of wires and other resistors, create a circuit using multiple series and parallel combinations such that the effective resistance of the circuit is equal to the product of the two unknown resistances.
Mathematically (for those who don't want to bother about the physics),
$$s(a,b)=a+b$$ $$p(a,b)=\frac{ab}{a+b}$$
Using the two functions given above, two variables $x$ and $y$ and any other numbers, write a mathematical expression that evaluates to give $xy$. You cannot use any other operators or functions, not even division.
Your score is given by the number of times a function appears in the expression, so try to minimize this score.
Example: $s(s(x,p(y,2)),p(3.5,1))$ is a valid expression with a score of 4.
P.S. I'm not 100% sure this is even possible, if not please try giving a proof for the same.
mathematics optimization physics
ghosts_in_the_code
$\begingroup$ Your example uses the resistor 'y' twice. $\endgroup$ – Chris Taylor Apr 6 '16 at 15:18
$\begingroup$ @ChrisTaylor I was unsure whether to allow that or not, anyways now I have disallowed it. Thanks for spotting anyways. $\endgroup$ – ghosts_in_the_code Apr 6 '16 at 15:22
$\begingroup$ Perhaps this might be useful: $p(a,b)=\frac{1}{s(1/a,1/b)}$ $\endgroup$ – Dragonemperor42 Apr 6 '16 at 15:44
$\begingroup$ s and p both have units of $\Omega$. No nested set of one inside the other will result in units of $\Omega^2$. $\endgroup$ – user1717828 Apr 7 '16 at 0:44
$\begingroup$ This makes no sense dimensionally. The product of two resistances is not a resistance! $\endgroup$ – hmakholm left over Monica Apr 7 '16 at 11:14
Another proof that it is impossible.
Consider the case that resistor $x$ has zero resistance. Then the complete circuit must also have zero resistance, so there must be a path with zero resistance; either this path has no resistors or it has only resistor $x$.
If there are no resistors on the path, the circuit will always have a resistance of zero, which is obviously wrong. If the path has only $x$, then the maximum possible resistance of the circuit is $x$, so it can't be correct if $x\ne0$ and $y>1$. Therefore such a circuit is impossible.
Alternatively:
Note that for positive $a$ and $b$, $p(a,b)\le s(a,b)=a+b$, so the total resistance is at most the sum of all the resistances in the circuit. In particular, the sum of all the fixed resistances must be at least $xy-x-y$. However, this value increases without bound as $x$ and $y$ increase, so the circuit is impossible.
f''
$\begingroup$ Why must the complete circuit have 0 resistance if $x$ has 0 resistance? $\endgroup$ – Shuri2060 Apr 6 '16 at 17:53
$\begingroup$ @QuestionAsker Because then the product of $x$ and $y$ is zero. $\endgroup$ – f'' Apr 6 '16 at 17:58
$\begingroup$ @RyanO'Hara The same circuit has to work if $x=0$ and also if $x\ne0$. I have shown that if it works for $x=0$, then it does not work for $x\ne0$. $\endgroup$ – f'' Apr 6 '16 at 18:47
I think it is impossible.
Consider the degrees of the expressions which you can use: $x$ and $y$ have a degree of 1. You are also allowed to use constants which have a degree of 0.
Let the function $\deg(A)$ calculate the degree of the expression $A$.
Then under the restrictions of $a$ and $b$ given in the question:
$$\deg(s(a,b))=\max(\deg(a),\deg(b))$$
(The $=$ can be used rather than $≤$ since $x$ and $y$ can only be used once overall)
$$\deg(p(a,b))=\deg(a)+\deg(b)-\deg(s(a,b))$$
Function $s$ will always have a degree equal to 1 if you use either or both of the variables $x$ or $y$ as the parameters. Otherwise, it'll have a degree of 0 if only constants are used in the parameters.
This still leaves you with expressions of degree 0 or 1 to use.
Function $p$ will always give an expression of degree 0 or 1 if the parameters have degrees 0 or 1.
Hence it is impossible to reach the expression $xy$ which has degree 2.
Shuri2060
$\begingroup$ Well, it's more fundamental than that. "A resistance of xy" is undefined — it's like cutting a piece of rope with a length of 2 ft². $\endgroup$ – Peregrine Rook Apr 6 '16 at 15:59
$\begingroup$ @PeregrineRook: What you're calling "units" is basically what Question Asker is calling the "degree". His argument is basically that the two operations $p$ and $s$ never change the degree of their inputs (assuming that their inputs are both of degree 1); but this is basically the same as saying that if $x$ and $y$ are both in Ohms, then $p(x,y)$ and $s(x,y)$ are also both in Ohms. $\endgroup$ – Michael Seifert Apr 6 '16 at 16:06
$\begingroup$ @ Question Asker: Yes. We know that we can map the math onto the circuit and back. If an expression for xy existed, the same expression would fail to multiply resistances measured in a different unit (e.g. deci-ohms). But the expressions p and s give the right physical answers regardless of the choice of unit. $\endgroup$ – jsocolar Apr 6 '16 at 16:13
$\begingroup$ @QuestionAsker: My argument is pretty similar to your argument, except it says, not that it's impossible to achieve the desired result, but that there is no such thing. For example, if x = 2 ohm and y = 3 ohm, is xy = 6 ohm? But x = 2,000,000 µohm and y = 3,000,000 µohm, and 6,000,000,000,000 µohm = 6,000,000 ohm. MichaelSeifert: I disagree (pun not intended). QA is using "degree" to mean dimensionality. Inch, foot, mile, cm and km are all the same dimensionality — acre and ft² are a different dimensionality. $\endgroup$ – Peregrine Rook Apr 6 '16 at 16:16
$\begingroup$ Give unknown resistors of resistance $x$ ohms and $y$ ohms, it makes sense to ask for a circuit of resistance $xy$ ohms. This is how I interpreted the question (this interpretation is consistent with the purely mathematical formulation of the question). $\endgroup$ – Julian Rosen Apr 6 '16 at 17:12
If we have multiple copies of each of the unknown resistors (so we can use the variables $x$ and $y$ more than once in the mathematical formulation), and we allow complex-valued resistors, then this is possible.
We can build the following operators: $$a(x):=s(p(s(x,-1),1),-1)=\frac{-1}{x},$$ $$ b(x):=a(s(a(s(a(s(x,i)),-i)),i))=-x, \hspace{5mm}\text{where $i$ is the imaginary unit}, $$ $$ c(x):=s(b(p(x,s(b(x),1))),x)=x^2, $$ $$ d(x):=a(s(a(s(a(s(x,-\sqrt{2})),-1/\sqrt{2})),-\sqrt{2}))=\frac{x}{2}. $$ Finally, we have $$ d(s(s(c(s(x,y)),b(c(x))),b(c(y))))=xy. $$
Julian Rosen
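As a sanity check (not part of the original answer), these identities can be verified symbolically. A minimal sketch, assuming SymPy is available:

```python
# Symbolic verification of the construction above using SymPy.
import sympy as sp

x, y = sp.symbols('x y')
i, r2 = sp.I, sp.sqrt(2)

s = lambda a, b: a + b            # series combination
p = lambda a, b: a * b / (a + b)  # parallel combination

A = lambda t: s(p(s(t, -1), 1), -1)                      # A(t) = -1/t
B = lambda t: A(s(A(s(A(s(t, i)), -i)), i))              # B(t) = -t
C = lambda t: s(B(p(t, s(B(t), 1))), t)                  # C(t) = t**2
D = lambda t: A(s(A(s(A(s(t, -r2)), -1/r2)), -r2))       # D(t) = t/2

expr = D(s(s(C(s(x, y)), B(C(x))), B(C(y))))
print(sp.simplify(expr))  # expected output: x*y
```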
$\begingroup$ I wonder if it's possible to use only negative-valued resistors and avoid the complex ones. $\endgroup$ – Julian Rosen Apr 6 '16 at 21:38
$\begingroup$ Negative resistances are generally harder to construct than imaginary ones in any case - all the latter need are inductors and capacitors. $\endgroup$ – Zandar Apr 6 '16 at 21:48
$\begingroup$ You can simplify $d(x)=p(x,x)$. $\endgroup$ – 2012rcampion Apr 7 '16 at 0:39
$\begingroup$ @2012rcampion Indeed, that's much easier! $\endgroup$ – Julian Rosen Apr 7 '16 at 0:41
Here is a purely mathematical version of f'''s answer.
First note that the functions $s(x,y)$ and $p(x,y)$ always take non-negative inputs to positive outputs; hence, so does any operation build up from them. (To make this argument fully rigorous, it's probably most natural to consider them as total binary operations on the extended non-negative real line $\{ x \in \mathbb{R}\ |\ x \geq 0 \} \cup \{\infty\}$, with $p(x,\infty)$ and $p(\infty,x)$ defined to be $x$; or, even better, on the projective line $\mathbb{P}^1(\mathbb{R}_{>0})$. But I won't quite be that formal here.)
Now I claim: for any function $f(x,y)$ built up from the operations $s$ and $p$ together with non-negative constants, if $f(0,y) = 0$ for all $y$, then $f(x,y) = 0$ for all $x,y$.
Suppose this failed. Then there would be some minimal counterexample $f(x,y)$ — that is, minimal in its complexity as a formula built from $s$ and $p$, not necessarily minimal in its numerical output values.
It can't be a constant, so it's either of the form $s(g(x,y),h(x,y))$ or $p(g(x,y),h(x,y))$, where $g$ and $h$ are simpler expressions, so in particular are not counterexamples to the claim.
First case: $f(x,y) = s(g(x,y),h(x,y))$. So $g(0,y) + h(0,y) = 0$ for all $y$. So, since everything involved is non-negative, $g(0,y) = h(0,y) = 0$, for all $y$; so since $g$ and $h$ were not counterexamples to the claim, $g(x,y) = h(x,y) = 0$ for all $x,y$. So $f(x,y) = 0$ always — contradicting the choice of it as a counterexample!
Second case: $f(x,y) = p(g(x,y),h(x,y))$. (This case gets a bit mathematically involved.) Now $p(a,b) = 0$ exactly if at least one of $a,b$ is 0. So we know that for all $y$, at least one of $g(0,y)$ and $h(0,y)$ is $0$. So either $g(0,y)$ is zero for a dense set of $y$, or $h(0,y)$ is zero on some non-empty open set of $y$'s. But these are rational functions — and if a rational function is 0 either on a dense set or on a non-empty open set, then it must be zero everywhere. So either $g(0,y)$ is always zero, or $h(0,y)$ is. Since $g$ and $h$ aren't counterexamples, this means that either $g(x,y)$ is always zero, or $h(x,y)$ is. But either of these means that $f(x,y)$ is always zero. Contradiction again!
So no such function is possible.
Peter LeFanu Lumsdaine
$\begingroup$ This proof is missing f(x,y) = x $\endgroup$ – ffao Apr 6 '16 at 19:01
$\begingroup$ @ffao: good point — and that is a counterexample to the claim as stated (and so are constant multiples of it). So this doesn't work as given. Hopefully it can be fixed — f'''s argument sounds very convincing physically, so it should be possible to state it mathematically! $\endgroup$ – Peter LeFanu Lumsdaine Apr 6 '16 at 19:17
Difference between revisions of "Newton-Leibniz formula"
Ulf Rehmann (talk | contribs)
m (moved Newton–Leibniz formula to Newton-Leibniz formula: ascii title)
Nikita2 (talk | contribs)
(TeX encoding is done + hyperlinks)
The formula expressing the value of a definite integral of a given function $f$ over an interval as the difference of the values at the end points of the interval of any primitive (cf. Integral calculus) $F$ of the function $f$:
\begin{equation}\label{eq:*}
\int\limits_a^bf(x)\,dx = F(b)-F(a).
\end{equation}
It is named after I. Newton and G. Leibniz, who both knew the rule expressed by \ref{eq:*}, although it was published later.

If $f$ is Lebesgue integrable over $[a,b]$ and $F$ is defined by
\begin{equation*}
F(x) = \int\limits_a^xf(t)\,dt + C,
\end{equation*}
where $C$ is a constant, then $F$ is absolutely continuous, $F'(x) = f(x)$ almost-everywhere on $[a,b]$ (everywhere if $f$ is continuous on $[a,b]$) and \ref{eq:*} is valid.

A generalization of the Newton–Leibniz formula is the Stokes formula for orientable manifolds with a boundary.
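As an illustration (not part of the encyclopedia entry), the formula is easy to check symbolically for a concrete function. A minimal sketch, assuming SymPy:

```python
# Check the Newton-Leibniz formula for f(x) = x**2 on [0, 1].
import sympy as sp

x = sp.symbols('x')
f = x**2
F = sp.integrate(f, x)             # a primitive of f (integration constant omitted)

a, b = 0, 1
lhs = sp.integrate(f, (x, a, b))   # the definite integral
rhs = F.subs(x, b) - F.subs(x, a)  # F(b) - F(a)
print(lhs == rhs, lhs)             # expected: True 1/3
```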
The theorem expressed by the Newton–Leibniz formula is called the fundamental theorem of calculus, cf. e.g. [a1].
[a1] K.R. Stromberg, "Introduction to classical real analysis" , Wadsworth (1981) pp. 318ff
[a2] W. Rudin, "Real and complex analysis" , McGraw-Hill (1966) pp. 165ff
Newton-Leibniz formula. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Newton-Leibniz_formula&oldid=22843
This article was adapted from an original article by L.D. Kudryavtsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Retrieved from "https://encyclopediaofmath.org/index.php?title=Newton-Leibniz_formula&oldid=28964"
Why do we use AC for long distance transmission?
Why do we use AC (Alternating Current) for long distance transmission of electrical power?
I know that AC is a current that periodically changes polarity (its magnitude and direction vary), unlike DC, which has fixed polarity.
electricity electrical-resistance electrical-engineering
EiNsTeIn
$\begingroup$ Actually, DC power transmission is sometimes used for long distances and has some advantages over AC power transmission (en.wikipedia.org/wiki/…) EDIT (9/17/2016): Moreover, DC lines are preferable for distances over 800 km (large.stanford.edu/courses/2010/ph240/hamerly1), and the longest power transmission lines are DC (epcengineer.com/news/post/12191/…) $\endgroup$ – akhmeteli Sep 17 '16 at 5:50
$\begingroup$ I assume 'we' is USA here? In Europe, we use HVDC. $\endgroup$ – Mast Sep 17 '16 at 14:46
$\begingroup$ Interesting aspect shown here. Note it's only $220V$. As explained in the answers, high voltage is better for long distance transmission. Now imagine arcs like this on a daily basis. In AC there is a zero crossing of the voltage, so the arc extinguishes easily. $\endgroup$ – Kamil Maciorowski Sep 18 '16 at 14:43
$\begingroup$ @Mast Well, HVDC is more common in Europe, but most of the grid is still AC. Though HVDC is certainly getting more popular. $\endgroup$ – Luaan Sep 19 '16 at 8:15
The first point to make is: We don't always use AC. There is such a thing as high voltage DC for long-distance power transmission. However its use was rare until the last few decades, when relatively efficient DC-to-AC conversion techniques were developed.
The second point is to debunk the common answer given, which is "because DC won't go long distances". Sure it will. In fact DC is sometimes better for long distance (because you don't have capacitive or EM radiation losses).
But, yes, AC has been used traditionally. The "why" is because of a series of "a leads to b leads to c leads to...":
You want to lose as little power as possible in your transmission lines. And all else being equal, the longer the distance, the more power you'll lose. So the longer the distance, the more important it is to cut the line losses to a minimum.
The primary way that power lines lose power is in resistive losses. They are not perfect conductors (their resistance is non-zero), so a little of the power that goes through them is lost to heat - just as in an electric heater, only there, of course, heat is what we want! Now, the more power is being carried, the more is lost. For a given amount of power being transferred, the resistive loss in the transmission line is proportional to the square of the current! (This is because power (in watts) dissipated in a resistance is equal to current in amperes, squared, multiplied by the resistance in ohms. These losses are commonly called "I-squared-R" losses, pronounced "eye-squared-arr", "I" being the usual symbol for current in electrical work.) So you want to keep the current as low as possible. Low current has another advantage: you can use thinner wires.
So, if you're keeping the current low, then for the same amount of power delivered, you'll want the voltage high (power in watts = EMF in volts multiplied by current in amps). e.g. to halve the current, you'll need to double your voltage. But this will cut your losses to one fourth of what they were! That's a win. Now high voltage does have its issues. The higher the voltage, the harder it is to protect against accidental contact, short circuits, etc. This means higher towers, wider spacing between conductors, etc. So you can't use the highest possible voltage everywhere; it isn't economical. But in general, the longer the transmission line, the higher the voltage that makes sense.
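To put rough numbers on this (an illustrative calculation, not from the original answer; the power and resistance values are made up), here is a minimal sketch of the resistive loss at two transmission voltages:

```python
# I^2 * R losses for delivering the same power at two different voltages.
P = 100e6  # power delivered, watts (illustrative)
R = 10.0   # total line resistance, ohms (illustrative)

for V in (100e3, 200e3):  # transmission voltage, volts
    I = P / V             # current needed to carry power P at voltage V
    loss = I**2 * R       # resistive loss in the line
    print(f"V = {V/1e3:.0f} kV: I = {I:.0f} A, loss = {loss/1e6:.2f} MW")

# Doubling the voltage halves the current and cuts the loss to a quarter.
```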
Unfortunately you can't deliver power to the end use point (wall outlets and light sockets) at the high voltages that make sense for the long distance transmission lines. (that could be several hundred thousand volts!) Practical generators can't put out extremely high voltages either (they would arc horrendously). So you need an easy way to convert from one voltage to another.
And that's most easily done with AC and transformers. Transformers can be amazingly efficient: power distribution transformers routinely hit 98 or 99 percent efficiency, far higher than any mechanical machine.
By contrast, to convert DC voltages you essentially have to convert to AC, use a transformer, and then convert back to DC. The DC-to-AC step, in particular, will have losses. Modern semiconductors have made this a lot better in recent years, but it still generally isn't worth doing until you're talking about very long transmission lines, where the advantages of DC outweigh the conversion losses.
Another reason that AC prevailed over Edison's DC was that the AC system scaled better, as it permitted a small number of power plants far from the city, instead of a large number of small plants about a mile apart. Edison didn't just want to sell light bulbs; he (or, rather, his investors) wanted to sell lighting systems to businesses. There was no power distribution network and he didn't want to have to build one before selling light bulbs. At first he was selling lighting systems to commercial buildings, maybe some large apartment buildings; each building would have its own independent generator in the basement, just as you typically have water heaters today. He was initially successful because he (unlike other developers of light bulbs) was selling and installing complete systems, generator and switchgear and wiring and all, not just bulbs.
This would have saved a lot of the clutter of overhead wires in cities, but it was clear that this would not work well for small businesses or homes (what homeowner or shopkeeper wants to worry about keeping a generator running?). Westinghouse wanted to build a hydroelectric power generation plant at Niagara Falls - one plant to run all of New York City and beyond. Tesla designed an entire AC distribution system involving AC induction generators, step-up transformers to boost their output as necessary for long distances, then conversion through a series of step-downs to what is called "distribution voltage", and then finally to the lines that are connected to houses and light commercial buildings. This was a far more scalable system than Edison's. And, of course, AC works for light bulbs as well as for motors.
Speaking of that... Yet another reason for preferring AC is that AC, and particularly the three-phase AC that Westinghouse's system used (everywhere except at the last drop, from pole distribution transformer to house), was and remains far better for running high-power motors. All practical motors are really AC motors at heart; "DC" motors use commutators to switch the polarity to the coils back and forth as needed, to maintain rotation - essentially they make their own AC internally. But commutators require brushes, which wear out and require maintenance; they make sparks (which interfere with radio), etc. Whereas an AC induction motor needs no commutator nor even slip rings. AC power transmission systems start with three-phase AC generators and maintain three-phase right up to the pole transformer. So they can easily deliver three-phase where it's needed (medium and larger commercial and industrial), but the pole transformer can tap off single-phase for homes and light commercial use.
Three-phase AC power distribution has another advantage in not needing a dedicated "return" wire. (Just FYI, the system Tesla originally designed for Westinghouse was two-phase. They changed to three-phase after the work of Mikhail Dolivo-Dobrovolsky in 1888-1891.)
During the "war of the currents" Edison made much of the greater danger of AC. It's true that a given level of current, through a given path through the body, is more dangerous at AC than at DC. That's because AC at power line frequencies will cause involuntary muscle contractions - paralysis - and heart fibrillation at far lower current than DC (about a tenth). (See allaboutcircuits.com) However the end-user connectors were designed to minimize risk of contact with live parts, and we keep making them better in that regard.
(Aside: I have long held that the electrical transformer should be regarded as one of the basic machines, along with the lever, the inclined plane, the block and tackle, etc. They have the same property of trading off one thing for another. In the case of the mechanical basic machines it's power traded for distance, for an equivalent amount of work done; in the transformer it's voltage for current, at equivalent power. Hydraulic cylinder master-slave pairs should be in the "simple machines" list too. ;) )
Jamie Hanrahan
$\begingroup$ I find this unsatisfactory because power loss in a conductor can be written as either $P = I^2 R$ or $P = V^2 / R$. $\endgroup$ – DanielSank Sep 17 '16 at 14:15
$\begingroup$ @DanielSank Note that $V$ indicates the voltage drop over the transmission line, not the potential w.r.t. ground. Just because there's a equation that 'looks right', doesn't mean it's how the equation should be applied. $\endgroup$ – Sanchises Sep 17 '16 at 15:19
$\begingroup$ @sanchises I know that, but other readers might not. This is a very common source of confusion for lots of students. Jamie's answer doesn't clarify this issue at all, which is why, as I said, I find the answer unsatisfactory. In particular, any discussion of an electrical circuit needs an annotated diagram which uses the same notation as the equations. Anything less leads to confusion in my experience. $\endgroup$ – DanielSank Sep 17 '16 at 15:22
$\begingroup$ @DanielSank Fair enough (although personally I feel that the answer doesn't have to address all equations that could be misinterpreted, but I can understand you feel that this particular difference should be included) $\endgroup$ – Sanchises Sep 17 '16 at 15:24
$\begingroup$ @DanielSank: my reaction to your comment is that my answer is at about page 5 in the EE101 book, while yours is around page 30. I suggest to you that a typical asker of this question might not be at all familiar with schematics or with the notation we use in the formulas. And if they're not, the implication that "you have to understand this" can cause many to just shut down, thinking the explanation is over their heads. In short I was trying to NOT use formulas or schematics. (tbc...) $\endgroup$ – Jamie Hanrahan Sep 18 '16 at 9:44
The reason we use AC is that the AC voltage is easily changed using a transformer. To change DC voltage requires complex and inefficient circuitry.
Suppose you are sending some power $P$ from the power station to the end user. The power lines have some resistance $R$ so they dissipate some of the original power as heat. Specifically the power dissipated is given by:
$$ P_\text{lost} = I^2R $$
where $I$ is the current running through your power lines. If our supply voltage is $V$ then the power, voltage and current are related by:
$$ P = IV $$
And if we use this to substitute for $I$ in the power loss equation we get:
$$ P_\text{lost} = \frac{P^2R}{V^2} $$
The key result is that:
$$ P_\text{lost} \propto \frac{1}{V^2} $$
So if we increase the supply voltage $V$ we decrease the power lost. This is why the electricity transmission lines use very high voltages. The electricity produced by the power station is passed through a transformer to raise its voltage to the $100,000$V or so used in the tranmission lines. Then when it reaches your town the electricity is passed through several more transformers to reduce it to the domestic voltage.
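As a quick sanity check of this scaling (an illustrative plug-in, not part of the original answer), take $P = 100\,\text{MW}$ and $R = 10\,\Omega$. At $V = 100\,\text{kV}$,
$$ P_\text{lost} = \frac{P^2R}{V^2} = \frac{(10^8)^2 \times 10}{(10^5)^2}\,\text{W} = 10\,\text{MW}, $$
while at $V = 400\,\text{kV}$ the loss drops by a factor of $16$, to $0.625\,\text{MW}$.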
But changing the voltage this way only works for AC because transformers don't work for DC. And that's why mains electricity is AC.
$\begingroup$ Actually, as far as I know, the longest distance power transmission lines are DC lines (power-technology.com/features/…) $\endgroup$ – akhmeteli Sep 17 '16 at 6:37
$\begingroup$ Also, of course, electricity is generated by rotating machinery which likes to make AC. $\endgroup$ – tfb Sep 17 '16 at 6:59
$\begingroup$ So it is cheaper to convert the ac produced at a power station to dc, transmit it, and then convert the dc back to ac at the consumer end. $\endgroup$ – Farcher Sep 17 '16 at 8:25
$\begingroup$ @Farcher: There are more reasons: no skin-effect, no losses due to transmission of reactive power, no need to transmit 3 phases, no need to synchronize with the grid, and so on. There are some disadvantages of DC as well. $\endgroup$ – akhmeteli Sep 17 '16 at 8:43
$\begingroup$ It's not clear what $P$ means in this answer. $P_\text{lost}$ is the power lost in the transmission lines, but this variable $P$ appears without definition. $\endgroup$ – DanielSank Sep 17 '16 at 14:18
This (from a now deleted page) clarifies how DC transmission lines are used for bulk power transmission:
Transmission Options
Power can be transmitted using either alternating current (AC) or direct current (DC). All modern power systems use AC to generate and deliver electricity to customers through transmission lines and then through distribution lines to where it is needed. The technology now exists to use DC for bulk power transmission.
AC electricity is converted to DC electricity for transmission and then converted back to AC electricity for distribution to customers on the AC power grid. A converter station at each end of the line is required to convert power from AC to DC and back so we can use the power in our homes, farms and businesses.
Thus for usage by the public DC has to go back to AC. The benefits of DC are better energy efficiency on long distances, and less land usage as shown in the link.
Similar statements can be found here.
From Scientific American:
In the late 19th century, two competing electricity systems jostled for dominance in electric power distribution in the United States and much of the industrialized world. Alternating current (AC) and direct current (DC) were both used to power devices like motors and light bulbs, but they were not interchangeable.
A battle for the grid emerged from the Apple and Microsoft of the Gilded Age. Thomas Edison, who invented many devices that used DC power, developed the first power transmission systems using this standard. Meanwhile, AC was pushed by George Westinghouse and several European companies that used Nikola Tesla's inventions to step up current to higher voltages, making it easier to transmit power over long distances using thinner and cheaper wires.
The rivalry was fraught with acrimony and publicity stunts -- like Edison electrocuting an elephant to show AC was dangerous -- but AC eventually won out as the standard for transmission, reigning for more than a century.
Now comes the EMerge Alliance, a consortium of agencies and industry groups that thinks DC will make a comeback. With so many portable electronic devices and large electricity users like data centers running on DC, the technology can fill a growing niche while cutting energy consumption.
It is worth reading the article.
Generators naturally produce AC, and technology eventually gave us AC devices for the end user, so AC dominated. This now seems to be changing.
anna v
$\begingroup$ The first link seems to be broken. $\endgroup$ – Denis Kniazhev Sep 23 '17 at 15:20
$\begingroup$ @DenisKniazhev thanks, the page has been deleted. i give a similar link. $\endgroup$ – anna v Sep 23 '17 at 15:31
A gap for the maximum number of mutually unbiased bases
by Mihály Weiner
Proc. Amer. Math. Soc. 141 (2013), 1963-1969
A collection of pairwise mutually unbiased bases (in short: MUB) in $d>1$ dimensions may consist of at most $d+1$ bases. Such "complete" collections are known to exist in $\mathbb {C}^d$ when $d$ is a power of a prime. However, in general, little is known about the maximum number $N(d)$ of bases that a collection of MUB in $\mathbb {C}^d$ can have.
In this work it is proved that a collection of $d$ MUB in $\mathbb {C}^d$ can always be completed. Hence $N(d)\neq d$, and when $d>1$ we have a dichotomy: either $N(d)=d+1$ (so that there exists a complete collection of MUB) or $N(d)\leq d-1$. In the course of the proof an interesting new characterization is given for a linear subspace of $M_d(\mathbb {C})$ to be a subalgebra.
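For readers new to the notion (an illustration, not from the paper): two orthonormal bases $\{e_i\}$ and $\{f_j\}$ of $\mathbb{C}^d$ are mutually unbiased when $|\langle e_i, f_j\rangle|^2 = 1/d$ for all $i, j$. A minimal numerical sketch, assuming NumPy, checking that the standard basis and the Fourier basis form such a pair:

```python
# Check that the standard basis and the Fourier basis of C^d are mutually unbiased.
import numpy as np

d = 5  # any dimension works for this particular pair of bases
F = np.array([[np.exp(2j * np.pi * j * k / d) / np.sqrt(d)
               for k in range(d)] for j in range(d)])  # columns: Fourier basis
E = np.eye(d)                                          # columns: standard basis

overlaps = np.abs(E.conj().T @ F) ** 2  # |<e_i, f_j>|^2 for all i, j
print(np.allclose(overlaps, 1 / d))    # expected: True
```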
Mihály Weiner
Affiliation: Department of Analysis, Mathematical Institute, Budapest University of Economics and Technology (BME), Pf. 91, H-1521 Budapest, Hungary
Email: mweiner@renyi.hu
Received by editor(s): July 16, 2010
Received by editor(s) in revised form: October 4, 2011
Published electronically: January 23, 2013
Additional Notes: Supported in part by the ERC Advanced Grant 227458 OACFT "Operator Algebras and Conformal Field Theory" and the Momentum Fund of the Hungarian Academy of Sciences.
Communicated by: Marius Junge
The copyright for this article reverts to public domain 28 years after publication.
Journal: Proc. Amer. Math. Soc. 141 (2013), 1963-1969
MSC (2010): Primary 15A30, 47L05, 81P70
DOI: https://doi.org/10.1090/S0002-9939-2013-11487-5
Help with Proof: $Aut_{Grp}(\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}) \cong S_3$ [duplicate]
Showing $\text{Aut}_\mathsf{Grp}(\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}) \cong S_3$ 2 answers
I need help improving my proof. It's a bit lacking in details, and I don't know how to phrase my reasoning.
For any group homomorphism, the identity must map to the identity. Therefore, any automorphism of $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$ must map $(0,0) \mapsto (0,0)$.
This leaves three other elements, label them $1: (0,1), 2: (1,0), 3: (1,1)$.
Since all of these elements have order two, and the sum of any two of them is the third, they can be interchanged with each other freely. So, any bijection of these three elements will result in an automorphism. Hence, there is a function $\mu$ from the automorphisms of $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$ to the symmetric group $S_3$.
To prove this is a homomorphism, we show $\mu(g\circ h) = \mu(g)\cdot\mu(h)$. This is true because $g$ and $h$ are automorphisms, and the bijection above supports this property, but this is circular reasoning that I can't seem to articulate more clearly.
To prove this is a bijection: as before, the function maps the pairs to their labels. It's injective, because if two labels are equal, then their points are equal, and the same holds for the functions' inputs and outputs. It's surjective because the labeling of inputs and outputs carries over to the functions.
Is "labeling" enough to prove bijection? I don't want to just enumerate all the functions, there has to be a better way. The reasoning for homeomorphism and bijection are both really lame, but I don't know what sort of reasoning I can employ to make this more concise.
This is in the "basic" Group Theory section of Paolo Aluffi's "Algebra: Chapter 0". I have not yet gotten to Rings, Fields, Modules, or any Linear Algebra.
group-theory proof-writing
Larry B.
Here is another take:
Let $G=Aut(\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z})$.
Prove that $G$ has order $6$. See this question.
Prove that $G$ is not abelian. You just need to find two elements that do not commute.
lhf
The given group $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$ is isomorphic to the additive group underlying the vector space structure of $\Bbb F_2^2$, of dimension $2$ over the field with two elements $\Bbb F_2$. It has a canonical basis; the two basis elements $e_1=(1,0)$, $e_2=(0,1)$ were named already in the question. There is also the element $v=e_1+e_2=(1,1)$. The proposed solution should also show/check that moving $e_1,e_2$ to two different elements among the three elements $e_1,e_2,v$ indeed leads to a permutation, i.e. the third element $v$ is moved to the remaining element among them! (This is of course so, since $e_1+e_2+v=0$, and $0$ is mapped to $0$ by a group/vector space automorphism.)
Alternatively: We are searching for the structure of $G=\operatorname{GL}(2, \Bbb F_2)$, an element $g$ of it is a $2\times 2$ matrix of determinant $1$, its first column (the image of $e_1$ by $g$) is any non-zero vector, there are $3=2^2-1$ such vectors, its second column (the image of $e_2$ by $g$) is any non-zero vector different from (i.e. independent of) the first column, there are $2=2^2-2^1$ such vectors, so $G$ has six elements.
dan_fulea
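As a brute-force check (not part of either answer), one can enumerate $\operatorname{GL}(2,\Bbb F_2)$ directly and confirm that it has $6$ elements and is nonabelian, hence isomorphic to $S_3$. A minimal sketch in Python:

```python
# Enumerate GL(2, F_2) as row-major 4-tuples (a, b, c, d) and check |G| = 6
# and that G is nonabelian (the only nonabelian group of order 6 is S_3).
from itertools import product

def det(m):
    a, b, c, d = m
    return (a * d - b * c) % 2

G = [m for m in product(range(2), repeat=4) if det(m) == 1]
print(len(G))  # expected: 6

def mul(m, n):
    a, b, c, d = m
    e, f, g, h = n
    return ((a*e + b*g) % 2, (a*f + b*h) % 2, (c*e + d*g) % 2, (c*f + d*h) % 2)

print(any(mul(m, n) != mul(n, m) for m in G for n in G))  # expected: True
```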
Is it possible for a crystal to have different structures at different temperatures?
For instance, suppose it is a 50-50 alloy of two metals that is BCC at room temperature $T_0$. If I raise (or lower) the temperature, is it possible for the bonds in the crystal to rearrange and form a new structure (say, FCC) that is more energetically favourable at the new $T$?
I saw a question on an engineering forum (here) that seemed to suggest the only thing that affected the structure was the % composition of the alloy. Could temperature play a role, too?
temperature crystals
Wise Owl
The microstructure of an alloy depends on such variables as the alloying elements present, their concentrations, and the heat treatment of the alloy (i.e., the temperature, the heating time at temperature, and the rate of cooling to room temperature). —Materials Science and Engineering: An Introduction, 9th ed., Wiley, Callister & Rethwisch
Look up phase diagrams for different alloys
Here is a phase diagram for iron and carbon (steel). Depending on the composition of the alloy and the temperature, different crystal structures will form, and multiple phases can be present at the same time. $\alpha$ is ferrite and has the BCC structure; austenite ($\gamma$) has FCC.
Devin Crossman
$\begingroup$ Is it possible to change the structure without changing the composition (ie. only change the temperature)? This isn't clear from the phase diagram. $\endgroup$ – Wise Owl Mar 4 '16 at 2:55
$\begingroup$ if you keep the concentration of the alloy the same and you raise the temperature, the structure will change. For instance, at the eutectoid composition of 0.76% carbon, the ferrite and pearlite (alternating layers of ferrite and cementite) transform into FCC austenite when you increase the temperature above 730 degrees Celsius $\endgroup$ – Devin Crossman Mar 4 '16 at 3:02
$\begingroup$ Phase changes based on temperature and pressure can also occur in pure metals; you can see the x-ray diffraction pattern change with the conditions, e.g., bismuth, which has bcc, fcc, hcp phases: books.google.com/… $\endgroup$ – Peter Diehr Mar 4 '16 at 3:05
$\begingroup$ Perhaps the most dramatic example of this is in the two "common" allotropes of Carbon. $\endgroup$ – Aron Mar 4 '16 at 9:16
$\begingroup$ @Aron: indeed, and not everybody knows that diamond is unstable at STP. It's possible for the bonds to re-arrange into a more favourable structure, exactly as the questioner asks, it's just a really really slow process. $\endgroup$ – Steve Jessop Mar 4 '16 at 10:41
Yes, it is very possible. Even water goes through such different structures.
Two lines in particular from the wikipedia article on Ice:
Ice II A rhombohedral crystalline form with a highly ordered structure. Formed from ice Ih by compressing it at temperatures of 190–210 K. When heated, it undergoes transformation to ice III.

Ice III A tetragonal crystalline ice, formed by cooling water down to 250 K at 300 MPa. Least dense of the high-pressure phases. Denser than water.
Cort Ammon - Reinstate Monica
$\begingroup$ " Formed from ice Ih by compressing it at temperature of 190–210 K" - at first I misread that as Celsius and I thought "that must take a lot of pressure". $\endgroup$ – John Dvorak Mar 5 '16 at 21:00
No alloy is required. Plutonium is an example: it has six solid allotropes at ambient pressure, each stable over a different temperature range.
Martín-Blas Pérez Pinilla
There is certainly far more to metallurgy than the % composition of the alloy. Devin Crossman's answer hints a little at the processes involved. Hardness is strongly affected by heat treatments, for example, though this does not have much to do directly with the subject of this question: https://en.wikipedia.org/wiki/Hardening_(metallurgy).

A classic (and very relevant) example of a phase change in metallurgy occurs with tin. Tin changes from a tetragonal metallic structure (density 7.365 g/cm³) to a diamond-type nonmetallic structure (density 5.769 g/cm³) at temperatures below 13.2 °C.
The transformation is said to be "autocatalytic" (though I think it may be more a case of crystal nucleation) and is referred to as "tin pest." Tin objects will become brittle and simply fall apart at low enough temperatures.
https://en.wikipedia.org/wiki/Tin_pest
https://www.youtube.com/watch?v=FUoVEmHuykM
Level River St
What is Conceptual Clustering?
Data Mining | Database | Data Structure
Conceptual clustering is a form of clustering in machine learning that, given a set of unlabeled objects, produces a classification scheme over the objects. Unlike conventional clustering, which primarily identifies groups of similar objects, conceptual clustering goes one step further by also discovering characteristic descriptions for each group, where each group defines a concept or class.

Therefore, conceptual clustering is a two-step process: clustering is performed first, followed by characterization. Thus, clustering quality is not solely a function of the individual objects. Most techniques of conceptual clustering adopt a statistical approach that uses probability measurements in determining the concepts or clusters.

Probabilistic descriptions are generally used to represent each derived concept. COBWEB is a well-known and simple method of incremental conceptual clustering. Its input objects are described by categorical attribute-value pairs. COBWEB builds a hierarchical clustering in the form of a classification tree.
A classification tree differs from a decision tree. Each node in a classification tree defines a concept and includes a probabilistic description of that concept, which summarizes the objects classified under the node. The probabilistic description contains the probability of the concept and conditional probabilities of the form $P(A_{i}=v_{ij}|C_{k})$, where $A_{i}=v_{ij}$ is an attribute-value pair (the $i$th attribute takes its $j$th possible value) and $C_{k}$ is the concept class.
COBWEB uses a heuristic evaluation measure known as category utility to guide the construction of the tree. Category Utility (CU) is defined as
$$\frac{\sum_{k=1}^{n}P(C_{k})\left [\sum_{i}\sum_{j}P(A_{i}=v_{ij}|C_{k})^{2}-\sum_{i}\sum_{j}P(A_{i}=v_{ij})^{2}\right ]}{n}$$
where $n$ is the number of nodes, concepts, or "categories" forming a partition, $\{C_{1},C_{2},\ldots,C_{n}\}$, at the given level of the tree. In other words, category utility is the increase in the expected number of attribute values that can be correctly guessed given a partition (this expected number corresponds to the term $P(C_{k})\sum_{i}\sum_{j}P(A_{i}=v_{ij}|C_{k})^{2}$) over the expected number of correct guesses with no such knowledge (corresponding to the term $\sum_{i}\sum_{j}P(A_{i}=v_{ij})^{2}$). Although there is no room here to show the derivation, category utility rewards intraclass similarity and interclass dissimilarity, where −
Intraclass similarity − This is the probability $P(A_{i}=v_{ij}|C_{k})$. The higher this value is, the higher the proportion of class members that share this attribute-value pair and the more predictable the pair is of class members.

Interclass dissimilarity − This is the probability $P(C_{k}|A_{i}=v_{ij})$. The higher this value is, the fewer the objects in contrasting classes that share this attribute-value pair and the more predictive the pair is of the class.
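As a small illustration of the CU formula, here is a toy sketch in Python. It is not part of COBWEB itself, and the representation of objects as attribute-value dictionaries is an assumption made for the example.

```python
from collections import Counter

def category_utility(partition):
    """Category utility of a partition (the CU formula above).
    partition: list of clusters; each cluster is a list of objects,
    and each object is a dict mapping attribute -> categorical value."""
    n = len(partition)
    objects = [o for cluster in partition for o in cluster]

    def sq_prob_sum(objs):
        # sum_i sum_j P(A_i = v_ij)^2, estimated over the given objects
        total = 0.0
        for a in {a for o in objs for a in o}:
            counts = Counter(o[a] for o in objs if a in o)
            m = sum(counts.values())
            total += sum((c / m) ** 2 for c in counts.values())
        return total

    base = sq_prob_sum(objects)  # expected correct guesses with no partition
    cu = sum(len(c) / len(objects) * (sq_prob_sum(c) - base) for c in partition)
    return cu / n

# Toy example: two clusters cleanly separated on 'color' give positive CU.
c1 = [{"color": "red", "size": "s"}, {"color": "red", "size": "l"}]
c2 = [{"color": "blue", "size": "s"}, {"color": "blue", "size": "l"}]
print(category_utility([c1, c2]))  # 0.25
```

Here the partition makes 'color' perfectly predictable within each cluster, so CU is positive; a partition that mixed the colors randomly would score near zero.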
COBWEB descends the tree along an appropriate path, updating counts along the way, in search of the "best host" node at which to classify the object. This decision is based on temporarily placing the object in each candidate node and evaluating the category utility of the resulting partition. The placement that yields the highest category utility is chosen as the best host for the object.
Ginni
Research | Open | Published: 14 May 2019
Jin Wang1,2,
Xiujian Gu3,
Wei Liu3,
Arun Kumar Sangaiah4 &
Hye-Jin Kim5
Human-centric Computing and Information Sciences, volume 9, Article number: 18 (2019)
In wireless sensor networks (WSNs), sensor devices must be equipped with the capabilities of sensing, computation and communication. These devices work continuously through non-rechargeable batteries under harsh conditions, so the battery span of the nodes determines the lifetime of the whole network. Network clustering adopts an energy-balancing approach to extend the network life. Clustering methods can be divided into even and uneven clustering. If even clustering is adopted, the cluster head nodes (CHs) in the vicinity of the base station must relay more data, which causes the energy hole phenomenon. We therefore adopt a non-uniform clustering method to alleviate the energy hole problem. Furthermore, to further balance the resource overhead of the entire network, we combine the PEGASIS algorithm with the Hamilton loop algorithm through a mixture of single-hop and multi-hop mechanisms, insert a mobile agent node (MA), and obtain an optimal empower (weighted) Hamilton loop by a local optimization algorithm. The MA is responsible for receiving and fusing packets from the CHs on the path. Network performance results show that the proposed routing algorithm can effectively prolong the network lifetime, equalize resource expenditure and decrease the propagation delay.
A WSN can be viewed as a distributed and self-organized network consisting of many miniature sensor devices, mostly deployed randomly throughout the sensing area [1]. Recently, advances in sensor network technology have enabled various large-scale applications such as environment monitoring, health care, agriculture, military and smart homes [2,3,4]. Meanwhile, many research problems remain to be studied in WSN applications, including localization, data fusion, data transmission and energy efficiency. The energy shortage of sensor nodes is a crucial problem for prolonging the network lifetime, and rationalizing the energy distribution of sensor nodes is a crucial challenge for improving network performance [5, 6].
Wireless routing protocols are a hot topic in the research of distributed sensor networks. The routing protocol of a sensor network is responsible for the reliable transmission of data between the source node and the destination node, including route selection and data forwarding. According to whether the network topology is hierarchical, network routing protocols can be classified as flat routing protocols and layered routing protocols [7].

Flat routing protocols include the flooding protocol [8,9,10], sensor protocols for information via negotiation (SPIN), and the directed diffusion (DD) protocol. A flat routing protocol is applicable to networks with a planar structure, where all nodes have equal status, and the protocol is relatively simple. There are many relay nodes on the path from the source node to the destination node, and they can share the network load. However, the organization of nodes and the establishment, control and maintenance of routes occupy a large amount of bandwidth, thereby greatly limiting the data transmission rate of the network. In addition, when the network scale is large, data transmission consumes large amounts of energy, and the scalability of the network is poor. For these reasons, flat routing protocols apply only to smaller networks.

Layered routing protocols are mainly used in hierarchical networks; they divide the whole network into multiple clusters [11]. Layered protocols were proposed to reduce the network resource overhead, which can effectively prolong the life cycle of the whole network [12]. Compared with flat routing protocols, hierarchical routing has great advantages in energy efficiency, transmission delay and packet loss rate; it is therefore the natural choice in large networks. The cluster-division process can use either uniform or non-uniform clustering techniques [13]. Both methods have their respective merits: the former saves the computational cost of re-planning cluster formation, while the latter mitigates the hot-node problem [14].
Recent literature shows that, during data dissemination, an MA deployed in the sensing field can reduce the network load, overcome asynchronous transmission, and decrease the packet loss rate [15, 16]. A mobile agent roaming the sensing field is controlled by autonomous programs and intelligently takes charge of aggregating data packets from the CHs of the whole network in complicated network environments. Many studies have shown that deploying MAs in the sensing area facilitates effective data propagation and consolidation in WSNs.

In this paper, we propose an effective data aggregation algorithm based on an empower Hamilton loop for WSNs. We combine the PEGASIS algorithm with the Hamilton loop algorithm through a mixture of single-hop and multi-hop mechanisms and place a mobile agent (MA) node on the Hamilton loop; the MA is responsible for receiving and fusing data from the CHs on the path. Network performance analysis results show that the proposed routing algorithm can effectively extend the network life cycle, balance the network overhead and decrease the propagation latency.

The rest of the paper is organized as follows. In "Related work" section, we discuss the related work of our research. "Our proposed energy-efficient routing algorithm" section provides the required network and energy models. In "Our proposed algorithm" section, we illustrate the algorithm used in the experiment. "Performance evaluation" section presents simulation results. Finally, we draw conclusions on the current work and future plans in "Conclusion" section.
Recently, many researchers have devoted effort to using network energy efficiently to improve network performance [17, 18]. Routing protocols are classified into two categories to address the energy consumption problem; as mentioned above, flat routing protocols apply only to smaller networks [19], so we adopt a hierarchical routing protocol for the clustering process.

As is well known, a classical hierarchical routing algorithm, low-energy adaptive clustering hierarchy (LEACH), was proposed in [20]. In the LEACH protocol, a CH collects and processes network information from its own cluster members (CMs), and each CH transmits the data to the BS in a single hop. The protocol stipulates that new CHs are randomly elected in each round to balance the load of the sensor nodes and extend the network life cycle. The disadvantage is that the unreasonable deployment of CHs leads to unbalanced energy consumption in certain parts of the network region; in particular, nodes far away from the BS experience premature energy exhaustion. Later, Liu et al. [21] put forward a genetic-algorithm-based LEACH (LEACH-GA) that uses a genetic algorithm to determine the selection of CHs optimally. The algorithm optimizes the threshold for CH selection but does not take the remaining energy of the CHs into account.

In [22, 23], a chain-structured routing protocol, power-efficient gathering in sensor information systems (PEGASIS), was put forward; it uses a greedy algorithm to establish the network topology. The main idea of PEGASIS is that each sensor node forwards its data packet to the nearest node, forming a chain among the nodes. Apparently, the shorter the transmission distance becomes, the less energy is consumed in each round [24]. Generally speaking, PEGASIS reduces energy consumption by shortening the transmission distance, but it causes the CHs in the vicinity of the base station to relay more data and increases the transmission delay. In this paper, to make up for this deficiency, we do not use this algorithm to traverse the entire network; instead, we combine it with the Hamilton loop algorithm to complete the data communication across the network. There has been much research on combining clusters and chains. Zarei et al. [25] proposed a distributed cluster-based routing protocol (CBRP), which has a hybrid architecture of cluster and tree. CBRP adopts a single-hop mechanism to deliver the data packets of the root node to the BS. However, an inescapable shortcoming is that large numbers of non-data messages pass between sensor nodes when delivering data to the BS, leading to extra communication overhead.
EEUC (energy-efficient uneven clustering) [26] is an energy-efficient clustering routing algorithm. Its basic principle is as follows. First, several candidate cluster head nodes are elected by a random function. Second, the competitive radius of each candidate CH is computed from the Euclidean distance between the CH and the BS. Finally, when the distance between two candidate CHs is less than their competitive radii, the node with higher energy is appointed as the CH and the other candidate withdraws. The equation for computing the competitive radius is as follows,
$$R_{c} = \left(1 - c\,\frac{D_{max} - D(S_{i},BS)}{D_{max} - D_{min}}\right)R_{c}^{0}$$
where \(R_{c}^{0}\) is a fixed value that merely needs to guarantee normal communication between nodes; \(c\) is a constant, and the simulation results prove that network performance is best when \(c = 0.5\). The symbols \(D_{max}\) and \(D_{min}\) are the maximum and minimum distances from a CH to the BS, respectively, and \(D(S_{i} ,BS)\) is the distance from the current CH to the BS. Obviously, the smaller the competitive radius of a CH near the base station, the smaller the number of member nodes it contains, so it will have enough energy to serve as a relay node in later rounds. This protocol can effectively alleviate the problem of non-uniform energy depletion, thus extending the network life cycle.
Recently, scholars have invested more effort in the design, deployment and management of MAs. Adding MAs to WSNs is a promising research direction for addressing the shortage of network resources. The MA is used as the medium for data communication between the CHs and the BS. DGMA is an MA-based data collection algorithm for dynamic WSNs [27, 28], and the algorithm includes an emergency event-driven program. The sequence in which the MA visits the network has a great influence on routing optimization and network lifetime; the MAs traverse the entire sensing field to collect information from each member node. In [29], mobile-agent-based directed diffusion (MADD) was proposed. Target sensor nodes propagate an announcing signal to the BS when a source node in the target field detects a critical event. According to the received signal packets, the sink node statically picks out the most suitable source nodes to be visited by the MA, which automatically determines the order in which the MA visits the sensors. This hybrid-structure algorithm plans a low-latency route for the WSN and balances the respective drawbacks of static and mobile network structures, but it has no obvious effect on improving the energy efficiency of the sensor nodes.
Our proposed energy-efficient routing algorithm
Basic assumptions
In this paper, to design the clustering protocol, we make some assumptions about the network model. The network structure consists of one BS, one MA and \(n\) common sensor devices, all deployed in the sensing field. Each sensor node has a unique id; the identities of the sensor nodes are represented as {S1, S2, S3, …, Sn}. We make the following assumptions:
All sensor devices are stationary and deployed randomly. Each one has a known coordinate.

The mobile agent has sufficient energy, so the problem of its energy exhaustion is not considered.
All sensors are isomorphic and have the same ability to process data.
All sensor nodes have the same battery capacity and cannot harvest energy from the external environment.
In Fig. 1, the sensing area is a square and the cluster sizes are not uniform. The base station is fixed at the central point of the sensing region and remains stationary all the time, and the MA moves through the sensing area along a predetermined path. In the figure below, the dotted black line with arrows represents the moving trajectory of the MA, and data packet transmission among the clusters is depicted by the red lines.
Energy model
We present the simple radio energy consumption model in Fig. 2 [18]. First, the transmitter sends a k-bit packet, a process that can be divided into a signal generation stage and a signal amplification stage. The energy consumption in the signal generation stage is determined by the size of the packet, while the energy consumption in the signal amplification stage is determined by the size of the packet and the distance between the receiver and the transmitter. Second, the receiver is responsible for gathering the packet, which also produces energy expenditure. The specific energy consumption formula is given in formula (1) below.

This part mainly discusses the energy model of the network. The battery of the sensor devices is a fundamental constraint on the energy dissipation of the network. We denote the initial energy of each sensor node as \(E_{0}\), and these nodes are not rechargeable. Network energy consumption mainly arises from forwarding and receiving data packets, and the specific calculation formulas are stated in formulas (1) and (2), respectively [30]. The transmission energy is expressed as \(E_{Tx} (k,d)\), where the coefficient \(k\) is the number of bits in the forwarded signal packet, and \(d\) is the communication distance between the forwarding node and the receiving node.
$$E_{Tx}(k,d) = E_{Tx\text{-}elec}(k) + E_{Tx\text{-}amp}(k,d) = \begin{cases} k \cdot E_{elec} + E_{fs} \cdot k \cdot d^{2}, & d < d_{0} \\ k \cdot E_{elec} + E_{mp} \cdot k \cdot d^{4}, & d \ge d_{0} \end{cases}$$
In this formula, the size of the transmitted message is \(k\) = 2000 bits for each packet, and the threshold value is \(d_{0} = \sqrt {E_{fs} /E_{mp} }\). Furthermore, in the energy settings, the radio energy parameters are \(E_{elec}\) = 50 nJ/bit, \(E_{fs}\) = 10 pJ/bit/m² and \(E_{mp}\) = 0.0013 pJ/bit/m⁴.
$$E_{Rx} = k \cdot E_{elec}$$
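For concreteness, the first-order radio model of formulas (1) and (2) can be coded directly. The sketch below is in Python rather than the Matlab used for the paper's simulations, and simply plugs in the parameter values quoted above.

```python
import math

E_ELEC = 50e-9       # J/bit, electronics energy
E_FS = 10e-12        # J/bit/m^2, free-space amplifier coefficient
E_MP = 0.0013e-12    # J/bit/m^4, multipath amplifier coefficient
D0 = math.sqrt(E_FS / E_MP)  # distance threshold d0, about 87.7 m

def e_tx(k, d):
    """Transmission energy for a k-bit packet over distance d, formula (1)."""
    if d < D0:
        return k * E_ELEC + E_FS * k * d ** 2
    return k * E_ELEC + E_MP * k * d ** 4

def e_rx(k):
    """Reception energy for a k-bit packet, formula (2)."""
    return k * E_ELEC

# A 2000-bit packet over a 50 m hop:
print(e_tx(2000, 50), e_rx(2000))  # 1.5e-4 J and 1.0e-4 J
```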
The process of clustering construction
Firstly, each node in the network randomly generates a number between 0 and 1. If the random number of a sensor node is less than the preset threshold value \(T\), the node is elected as a candidate CH. Secondly, the competitive radius of the candidate CH is set. Different from the previous algorithm, the competitive radius of the EEUC algorithm is improved: by combining the Euclidean distances and the residual energy of the nodes, the competitive radius is planned more reasonably, as shown in formulas (3)–(5).
$$d = \frac{{D_{i} - D_{min} }}{{D_{max} - D_{min} }}$$
$$e = \frac{{nE_{ri} }}{{\sum\nolimits_{j} {E_{rj} } }}$$
$$R_{i} = \begin{cases} \left[ {\omega \cdot d + (1 - \omega )e} \right]R_{bc} , & k_{1} < c < k_{2} \\ \left[ {\omega \cdot k_{1} + (1 - \omega )e} \right]R_{bc} , & c \le k_{1} \\ \left[ {\omega \cdot k_{2} + (1 - \omega )e} \right]R_{bc} , & c \ge k_{2} \end{cases}$$
where \(D_{i}\) is the distance from candidate CH \(i\) to the BS, and \(D_{max}\) and \(D_{min}\) are the farthest and closest distances from any node in the sensing region to the BS, respectively. We use \(c\) to represent the distance between them, and \(e\) is the ratio between the residual energy of the current sensor and the average residual energy of all the sensors. In formula (5), \(\omega\) is a regulation coefficient used to tune the weight relationship between distance and energy. We place a limit on the value of \(c\): when it falls outside the range \((k_{1},k_{2})\), the corresponding threshold is used instead. \(R_{bc}\) denotes the benchmark competitive radius, and we take \(R_{bc}\) = 90 m. Usually, \(k_{1}\) = 0.5 and \(k_{2}\) = 2 are used to control the competitive radius reasonably. When the distance between two candidate CHs is less than their competitive radii, the node with more energy is elected as a CH, and the other candidate is disqualified.

After the election of CHs, an ordinary node joins an appropriate cluster once it receives the broadcast information from the CHs. In the algorithm, we propose three factors for selecting a cluster, namely, the remaining energy of the CH, the distance from the current ordinary node to the BS, and the distances from the node to the other CHs. Finally, the fitness function \(f(i,j)\) for ordinary node \(i\) joining CH \(j\) is proposed, as shown in formula (6).
$$f(i,j) = \alpha \frac{{E_{rj} }}{{E_{0} }} + \beta \frac{{D_{j} }}{d(i,j)} + \varepsilon \frac{{\sum {D_{k} } }}{{\sum {d(i,k)} }}$$
where \(\alpha\) is the energy regulation coefficient, and \(\beta\) and \(\varepsilon\) are the regulation coefficients for the distance from the node to the BS and to the other CHs, respectively. All of these coefficients lie between 0 and 1, and \(\alpha + \beta + \varepsilon = 1\).
Our proposed algorithm
Clustering phase
In this paper, we use a non-uniform clustering method to divide the whole sensing area. First, each node generates a random number \(Rand_{i}\). We optimize the previous threshold setting by taking the residual energy of the node into account, and we redefine the threshold \(T(n)\) as shown in formula (7).
$$T(n) = \begin{cases} \dfrac{p}{{1 - p(r\bmod \frac{1}{p})}} \cdot \dfrac{E_{ri}}{\overline{E}}, & n \in G \\ 0, & n \notin G \end{cases}$$
where \(p\) represents the expected percentage of nodes selected as CHs; \(r\) denotes the current round number; G is the set of common nodes that have not been selected as CHs within the last \(1/p\) rounds; \(E_{ri}\) represents the remaining energy of node \(i\); and \(\overline{E}\) represents the average residual energy of the whole sensor network. The calculated threshold \(T(n)\) is compared with the random number \(Rand_{i}\) generated by each node. If \(Rand_{i} < T(n)\), the node is selected as a candidate CH.

Then, the BS broadcasts clustering information to the entire sensing area. After receiving the broadcast, each candidate CH determines its competitive radius using formulas (3)–(5), according to its distance to the BS and its residual energy. If the distance between adjacent candidate CHs is less than their competitive radii, the node with more energy is selected as the cluster head node. Once the clustering stage begins, each common node selects a suitable cluster according to the messages broadcast by the CHs. Taking into account the distance from the node to the BS, the distances between nodes, and the residual energy of the nodes, the fitness value of formula (6) is obtained for each ordinary node, and a node joins the cluster whose CH yields the maximum fitness value.
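A minimal sketch of the candidate election step of formula (7) follows. The node representation (dictionaries with an `energy` field and an `eligible` flag marking membership in G) is an assumption made for illustration.

```python
import random

def elect_candidates(nodes, p, r, avg_energy):
    """Candidate-CH election per formula (7): each node still in G draws
    a random number and compares it with the energy-weighted threshold."""
    candidates = []
    for node in nodes:
        if not node["eligible"]:  # node served as CH within the last 1/p rounds
            continue
        t = (p / (1 - p * (r % round(1 / p)))) * node["energy"] / avg_energy
        if random.random() < t:
            candidates.append(node)
    return candidates
```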
Conventional Hamilton loop problem
In 1857, the Irish mathematician Hamilton posed the famous Hamilton circuit problem, which arose when he wanted to design a plan for travelling around the world. Hamilton wanted to find a suitable path that starts from a beginning point, travels to all the countries in some order without visiting the same country twice, and finally returns to the starting point. From a mathematical point of view, the task is to design a loop that passes through each vertex (country) exactly once. Subsequently, this problem evolved into the travelling salesman problem (TSP), namely, the problem of minimizing a weighted (empower) Hamilton loop. It not only requires that the constructed path be a Hamilton loop, but also that the selected path have the minimum total distance among all such loops.

The classic Hamilton algorithm mainly finds a loop that can traverse all the CHs, so that the MA can start from the BS, pass through every required CH exactly once, and finally return to the starting point. Next, in order to reduce the total cost of the MA traversing these CHs, we need to optimize the loop.

In simple terms, this problem can be handled by the enumeration method, but the computational cost of that method is too large, reaching \((n-1)!\), so enumeration is not feasible in a complex network environment. Heuristic algorithms were later proposed to compute approximate solutions by exploiting features that optimal solutions should (or should not) possess, but the solutions obtained this way are not guaranteed to be optimal. We put forward a local optimization idea to optimize a known Hamilton loop.
Empower Hamilton loop optimization algorithm
Step 1: Take the BS as the starting point of the cycle. A Hamilton loop over the CHs near the BS is obtained using the classical algorithm.

Step 2: Cut across the line (in the formed Hamilton loop, the current CH is regarded as the starting point of the cut; among three mutually connected CHs, the intermediate node is isolated and the other two are connected directly), generating an isolated CH.

Step 3: Reconnect the isolated CH to the loop to form a new Hamilton loop according to the principle of minimum path energy consumption, where the energy consumption is measured by the weights between the cluster head nodes in formula (9).

Step 4: If the total weight of the path decreases, replace the old loop with the new loop, take the current CH as the new starting point of the cycle, and return to Step 2. Otherwise, move the starting point to the next CH and go to Step 5.

Step 5: Determine whether the cycle is complete, that is, whether the current cluster head node is the base station. If so, the algorithm terminates; otherwise, go to Step 2.
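To make the remove-and-reinsert procedure of Steps 2–4 concrete, here is a minimal Python sketch. It assumes an edge cost of \(\alpha \cdot d(i,j) + (1-\alpha) \cdot number(i)\) between consecutive CHs, loosely following the weight relation of formula (9); the data layout (coordinate and packet-size arrays, with the BS at index 0 of the tour) is an assumption made for illustration, not the paper's implementation.

```python
import math

def edge_cost(i, j, coords, pkt, alpha=0.5):
    """Assumed cost of the loop edge from CH i to CH j."""
    return alpha * math.dist(coords[i], coords[j]) + (1 - alpha) * pkt[i]

def loop_cost(tour, coords, pkt):
    """Total cost of a closed tour (Hamilton loop)."""
    return sum(edge_cost(tour[k], tour[(k + 1) % len(tour)], coords, pkt)
               for k in range(len(tour)))

def optimize_loop(tour, coords, pkt):
    """Steps 2-4: isolate one CH, bridge its neighbours, re-insert it at
    the cheapest position; repeat until no single move improves the loop.
    tour[0] is the BS and stays fixed."""
    improved = True
    while improved:
        improved = False
        for pos in range(1, len(tour)):
            node = tour[pos]
            rest = tour[:pos] + tour[pos + 1:]  # loop with 'node' isolated
            best = min((rest[:q] + [node] + rest[q:]
                        for q in range(1, len(rest) + 1)),
                       key=lambda t: loop_cost(t, coords, pkt))
            if loop_cost(best, coords, pkt) < loop_cost(tour, coords, pkt) - 1e-9:
                tour, improved = best, True
    return tour
```

Each pass of the loop realizes one application of Steps 2–4; the outer while-loop plays the role of Step 5, stopping once a full sweep over the CHs yields no improvement.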
To describe the algorithm above briefly, Fig. 3 provides an intuitive illustration of locally optimizing the moving path according to the empower Hamilton loop optimization algorithm.
The optimization of moving path
Data transmission phase
In this paper, we combine the PEGASIS algorithm and the Hamilton loop algorithm through a mixture of single-hop and multi-hop mechanisms and place a mobile agent (MA) node on the Hamilton loop; the MA is responsible for receiving and fusing data from the CHs on the path. Specifically, the cluster head nodes far away from the BS adopt the PEGASIS algorithm for multi-hop transmission, while the Hamilton loop algorithm plans the agent itinerary for the CHs relatively close to the BS. The distance division is decided mainly by the competitive radii of the cluster head nodes: usually, the farther a node is from the BS, the larger its competition radius. Therefore, we merely need to set a threshold \(T_{c}\), which is derived in formula (8). There, \(c\) is the radius adjustment coefficient, and the simulation results show that when \(c = 0.5\), the performance of the entire network is optimal.
$$T_{c} = c \cdot \frac{{\sum\nolimits_{i}^{m} {R_{i} } }}{m}$$
Furthermore, in Fig. 4, a–e represent the CHs of different clusters, the numbers represent the calculated packet sizes of the CHs, and d denotes the distance between two CHs. The weight relationship between CHs is determined by the distance between them and the sizes of their own packets; in the following work, we can find an optimal MA movement path through this weight relationship.
Weight relationship between CHs
The specific weight calculation formula is given in (9), where \(\alpha\) is a coordination coefficient, \(d(i,j)\) represents the distance between CHs \(i\) and \(j\), and \(number(i)\) denotes the packet size of CH \(i\).
$$Weight_{i} = \mathop {\hbox{max} }\limits_{j} \left[ {\frac{1}{\alpha \cdot d(i,j) + (1 - \alpha ) \cdot number(i)}} \right]$$
In the experimental part, we simulated the network environment in Matlab on the Microsoft Windows 7 Professional platform to analyze the performance of our proposal. For comparison, the LEACH-GA, CBRP and EEUC algorithms were used [22, 25]. Using the energy consumption model mentioned in this article, the measurement results of each algorithm for node energy consumption, packet forwarding rate and delay are presented. The values of the relevant experimental parameters are shown in Table 1.
Table 1 Simulation parameters
In Fig. 5, the network lifetime under the different algorithms is analyzed. Compared with the other algorithms, the proposed algorithm greatly extends the life cycle of the network, which is mainly reflected in the longer stable stage of the whole network and the steeper slope of node death. It is apparent that our proposed algorithm shows no node deaths until the number of rounds approaches 1000, whereas almost all nodes of the other algorithms have died within this time period, and the LEACH-GA network has already collapsed by round 724. Based on the idea of collecting data packets along an empower Hamilton loop, an optimal agent traversal route is planned. This not only extends the longevity of the nodes, but also balances energy consumption and alleviates the energy hole problem.
Network lifetime
As is clearly shown in Fig. 6, comparing the total energy consumption of the four algorithms, it is obvious that our proposed scheme is superior to the other algorithms in terms of both the stability period and the network lifetime. The gentler the slope, the less energy is consumed per round, indicating that our algorithm is more energy efficient than the others. Our proposed algorithm consumes approximately 10.5 joules of energy per 100 rounds, while the energy consumption of the other three algorithms is 11.3, 12.2 and 14.2 joules, respectively. As can be seen from the chart, the total energy of the network is exhausted when the number of rounds reaches 1200, while the other algorithms have network lifetimes of less than 1000 rounds. In short, the less energy consumed, the longer the lifetime of the entire network.
The average network transmission delay is another important criterion for evaluating network performance: the average time between the start of propagating a data packet and its arrival at the node required to receive it [31, 32]. The calculation of the average delay follows formulas (10)–(12). From the simulation results in Fig. 7, the LEACH-GA algorithm and the proposed algorithm have relatively low delay, and the delay differences between our algorithm and the CBRP and LEACH-GA algorithms are about 80 ms and 15 ms, respectively. Apparently, our algorithm is an ideal compromise: it combines the advantages of LEACH-GA and CBRP in alleviating energy consumption and reducing transmission delay, respectively.
$$T_{averageround} = \frac{{T_{Networklifetime} }}{{Round_{{\rm max} } }}$$
$$Latency_{nodes} = \sum\limits_{i}^{n} {(T_{receive}^{{S_{i} }} - T_{send}^{{S_{i} }} )}$$
$$AverageLatency = \frac{{Latency_{nodes} }}{{T_{averageround} }}$$
In formula (10), we calculate the average duration of one round. The symbol \(Round_{{\rm max} }\) represents the maximum number of rounds in the network life cycle, and \(T_{send}^{{S_{i} }}\) and \(T_{receive}^{{S_{i} }}\) are the times at which node \(S_{i}\) sends and receives a packet, respectively.
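Formulas (10)–(12) translate directly into code; a minimal sketch follows, where per-node send/receive timestamp lists are an assumed input format.

```python
def average_latency(t_send, t_receive, network_lifetime, max_rounds):
    """Average latency following formulas (10)-(12): the summed per-node
    (receive - send) delays, normalised by the average round duration."""
    t_average_round = network_lifetime / max_rounds                # formula (10)
    latency_nodes = sum(r - s for s, r in zip(t_send, t_receive))  # formula (11)
    return latency_nodes / t_average_round                         # formula (12)
```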
Transmission delay
In this paper, an empower-Hamilton-loop-based data collection algorithm using a mobile agent in uneven clustering for WSNs is proposed. With the aim of alleviating the energy hole problem, we adopt a non-uniform clustering method, which mitigates the transmission load of the nodes in the vicinity of the base station. Furthermore, to further balance and decrease the resource expenditure of the entire network, we combine the PEGASIS algorithm with the Hamilton loop algorithm, adopt a mixture of single-hop and multi-hop mechanisms, and place the MA on the optimal Hamilton moving loop; the MA is responsible for aggregating and fusing the data packets from the CHs on the loop. Network performance analysis results show that the proposed routing algorithm can effectively prolong the network life cycle, equalize energy consumption and reduce network delay. Although the delay of the LEACH-GA algorithm is slightly lower than that of our algorithm, it consumes the most energy per round, so its life cycle is short. The energy-saving effect of our algorithm is better than that of the CBRP and EEUC algorithms, and those two algorithms are also not ideal in terms of transmission latency. Therefore, the proposed algorithm is more reliable and effective than the other protocols. In the future, we intend to adjust the number of MAs and change the value of the benchmark competition radius to specifically analyze the influence of these variables on network performance.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Akyildiz IF, Su W, Sankarasubramaniam Y et al (2002) Wireless sensor networks: a survey. Comput Netw 38(4):393–422
Chen M, Gonzalez S, Vasilakos A, Cao H et al (2011) Body area networks: a survey. Mobile Netw Appl 16(2):171–193
Bhuiyan MZA, Wang G, Cao J (2015) Deploying wireless sensor networks with fault-tolerance for structural health monitoring. IEEE Trans Comput 64(2):382–395
Wang J, Cao J, Ji S et al (2017) Energy efficient cluster-based dynamic routes adjustment approach for wireless sensor networks with mobile sinks. J Supercomput 73(7):3277–3290
Khan JA, Qureshi HK, Iqbal A (2015) Energy management in wireless sensor networks: a survey. Comput Electr Eng 41(1):159–176
Tu Y, Lin Y, Wang J, Kim JU (2018) Learning with generative adversarial networks on digital signal modulation classification. Comput Mater Con 55(2):243–254
Pantazis NA, Nikolidakis SA, Vergados D (2013) Energy-efficient routing protocols in wireless sensor networks: a survey. IEEE Commun Surv Tut 15(2):551–591
Kanavalli A, Sserubiri D, Shenoy PD et al (2010) A flat routing protocol for sensor networks. In: Proceeding of international conference on methods and models in computer science, IEEE, pp 1–5
Salman HM (2014) Survey of routing protocols in wireless sensor networks. Int J Sens Sens Netw 2(1):1–6
Al-Karaki JN, Kamal AE (2004) Routing techniques in wireless sensor networks: a survey. IEEE Wirel Commun 11(6):6–28
Singh SK, Singh MP, Singh DK (2010) A survey of energy-efficient hierarchical cluster based routing in wireless sensor networks. Int J Adv Netw Appl 2(2):570–580
Yin C, Xi J, Sun R, Wang J (2018) Location privacy protection based on differential privacy strategy for big data in industrial internet of things. IEEE Trans Industr Inf 14(8):3628–3636
Guiloufi ABF, Nasri N, Kachouri A (2016) An energy-efficient unequal clustering algorithm using 'sierpinski triangle' for WSNs. Wireless Pers Commun 88(3):449–465
Wang J, Cao J, Sherratt JR, Park JH (2017) An improved ant colony optimization based approach with mobile sink for wireless sensor networks. J Supercomput. https://doi.org/10.1007/s11227-017-2115-6
Venetis IE, Pantziou G, Gavalas D, et al (2014) Benchmarking mobile agent itinerary planning algorithms for data aggregation on WSNs. In: International conference on ubiquitous & future networks, IEEE. pp 105–110
Wang J, Ju C, Kim H et al (2017) A mobile assist coverage hole patching scheme based on particle swarm optimization for wsNs. Cluster Comput. https://doi.org/10.1007/s10586-017-1586-9
Wang J, Ju C, Gao Y et al (2018) A PSO based energy efficient coverage control algorithm for wireless sensor networks. Comput Mater Con 56(3):433–446
Wang J, Cao Y, Li B, Kim H et al (2017) Particle swarm optimization based clustering algorithm with mobile sink for WSNs. Future Gener Comput Syst 76:452–457
Tseng YC, Kuo SP, Lee HW et al (2004) Location tracking in a wireless sensor network by mobile agents and its data fusion strategies. Info Process Sens Netw 47(4):448–460
Heinzelman WR, Sinha A, Wang A et al (2000) Energy-scalable algorithm and protocol for wireless micro sensor networks, acoustics, speech and signal processing. Proc IEEE Int Conf 6:3722–3725
Tirkolaee EB, Hosseinabadi AAR, Soltani M et al (2018) A hybrid genetic algorithm for multi-trip green capacitated arc routing problem in the scope of urban services. Sustainability 10(5):1366
Lindsey S, Raghavendra CS (2003) PEGASIS: power-efficient gathering in sensor information systems. Aerospace Conf Proc 42(3):1125–1130
Zhao M, Ma M, Yang Y (2014) Efficient data gathering with mobile collectors and space-division multiple access technique in wireless sensor networks. Comput IEEE Transact 18(3):400–417
Zeng D, Dai Y, Li F et al (2018) Adversarial learning for distant supervised relation extraction. Comput Mater Con 55(1):121–136
Li CF, Ye M, Chen GH, et al (2005) An energy-efficient unequal clustering mechanism for wireless sensor networks. In: Protocol of the IEEE, international conference on mobile ad-hoc and sensor systems, pp. 597–604
Kamble SP, Thakare NM (2014) A novel cluster-based energy efficient routing with hybrid, protocol in wireless sensor networks. Int J Eng Res Appl 4(8):113–117
Yao J, Zhang K, Yang Y et al (2017) Emergency vehicle route oriented signal coordinated control model with two-level programming. Soft Comput 1:1–12
Yin C, Zhang S, Yin Z, Wang J (2017) Anomaly detection model based on data stream clustering. Cluster Comput. https://doi.org/10.1007/s10586-017-1066-2
Chen M, Kwon T, Yuan Y et al (2007) Mobile agent-based directed diffusion in wireless sensor networks. Eurasip J Appl Signal Processing 1:219
Rhim H, Tamine K, Abassi R et al (2018) A multi-hop graph-based approach for an energy-efficient routing protocol in wireless sensor networks. Hum Cent Comput Info Sci 8(1):30
Alazzawi L, Elkateeb A (2008) Performance evaluation of the wsn routing protocols scalability. J Comput Netw Commun 1:1–9
Ullah F, Abdullah AH, Kaiwartya O et al (2017) Medium access control (MAC) for wireless body area network (WBAN): superframe structure, multiple access technique, taxonomy, and challenges. Hum Cent Comput Info Sci 7(1):34
This work is supported by the National Natural Science Foundation of China (61772454, 61811530332, 61811540410). Professor Hye-Jin Kim is the corresponding author.
Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China
Jin Wang
School of Information Science and Engineering, Fujian University of Technology, Fujian, China
College of Information Engineering, Yangzhou University, Yangzhou, China
Xiujian Gu
& Wei Liu
School of Computing Science and Engineering, Vellore Institute of Technology (VIT), Vellore, India
Arun Kumar Sangaiah
Business Administration Research Institute, Sungshin W. University, Seoul, South Korea
Hye-Jin Kim
JW and HJK conceived and designed the experiments; XG and WL performed the experiments; AKS analyzed the data and helped to revise this paper, HJK advised on the simulation settings and JW wrote this paper. All authors read and approved the final manuscript.
Correspondence to Hye-Jin Kim.
Hamilton loop
Optimization algorithm
Big data, IoT, and Cloud Computing for Human-centric Computing
Economic estimation of Bitcoin mining's climate damages demonstrates closer resemblance to digital crude than digital gold
Benjamin A. Jones1,
Andrew L. Goodkind1 &
Robert P. Berrens1
Energy and society
This paper provides economic estimates of the energy-related climate damages of mining Bitcoin (BTC), the dominant proof-of-work cryptocurrency. We provide three sustainability criteria for signaling when the climate damages may be unsustainable. BTC mining fails all three. We find that for 2016–2021: (i) per coin climate damages from BTC were increasing, rather than decreasing with industry maturation; (ii) during certain time periods, BTC climate damages exceed the price of each coin created; (iii) on average, each $1 in BTC market value created was responsible for $0.35 in global climate damages, which as a share of market value is in the range between beef production and crude oil burned as gasoline, and an order-of-magnitude higher than wind and solar power. Taken together, these results represent a set of sustainability red flags. While proponents have offered BTC as representing "digital gold," from a climate damages perspective it operates more like "digital crude".
Given rapidly developing blockchain technology and the use of encryption and decentralized, permission-less public ledgers, today's evolving internet has allowed the emergence of various digitally scarce goods1. This digital economy includes nonfungible assets like tokens for various digital media2, as well as fungible, divisible assets like the several thousand cryptocurrencies supported by hundreds of exchange platforms3. Select digitally scarce goods use production schemes with intensive energy use4,5. These include several prominent cryptocurrencies (e.g., Bitcoin, Ether), which to-date are based on highly energy-intensive, competitive tournament-style production schemes known as proof-of-work (POW) mining for providing the encrypted validation in decentralized public ledgers6,7.
POW-based cryptocurrencies are a slice of the larger set of blockchain technologies that have disruptively entered global marketplaces over the last decade or more8. The production of cryptocurrencies has been relatively decentralized and largely unregulated as they have first gained a foothold and then occupied a larger space9. Cryptocurrencies are priced and traded in markets, but often exhibit considerable volatility10, and financial anomalies like speculative bubbles11, or evidence of price manipulation12,13. Yet, various proponents argue that such innovations provide significant value or are especially needed in the developing world (e.g., from providing sustainable new financial goods or mediums of exchange to the underserved14, investment diversification15, or routes around government corruption16). Others question the benefit of such disruptions, and especially so if the new technologies (e.g., POW-type technologies) have intensive energy use, with potentially large social costs from associated carbon emissions17,18. Potentially, there may be significant room for learning19 and moving to alternative production pathways that use significantly less energy, while still providing the purported benefits20. However, achieving net reductions in energy use is inherently challenging, due to redundancies (e.g., number of nodes involved, or the workload of operations) in all types of blockchain technology21. Against this backdrop and within broader efforts to mitigate climate change, the policy challenge is creating governance mechanisms for an emergent, decentralized industry, which includes energy-intensive POW cryptocurrencies22,23. Such efforts would be aided by measurable, empirical signals concerning potentially unsustainable climate damages.
Taking Bitcoin (BTC) as our focus, this analysis estimates climate damages of mining coins and explores several criteria for signaling when these damages might be unsustainable. First, the trend of estimated climate damages per BTC mined should not be increasing, as the industry matures. Second, per BTC mined, its market price should always exceed its estimated climate damages; i.e., BTC mining should not be "underwater" wherein per unit climate damages are greater than coin market prices for any appreciable period. Third, to contextualize the sustainability of BTC over some chosen time frame, estimated climate damages per coin mined should favorably compare to some reference percentage benchmark of the climate damages per unit market value of other sectors and commodities; e.g., ones that we regulate or consider unsustainable. We offer these measurable criteria for consideration as "red flags" of incipient climate damage from an emerging industry. They signal the need for change (e.g., production alternatives). Absent such change, it may be time to forgo a "business-as-usual" approach and consider collective action (e.g., increased regulation).
Energy use for mining cryptocurrencies
The proof-of-work (POW) blockchain technology used by Bitcoin (BTC) is energy intensive5,24. For context, BTC is a cryptocurrency with a decentralized open-source blockchain whose public ledger began in 200925 and is transacted peer-to-peer without any central authority (e.g., bank or government). Through December 2021, BTC had an approximately $960 billion (US$) market capitalization, and a roughly 41% global market share among all cryptocurrencies26.
POW blockchain technology is energy intensive because new blocks are added to the blockchain through a competitive consensus-driven verification process carried out by individual or pools of "miners." Miners verify transactions occurring on the blockchain and compete simultaneously to correctly provide a unique transaction identifier, or "hash," for a block27. Miners who are first to verify a given number of transactions and to provide the correct hash identifier are rewarded with new cryptocurrency and a new block is added to the chain28.
Providing the correct hash identifier employs enormous amounts of energy due to the decentralized production process, which encourages competition and creates a "winner-take-all" game27. As miners across the globe compete, as quickly as possible, to add new blocks to the chain (i.e., by generating guesses of the target hash identifier ["hash rate"]), they employ highly specialized computer equipment and machinery (known as "mining rigs") that uses significant amounts of electricity to operate competitively4. As miners compete with ever more computing power (e.g., as more miners participate in the network, or, as more efficient mining rigs are employed, or both), the overall network hash rate increases, endogenously raising the computational difficulty required to correctly guess the target hash, thereby increasing the overall energy use of mining activity29.
Bitcoin's global electricity usage
Using network hash rate data from January 2016 through December 2021 and data on mining equipment power consumption and efficiency5,30, Fig. 1 presents global electricity usage of mining BTC and prices per coin. On the basis of these estimates, in 2020 BTC mining used 75.4 TWh yr−1 of electricity, which is more energy than used by Austria (69.9 TWh yr−1 in 2020) or Portugal (48.4 TWh yr−1 in 2020)31. There is a general upward time trend in BTC electricity use and a close correlation between BTC prices and mining energy usage. The decline in BTC exchange prices and mining energy use in the summer of 2021 is likely due in part to China's banning of financial institutions and payment companies from providing cryptocurrency-related transactions32.
Global 7-days averaged daily electricity usage of mining activity (right axis) and coin exchange price in US$ (left axis) for Bitcoin (BTC). Data from January 1, 2016 to December 31, 2021 shown. Electricity usage is calculated based on network hash rate data downloaded from Blockchain Charts (https://www.blockchain.com/charts) and mining rig efficiency (see Methods section). Prices downloaded from Yahoo! Finance (https://finance.yahoo.com/cryptocurrencies/). All network hash rate and price data are supplied in the Supplementary Data.
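To illustrate how hash rate and rig efficiency combine into such an electricity estimate, here is a back-of-the-envelope sketch; the efficiency value used below is an assumed illustrative figure, not the paper's fleet estimate, which is derived from the mining-rig data of refs. 5,30.

```python
def btc_daily_energy_twh(hashrate_eh_per_s, efficiency_j_per_th):
    """Rough daily electricity use of the BTC network.
    hashrate_eh_per_s: network hash rate in EH/s (10^18 hashes per second)
    efficiency_j_per_th: assumed fleet-average rig efficiency in joules
    per terahash. Returns energy in TWh per day."""
    hashes_per_day = hashrate_eh_per_s * 1e18 * 86_400
    joules = hashes_per_day * efficiency_j_per_th / 1e12  # J/TH -> J per hash
    return joules / 3.6e15                                # 1 TWh = 3.6e15 J

# For example, ~150 EH/s at an assumed 50 J/TH:
print(btc_daily_energy_twh(150, 50) * 365)  # ~66 TWh/yr, the right order of magnitude
```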
Estimates from Cambridge University suggest the majority of electricity used to mine POW cryptocurrencies comes from coal and natural gas, though hydropower use was likely prominent in China until cryptocurrency mining was banned there32,33. Globally, it is estimated that 39% of POW mining is powered by renewable energy, meaning that non-renewables, such as fossil fuels, power the majority (~ 61%)33. Due to its considerable fossil fuel energy use, cryptocurrency mining contributes to global carbon emissions30,34 with associated environmental damages35. Goodkind et al.29 estimated that in 2018 each $1 (US$) of BTC market value created through mining was associated with $0.49 (US$) in combined health and climate damages in the US and $0.37 (US$) in China. Krause and Tolaymat5 estimated that BTC, Ether, Litecoin, and Monero coins were responsible for 3–15 million tonnes of CO2 emissions over January 2016 to June 2018. For comparison, in 2018, similar amounts of CO2 were emitted from Afghanistan (7.44 million tonnes), Slovenia (14.1 million tonnes), and Uruguay (6.52 million tonnes)36.
Climate damages associated with bitcoin mining
As mining efforts have increased over time, we estimate steeply increasing CO2e (carbon dioxide equivalent) emissions per coin created. Using a global estimate of the location of BTC miners and the local electricity mix, and regional CO2e emission coefficients by generation type37, a BTC mined in 2021 is responsible for emitting 126 times the CO2e as a BTC mined in 2016—increasing from 0.9 to 113 tonnes (t) CO2e per coin from 2016 to 2021 (Fig. 2A).
Global estimates of Bitcoin (BTC) mining's climate damages, CO2e emissions, and climate damages as a share of coin price. (A) Estimated climate damages ($/coin mined) and CO2e emissions (t/coin mined; bar chart) of BTC. A non-linear trend line has been fit to the damages per coin data to illustrate time trends (dotted line). (B) Climate damages as a share of the coin's price for BTC. Values displayed are the 7-days running average. Climate damages per coin mined in (A) were divided by the daily market price of the coin and multiplied by 100 to put into percentage terms for calculation in (B). $100 t−1 damage coefficient used for CO2e emissions based on ranges in the peer-reviewed literature. Damages are in US$. Estimates span January 1, 2016 to December 31, 2021. See the Supplementary Data for emissions factors used and the climate damages data.
With increasing CO2e emissions per coin created, climate damages of producing BTC increased over time (Fig. 2A). Using a $100 t−1 damage coefficient for CO2e emissions (dollar values in US dollars (US$) unless otherwise noted), commonly referred to as the social cost of carbon (SCC), each BTC created in 2021 resulted in $11,314 in climate damages, on average, with total global damages of all coins mined in 2021 exceeding $3.7 billion. Between 2016 and 2021, total global BTC climate damages are estimated at $12 billion. With rapid price increases in BTC at the end of 2020, climate damages of mining represented 25% of market prices for 2021 (Fig. 2B). This percentage is useful to normalize the scale of externalities to the market price of the product. We offer two potential ranges of concern in Fig. 2B—when the climate damages as a share of the coin price are between 50 and 100% (shown in amber), and when they are > 100% (shown in red). The former would be above those found on average in Goodkind et al.29, while the latter represents times when BTC was "underwater" on a per coin basis (i.e., climate damages exceeding the coin's market price). With much lower prices in 2019 and 2020, BTC climate damages were 64% of market price, on average. For more than one-third of the days in 2020, BTC climate damages exceeded the price of the coins sold. Damages peaked at 156% of coin price in May 2020, suggesting each $1 of BTC market value created in that month was responsible for $1.56 in global climate damages.
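The per-coin damage arithmetic is simple enough to sketch; the coin price used in the example below is an illustrative assumption, not a figure from the paper.

```python
def climate_damages_per_coin(co2e_tonnes_per_coin, scc_usd_per_tonne=100.0):
    """Climate damages of one mined coin: emissions times the social cost
    of carbon (the paper's baseline SCC is $100 per tonne CO2e)."""
    return co2e_tonnes_per_coin * scc_usd_per_tonne

def damage_share_of_price(damages_usd, coin_price_usd):
    """Damages as a percentage of market price; above 100% means mining
    is 'underwater'."""
    return 100.0 * damages_usd / coin_price_usd

d = climate_damages_per_coin(113)           # ~113 tCO2e per 2021 coin -> ~$11,300
print(d, damage_share_of_price(d, 45_000))  # ~25% at an assumed $45k coin price
```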
By our first sustainability criterion that "the trend of the estimated climate damages per BTC mined should not be increasing, as the industry matures," BTC fails. There is a clear upward trajectory in per coin estimated climate damages, as seen from the non-linear trend line in Fig. 2A. Rather than declining as the industry matures, each new BTC coin mined is, on average, associated with increasing climate damages.
BTC also fails our second sustainability criterion that "per BTC mined, its market price should always exceed its estimated climate damages." From Fig. 2B, at multiple periods of time in 2020, BTC climate damages as a share of the coin's price were greater than 100% (areas indicated in red). BTC was "underwater" at these intervals, meaning that each coin mined produced climate damages exceeding the market price of the coin. Over 2016–2021, BTC was underwater on 6.4% of days, and the damages exceeded 50% of coin price on 30.6% of days.
What if the social cost of carbon is varied?
One key parameter where we assume a range of values from available evidence is the SCC. For our baseline estimate, we follow Pindyck38 in choosing $100 t⁻¹. SCC is the estimated present value of monetary damages from emitting an additional tonne of carbon today and monetizes the negative social externalities of carbon emissions38. From a policy and regulatory perspective, SCC is a key parameter for evaluating the social costs (i.e., those not considered in the market price) of an energy-intensive product or service. Carleton and Greenstone39 note the central role of the United States (US) Government's official SCC estimate in both domestic US and international climate policy. SCC estimation has an extensive history in economics40,41,42, and such values are widely used39.
However, while analyses that use SCC estimates must make assumptions on its value or range, there is no consensus38. There is a growing literature on both estimating the SCC and modeling the optimal SCC for pricing the externality43. The current US Government estimate of the SCC is $51 t⁻¹ CO2e in 2020 inflation-adjusted dollars44. However, President Biden's Executive Order 13990 (January 20, 2021) directed an updating of this value45.
Even a select review of recent SCC estimation studies encompasses a broad range of values38,40,43. Depending on varying assumptions and approaches, recent empirical studies can easily support a range of values around our baseline SCC coefficient of $100 t⁻¹ CO2e, extending $50 t⁻¹ on either side. Thus, to represent some of this variability we use two alternative SCC values to augment the $100 t⁻¹ baseline: (i) $50 t⁻¹ CO2e (essentially equivalent to the 2020 value of the 2010 US Government estimate), and (ii) $150 t⁻¹ CO2e.
We re-estimate climate damages of BTC using these alternative SCC values (Supplementary Table 1). The high and low values of the SCC adjust the estimated climate damages proportionally to the baseline value of $100 t⁻¹ CO2e, and greatly impact the magnitude of the estimated damages. At $150 t⁻¹ CO2e, BTC climate damages per coin mined averaged $4632 over 2016–2021, compared to $1544 at $50 t⁻¹ CO2e and $3088 at $100 t⁻¹ CO2e from the results in Fig. 2A. With the high SCC, BTC was underwater (damages exceeding coin price) on 17% of days between 2016 and 2021 (69% of days in 2020), whereas with the low SCC BTC was never underwater. Regardless of SCC value, climate damages of BTC mining increased substantially from 2016 to 2021, with a continuing upward trajectory.
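Because damages per coin are linear in the SCC (see Eq. (5) in the Methods), the sensitivity runs simply rescale the baseline; a minimal sketch, using the average per-coin emissions implied by the reported $3088 baseline:

```python
# Damages per coin scale linearly with the SCC (Eq. (5)): damages = emissions x SCC.
# Average per-coin emissions implied by the $3088 baseline at $100/t:
emissions_per_coin_t = 3088.0 / 100.0  # ~30.9 t CO2e per coin, 2016-2021 average

for scc in (50.0, 100.0, 150.0):
    print(f"SCC ${scc:.0f}/t CO2e -> ${emissions_per_coin_t * scc:,.0f} per coin")
# -> $1,544 / $3,088 / $4,632, matching the Supplementary Table 1 averages
```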
What if mining used more renewable energy?
The CO2e emission estimates and climate damages depend, critically, on assumptions of the share of renewable electricity sources used in cryptocurrency mining. Due to the decentralized and anonymized nature of cryptocurrency mining, determining actual energy sources is a challenge and no primary data sources exist30. This has led to a range of estimates in the literature. Prior work suggests the share of renewables (e.g., solar, wind, hydropower) used by POW mining processes may vary considerably, from 25.1% of mining's total electricity use37, to 39%33 and even up to 73%46. Some of the differences in estimates are due to the time periods studied. China, once a large source of global Bitcoin mining that likely used significant amounts of renewable hydropower30, banned all cryptocurrency mining in 202132. This appears to have drastically altered the global share of renewables used by Bitcoin miners, resulting in an increased use of fossil fuels37. Thus, renewable share estimates before and after the China ban would be expected to be different, and perhaps considerably so. Other differences, such as the methods used to locate miners, assumptions on mining rig efficiency and cooling needs, and assumptions on electricity sources can also drive differences in the range of estimates found in prior work30,37.
Given the large ranges found, we expand our analysis with an alternative, higher renewable electricity scenario. In this scenario, we increase the share of renewable generation used to mine cryptocurrencies from the baseline of 38.5% (plus 5.2% nuclear power) to a scenario with 50% more renewables (57.8% in total, plus 5.2% nuclear). This represents a hypothetical situation in which cryptocurrency miners use substantially more renewables than the baseline and obtain a large majority (63%) of their electricity from directly carbon-free sources (renewables and nuclear combined).
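One way to implement such a scenario is to reweight the generation mix and recompute the weighted-average emission factor. The sketch below uses illustrative per-source emission coefficients (not the regional factors of ref.37); only the mix shares come from the text:

```python
# Reweight the electricity mix for the higher-renewables scenario and
# recompute the grid emission factor (kg CO2e/kWh). The per-source
# coefficients here are illustrative placeholders.
ef_by_source = {"renewables": 0.03, "nuclear": 0.01, "fossil": 0.80}

baseline = {"renewables": 0.385, "nuclear": 0.052, "fossil": 0.563}
scenario = {"renewables": 0.578, "nuclear": 0.052, "fossil": 0.370}

def weighted_ef(mix):
    # Weighted average of source-specific emission factors over mix shares.
    return sum(share * ef_by_source[src] for src, share in mix.items())

print(f"baseline EF: {weighted_ef(baseline):.3f} kg CO2e/kWh")
print(f"scenario EF: {weighted_ef(scenario):.3f} kg CO2e/kWh")
```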
Compared to the baseline renewable share, increasing the use of renewables in BTC mining reduces associated climate damages per coin mined (Supplementary Table 2). With a 50% increase in the renewable share, BTC climate damages are approximately two-thirds of the baseline magnitude. Yet, even in this high-renewable scenario the climate damages still average 23% of the coin's price (2016–2021), despite miners drawing only 37% of their electricity from fossil fuels. Thus, even if BTC miners obtained the majority of their electricity from renewables and directly carbon-free sources, there would still be large and growing climate damages.
Comparison to other commodities
Recall from Fig. 2B that the ratio of BTC climate damages to coin price declined from 2020 to 2021. This does not necessarily imply that the POW mining process is sustainable. To contextualize these ratios, we make climate damage comparisons against other relevant commodities and economic products: (i) electricity generation by source (hydropower, wind, solar, nuclear, natural gas, and coal); (ii) crude oil processed and burned as gasoline; (iii) automobile use and manufacturing (sport utility vehicles (SUVs) and mid-sized sedans); (iv) agricultural meat production (chicken, pork, and beef); and (v) precious metals mining (rare earth oxides (REOs), copper, platinum group metals (PGMs), and gold). Figure 3 shows climate damages per unit market price (% of price) for BTC compared to lifecycle climate damages of these 16 other commodities.
Bitcoin (BTC) mining's climate damages as a share of coin market price (2016–2021), compared with full lifecycle analysis climate damages as a share of market price for other commodities (for a single year). Damages are expressed in percentage terms (% of market price). BTC climate damages only include energy use and emissions from running mining rigs, and do not include climate damages associated with cooling and manufacturing of mining rigs or other potential sources of carbon equivalent emissions. This makes estimated BTC damages a lower bound compared to the full lifecycle damages for the other commodities shown. Climate damages for the other commodities and economic products shown are calculated using lifecycle estimates from the peer-reviewed literature and US government agencies combined with publicly available price data. All commodity prices and lifecycle climate damage data are in the Supplementary Data.
Climate damages of BTC averaged 35% of its market value (2016–2021), and 58% (2020–2021). This places BTC in the category of energy-intensive or heavily polluting commodities such as beef production, natural gas electricity generation, or gasoline from crude oil, and makes it substantially more damaging than commodities generally considered more sustainable, such as chicken and pork production and renewable electricity sources like solar and wind. For solar and wind specifically, their full lifecycle climate damages as a share of their market prices are an order of magnitude below those of BTC over 2016–2021. BTC mining also generates climate damages per unit price that are an order of magnitude above those generated from the mining of precious metals such as gold, copper, PGMs, and REOs, which all average < 10% per unit market value compared to BTC's 35% average over 2016–2021. For the specific case of gold, which is considered by some to be an important store of value and a hedge against volatility in stocks, bonds, and the US dollar47, BTC's climate damages are a relative outlier. As a share of gold's market price, gold's climate damages average 4%; BTC's 2016–2021 average climate damages are 8.75 times greater.
Given the high share of climate damages to BTC market price, we ask: "What utilization share of renewable electricity sources would make BTC production similar in climate damage impact to more sustainable commodities?" Our results suggest that if the share of renewable electricity sources for 2016–2021 increased from 38.5 to 88.4% (with additional 5.2% from nuclear)—a 129% increase—the climate damages as a share of coin price for BTC would drop from 35 to 4.0%; similar in magnitude to the climate damages of solar power or gold.
Absent such an extreme increase in the share of renewable electricity used in mining, BTC's climate damages will remain an outlier compared to more sustainable commodities. Thus, BTC mining presently fails our third sustainability criterion that "estimated climate damages per coin mined should favorably compare to some reference percentage benchmark of the climate damages per unit market value of other sectors and commodities." Though not as climate damaging as coal electricity generation, BTC mining generates similar damages as gasoline, natural gas generation and beef production, as a share of market prices; none of which would generally be considered sustainable48,49.
Digitally scarce goods are likely here to stay, and will bring innovation to a variety of economic dimensions, generating value to people. It is important to separate this broader context from the elements of this digital economy that may have particularly significant sustainability and climate concerns (see President Biden's March 2022 Executive Order on cryptocurrencies for the US50). Our focus is on the dominant cryptocurrency, BTC, which uses a highly energy-intensive, competitive POW mining scheme. While society and nations weigh the benefits and costs of various digitally scarce goods, we provide an empirical approach for evaluating BTC sustainability concerns.
We find that for 2016–2021: (i) per coin climate damages from BTC were increasing; (ii) as a share of its market price, BTC climate damages were underwater 6.4% of days, and damages exceeded 50% of the coin price 30.6% of days; and (iii) the average BTC climate damage share was 35% over the period, which falls in the range between beef production and gasoline consumption (as processed from crude oil), but is less than coal electricity generation. BTC's climate damages per unit market price are roughly an order-of-magnitude higher than wind and solar generation; i.e., it is operating far above any renewable benchmark that might be offered. Taken together, the results represent a set of red flags for any consideration as a sustainable sector (investment or otherwise). While proponents regularly offer BTC as representing a kind of "digital gold"51,52, from a climate damages perspective BTC operates more like "digital crude."
There are a number of important caveats about our offered criteria. First, as to our second criterion, the meaningfulness of our "underwater" benchmark (that per coin climate damages as a share of market price should not exceed 100%) could be called into question. This exceedance occurs on 6.4% of days in the study period for BTC. While this might be a clear alarm threshold, might it be too weak? Why not 50%, or even staying below 25%? To help consider this, we turn to our third criterion, where we make comparisons to other commodities and sectors. In doing so, staying under a 10% share for an emergent technology might be a preferable sustainability criterion; that level was exceeded by BTC on 96% of the days in our study.
We highlight that for our comparison commodities, the shares all represent full lifecycle damage estimates, but not for BTC. Thus, BTC shares are deflated in this initial research, ignoring carbon emissions from cooling of mining rigs, rig manufacturing, electronic waste, building construction, etc., for which only very preliminary impact estimates are emerging in the literature35. A further caveat, with respect to our second and third criteria, relates to accumulating evidence that some cryptocurrency prices may be inflated by significant speculation, and even manipulation (referred to as "crypto wash trading")13. Naturally, an inflated price will artificially decrease the estimated climate-damages-to-price ratio. To the extent that artificial price inflation is occurring, the damage ratio with an unmanipulated price may be higher than those presented here. Finally, we have focused strictly on climate damages, but many technology assessments also include health damages from emissions. Thus, for several reasons our sustainability evaluations for BTC are highly conservative.
While not the focus of this paper, an alternative cryptocurrency production process to POW, known as proof-of-stake (POS), could be used to lower the energy use of cryptocurrency mining. POS works by requiring validators to hold and stake coins, with the next block writer on the blockchain selected at random, with higher odds assigned to those with larger stake positions53. POS, by relying on randomization and validation sharing, does not require significant computational power and therefore uses a fraction of the electricity of POW mining. Ethereum, the second largest cryptocurrency by market capitalization26, is scheduled to switch from POW to POS sometime in 2022, lowering its estimated energy use by 99.95%54. If Bitcoin, the dominant global cryptocurrency, could also switch from POW to POS, its energy use, and, by extension, its climate damages estimated in this work, would likely become negligible. However, the likelihood of BTC switching to POS seems low at present55.
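The selection mechanism can be illustrated with a few lines of stake-weighted random sampling; this is a minimal sketch of the general idea only, not any specific chain's protocol, and the validator names and stakes are hypothetical:

```python
import random

# Stake-weighted block-writer selection: higher stake -> proportionally
# higher odds of being chosen. Names and stakes are hypothetical.
stakes = {"validator_a": 100.0, "validator_b": 400.0, "validator_c": 500.0}

def pick_block_writer(stakes: dict, rng: random.Random) -> str:
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

rng = random.Random(42)
picks = [pick_block_writer(stakes, rng) for _ in range(10000)]
print({n: picks.count(n) / len(picks) for n in stakes})  # ~0.1 / 0.4 / 0.5
```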
There is no shortage of advocates for digitally scarce goods and the innovation they offer. Even in the pages of Nature Climate Change, Howson20 argues: "Remaining overly fixated on the inefficiency of some cryptocurrencies is likely to encourage throwing the blockchain baby out with Bitcoin's bathwater." But the danger of path dependence and technological lock-in with an emergent industry56,57 supports the argument that POW-based cryptocurrencies, which dominate market share, do indeed merit special attention. Our counterfactuals show that extreme changes (e.g., in the renewable mix) would be required to make BTC sustainable. POW-based cryptocurrencies are on an unsustainable path. If the industry does not shift its production path away from POW, or move toward POS, then this class of digitally scarce goods may need to be regulated, and delay will likely lead to increasing global climate damages.
Climate damages of Bitcoin mining
Estimates of climate damages from Bitcoin mining follow methods described in the existing literature in this space5,29. The primary estimate of interest is electricity consumption per BTC coin mined (in kWh per coin), as derived from the daily network hash rate of the BTC blockchain58; this is the number of calculations on the network in gigahashes per second (GH/s). Using an estimate of average efficiency of BTC mining rigs, in joules (J) per GH, we calculated total electricity consumption (in kWh/day) of the network in Eq. (1), after converting J/s to kilowatts (kW) and multiplying by 24 h per day:
$$\text{electricity consumption}\left(\frac{\text{kWh}}{\text{day}}\right) = \text{hash rate}\left(\frac{\text{GH}}{\text{s}}\right) \times \text{efficiency}\left(\frac{\text{J}}{\text{GH}}\right) \times \left(\frac{\text{s}}{\text{J}} \cdot \frac{\text{kW}}{1000} \cdot \frac{24\ \text{h}}{\text{day}}\right). \tag{1}$$
We calculated total BTC coins mined per day in Eq. (2) using average time in minutes for a block to be added to the blockchain per day59 and the miner reward in BTC coins per block:
$$\text{coins per day} = \text{reward}\left(\frac{\text{coins}}{\text{block}}\right) \times \text{block time}\left(\frac{\text{blocks}}{\text{min}}\right) \times \left(\frac{24\ \text{h}}{\text{day}} \cdot \frac{60\ \text{min}}{\text{h}}\right). \tag{2}$$
Dividing electricity consumption of the network by the number of coins yields the electricity per coin in Eq. (3):
$$\text{electricity per coin}\left(\frac{\text{kWh}}{\text{coin}}\right) = \text{electricity consumption}\left(\frac{\text{kWh}}{\text{day}}\right) \div \text{coins per day}\left(\frac{\text{coins}}{\text{day}}\right). \tag{3}$$
Multiplying electricity per coin by a global average estimate of the greenhouse gas emission factor (EF) for electricity in the BTC network (in kg CO2e/kWh) produces our estimate of emissions per coin in Eq. (4). The emission factors used are provided in the Supplementary Data.
$$\text{emissions per coin}\left(\frac{\text{t CO}_2\text{e}}{\text{coin}}\right) = \text{electricity per coin}\left(\frac{\text{kWh}}{\text{coin}}\right) \times \text{EF}\left(\frac{\text{kg CO}_2\text{e}}{\text{kWh}} \cdot \frac{\text{t}}{1000\ \text{kg}}\right). \tag{4}$$
Climate damages per coin are calculated as emissions per coin times the SCC (in $/t CO2e) in Eq. (5):
$$\text{damages per coin}\left(\frac{\$}{\text{coin}}\right) = \text{emissions per coin}\left(\frac{\text{t CO}_2\text{e}}{\text{coin}}\right) \times \text{SCC}\left(\frac{\$}{\text{t CO}_2\text{e}}\right). \tag{5}$$
Damages as a share of coin price takes the damages per coin and divides by the daily market price of BTC60. All estimates of annual or multi-year damages per coin or damages per share of coin price take a daily-coin-generated weighted average across days (i.e., weighted by number of coins generated each day).
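Chaining Eqs. (1)–(5) gives a single per-day calculation from network data to dollars of damage; a sketch with hypothetical inputs (the actual daily inputs come from the sources cited above):

```python
def damages_per_coin(hash_rate_gh_s: float,
                     efficiency_j_gh: float,
                     reward_coins_per_block: float,
                     blocks_per_min: float,
                     ef_kg_co2e_per_kwh: float,
                     scc_usd_per_t: float) -> float:
    """Chain Eqs. (1)-(5): network electricity -> coins/day -> $/coin."""
    # Eq. (1): kWh/day; J/s -> kW (divide by 1000), times 24 h/day.
    electricity_kwh_day = hash_rate_gh_s * efficiency_j_gh / 1000.0 * 24.0
    # Eq. (2): coins mined per day.
    coins_per_day = reward_coins_per_block * blocks_per_min * 24.0 * 60.0
    # Eq. (3): kWh per coin.
    electricity_per_coin = electricity_kwh_day / coins_per_day
    # Eq. (4): t CO2e per coin (kg -> t).
    emissions_per_coin_t = electricity_per_coin * ef_kg_co2e_per_kwh / 1000.0
    # Eq. (5): climate damages per coin in US$.
    return emissions_per_coin_t * scc_usd_per_t

# Hypothetical single-day inputs (orders of magnitude only).
print(damages_per_coin(hash_rate_gh_s=150e9,       # 150 EH/s expressed in GH/s
                       efficiency_j_gh=0.05,
                       reward_coins_per_block=6.25,
                       blocks_per_min=1.0 / 10.0,  # one block per ~10 min
                       ef_kg_co2e_per_kwh=0.55,
                       scc_usd_per_t=100.0))       # ~ $11,000 per coin
```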
Mining rigs improved the efficiency of hash calculations per unit of energy over our study period. For BTC, we calculated annual average rig efficiency from sales data in30 for 2016–2018, and then used the efficiency of the popular Antminer S15 as the rig efficiency for 2021. We fit a non-linear relationship (Eq. 6) to these data to compute a declining but flattening rig energy usage per hash for any day in our study period:
$$\text{efficiency}\left(\frac{\text{J}}{\text{GH}}\right) = 1.3415 \times 10^{9} \exp\left(-0.00054\,\text{days}\right) \tag{6}$$
where days is the number of days since 1/1/1900.
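Evaluating Eq. (6) for a calendar date is then straightforward; a sketch, with the days-since-1/1/1900 convention taken from the text:

```python
from datetime import date
from math import exp

def rig_efficiency_j_per_gh(d: date) -> float:
    """Eq. (6): fitted average mining-rig efficiency in J/GH."""
    days = (d - date(1900, 1, 1)).days  # days since 1/1/1900, per the text
    return 1.3415e9 * exp(-0.00054 * days)

print(round(rig_efficiency_j_per_gh(date(2016, 1, 1)), 3))  # higher J/GH
print(round(rig_efficiency_j_per_gh(date(2021, 1, 1)), 3))  # lower (more efficient)
```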
Greenhouse gas emissions of electricity generation for the BTC network of miners come from37. We averaged their monthly estimates of global emission factors (kg CO2e/kWh) from September 2019 through August 2021, and applied this average across our study period. The emission factors in37 are based on mining pool locations and country and sub-country (China and US) electricity mixes and generation-source-specific emission factors. As sensitivity analyses, we used emission factors from two other sources: (i) from30, and (ii) the US average electricity mix by year, using electricity source and generation mix estimates from various US government agencies61,62. Results from these analyses are provided in Supplementary Table 3 and are qualitatively similar to our baseline results.
Comparison commodities climate damages
Climate damages from 16 comparison commodities are calculated: (i) electricity generation by source (hydropower, wind, solar, nuclear power, natural gas, and coal); (ii) crude oil processed and burned as gasoline; (iii) automobile use and manufacturing (sport utility vehicles (SUVs) and mid-sized sedans); (iv) agricultural meat production (chicken, pork, and beef); and (v) precious metals mining (rare earth oxides (REOs), copper, platinum group metals (PGMs), and gold). For each commodity we use estimates of full lifecycle CO2e emissions per unit of production, and multiply this by the SCC to obtain climate damages per unit. Climate damages per unit are divided by market price to get damages as a share of commodity value. All commodity price and CO2e emissions data per unit of production are provided in the Supplementary Data.
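For each commodity the share metric reduces to lifecycle emissions times the SCC, divided by price; the sketch below uses placeholder emission and price values (the actual inputs are in the Supplementary Data):

```python
# Damages share = (lifecycle t CO2e per unit x SCC) / market price per unit.
# Emission and price values below are placeholders, not the paper's data.
SCC = 100.0  # $ per t CO2e

commodities = {
    # name: (lifecycle t CO2e per unit, market price $ per unit)
    "beef (per kg)": (0.030, 12.0),
    "gold (per troy oz)": (0.700, 1700.0),
}

for name, (t_co2e, price) in commodities.items():
    share = 100.0 * t_co2e * SCC / price
    print(f"{name}: {share:.1f}% of market price")
```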
For the electricity sector, we used the average lifecycle CO2e emissions per kWh electricity generated for the US from the NREL61, by source type, and the electricity generation mix by source type for each year from the US EIA62. For the market price of electricity, we use the 2016–2021 average retail price across the residential, commercial, industrial, and transportation sectors from the US EIA63.
For the agricultural meat sector, we obtained estimates of the lifecycle CO2e emissions per head for North America from the FAO64,65 for pork, broilers (chicken), and beef. We adjusted for the average quantity of meat per carcass to get emissions per kg of meat (pork: 65%, beef: 65%, chicken: 100%) using data from university state extension services66,67. The chicken price is per carcass (not per kg of meat), and thus 100% of the carcass is used. Price data are averaged from 2016 to 2020, obtained from the USDA Economic Research Service for pork, beef, and chicken68.
For gasoline from crude oil, we use an estimate of the well-to-wheel lifecycle emissions from the literature69 and the 2016–2021 average retail price of gasoline from the US EIA70.
For vehicles, over a 15-year lifetime, we use estimates of the total cost of ownership and vehicle operation emissions, assuming 14,263 miles annually71, based on a 2019 Ford Explorer for a sport utility vehicle (SUV) and a 2019 Toyota Camry for a mid-sized sedan. We add vehicle emissions from fabrication and materials production and extraction using data from the peer-reviewed literature72.
For precious metals, annual prices (US$ per troy ounce, US$ per lb, or US$ per kg) for rare earth oxides (REOs), copper, platinum group metals (PGMs), and gold were obtained from the 2021 USGS Mineral Commodity Summaries for 2016–202073. Full lifecycle CO2e emissions per unit mass come from74 for gold, from the International Platinum Group Metals Association75 for PGMs, from76 for copper, and from77 for REOs.
All data used in this paper are included in the article and in the Supplementary Information file or are publicly available online as noted.
Brekke, J. K. & Fischer, A. Digital scarcity. Internet Policy Rev. https://doi.org/10.14763/2021.2.1548 (2021).
Sotheby's. 2021. NFT's: Redefining Digital Ownership and Scarcity. Sotheby's Metaverse, April 6, 2021. https://www.sothebys.com/en/articles/nfts-redefining-digital-ownership-and-scarcity.
Brekke, J. K. & Alsindi, W. Z. Cryptoeconomics. Internet Policy Rev. https://doi.org/10.14763/2021.2.1553 (2021).
de Vries, A. Bitcoin boom: What rising prices mean for the network's energy consumption. Joule 5(3), 509–513 (2021).
Krause, M. J. & Tolaymat, T. Quantification of energy and carbon costs for mining cryptocurrencies. Nature Sustain. 1(11), 711–718 (2018).
Platt, M., Sedlmeir, J., Platt, D., Tasca, P., Vadgama, N. & Ibanez, J. Energy footprint of blockchain consensus mechanisms: Beyond proof-of-work. Pre-print. http://arxiv.org/abs/2109.03667v5 (2021).
Zhang, R. & Chan, W.K. Evaluation of Energy Consumption in Blockchains with Proof of Work and Proof of Stake. Journal of Physics: Conference Series, Volume 1584, 4th International Conference on Data Mining, Communications and Information Technology (DMCIT 2020) 21–24 May 2020, Xi'an, China. (2020).
de Vries, P. An analysis of cryptocurrency, Bitcoin and the future. Int. J. Bus. Manag. Commer. 1(2), 1–9 (2016).
Johnson, K. Decentralized finance: Regulating cryptocurrency exchanges. William & Mary Law Rev. 62(6), 1914–2001 (2020).
Akyildirim, E., Corbet, S., Lucey, B., Sensoy, A. & Yarovaya, L. The relationship between implied volatility and cryptocurrency returns. Financ. Res. Lett. 33, 1–10. https://doi.org/10.1016/j.frl.2019.06.010 (2020).
Corbett, S., Lucey, B. & Yarovaya, L. Datestamping the bitcoin and ethereum bubbles. Financ. Res. Lett. 26, 81–88. https://doi.org/10.1016/j.frl.2017.12.006 (2018).
Le Pennec, G., Fiedler, I. & Ante, L. Wash trading at cryptocurrency exchanges. Financ. Res. Lett. 43, 1–7 (2021).
Cong, L., Li, X., Tang, K. & Yang, Y. Crypto wash trading. SSRN Electron. J. https://doi.org/10.2139/ssrn.3530220 (2020).
Vincent, O. & Evans, O. Can cryptocurrency, mobile phones and internet herald sustainable financial sector development in the developing world? J. Transnatl. Manag. 24(3), 259 (2019).
Anyfantaki, S. & Topaloglou, N. Diversification, Integration and Cryptocurrency Market (March 29, 2018). SSRN Working Paper. URL: https://ssrn.com/abstract=3186474 (2018).
Pilkington, M., Crudu, R. & Gibson Grant, L. Blockchain and bitcoin as a way to lift a country out of poverty - tourism 2.0 and e-governance in the Republic of Moldova. Int. J. Internet Technol. Secur. Trans. 7(2), 115–143 (2017).
Benetton, M. & Compiani, G. 2021. Investors' Beliefs and Cryptocurrency Prices. Working Paper, Yale University. https://cowles.yale.edu/3a/bcwp-investors-beliefs-and-asset-prices-structural-model-cryptocurrency-demand.pdf.
de Vries, A. Bitcoin's growing energy problem. Joule 2(5), 801–805 (2018).
Levitt, S. D., List, J. A. & Syverson, C. Toward an understanding of learning by doing: Evidence from an automobile assembly plant. J. Polit. Econ. 121(4), 643–681 (2013).
Howson, P. Tackling climate change with blockchain. Nat. Clim. Chang. 9, 644–645 (2019).
Sedlmeir, J., Buhl, H. U., Fridgen, G. & Keller, R. The energy consumption of blockchain technology: Beyond myth. Bus. Inform. Syst. Eng. 62, 599–608 (2020).
US Committee on Energy and Commerce Staff. 2022. Memorandum: Hearing on "Cleaning up Cryptocurrency: The Energy Impacts of Blockchain." January 20, 2022. Subcommittee on Oversight and Investigations, Committee on Energy and Commerce (Chairman Frank Pallone Jr.). US Congress.
Truby, J. Decarbonizing Bitcoin: Law and policy choices for reducing the energy consumption of Blockchain technologies and digital currencies. Energy Res. Social Sci. 44, 399–410 (2018).
Li, J., Li, N., Peng, J., Cui, H. & Wu, Z. Energy consumption of cryptocurrency mining: A study of electricity consumption in mining cryptocurrencies. Energy 168, 160–168 (2019).
Chohan, U. W. A history of bitcoin. SSRN Working Paper. https://doi.org/10.2139/ssrn.3047875 (2017).
CoinMarketCap.com. 2021. All Cryptocurrencies, Market Cap. URL: https://coinmarketcap.com/all/views/all/. Accessed Dec 25, 2021.
Dimitri, N. Bitcoin mining as a contest. Ledger 2, 31–37 (2017).
Houy, N. The bitcoin mining game. Ledger 1, 53–68 (2016).
Goodkind, A. L., Jones, B. A. & Berrens, R. P. Cryptodamages: Monetary value estimates of the air pollution and human health impacts of cryptocurrency mining. Energy Res. Soc. Sci. 59, 101281 (2020).
Stoll, C., Klaaßen, L. & Gallersdörfer, U. The carbon footprint of bitcoin. Joule 3(7), 1647–1661 (2019).
US EIA. 2021a. World Electricity Net Consumption. US Energy Information Administration. https://www.eia.gov/international/data/world/electricity/electricity-consumption. Accessed Jan 6, 2022.
John, A., Shen, S., & Wilson, T. 2021. China's top regulators ban crypto trading and mining, sending bitcoin tumbling. https://www.reuters.com/world/china/china-central-bank-vows-crackdown-cryptocurrency-trading-2021-09-24/. Reuters, Accessed Oct 19, 2021.
Blandin, A., Pieters, G., Wu, Y., Eisermann, T., Dek, A., Taylor, S., & Njoki, D. 2020. 3rd Global Cryptoasset Benchmarking Study. University of Cambridge Judge Business School. https://www.jbs.cam.ac.uk/faculty-research/centres/alternative-finance/publications/3rd-global-cryptoasset-benchmarking-study/. Accessed Oct 19, 2021.
Mora, C. et al. Bitcoin emissions alone could push global warming above 2 °C. Nat. Clim. Chang. 8(11), 931–933 (2018).
Badea, L. & Mungiu-Pupӑzan, M. C. The economic and environmental impact of bitcoin. IEEE Access 9, 48091–48104 (2021).
World Bank, The. 2021. CO2 emissions (kt). URL: https://data.worldbank.org/indicator/EN.ATM.CO2E.KT. Accessed on Oct 19, 2021.
de Vries, A., Gallersdörfer, U., Klaaßen, L. & Stoll, C. Revisiting Bitcoin's carbon footprint. Joule https://doi.org/10.1016/j.joule.2022.02.005 (2022).
Pindyck, R. The social cost of carbon revisited. J. Environ. Econ. Manag. 94, 140–160 (2019).
Carleton, T., & M. Greenstone, M. 2021. Updating the United States government's social cost of carbon. Energy Policy Institute at the University of Chicago. Working paper No. 2021-04. https://ssrn.com/abstract=3764255 Accessed Dec 19, 2021.
Rennert, K., Prest, B., Pizer, W., Anthoff, D., Kingdon, C., Rennels, L., Cooke, R., Raftery, A., Ševčíková, H. & Errickson, F. The social cost of carbon: Advances in long-term probabilistic projections of population, GDP, emissions and discount rates. Brookings Papers on Economic Activity BPEA FA21 (2021).
Nordhaus, W. D. Revisiting the social cost of carbon. Proc. Natl. Acad. Sci. 114(7), 1518–1523 (2017).
Nordhaus, W. D. How fast should we graze the global commons? Am. Econ. Rev. 72(2), 242–246 (1982).
Van den Bremer, T. & Van der Ploeg, F. The risk-adjusted carbon price. Am. Econ. Rev. 111(9), 2782–2810 (2021).
Interagency Working Group on the Social Cost of Carbon (IWG SCC). 2010. Technical Support Document: Technical Update of the Social Cost of Carbon for Regulatory Impact Analysis Under Executive Order 12866. Washington, DC. https://19january2017snapshot.epa.gov/sites/production/files/2016-12/documents/sc_co2_tsd_august_2016.pdf
Interagency Working Group on the Social Cost of Carbon (IWG SCC). 2021. Technical Support Document: Social Cost of Carbon, Methane, and Nitrous Oxide Interim Estimates under Executive Order 13990. Interagency Working Group on Social Cost of Greenhouse Gases, Washington, DC.
Bendiksen, C. & Gibbons, S. (2019). The Bitcoin Mining Network: Trends, Composition, Average Creation Cost, Electricity Consumption & Sources. CoinShares Research White Paper. https://coinshares.com/research/bitcoin-mining-network-december-2019. Accessed Feb 22, 2022.
Dyhrberg, A. H. Hedging capabilities of bitcoin. Is it the virtual gold? Financ. Res. Lett. 16, 139–144 (2016).
Safari, A., Das, N., Langhelle, O., Roy, J. & Assadi, M. Natural gas: A transition fuel for sustainable energy system transformation? Energy Sci. Eng. 7(4), 1075–1094 (2019).
Eshel, G. et al. A model for 'sustainable' US beef production. Nat. Ecol. Evol. 2(1), 81–85 (2018).
The White House. 2022. Executive Order on Responsible Development of Digital Assets. March 9, 2022, Presidential Actions. https://www.whitehouse.gov/briefing-room/presidential-actions/2022/03/09/executive-order-on-ensuring-responsible-development-of-digital-assets/.
Eckett, T. 2022. Will Bitcoin Become the New Digital Gold? ETF Stream. March 9, 2022. https://www.etfstream.com/features/will-bitcoin-become-the-new-digital-gold/.
Popper, N. Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money (Harpers, 2016).
Frankenfield, J. 2022. Proof-of-Stake (PoS). Investopedia. https://www.investopedia.com/terms/p/proof-stake-pos.asp.
Ethereum.org. 2022. Ethereum Energy Consumption. https://ethereum.org/en/energy-consumption/.
Locke, T. 2022. Climate groups say Bitcoin can be 99% greener with one key change. Here's why it won't happen. Fortune Magazine. March 29, 2022. https://fortune.com/2022/03/29/bitcoin-climate-pollution-greenpeace-chris-larsen/.
Arthur, W. B. Competing technologies, increasing returns and lock-in by historical events. Econ. J. 99(394), 116–131 (1989).
Arthur, W. B. Positive feedbacks in the economy. Sci. Am. 262(2), 92–99 (1990).
Blockchain.com. 2021. Total Hash Rate (TH/s). URL: https://www.blockchain.com/charts/hash-rate. Accessed Dec 25, 2021.
BitInfoCharts.com. 2021. Bitcoin Block Time historical chart. URL: https://bitinfocharts.com/comparison/bitcoin-confirmationtime.html#1y. Accessed Dec 25, 2021.
Yahoo! Finance. 2021. Cryptocurrencies. URL: https://finance.yahoo.com/cryptocurrencies/. Accessed on Dec 25, 2021.
NREL. 2021. Life Cycle Assessment Harmonization. US Department of Energy, National Renewable Energy Laboratory. URL: https://www.nrel.gov/analysis/life-cycle-assessment.html. Accessed Dec 25, 2021.
US EIA. 2021b. Electric Power Monthly. US Energy Information Administration. URL: https://www.eia.gov/electricity/monthly/. Accessed Dec 25, 2021.
US EIA. 2021c. Electric Power Monthly, Table 5.6.A. Average Price of Electricity to Ultimate Customers by End-Use Sector. US Energy Information Administration. URL: https://www.eia.gov/electricity/monthly/epm_table_grapher.php?t=epmt_5_6_a. Accessed Dec 25, 2021.
FAO. (2013a). Greenhouse gas emissions from pig and chicken supply chains: A global life cycle assessment. Food and Agricultural Organization of the United Nations, Animal Production and Health Division.
FAO. (2013b). Greenhouse gas emissions from ruminant supply chains: A global life cycle assessment. Food and Agricultural Organization of the United Nations, Animal Production and Health Division.
South Dakota State University Extension. 2020. How Much Meat Can You Expect from a Fed Steer? URL: https://extension.sdstate.edu/how-much-meat-can-you-expect-fed-steer. Accessed Dec 25, 2021.
University of Illinois Extension. 2002. Swine, Illinois Livestock Trail. URL: http://livestocktrail.illinois.edu/porknet/questionDisplay.cfm?ContentID=4696. Accessed on Dec 25, 2021.
USDA ERS. 2021. Meat Price Spreads. US Department of Agriculture Economic Research Service. URL: https://www.ers.usda.gov/data-products/meat-price-spreads/. Accessed Dec 25, 2021.
Laurenzi, I. J., Bergerson, J. A. & Motazedi, K. Life cycle greenhouse gas emissions and freshwater consumption associated with Bakken tight oil. Proc. Natl. Acad. Sci. 113(48), E7672–E7680 (2016).
US EIA. 2021d. U.S. All Grades All Formulations Retail Gasoline Prices. US Energy Information Administration. URL https://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=pet&s=emm_epm0_pte_nus_dpg&f=m. Accessed Dec 25, 2021.
US DOE. 2021. Vehicle Cost Calculator. US Department of Energy Vehicle Technologies Office. URL: https://afdc.energy.gov/calc/. Accessed Dec 25, 2021.
Ma, H., Balthasar, F., Tait, N., Riera-Palou, X. & Harrison, A. A new comparison between the life cycle greenhouse gas emissions of battery electric vehicles and internal combustion vehicles. Energy Policy 44, 160–173 (2012).
USGS. Mineral Commodity Summaries 2021. US Department of the Interior, US Geological Survey, Reston, VA. (2021).
Norgate, T. & Haque, N. Using life cycle assessment to evaluate some environmental impacts of gold production. J. Clean. Prod. 29, 53–63 (2012).
Bossi, T. & Gediga, J. The environmental profile of platinum group metals. Johns. Matthey Technol. Rev. 61(2), 111–121 (2017).
Nilsson, A. E. et al. A review of the carbon footprint of Cu and Zn production from primary and secondary sources. Minerals 7(9), 168 (2017).
Koltun, P., & Tharumarajah, A. Life cycle impact of rare earth elements. International Scholarly Research Notices. (2014).
The authors would like to acknowledge and thank Eytan Libedinsky for his contributions as a research assistant on this project.
Department of Economics, University of New Mexico, 1 University of New Mexico, MSC 05 3060, Albuquerque, NM, 87131, USA
Benjamin A. Jones, Andrew L. Goodkind & Robert P. Berrens
Conceptualization: B.A.J., A.L.G., R.P.B.; methodology: B.A.J., A.L.G., R.P.B.; investigation: B.A.J., A.L.G., R.P.B.; visualization: A.L.G.; writing original draft: B.A.J., R.P.B.; writing, review, and editing: B.A.J., A.L.G., R.P.B.
Correspondence to Benjamin A. Jones.
Jones, B.A., Goodkind, A.L. & Berrens, R.P. Economic estimation of Bitcoin mining's climate damages demonstrates closer resemblance to digital crude than digital gold. Sci Rep 12, 14512 (2022). https://doi.org/10.1038/s41598-022-18686-8
Received: 13 April 2022
A multi-jointed underactuated robot hand with fluid-driven stretchable tubes
Yuangen Wei1,
Yini Ma2 &
Wenzeng Zhang1
Robotics and Biomimetics volume 5, Article number: 2 (2018)
Inspired by the flexible bending of octopus tentacles and the external-drive design of traditional exoskeletons, this paper proposes a novel self-adaptive underactuated finger mechanism, called the OS finger. The OS finger resembles an octopus tentacle and consists of an artificial muscle that passes through all joints and is driven by fluid, eight serially hinged joints, and a force-changeable assembly. The force-changeable assembly is mainly composed of a spring and an elastic rubber membrane, complemented by a layer of rubber material on the surface of the finger for stable grasping. The OS finger can execute different grasping modes depending on the shape and dimensions of the grasped object and can grip objects in a gentle, form-fitting manner. The OS finger combines the good qualities of both the rigid grasp of traditional fingers and the form-fitting grasp of flexible fingers. Kinematic analysis and experimental results show that the OS Hand, with four OS fingers, is valid for precise pinching and self-adaptive powerful encompassing, with grasping forces that are freely changeable over a wide range. With the advantages of high self-adaptation, various grasp configurations, and a large range of grasping forces, the OS Hand has a wide range of applications in service robotics, which requires many flexible operations of general grasping, moving, and releasing.
In recent years, remarkable developments in robotics have been witnessed all over the world. The field of robotic hands, including dexterous hands and underactuated hands, has received particular emphasis and development.
Over the past 30 years, researchers have made plentiful achievements in the study of dexterous hands. For instance, the Stanford/JPL dexterous hand was designed and analyzed by Salisbury et al. [1]; it has three 3-DOF fingers actuated by 12 DC motors, and each joint can be flexed and extended independently by one actuator. The Gifu II hand, designed by Kawasaki et al. [2], has 5 fingers whose joints are all actuated by servomotors, allowing it to perform dexterous object manipulation like the human hand. The Utah/MIT dexterous hand was designed by Jacobsen et al. [3]; it has four 4-DOF fingers with 32 independent tendons and 32 pneumatic cylinders [4], and can be used as a highly flexible tool for the study of machine dexterity. Dexterous hands can perform almost all the movements and gestures of the human hand. However, almost every DOF of a dexterous hand needs its own actuator, which makes the hand highly dependent on control and costly to manufacture and operate.
On the contrary, underactuated hands use fewer motors to drive more DOFs and have a remarkable feature, self-adaptation in grasping, which makes them easy to control. Many studies have been done in the field of underactuated hands: Birglen et al. [5,6,7] designed many kinds of underactuated grippers and gave force analyses of them. Dollar et al. [8, 9] presented an SDM robust robotic grasper which uses a single actuator to actuate 8 DOFs. Tan et al. [10] designed a multi-fingered hand using hydraulic actuation with fluidic actuators; the hand has 14 DOFs which can bend when hydraulic pressure is applied by a water pump. Underactuated hands do not need complex sensor, algorithm, and control systems. A shortcoming of underactuated hands, however, is that contact with the object is concentrated at a few narrow points, which offers little protection to the grasped objects.
To make up for the shortcomings of the hands mentioned above, some flexible robot hands have earned widespread attention and have been widely researched [11]. Excellent examples are the Multi-Choice Gripper [12], the Flexible Shape Gripper [13], the soft tentacles [14], and so on. However, the grasping force of these hands is too small to hold heavier objects.
We note that some underactuated hand exoskeletons have been developed [15, 16]. The transmission mechanisms of these hand exoskeletons are placed outside the joints, and the underactuated mechanisms are significant for simplifying control systems. These hand exoskeletons can produce much more powerful grasping forces due to the larger outside-driving space. We also note that more joints can better adapt to objects of different shapes and sizes [17]. Inspired by all of this and by the flexible bending of octopus tentacles, this paper proposes a novel self-adaptive underactuated multi-fingered hand (the OS Hand), combining the advantages of traditional rigid robot hands and flexible robot hands. The OS Hand has high adaptability and powerful grasping force, and can be widely used in industrial fields.
Design of the OS Hand
In this section, the principle of the OS finger (called the artificial tentacle) is presented first. Second, the working process of the artificial tentacle is described. Finally, the composition of the OS Hand is introduced.
Principle of artificial tentacle
The artificial tentacle consists of an artificial muscle, eight serially hinged joints, and a force-changeable assembly, as shown in Fig. 1. The artificial muscle is composed of a stretchable flexible tube which passes through all joints and is driven by fluid. The force-changeable assembly is mainly composed of a spring and an elastic rubber membrane, complemented by a layer of rubber material on the surface of the tentacle for stable grasping. There is fluid in the elastic rubber membrane. The eight serially hinged joints are made of rigid links, providing support for the artificial tentacle and improving grasping strength. The strength of the joints and the stretchable flexible tube is adequate to sustain heavy objects.
Principle of artificial tentacle. 1—motor; 2—base; 3—force-changeable assembly; 4—artificial muscle; 5—eight serial-hinged joints; 6—fluid. a Principle of force-changeable assembly. b Artificial tentacle
Grasping process of artificial tentacle
Grasping process of artificial tentacle is shown in Fig. 2. The initial position of the tentacle is shown in Fig. 2a, e, where the artificial muscle is in a contracted state.
Grasping process of artificial tentacle. a Grasping step 1 of regular object. b Grasping step 2 of regular object. c Grasping step 3 of regular object. d Grasping step 4 of regular object. e Grasping step 1 of irregular object. f Grasping step 2 of irregular object. g Grasping step 3 of irregular object. h Grasping step 4 of irregular object
When the motor runs forward, the fluid is pushed into the artificial muscle. The artificial muscle then elongates, and the artificial tentacle rotates toward the object. As the motor continues to run, fluid is continuously pushed into the artificial muscle, which constantly elongates. Each joint rotates separately until it meets the object and then stops. When all joints meet the object, the grasping process finishes, as shown in Fig. 2.
When the motor continues to run, the spring force changes and the artificial tentacle can produce different grasping forces.
For different shapes and sizes of objects, the artificial tentacle has good self-adaptability and achieves very reliable grasping strength, as shown in Fig. 2d, h.
Components of OS Hand
As shown in Fig. 3, the OS Hand has four artificial tentacles, all identical. The four artificial muscles of the four artificial tentacles, i.e., the four stretchable flexible tubes, are connected: the liquid distribution pipes are linked together, so the four artificial tentacles are driven concurrently by only one motor. This greatly simplifies the driving and control systems and helps reduce cost. Moreover, each artificial tentacle is connected with a gear, and the four gears are meshed together, as shown in Fig. 3b. One of the four gears is connected with a motor, so the pose of the tentacles can be changed by driving the motor. This makes it possible for the OS Hand to execute a variety of grasping modes, increasing its practicality.
Components of OS Hand. a Virtual model of OS Hand. b Perspective of the virtual model
Kinematics analysis
The OS Hand has a wide range of applications. In order to determine the position and posture of the tentacles, we employed the Denavit–Hartenberg (D–H) method to perform the kinematics analysis.
Since the four tentacles in our experiment are all the same, we focus on only one tentacle. The kinematic analysis of single tentacle is shown in Fig. 4.
Kinematic analysis of single tentacle
During our analysis, \(x_{0} y_{0} z_{0}\) is a static world coordinate system and the others are joint coordinate systems. Considering the size of the finger and the manufacturability of the parts, we set the D–H parameters as in Table 1.
Table 1 D–H parameters
The interpretations of the quantities are as follows:
\(\alpha_{i}\): angle of two adjacent joint axes.
\(l_{i}\): length of joint.
\(d_{i}\): distance between joints.
\(\theta_{i}\): angle of two adjacent joints.
The correspondence of the physical quantities is shown in Fig. 5.
Correspondence of the physical quantities. a Step 1. b Step 2. c Step 3
For modularization of the joint units, we set:
$$l_{1} = l_{2} = \cdots = l_{i} = l_{i + 1}$$
$$\alpha_{i} = 0$$
$$d_{\text{i}} = 0$$
So we can get the transformation matrix between two adjacent joints, i.e.,
$$\mathbf{A}_{i}^{i-1} = \begin{bmatrix} \cos\theta_{i} & -\sin\theta_{i} & 0 & l_{i}\cos\theta_{i} \\ \sin\theta_{i} & \cos\theta_{i} & 0 & l_{i}\sin\theta_{i} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Then we can obtain the transformation matrix of joint coordinates \(\{ i\}\) relative to the static coordinates \(\left\{ 0 \right\}\):
$${\mathbf{T}}_{i}^{0} = {\mathbf{A}}_{1} {\mathbf{A}}_{2} \cdots {\mathbf{A}}_{i}$$
As \(\mathbf{A}_{i}^{i-1}\) (i = 1, 2, …, 15) is a function of the joint variable \(\theta_{i}\), \(\mathbf{T}_{i}^{0}\) is also a function of the joint variables. Therefore, the position and posture of each joint, and hence of the whole tentacle, can be determined by measuring the values of the joint variables \(\theta_{i}\) (i = 1, 2, …, 15).
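As a numerical illustration of this forward-kinematics chain (a sketch only, with α_i = d_i = 0 as set above; the joint lengths and angles are illustrative):

```python
import numpy as np

def A(theta: float, l: float) -> np.ndarray:
    """Homogeneous transform between adjacent joints (alpha_i = d_i = 0)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0, l * c],
                     [s,   c,  0.0, l * s],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def T(thetas, l: float) -> np.ndarray:
    """T_i^0 = A_1 A_2 ... A_i for equal joint lengths l."""
    out = np.eye(4)
    for th in thetas:
        out = out @ A(th, l)
    return out

# Illustrative: eight equal joints of 20 mm, each flexed 10 degrees.
thetas = np.deg2rad([10.0] * 8)
tip = T(thetas, l=20.0)
print(np.round(tip[:3, 3], 2))  # fingertip position in the base frame
```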
Grasping force analysis
In this paper, we replace rigid phalanxes with artificial tentacles, hoping that the hand possesses both high adaptability and grasping force. A force analysis is therefore essential, and we propose a mathematical model in this section to analyze the distribution of grasping force. For simplification, the analysis neglects the gravity of the finger and the friction between the joints and the object.
\(l_{i}\): the length of joint i, mm;
\(F_{i}\): the force at the contact point of joint i and object, N;
\(h_{i}\): the distance between the contact point of joint i and adjacent axes, mm;
\(M_{\text{o}}\): total output torque, Nm;
\(\theta_{i}\): angle of joint i and joint i − 1, °;
\(M_{i}\): the torque relative to the fixed point O which is caused by \(F_{i}\), Nm.
According to mechanics, the torque of a force F with moment arm L is
$$M = F \cdot L$$
And according to Fig. 7, we can conclude that
$$\begin{aligned} M_{o} &= F_{0} h_{0} + F_{1}\left(h_{1} + l_{0}\cos\theta_{1}\right) + F_{2}\left(h_{2} + l_{1}\cos\theta_{2} + l_{0}\cos(\theta_{2}+\theta_{1})\right) \\ &\quad + \cdots + F_{14}\left(h_{14} + l_{13}\cos\theta_{14} + l_{12}\cos(\theta_{14}+\theta_{13}) + \cdots + l_{0}\cos(\theta_{14}+\theta_{13}+\cdots+\theta_{2}+\theta_{1})\right) \end{aligned} \tag{7}$$
We denote
$$M_{i} = F_{i}\left(h_{i} + l_{i-1}\cos\theta_{i} + l_{i-2}\cos(\theta_{i}+\theta_{i-1}) + \cdots + l_{0}\cos(\theta_{i}+\theta_{i-1}+\cdots+\theta_{1})\right)$$
Then, Eq. (7) can be expressed as:
$$\begin{cases} M_{o} = \sum\limits_{i=0}^{n} M_{i} \\ M_{i} = F_{i}\left(h_{i} + l_{i-1}\cos\theta_{i} + l_{i-2}\cos(\theta_{i}+\theta_{i-1}) + \cdots + l_{0}\cos(\theta_{i}+\theta_{i-1}+\cdots+\theta_{1})\right) \end{cases}$$
When we assume that each joint is small enough, we can take the contact point between each joint and the object to be at the middle of the joint. In that case, the model simplifies with
$$h_{i} = \frac{l_{i}}{2}$$
Moreover, we can consider every joint to be identical, so that:
$$l_{i} = l_{i-1} = \cdots = l_{0}$$
So we finally get Eq. (12), i.e.,
$$\begin{cases} M_{o} = \sum M_{i} \\ M_{i} = l_{i} F_{i}\left(\dfrac{1}{2} + \cos\theta_{i} + \cos(\theta_{i}+\theta_{i-1}) + \cdots + \cos(\theta_{i}+\theta_{i-1}+\cdots+\theta_{1})\right) \end{cases} \tag{12}$$
Each joint angle \(\theta_{i}\) is always no larger than 180° and no smaller than 0°. It can then be concluded that:
\(M_{i}\) decreases as \(\theta_{i}\) increases when other variables are fixed.
\(M_{i}\) decreases as \(l_{i}\) decreases when other variables are fixed.
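Equation (12) is easy to evaluate numerically to inspect how the torque contributions distribute across joints; the sketch below uses an illustrative joint count, length, angles, and contact forces:

```python
import numpy as np

def joint_torques(l: float, forces, thetas):
    """Eq. (12): M_i = l * F_i * (1/2 + sum of cumulative-angle cosines)."""
    thetas = np.asarray(thetas, dtype=float)
    torques = []
    for i, F in enumerate(forces):
        # cos(theta_i) + cos(theta_i + theta_{i-1}) + ... + cos(theta_i + ... + theta_1)
        cum = np.cumsum(thetas[: i + 1][::-1])
        torques.append(l * F * (0.5 + np.cos(cum).sum()))
    return torques

# Illustrative: five joints of length 20 mm, equal 1 N contact forces.
thetas = np.deg2rad([15.0] * 5)
forces = [1.0] * 5
M = joint_torques(l=20.0, forces=forces, thetas=thetas)
print([round(m, 1) for m in M], "total:", round(sum(M), 1))
```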
Analysis results and discussion
We can plot the relationships implied by Eq. (12), as shown in Figs. 6, 7 and 8.
Relationship of \(M_{i} - l_{i} - \theta_{i}\)
Relationship of \(M_{i} - F_{i} - \theta_{i}\)
Relationship of \(M_{i} - l_{i} - F_{i}\)
The relationship of \(M_{\text{i}} - l_{\text{i}} - \theta_{\text{i}}\) is shown in Fig. 6, where \(F_{i}\) is − 3, − 1, 1 and 3. We can get the same conclusions from the figure:
So we can draw a significant conclusion: keeping \(l_{i - 1}\), \(l_{i - 2}\), …, \(l_{0}\) fixed, when the grasping force and total output torque are identical, \(M_{i}\) decreases as \(l_{i}\) decreases, which demonstrates that the force distribution is more suitable for protecting the object under this condition. That is to say, the OS Hand outperforms traditional robotic hands in force distribution, and can better prevent the object from being damaged.
And the relationship of \(M_{i} - F_{i} - \theta_{i}\) is shown in Fig. 7, where \(l_{i}\) is 5, 10, 15 and 20 mm.
The figure demonstrates that, for an arbitrary length of joints, \(M_{i}\) increases as \(F_{i}\) increases and \(M_{i}\) decreases as \(\theta_{i}\) increases. Moreover, \(M_{i}\) decreases as \(l_{i}\) decreases. Therefore, while choosing the parameters of the prototype, we should minimize the joint length under the constraint of maximizing grasping force. That is to say, the OS Hand performs better in protecting objects from being damaged.
The relationship of \(M_{i} - l_{i} - F_{i}\) is shown in Fig. 8, and \(\theta_{i}\) is 0°, 30°, 60° and 90°.
The figure demonstrates that, for arbitrary \(\theta_{i}\), \(M_{i}\) increases as \(F_{i}\) increases and \(M_{i}\) decreases as \(l_{i}\) decreases. That is to say, the more flexible OS Hand can better protect the object. Besides, the grasping force has more influence on the torque \(M_{i}\) than the length of the joints. When the grasping force is assigned to − 1 N, the influence of the joint length on the torque \(M_{i}\) almost goes away. In order to fully exploit the advantages of the OS Hand, the input grasping force should be far from − 1 N.
Experiment results and discussion
To prove that our design and analysis are correct, a grasping experiment with the OS Hand was conducted. Figure 9a shows the prototype of the OS finger (in the closing pose of the finger for grasping objects). The prototype of the OS Hand is shown in Fig. 9b, where the artificial muscle is in its initial state; the slight contraction of the artificial muscle makes the artificial tentacles open out. The parts needed to fabricate the OS Hand are shown in Fig. 9c. When the gear motor starts, the four meshed gears rotate and the hand pose changes, as shown in Fig. 9d.
Prototype of the OS Hand. a Prototype of OS finger. b Prototype of OS Hand (gesture 1). c Parts needed to fabricate the OS finger. d Prototype of OS Hand (gesture 2)
Figure 10 shows the grasping process. When the motor is started, the transmission engages and fluid is injected into the stretchable flexible tube, causing the tube to elongate, which drives the joints to rotate. Each joint stops rotating once it contacts the object, and the process of adapting to the object is then finished. When releasing the object, fluid is extracted from the stretchable flexible tube, causing it to contract; the artificial tentacles then open out and the object is released. It is worth noting that after the fluid is injected into the stretchable flexible tube, it cannot flow out spontaneously, which provides reliable grasping.
Grasping process experiment of OS Hand
Furthermore, in order to test the grasping performance of the OS Hand, various kinds of objects are adopted in the experiments, as is shown in Figs. 11, 12 and 13.
Experiments of OS Hand with three tentacles
Experiments of OS Hand with four tentacles
Experiments of OS Hand connected with robot arm
The figures show that the OS Hand combines the good qualities of both the rigid grasp of traditional grippers and the form-fitting grasp of flexible hands. The OS Hand can match the shape and size of the object automatically. Experimental results confirm that the OS Hand is valid for precise pinching and self-adaptive powerful encompassing, proving its practicality.
In addition, during the experiments we also measured the size and weight of the grasped objects to verify the hand's practicality. In fact, the maximum grasping weight of the robot hand depends largely on the motor and the stretchable flexible tube. Under our experimental conditions, a bunch of keys, an egg, a ball with a radius of 120 mm, a box of 300 mm × 150 mm × 260 mm, a whole 350 ml bottle of water, etc., could all be grasped steadily. We also used a dynamometer to measure the maximum grasping weight of the prototype, which is about 6.1 N. We believe that the maximum grasping weight will be much greater with better materials and manufacturing.
In order to test timeliness, we also attached great importance to the response time of the OS Hand. We tested the closing time of the hand for grasping objects and its opening time for releasing objects. Subject to the speed of our motor, the response time is between 1.2 and 4.7 s. Considering that a pneumatic power drive responds much more rapidly, we connected the OS Hand to a robot arm, drove the hand with pneumatic power, and tested the response time, as shown in Fig. 13. The experimental results are very encouraging: the hand grasps and releases almost instantaneously, with a response time of less than 1 s.
From the experiment result, we can find the advantages of OS Hand as follows:
Great performance of underactuation and self-adaptation: In the experiment, there are even as many as 32 degrees of freedom in the OS Hand, but only two motors are enough to actuate it. The redundancy of the hand is high, and the adaptability is great;
High integration: modular design is adopted in the design of the hand. The motor, the transmission, the fluid and the control circuit all can be embedded in the palm;
Strong grasping ability: the eight serially hinged joints are made of rigid links, providing support for the artificial tentacles and improving grasping strength. The strength of the joints and the stretchable flexible tube is adequate to sustain heavy objects. The structural design and the stretchable flexible tube enable stable and highly adaptive grasping. The hand can achieve both encompassing grasp and fingertip grasp. All kinds of common objects can be grasped steadily.
Inspired by the flexible bending of octopus tentacles and by externally driven traditional hand exoskeletons, this paper proposes a novel self-adaptive underactuated multi-fingered hand (OS Hand), which has four flexible tentacles.
Each tentacle can execute different grasping modes depending on the shape and dimensions of the object grasped, and can grip objects in a gentle, form-fitting manner.
The OS Hand combines good qualities of both powerful grasp of traditional grippers and form-fitting grasp of flexible hands.
Experimental results show that the OS Hand is capable of precise pinching and self-adaptive powerful encompassing, with grasping forces that are freely adjustable over a wide range.
With the advantages of high self-adaptation, various grasp configurations and large range of grasping forces, the OS Hand has a wide range of applications in the area of service robotics which requires a lot of flexible operations of general grasping, moving and releasing.
YW and YM are the students who conducted this research. WZ is the academic advisor. All authors read and approved the final manuscript.
The authors are grateful for the support granted by Tsinghua University in conducting this research.
Yuangen Wei, born in 1995, is a student in the Department of Mechanical Engineering, Tsinghua University. Yini Ma, born in 1997, is a student in the Department of Physics, Tsinghua University. Wenzeng Zhang, born in 1975, is currently an associate professor in the Department of Mechanical Engineering, Tsinghua University, China. His research interests include humanoid robotics, robotic hands, robot mechanisms, underactuated grasping, robot vision and robot welding.
All authors declare that they have no competing interests.
This research was supported by National Natural Science Foundation of China (No. 51575302) and Natural Science Foundation of Beijing (No. J170005) and Tsinghua University Initiative Scientific Research Program (No. 20161080166).
Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China
Yuangen Wei & Wenzeng Zhang
Department of Physics, Tsinghua University, Beijing, 100084, China
Yini Ma
Yuangen Wei
Wenzeng Zhang
Correspondence to Wenzeng Zhang.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Wei, Y., Ma, Y. & Zhang, W. A multi-jointed underactuated robot hand with fluid-driven stretchable tubes. Robot. Biomim. 5, 2 (2018). https://doi.org/10.1186/s40638-018-0086-6
Received: 10 November 2017
Multi-fingered robot hand
Self-adaptive mechanism
Biologically inspired robotics
Upper respiratory tract microbiota is associated with small airway function and asthma severity
Yi Li1,
Congying Zou2,
Jieying Li3,
Wen Wang3,
Yue Guo3,
Lifang Zhao3,
Chunguo Jiang3,
Peng Zhao4 &
Xingqin An1
BMC Microbiology volume 23, Article number: 13 (2023)
Characteristics of airway microbiota might influence asthma status or asthma phenotype. Identifying the airway microbiome can help to investigate its role in the development of asthma phenotypes or small airway function.
Bacterial microbiota profiles were analyzed in induced sputum from 31 asthma patients and 12 healthy individuals from Beijing, China. Associations between small airway function and airway microbiomes were examined.
Composition of sputum microbiota significantly changed with small airway function in asthma patients. Two microbiome-driven clusters were identified and characterized by small airway function and taxa that had linear relationship with small airway functions were identified.
Our findings confirm that airway microbiota was associated with small airway function in asthma patients.
Asthma is a heterogeneous disease characterized by airway inflammation and hyperresponsiveness, with several phenotypes and endotypes that may respond differently to therapies. Despite important advances in asthma, including greater awareness, timely diagnosis, and pharmacological interventions targeted at airway inflammation, control of asthma in patients remains unsatisfactory.
A possible reason for poor asthma control might be that, rather than an "Eosinophilic asthma phenotype" or "Neutrophilic asthma phenotype", some patients express a "small airways phenotype", with small airway inflammation and dysfunction that is not being targeted or controlled by current therapies. The small airways are defined by an internal airway diameter of < 2 mm. Their generation number is generally higher than 8, and they account for 98.8% (approximately 4500 ml) of the total lung volume, whereas the large airways account for only 1.2% (approximately 50 ml). Though inflammation and remodeling in asthma involve the large airways, the small airways are the major site of airflow limitation, where the intensity of the inflammation may be even higher than in the large airways. Transbronchial biopsy findings show that small airways are the major site of inflammation and contain immunocytes that putatively account for the tissue remodeling noted [1, 2]. Thus, small airways might affect the pathobiology of asthma, small airway dysfunction may contribute to poor asthma control [1,2,3,4], and the small airways of individuals with asthma are increasingly recognized as a potential therapeutic target [2, 4, 5].
The microbiota in human airways changes with disease. With the bacterial 16S ribosomal RNA gene sequencing technique, different microbiota have been identified between asthma phenotypes, suggesting that microbial patterns in the airways may influence distinct phenotypes of asthma [6,7,8] and allergic inflammation [9]. Airway microbiota composition is also associated with the degree of airway hyperresponsiveness among patients with less controlled asthma. Indeed, several bacterial taxa, including Streptococcus pneumoniae, Staphylococcus aureus, Moraxella catarrhalis, Pseudomonas aeruginosa, and Haemophilus influenzae, have been reported to be associated with asthma exacerbation or development [10, 11]. Moreover, studies suggest that the airway microbiome in asthma patients is probably the result of complex interactions between the inflammatory milieu and drug effects, and that microbially derived mechanisms might underlie poor response to treatment. For example, treatment with a combination of inhaled corticosteroids (ICSs) and oral glucocorticoids correlates positively with an increased abundance of Proteobacteria and Pseudomonas, and with a decreased abundance of Bacteroidetes, Fusobacteria, and Prevotella [12]. Meanwhile, a unique enrichment of Haemophilus, Neisseria, Fusobacterium, Porphyromonas species and the Sphingomonodaceae family, along with depletion of Mogibacteriaceae and Lactobacillales, was observed in mild asthma patients not treated with ICSs [13].
In this study, the association between airway microbiota pattern and small airway function was explored. Results from lung function tests were related to the bacterial flora in study subject sputum.
Pulmonary function measurements
Spirometry measurements were conducted with a Jaeger Masterscreen PFT (Viasys Healthcare, Höchberg, Germany) according to the recommendations of the Chinese National Guidelines of Pulmonary Function Tests [14]. The following indices were used to characterize small airway function: forced expiratory volume in the first second (FEV1), forced vital capacity (FVC), peak expiratory flow (PEF), maximal expiratory flow at 25% vital capacity (MEF25), maximal expiratory flow at 50% vital capacity (MEF50), percentage of tested MEF25 to predicted MEF25 (MEF25pred%), percentage of tested MEF50 to predicted MEF50 (MEF50pred%) and forced expiratory flow between 25 and 75% of vital capacity (MEF (75/25)).
All individuals with asthma were patients from the Respiratory Department of Chaoyang Hospital, Beijing, while the 12 healthy individuals were recruited from the routine physical examination department of the same institution. The ages of the healthy individuals ranged from 28 to 58 years, and asthma and other respiratory diseases were ruled out by scan examination and pulmonary function tests according to the Global Strategy for Asthma Management and Prevention [15, 16].
Among the 31 individuals with asthma, we took a cut-off value of 65% for MEF25pred% and MEF50 pred% to define study groups according to the Chinese Thoracic Society [17,18,19]. We defined patients who had a MEF25pred% lower than 65% as the MEF25pred%-low group (26 people), and others with a MEF25pred% value higher than 65% as the MEF25pred%-high group (5 people). The MEF50pred%-low group and MEF50pred%-high group were similarly defined, and 14 patients were grouped in the MEF50pred%-high group versus 17 in the MEF50pred%-low group.
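As a concrete illustration, the grouping rule above can be written in a few lines of Python. This is a minimal sketch: the table layout and column names (`mef25_pred_pct`, `mef50_pred_pct`) are hypothetical, not taken from the study's actual data files.

```python
import pandas as pd

# Hypothetical spirometry table; subject IDs and column names are illustrative only.
spiro = pd.DataFrame({
    "subject": ["A01", "A02", "A03"],
    "mef25_pred_pct": [42.0, 71.5, 58.3],   # tested MEF25 as % of predicted
    "mef50_pred_pct": [55.1, 80.2, 63.0],   # tested MEF50 as % of predicted
})

CUTOFF = 65.0  # cut-off value recommended by the Chinese Thoracic Society

# Label each subject as "low" (< 65% of predicted) or "high" (>= 65% of predicted).
spiro["mef25_group"] = (spiro["mef25_pred_pct"] < CUTOFF).map({True: "low", False: "high"})
spiro["mef50_group"] = (spiro["mef50_pred_pct"] < CUTOFF).map({True: "low", False: "high"})
print(spiro)
```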
As MEF50 and MEF25 are similar indices of small airway function, and because the sample sizes of the MEF50pred%-high and MEF50pred%-low groups are more balanced than those of the MEF25pred% groups, we compared the sputum microbiome only between the MEF50 groups and the healthy individuals.
Subject characteristics are presented in Table 1.
Sampling of induced sputum
Induced sputum from asthma patients and healthy individuals was collected according to standardized protocols [20, 21]. Study subjects were pre-treated with inhaled salbutamol to relax airway smooth muscle and prevent an acute asthma attack. They then inhaled a nebulized solution of 3% saline over a 2-minute period, spat out the saliva, took 2 deep inspirations of saline, and coughed sputum into a separate cup. This procedure was repeated six times. Subjects were instructed to rinse orally with water and to blow their nose after each inhalation to avoid contamination with saliva and post-nasal drip. Sputum samples were collected into sterilized pots and stored at − 80 °C for bacterial DNA extraction. Peak flow was monitored throughout the procedure; if patients felt uncomfortable or symptoms occurred, the induction was stopped.
DNA extraction, PCR amplification and Illumina sequencing
Microbial DNA was extracted from induced sputum. The final DNA concentration and purity were determined by NanoDrop 2000 UV-vis spectrophotometer (Thermo Scientific, Wilmington, USA), and DNA quality was checked by 1% agarose gel electrophoresis. The V3-V4 hypervariable regions of the bacterial 16S rRNA gene were amplified with primers 338F (5′-ACTCCTACGGGAGGCAGCAG-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′) on a thermocycler PCR system (GeneAmp 9700, ABI, USA). The PCR reactions were conducted using the following program: 3 min of denaturation at 95 °C; 27 cycles of 30 s at 95 °C, 30 s of annealing at 55 °C, and 45 s of elongation at 72 °C; and a final extension at 72 °C for 10 min. PCR reactions were performed in triplicate in a 20 μL mixture containing 4 μL of 5 × FastPfu Buffer, 2 μL of 2.5 mM dNTPs, 0.8 μL of each primer (5 μM), 0.4 μL of FastPfu Polymerase and 10 ng of template DNA. The resulting PCR products were extracted from a 2% agarose gel, further purified using the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) and quantified using QuantiFluor™-ST (Promega, USA) according to the manufacturer's protocol.
Purified amplicons were pooled in equimolar and paired-end sequenced (2 × 300) on an Illumina MiSeq platform (Illumina, San Diego, USA) according to the standard protocols [22, 23].
The analysis was conducted by following the "Atacama soil microbiome tutorial" of Qiime2docs along with customized program scripts (https://docs.qiime2.org/2019.1/). Briefly, raw data FASTQ files were imported into the format which could be operated by QIIME2 system using qiime tools import program. Demultiplexed sequences from each sample were quality filtered and trimmed, de-noised, merged, and then the chimeric sequences were identified and removed using the QIIME2 dada2 plugin to obtain the feature table of amplicon sequence variant (ASV). The QIIME2 feature-classifier plugin was then used to align ASV sequences to a pre-trained GREENGENES 13_8 99% database (trimmed to the V3V4 region bound by the 338F/806R primer pair) to generate the taxonomy table. Any contaminating mitochondrial and chloroplast sequences were filtered using the QIIME2 feature-table plugin.
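The pipeline described above maps onto a short sequence of QIIME2 command-line calls, sketched below as a Python driver script. This is a hedged reconstruction, not the authors' actual script: the step names (`qiime tools import`, the `dada2` and `feature-classifier` plugins, taxa filtering) follow the text, but the file names, truncation lengths, and classifier artifact name are illustrative assumptions.

```python
import subprocess

def run(cmd):
    """Run one QIIME2 CLI step and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Import demultiplexed paired-end FASTQ files described in a manifest.
run(["qiime", "tools", "import",
     "--type", "SampleData[PairedEndSequencesWithQuality]",
     "--input-path", "manifest.tsv",
     "--input-format", "PairedEndFastqManifestPhred33V2",
     "--output-path", "demux.qza"])

# 2. Quality filter, denoise, merge, and remove chimeras with DADA2,
#    producing the ASV feature table. Truncation lengths are assumptions.
run(["qiime", "dada2", "denoise-paired",
     "--i-demultiplexed-seqs", "demux.qza",
     "--p-trunc-len-f", "280", "--p-trunc-len-r", "220",
     "--o-table", "table.qza",
     "--o-representative-sequences", "rep-seqs.qza",
     "--o-denoising-stats", "stats.qza"])

# 3. Assign taxonomy against a classifier pre-trained on GREENGENES 13_8 99%
#    trimmed to the 338F/806R (V3-V4) region; the file name is illustrative.
run(["qiime", "feature-classifier", "classify-sklearn",
     "--i-classifier", "gg-13-8-99-338F-806R-classifier.qza",
     "--i-reads", "rep-seqs.qza",
     "--o-classification", "taxonomy.qza"])

# 4. Drop contaminating mitochondrial and chloroplast sequences.
run(["qiime", "taxa", "filter-table",
     "--i-table", "table.qza",
     "--i-taxonomy", "taxonomy.qza",
     "--p-exclude", "mitochondria,chloroplast",
     "--o-filtered-table", "table-filtered.qza"])
```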
Experimental materials and reagents are included in supplementary material (suppl. Table 1).
Statistics and identification of bacterial communities
We used a rank-based method, the Kruskal–Wallis test, to examine differences between groups. The Linear Discriminant Analysis Effect Size (LEfSe) method [24] was employed to compare bacterial composition between groups, with the cutoff p-value set at 0.05 (after Benjamini-Hochberg false discovery rate correction). Additionally, Kyoto Encyclopedia of Genes and Genomes (KEGG) functional profiles of microbial communities were predicted with Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) [25].
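A minimal sketch of this testing scheme (a Kruskal–Wallis test per taxon, followed by Benjamini–Hochberg correction at p = 0.05) might look as follows in Python, using synthetic abundances in place of the real data:

```python
import numpy as np
from scipy.stats import kruskal
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Toy relative-abundance matrix: 20 taxa x (12 healthy + 17 MEF50pred%-low) samples.
healthy = rng.lognormal(0.0, 1.0, size=(20, 12))
low     = rng.lognormal(0.3, 1.0, size=(20, 17))

# One Kruskal-Wallis test per taxon across the two groups.
pvals = np.array([kruskal(h, l).pvalue for h, l in zip(healthy, low)])

# Benjamini-Hochberg false discovery rate correction, cutoff p = 0.05.
reject, pvals_bh, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for i in np.flatnonzero(reject):
    print(f"taxon {i}: raw p = {pvals[i]:.3g}, BH-adjusted p = {pvals_bh[i]:.3g}")
```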
Microbiome Multivariable Associations with Linear Models (MaAsLin) analyses were run to test for associations between microbiomes and clinical variables using the MaAsLin 2 R/Bioconductor software package [26, 27]. The linear mixed-effect model can be expressed as follows:
$$ \text{Bacterial taxon} \sim \text{(intercept)} + \text{small airway index} + (1\,|\,\text{subject}) $$
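Reading the model above as an ordinary linear mixed-effects model with a per-subject random intercept, an equivalent fit can be sketched in Python with statsmodels. The study itself used the MaAsLin 2 R package; the column names and synthetic data here are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
subjects = np.repeat([f"s{i}" for i in range(10)], 2)   # 10 subjects, 2 samples each
mef50 = np.repeat(rng.uniform(30, 100, 10), 2)          # per-subject small airway index
subj_effect = np.repeat(rng.normal(0, 0.05, 10), 2)     # random intercept per subject
taxon = 0.2 + 0.002 * mef50 + subj_effect + rng.normal(0, 0.02, 20)

df = pd.DataFrame({"subject": subjects, "mef50": mef50, "taxon": taxon})

# taxon ~ (intercept) + small airway index + (1 | subject)
fit = smf.mixedlm("taxon ~ mef50", df, groups=df["subject"]).fit()
print(fit.params["mef50"])  # fixed-effect slope: the linear association coefficient
```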
All analyses were performed using RStudio (version 1.1.453) [28] with R software (version 3.5.1) [29], supported by the following software packages: vegan, metacoder [30], MaAsLin2 [27], ggplot2, Tax4Fun2 [31], and mixOmics [32].
Clinical characteristics of the study subjects
Clinical features of the subjects are shown in Table 1; all significantly (p < 0.05) different indices between the MEF25pred%-high and MEF25pred%-low groups or between the MEF50pred%-high and MEF50pred%-low groups are marked with a "*". As expected, FEV1, FEV1/FVC, MEF25, MEF50, MEF75 and MEF (75/25) values were significantly (p < 0.05) lower in the MEF25pred%-low group than in the MEF25pred%-high group, while neutrophil counts and fractional exhaled nitric oxide (FeNO) were significantly (p < 0.05) higher in the MEF25pred%-low group than in the MEF25pred%-high group. VC, PEF, FEV1, FEV1/FVC, MEF25, MEF50, MEF75 and MEF (75/25) values were significantly (p < 0.05) lower in the MEF50pred%-low group than in the MEF50pred%-high group. Associations between these significantly different indices and the microbiome were investigated with MaAsLin2 (see later sections).
Table 1 Clinical characteristics of study subjectsa
No significant difference of blood eosinophils or serum IgE was observed between these two pairs of groups.
Sputum microbiome compositions
A total of 2,305,983 valid reads were generated for the 43 samples. After filtering for low-quality reads, 51,245 sequence reads were used for subsequent analyses, resulting in 12,265 OTUs. The average percentage of input reads passing the filter was approximately 85%, and the average percentage of non-chimeric input reads was approximately 77%.
We first examined the sputum microbiome composition. Taxa bar plots and pie charts of bacterial genera in healthy control subjects and the MEF50pred%-low group are presented in Fig. 1A, B and C. At the genus level, the top five genera of the healthy control sputum microbiome were Prevotella (19.57%), Veillonella (9.74%), Neisseria (6.80%), Streptococcus (5.63%) and Porphyromonas (3.30%). The top five genera of the MEF50pred%-low group were Prevotella (12.86%), Streptococcus (10.24%), Veillonella (9.27%), Fusobacterium (4.18%) and Neisseria (3.43%).
The sputum microbiome at the genus level. A Bar plot of all the samples, each bar shows the relative abundance of one individual B) Pie chart of the microbiome composition at genus level in MEF50pred%-low group. C Pie chart of the microbiome composition at genus level in healthy individuals. D Phylogenetic map of the median relative abundance differences in bacterial taxa between the healthy control group and the MEF50pred%-low group, the ending circle of each branch represented for species (n = 29). The depth of color of the nodes corresponds to the degree of difference in median relative abundance of the bacterial taxa. The darker the color of the phylogenetic branches, the higher median differences, whereas gray nodes and branches indicate no significant differences
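For orientation, the relative-abundance summary underlying these bar and pie charts reduces to normalizing each sample's genus counts to percentages and ranking them. The sketch below uses fabricated counts and sample names, not the study's data:

```python
import pandas as pd

# Toy genus-level count table: rows = genera, columns = samples (names illustrative).
counts = pd.DataFrame(
    {"H01": [950, 480, 330, 270, 160], "A01": [640, 510, 460, 210, 170]},
    index=["Prevotella", "Veillonella", "Streptococcus", "Neisseria", "Fusobacterium"],
)

# Convert to relative abundance (%) per sample, then rank the top genera.
rel = counts.div(counts.sum(axis=0), axis=1) * 100.0
top5 = rel.mean(axis=1).sort_values(ascending=False).head(5)
print(top5.round(2))  # mean percentage of each genus across samples
```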
We then compared the differences in the median relative abundance of taxa between the healthy individuals and the MEF50pred%-low group, as shown in the metagenomic phylogenetic map in Fig. 1D.
It could be seen from Fig. 1D that the largest significant (p < 0.05) difference in the median relative abundance of taxa was observed in the genus Prevotella, which was in accordance with the difference in microbiome composition. At the species level in this genus, significant (p < 0.05) difference was observed in species Prevotella nanceiensis (P. nanceiensis), Prevotella nigrescens (P. nigrescens), Prevotella copri (P. copri) and Prevotella pallens (P. pallens), and all these species had a relative abundance higher than 0.01% (Supplementary Fig. 1).
The second largest significant (p < 0.05) difference in the median relative abundance of taxa was observed in genus Streptococcus. At the species level in this genus, the relative abundance of Streptococcus infantis (S. infantis) was significantly different between MEF50pred%-low group and healthy control group (Supplementary Fig. 1).
Other species that had significant (p < 0.05) difference in relative abundance between MEF50pred%-low group and the healthy controls include Campylobacter rectus (C. rectus) and Collinsella aerofaciens (C. aerofaciens) (Supplementary Fig. 1).
Partial least squares discriminant analysis (PLS-DA) of microbial difference
The PLS-DA model was established to identify the contribution of taxa to the differences in community structure between the groups. Figure 2 shows supervised PLS-DA plots of the microbial differences between the MEF25 pred% and MEF50 pred% functional groups. It can be seen that the two clusters were characterized by composition differences according to MEF25 pred% (Fig. 2A) and MEF50 pred% (Fig. 2B) function and asthma severity.
Supervised PLS-DA plots with confidence ellipse, arrows point to the outcome category of each subject, including mild asthma and severe asthma as a subgroup. A MEF25 pred% group with a subgroup of asthma status. B MEF50 pred% group with a subgroup of asthma status
Moreover, a heat map of the Euclidian distance of taxa between clusters characterized by the MEF50 pred% function groups is shown in Fig. 3, indicating the distribution of taxa to component 1 in each sample.
Clustered image maps by different MEF50 pred% groups, including asthma status as a subgroup. Samples are represented in columns and taxa in rows. The colored side at the top of the heatmap indicates different groups. (Note: this plot was created with package mixOmics [32] of R software (version 3.5.1) [29])
It could be seen that in the MEF50pred%-low group, at the species level, Johnsonella ignava, Rothia dentocariosa (R. dentocariosa), C. rectus, Treponema socranskii (T. socranskii), P. nigrescens, Treponema amylovorum, Aggregatibacter segnis (A. segnis), and Corynebacterium durum had the largest Euclidian distance between the two clusters. Meanwhile, in the MEF50pred%-high group, Veillonella dispar, P. pallens, P. nanceiensis, and P. melaninogenica had the largest Euclidian distance between the two clusters.
Linear associations between sputum microbiome and small airway indices
Mixed multiple linear regression analysis (MaAsLin) was performed to explore whether there were linear relationships between sputum microbiomes and MEF25, MEF50, MEF75, PEF, MEF (75/25), and FEV1/FVC. Figure 4 shows a heat map of the significant (p < 0.05) estimates, indicating the magnitude of the coefficients in the linear associations.
MaAsLin analysis heat map of associations between small airway indices and the microbiome. Only significant (p < 0.05) associations are shown. The numbers in the figure indicate the magnitude of the coefficients in the linear associations
It can be seen that MEF (75/25) and FEV1/FVC had the most associations with the microbiome. Only two species, P. nanceiensis and P. pallens, had positive associations with MEF25, MEF50, MEF (75/25), and FEV1/FVC levels, whereas the species J. ignava, R. dentocariosa, W. succinogenes, and C. rectus had negative associations with the three MEF indices. The species P. piscolens had a negative association with MEF25, MEF50, MEF (75/25), and FEV1/FVC levels. The species Selenomonas noxia had a negative association with MEF75 and FEV1/FVC, while the species T. socranskii had a negative association with MEF50, MEF75/25, and FEV1/FVC. Lastly, the species Streptococcus anginosus and Prevotella tannerae had negative associations with MEF25, MEF75/25, and FEV1/FVC.
KEGG pathway analysis
16S rDNA amplicon data were supplemented with genomic data using PICRUSt. Genes from different bacteria likely to perform the same function were grouped into KEGG orthologues (KOs) by the Kyoto Encyclopedia of Genes and Genomes (KEGG). Differentially abundant KOs were screened using the Bonferroni-corrected Wilcoxon rank sum test for differences between healthy individuals and the MEF50pred%-low group. Significant (p < 0.05) differences are shown in Fig. 5. Between the MEF25pred%-low and MEF25pred%-high groups, changes in microbial functional genes in six categories were related to pathways associated with metabolism of cofactors and vitamins, transport and catabolism, biosynthesis and secondary metabolism, immune disease, and the endocrine system (Fig. 5A). Between the MEF50pred%-low and MEF50pred%-high groups, changes in functional genes were associated with energy and carbohydrate metabolism, replication and repair, protein folding, sorting and degradation, amino acid metabolism, drug resistance, xenobiotics, and infectious disease (Fig. 5B).
KEGG pathway analysis between the study groups. A MEF25pred%-low and MEF25 pred%-high groups. B MEF50pred%-low and MEF50 pred%-high groups. (Note: all the KEGG identifiers were from https://www.kegg.jp/kegg/)
In this study, we found significant differences in the composition, relative abundance, biomarkers and signaling pathways of the airway microbiome between small airway functional groups and healthy controls. Two microbiome-driven clusters were identified and characterized by small airway function, and changes in microbiome composition between small airway functional groups were observed. Our study provides evidence for the connection between respiratory tract microbiota and small airway function in asthma patients.
Although the precise role of bacteria in airway inflammation remains to be established, some genera or species have been reported to be associated with asthma severity and phenotype. Specifically, the genera Haemophilus, Moraxella, and Neisseria of the phylum Proteobacteria, and the species Haemophilus influenzae and Moraxella catarrhalis, have been associated with worse asthma control [6, 8, 33, 34]. In this study, we also found associations between specific bacteria and small airway function. First, we observed that two species, P. pallens and P. nanceiensis, were correlated with better small airway function and better asthma status.
These two species had positive linear estimates with MEF50, MEF25, MEF (75/25) and FEV1/FVC. Moreover, P. nanceiensis was a biomarker in the healthy control group (Supplementary Fig. 2), its relative abundance was significantly (p < 0.05) decreased in the small airway dysfunction groups (MEF25pred%-low and MEF50pred%-low groups), and it had the largest decrease in fold-difference in the MEF50pred%-low group (Supplementary Fig. 3). This is in accordance with studies that reported P. nanceiensis as a "beneficial" commensal bacterium in the respiratory system. In one of those studies, the abundance of P. nanceiensis was decreased, compared with healthy airways, in the airways of patients with chronic obstructive pulmonary disease (COPD), asthma, diabetes, celiac disease, and chronic periodontitis [35, 36]. In another study, of children with Henoch-Schönlein Purpura [37], P. nanceiensis was observed to be positively correlated with IgA increase. IgA is important at mucosal surfaces for maintaining homeostasis [38, 39], and IgA complexes activate eosinophils and neutrophils in inflammation. In this situation, P. nanceiensis might have participated in the immune responses. This is in accordance with earlier findings that increased P. nanceiensis was associated with diminished neutrophilic airway inflammation, suggesting that P. nanceiensis is related to Th2-high type asthma [40]. It is therefore possible that some commensal bacteria of the airways participate in the regulation of local and distant immune responses [41].
We also observed some taxa that had negative associations with small airway function or were significantly (p < 0.05) enriched in the MEF25pred%-low or MEF50pred%-low group. Many of these taxa play a role in human lung, oral and cardiovascular diseases [42]. Among them, C. rectus, which had the largest negative estimate with all small airway functional indices, was significantly (p < 0.05) enriched in the MEF25pred%-low and MEF50pred%-low groups. C. rectus has been reported to be associated with periodontal disease [43], and has been linked to coronary artery disease, lung abscess, empyema, brain abscess, and osteomyelitis [43, 44]. The precise reasons for these associations are unclear. However, evidence shows that C. rectus increases production of the proinflammatory cytokines IL-6 and IL-8 in human gingival fibroblasts [45], suggesting it may induce an inflammatory milieu in other tissues.
In this study, P. nigrescens was also observed to have a negative estimate with MEF (75/25) and FEV1/FVC. More recently, P. nigrescens has been reported to be associated with signs of carotid atherosclerosis in patients without periodontitis and endodontic infections [46, 47]. The latter finding of dental colonization suggests possible distal spread of either the bacteria or inflammatory mediators such as cytokines. Patients with asthma also show an increased risk of bacterial infection. Certain bacterial species may transition from benign to pathogenic activities under some conditions, but whether this is true in asthma requires additional research.
R. dentocariosa was the only taxon that had negative estimates with all small airway and lung function indices observed in this study. R. dentocariosa is a normal commensal bacterium of the oral cavity and is associated with dental caries and periodontal disease. The bacterium has also been reported to be associated with septic arthritis, pneumonia, arteriovenous infection, and acute bronchitis [48]. Of note, R. dentocariosa can upregulate production of TNF-α by T cells [49].
S. anginosus was another taxon observed in our study to have a negative relationship with small airway function; it has been reported to be associated with pharyngitis and infections of internal organs and certain body fluids [50].
Functional analysis using PICRUSt showed clear differences between the predicted bacterial metabolic functions in the different study groups in our work. Pathway analysis of changes in the microbial flora genes indicated that they were related to carbohydrate and amino acid metabolism, cellular processes, and human diseases, and that the changes were distributed in different proportions. These findings are in accordance with other reports and suggest increased metabolic activity of the airway microbiome in asthmatic individuals [51, 52]. However, due to the limitations of PICRUSt, this prediction did not correspond to specific genera. Combining these analytic approaches may yield new insights.
The present study has a number of limitations. First, the cohort sample size is moderate and may not accurately reflect the true population. Second, some important indices, such as IgA, were not tested in all patients with asthma. Further, the roles of seasonal irritants, pollutants and smoke inhalation, such as from tobacco, were not examined in this study.
In summary, our work provides evidence that small airway function is associated with the respiratory tract microbiome, and that commensal microorganisms may participate in the regulation of local and distant immune responses. Our findings may inform therapy for patients with "small airway phenotype" asthma.
The 16S RNA datasets generated and analyzed during the current study are now accessible in the NCBI repository: https://www.ncbi.nlm.nih.gov/bioproject/PRJNA879958.
ACQ6: Asthma Control Questionnaire 6
GINA: Global Initiative for Asthma
FeNO: Fractional Exhaled Nitric Oxide
FEV1: Forced Expiratory Volume in 1 s
FVC: Forced Vital Capacity
MEF25: Maximal Expiratory Flow at 25% Vital Capacity
MEF50: Maximal Expiratory Flow at 50% Vital Capacity
MEF25pred%: Percentage of Tested MEF25 of Predicted MEF25
MEF50pred%: Percentage of Tested MEF50 of Predicted MEF50
PEF: Peak Expiratory Flow Rate
KEGG: Kyoto Encyclopedia of Genes and Genomes
LEfSe: Linear Discriminant Analysis Effect Size method
PLS-DA: Partial Least Squares Discriminant Analysis
PICRUSt: Phylogenetic Investigation of Communities by Reconstruction of Unobserved States
VC: Vital Capacity
Maglio A, Vitale C, Pellegrino S, Calabrese C, Parente R, Triggiani M, et al. Small airways function: evaluation in a population of adult patients with severe asthma and potential use as a response biomarker for anti-IL5 therapy. Virtual Congress 2020, E-poster session, Number: 2265. 2020. https://doi.org/10.1183/13993003.congress-2020.2265.
Alfieri V, Aiello M, Pisi R, Tzani P, Mariani E, Marangio E, et al. Small airway dysfunction is associated to excessive bronchoconstriction in asthmatic patients. Respir Res. 2014;15(1):86–93. https://doi.org/10.1186/s12931-014-0086-1.
Gao J, Wu H, Wu F. Small airway dysfunction in patients with cough variant asthma: a retrospective cohort study. BMC Pulm Med. 2021;21(9):49–52.
Contoli M, Santus P, Papi A. Small airway disease in asthma: pathophysiological and diagnostic considerations. Curr Opin Pulm Med. 2015;21(1):68–73.
Farah CS, et al. Association between peripheral airway function and neutrophilic inflammation in asthma. Respirology. 2015;20(6):975–81.
Huang YJ, Boushey HA. The bronchial microbiome and asthma phenotypes. Am J Respir Crit Care Med. 2013;188(10):1178–80.
Claassen S, et al. The association between faecal microbiota and asthma or wheezing: a systematic review and meta-analysis. Int J Infect Dis. 2014;21:336.
Huang YJ, Nariya S, Harris JM, Lynch SV, Choy DF, Arron JR, Boushey H. The airway microbiome in patients with severe asthma: Associations with disease features and severity. J Allergy Clin Immunol. 2015;136(4):874–84.
Ilmarinen P, Tuomisto LE, Kankaanranta H. Phenotypes, risk factors, and mechanisms of adult-onset asthma. Mediat Inflamm. 2015;2015:1–19.
Bassis CM, et al. Analysis of the upper respiratory tract microbiotas as the source of the lung and gastric microbiotas in healthy individuals. Mbio. 2015;6(2):e00037.
Huang YJ, Boushey HA. The microbiome in asthma. J Allergy Clin Immunol. 2015;135(1):25–30. https://doi.org/10.1016/j.jaci.2014.11.011.
Denner DR, et al. Corticosteroid therapy and airflow obstruction influence the bronchial microbiome, which is distinct from that of bronchoalveolar lavage in asthmatic airways. J Allergy Clin Immunol. 2016;137(5):1398.
Durack J, Lynch SV, Nariya S, Bhakta NR, Beigelman A, Castro M, et al. Features of the bronchial bacterial microbiome associated with atopy, asthma, and responsiveness to inhaled corticosteroid treatment. J Allergy Clin Immunol. 2017;140(1):63–75.
Pulmonary Function Workgroup of Chinese Society of Respiratory Diseases (CSRD), Chinese Medical Association. The Chinese national guidelines of pulmonary function test. Chin J Tuberc Respir Dis. 2014;37:566–71.
GINA Science Committee, G.B.o.D., GINA Dissemination and Implementation Committee, GINA Executive Director, Asthma management and prevention for adults and children older than 5 years: a pocket guide for health professionals. 2021.
GINA Science Committee, G.B.o.D., GINA Dissemination and Implementation Committee, GINA Executive Director, Global Strategy for Asthma management and Prevention (2021 update). 2021.
Liu Y, Zhang L, Li HL, Liang BM, Oliver BG. Small airway dysfunction in asthma is associated with perceived respiratory symptoms, non-type 2 airway inflammation, and poor responses to therapy. Respiration. 2021;100(8):767–79. https://doi.org/10.1159/000515328.
Ciprandi G, Capasso M, Tosca M, Salpietro C, Salpietro A, Marseglia G, et al. A forced expiratory flow at 25‐75% value <65% of predicted should be considered abnormal: a real-world, cross-sectional study. Allergy Asthma Proc. 2012;33(1):5–8. https://doi.org/10.2500/aap.2012.33.3524.
Xiao D, Chen Z, Wu S, Huang K, Chen S. Prevalence and risk factors of small airway dysfunction, and association with smoking, in china: findings from a national cross-sectional study. Lancet Respir Med. 2020;8(11):1081–93.
Chanez P, et al. Sputum induction. Eur Respir J Suppl. 2002;37(Supplement 37):3s.
Djukanovic R, et al. Standardised methodology of sputum induction and processing. Eur Respir J Suppl. 2002;37(Supplement 37):51s.
Bokulich NA, et al. Optimizing taxonomic classification of marker-gene amplicon sequences with QIIME 2's q2-feature-classifier plugin. Microbiome. 2018;6(1):90.
Callahan BJ, et al. DADA2: high-resolution sample inference from amplicon data. 2015.
Afgan E, Baker D, Batut B, van den Beek M, Bouvier D, Čech M, et al. The galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2018 update. Nucleic Acids Res. 2018;46(Issue W1):W537–W544. https://doi.org/10.1093/nar/gky379.
Kanehisa M, Furumichi M, Sato Y, Ishiguro-Watanabe M, Tanabe M. KEGG: integrating viruses and cellular organisms. Nucleic Acids Res. 2021;49:D545–51.
Mallick H, et al. Multivariable association in population-scale meta-omics studies. http://huttenhower.sph.harvard.edu/maaslin2. 2020.
Mallick H, Rahnavard A, McIver LJ. MaAsLin 2: multivariable association in population-scale meta-omics studies. R/Bioconductor package. http://huttenhower.sph.harvard.edu/maaslin2. 2020.
RStudio Team. RStudio: integrated development environment for R. Boston: RStudio, PBC; 2020.
R Development Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2011.
Foster Z, Sharpton T, Grunwald N. Metacoder: an R package for visualization and manipulation of community taxonomic diversity data. Phytopathology. 2017;107(12):118.
Wemheuer B, Wemheuer F. Tax4Fun2: predicting functional profiles from metagenomic 16S rRNA data. R package version 1.1.5. https://github.com/bwemheu/Tax4Fun2. 2020.
Le Cao K-A, Rohart F, Gonzalez I, Dejean S, Gautier B, Bartolo F, Monget P, Coquery J, Yao F, Liquet B. mixOmics: Omics Data Integration Project. R package version 6.1.1. https://CRAN.R-project.org/package=mixOmics. 2016.
Green B, et al. Pathogenic Bacteria in induced sputum in severe asthma. Thorax. 2008;63:A49.
Huang YJ, Nariya S, Lynch SV, Harris J, Choy D, Arron JA, Boushey H. The airway microbiome in severe asthma. J Allergy Clin Immunol. 2015;136 (4):874–84. https://doi.org/10.1016/j.jaci.2015.05.044.
Goodson JM, Hartman ML, Shi P, Hasturk H, Yaskell T, Vargas J, et al. The salivary microbiome is altered in the presence of a high salivary glucose concentration. PLoS ONE. 2017;12(3):e0170437. https://doi.org/10.1371/journal.pone.0170437.
Sakamoto M, et al. Changes in oral microbial profiles after periodontal treatment as determined by molecular analysis of 16S rRNA genes. J Med Microbiol. 2004;53(6):563–71.
Wang X, et al. Gut microbiota dysbiosis is associated with Henoch-Schönlein Purpura in children. Int Immunopharmacol. 2018;58:1–8.
Sonnenberg GF, Artis D. Innate lymphoid cell interactions with microbiota: implications for intestinal health and disease. Immunity. 2012;37(4):601–10.
Pabst O. New concepts in the generation and functions of IgA. Nat Rev Immunol. 2012;12(12):821–32.
Larsen JM, Musavian HS, Butt TM, Ingvorse C, Brix S. COPD and asthma-associated Proteobacteria, but not commensal Prevotella spp. promote TLR2-independent lung inflammation and pathology[J]. Immunology. 2014;144(2):333–42.
Madura LJ, et al. Divergent pro-inflammatory profile of human dendritic cells in response to commensal and pathogenic Bacteria associated with the airway microbiota. PLoS One. 2012;7(2):e31976.
Rai AK, et al. Dysbiosis of salivary microbiome and cytokines influence oral squamous cell carcinoma through inflammation. Arch Microbiol. 2021;203(1):137–52.
Shiga Y, Hosomi N, Nezu T, Nishi H, Aoki S, Nakamori M, et al. Association between periodontal disease due to Campylobacter rectus and cerebral microbleeds in acute stroke patients. PLoS ONE. 2020;15(10):e0239773. https://doi.org/10.1371/journal.pone.0239773.
Zhu XH, et al. Campylobacter rectus infection leads to lung abscess: a case report and literature review. Infect Drug Resistance. 2021;14:2957–63.
Wang BN, Kraig E, Kolodrubetz D. Use of defined mutants to assess the role of the campylobacter rectus S-layer in bacterium-epithelial cell interactions. Infect Immun. 2000;68(3):1465–73.
Yakob M, et al. Prevotella nigrescens and Porphyromonas gingivalis are associated with signs of carotid atherosclerosis in subjects with and without periodontitis. J Periodontal Res. 2011;46(6):749–55.
Gharbia SE, Haapasalo M, Shah HN, Kotiranta A, Lounatmaa K, Pearce MA, Devine DA. Characterization of Prevotella intermedia and Prevotella nigrescens isolates from periodontic and endodontic infestions. J Periodontol. 1994;65(1):56–61. https://doi.org/10.1902/jop.1994.65.1.56.
Shakoor S, et al. Rothia dentocariosa endocarditis with mitral valve prolapse: case report and brief review. Infection. 2011;39(2):177–9.
Kataoka H, Taniguchi M, Fukamachi H, Arimoto T, Morisaki H, Kuwata H. Rothia dentocariosa induces tnf-alpha production in a tlr2-dependent manner. Pathog Dis. 2013;71(1):65–8.
Ruoff KL. Streptococcus anginosus ("Streptococcus milleri"): the unrecognized pathogen. Clinical Microbiol Rev. 1988;1:102–8.
Mikkelsen H, et al. Interrelationships between colonies, biofilms, and planktonic cells of Pseudomonas aeruginosa. J Bacteriol. 2007;189(6):2411–6.
Green BJ, Wiriyachaiporn S, Grainge C, Rogers GB, Kehagia V, et al. Potentially pathogenic airway bacteria and neutrophilic Inflammation in treatment resistant severe asthma. PLoS ONE. 2014;9(6):e100645. https://doi.org/10.1371/journal.pone.0100645.
We thank Ms. Wu Xu for helping to collect the sputum samples.
Natural Science Foundation of Beijing Municipality (No. 7192069) and the National Natural Science Foundation of China (grant no. 42175184).
State Key Laboratory of Severe Weather of CMA, Chinese Academy of Meteorological Sciences, Beijing, 100081, China
Yi Li & Xingqin An
Department of Surgery, Beijing ChaoYang Hospital, Capital Medical University, Chaoyang District, Beijing, China
Congying Zou
Department of Respiratory and Critical Care Medicine, Beijing Institute of Respiratory Medicine and Beijing Chao-Yang Hospital, Capital Medical University, No.8, Gongtinan Road, Chaoyang District, Beijing, 100020, China
Jieying Li, Wen Wang, Yue Guo, Lifang Zhao & Chunguo Jiang
Department of Health and Environmental Sciences, Xi'an Jiaotong-Liverpool University, Suzhou, China
Peng Zhao
Jieying Li
Wen Wang
Yue Guo
Lifang Zhao
Chunguo Jiang
Xingqin An
Yi Li: data analysis and manuscript writing; Congying Zou: administrative support; Jieying Li: recording and interpretation of clinical metadata; Yue Guo: recording and interpretation of blood cell count data; Lifang Zhao: interpretation of lung function test results; Chunguo Jiang: interpretation of blood cell count data; Wen Wang: conception and design; Peng Zhao: R software support; Xingqin An: R software support. The author(s) read and approved the final manuscript.
Correspondence to Wen Wang.
The study was approved by the Ethics Committee of the Chaoyang Hospital (ethics NO.2020-Re-425 [26]).
All participants were informed about the study aims and informed consent was obtained from all participants for the participation in the study.
All methods were performed in accordance with the relevant guidelines and regulations by the Ethics Committee of the Chaoyang Hospital.
Additional file 1: Sup. Table 1.
Experimental materials and reagents. Sup. Figure 1. Differences between MEF50 function groups in microbiome composition at species level. Sup. Figure 2. Taxonomy tree and LDA scores of the groups. (A) Taxonomy tree and LDA scores between MEF50predicted%-low and MEF50predicted%-high groups. (B) Taxonomy tree and LDA scores between the MEF50predicted%-low group and the healthy control group. Circles from within to outward indicate the classification from the phylum to the genus, respectively. Each small circle represents a taxon with its diameter proportional to the relative abundance. Dots with different colors denote the core species of each group. Histogram showing the LDA scores of the biomarkers with statistical differences. Sup. Figure 3. Volcano plot of fold change (≥ 1.2-fold, adjusted p < 0.05) between the MEF50predicted%-low and MEF50predicted%-high groups in subjects with asthma.
Li, Y., Zou, C., Li, J. et al. Upper respiratory tract microbiota is associated with small airway function and asthma severity. BMC Microbiol 23, 13 (2023). https://doi.org/10.1186/s12866-023-02757-5
Asthma
Microbiome
Small airway function
Maximal expiratory flow | CommonCrawl |
HiFive: a tool suite for easy and efficient HiC and 5C data analysis
Michael EG Sauria1,
Jennifer E. Phillips-Cremins2,
Victor G. Corces3 &
James Taylor1
Genome Biology volume 16, Article number: 237 (2015)
The chromatin interaction assays 5C and HiC have advanced our understanding of genomic spatial organization, but analysis approaches for these data are limited by usability and flexibility. The HiFive tool suite provides efficient data handling and a variety of normalization approaches for easy, fast analysis and method comparison. Integration of MPI-based parallelization allows scalability and rapid processing time. In addition to single-command analysis of an entire experiment from mapped reads to interaction values, HiFive has been integrated into the open-source, web-based platform Galaxy to connect users with computational resources and a graphical interface. HiFive is open-source software available from http://taylorlab.org/software/hifive/.
In the more than a decade since the vast majority of the human genome was first sequenced, it has become clear that sequence alone is insufficient to explain the complex gene and RNA regulatory patterns seen over time and across cell types in eukaryotes. The context of specific sequences – whether from combinations of DNA-binding transcription factors (TFs) [1–3], methylation of the DNA itself [4, 5], or local histone modifications [4, 6] – is integral to how the cell utilizes each sequence element. Although the potential roles that sequentially distant but spatially proximal sequences and their binding and epigenetic contexts play in regulating expression and function have long been recognized, it is only over the past decade that new sequencing-based techniques have enabled high-throughput analysis of higher-order chromatin structures and investigation into how these structures interact among themselves and with other genomic elements to influence cellular function.
Several different sequencing methods for assessing chromatin interactions have been devised, all based on preferentially ligating spatially close DNA sequence fragments. These approaches include ChIA-Pet [7], tethered chromosome capture [8], and the chromatin conformation capture technologies of 3C, 4C, 5C, and HiC [9–12] (Additional file 1: Figure S1). While these assays have allowed a rapid expansion of our understanding of the nature of genome structure, they also have presented some formidable challenges.
In both HiC and 5C, systematic biases resulting from the nature of the assays have been observed [13, 14], resulting in differential representation of sequences in the resulting datasets. While analyses at a larger scale are not dramatically affected by these biases due to the large number of data points being averaged over, higher-resolution approaches must first address these challenges. This is becoming more important as the resolution of experiments is increasing [15]. Several analysis methods have been described in the literature and applied to correcting biases in HiC [14–21] and 5C data [22–24]. There is still room for improving our ability to remove this systematic noise from the data and resolve finer-scale features and, perhaps more importantly, for improving the usability and reproducibility of normalization methodologies.
A second challenge posed by data from these types of assays is one of resources. Unlike other next-generation sequencing assays where even single-base resolution is limited to a few billion data points, these assays assess pairwise combinations, potentially increasing the size of the dataset by several orders of magnitude. For a three billion base pair genome cut with a six-base restriction enzyme (RE), the number of potential interaction pairs is more than half a trillion (if considering both fragment ends) while a four-base RE can yield more than two and a half quadrillion interaction pairs. Even allowing that the vast majority of those interactions will be absent from the sequencing data, the amount of information that needs to be handled and the complexity of normalizing these data still pose a major computational hurdle, especially for investigators without access to substantial computational resources.
Here we describe HiFive, a suite of tools developed for handling both HiC and 5C data using a combination of empirically determined and probabilistic signal modeling. HiFive has performance on par or better than other available methodologies while showing superior speed and efficient memory usage through parallelization and data management strategies. In addition to providing a simple interface with no preprocessing or reformatting requirements, HiFive offers a variety of normalization approaches including versions of all commonly used algorithmic approaches allowing for straightforward optimization and method comparison within a single application. In addition to its command line interface, HiFive is also available through Galaxy, an open-source web-based platform, connecting users with computational resources and the ability to store and share their 5C and HiC analyses. All of these aspects of HiFive make it simple to use and fast, and make its analyses easily reproducible.
The HiFive analysis suite
HiFive was designed with three goals: first, to provide a simple-to-use interface for flexible chromatin interaction data analysis; second, to provide well-documented support for 5C analysis; and third, to improve performance over existing methodologies while reducing analysis runtimes. These are accomplished through a stand-alone program built on a Python library designed for customizable analysis and supported under the web-based platform Galaxy.
HiFive can be used in three ways: from the command line, over the Internet, or as a development library. The command line interface provides users with the ability to perform analyses as a series of steps or as a single unified analysis. The only inputs that HiFive requires are a description of the genomic partitioning and interaction data, either directly as mapped reads or as counts of reads associated with the partitioned genome (for example, fragment pairs and their observed reads). HiFive handles all other formatting and data processing. In addition, HiFive has been bundled as a set of tools available through Galaxy (Fig. 1). This not only provides support with computational resources but also ensures simple installation of all prerequisite libraries and packages. HiFive was also created to allow custom creation of analysis methods as a development library for chromatin interaction analysis through extensive documentation and an efficient data-handling framework.
HiFive's tool interface through Galaxy. HiFive tools are available through the Galaxy toolshed, providing a graphical interface and showing tool option inter-dependencies
Organization of HiFive
At its core, HiFive is a series of hierarchical data structures building from general to specific information. There are four primary file types that HiFive creates, all relying on the Numpy scientific computing Python package for efficient data arrays and fast data access. These structures include genomic partitioning, observed interactions, distance-dependence relationship and normalization parameters, and heatmaps of observed and expected interactions. By separating these attributes, many datasets can utilize the same genomic partitioning and multiple analyses can be run using the same interaction data without the need to reload or process information.
Data processing and filtering
In order to process 5C or HiC data, the first step after mapping is converting the data into a form compatible with the desired workflow. HiFive and HiCLib [17] are nearly alone in their ability to handle mapped data without additional processing. In HiFive, reads can be read directly from numerous BAM-formatted files, either as an independent step or within the integrated one-step analysis. In all other cases, reads need to be converted to other formats: HiCPipe [14] provides scripts for some but not all of these processes, while HiCNorm [16] relies on pre-binned reads. In all cases aside from HiFive, a workflow is required to move from mapped reads to normalization.
Filtering is accomplished in two phases, during the initial processing of reads and during project creation (Additional file 1: Figures S2 and S3). The first phase limits data to acceptable paired-end combinations. For 5C data, this means reads mapping to fragments probed with opposite-orientation primers. HiC data use two criteria, total insert size (a user-specified parameter) and orientation/fragment relationship filtering. In the latter case, reads originating from non-adjacent fragments or from adjacent fragments and in the same orientation are allowed, similar to Jin et al. [19] (Additional file 1: Figure S4). The second phase, common to both 5C and HiC data, is an iterative filtering based on numbers of interactions per fragment or fragment end (fend). Briefly, total numbers of interactions for each fragment are calculated, and fragments with insufficient numbers of interaction partners are removed along with all of their interactions. This is repeated until all fragments interact with a sufficient number of other non-filtered fragments. This filtering is crucial for any fragment or fend-specific normalization scheme to ensure sufficient interdependency between interaction subsets to avoid convergence issues.
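A simplified sketch of this iterative coverage filter is shown below. It counts interactions rather than distinct partners per fragment, which is a simplification of HiFive's actual criterion, and the threshold value is an assumed parameter:

```python
import numpy as np

def iterative_filter(pairs, n_frag, min_interactions=2):
    """Iteratively drop fragments with too few remaining interactions.

    pairs: (N, 2) integer array of fragment-index pairs with observed reads.
    Returns a boolean mask over fragments; True = retained after filtering.
    """
    valid = np.ones(n_frag, dtype=bool)
    while True:
        keep = valid[pairs[:, 0]] & valid[pairs[:, 1]]  # pairs between valid fragments
        tally = np.zeros(n_frag, dtype=int)
        for col in (0, 1):
            idx, cnt = np.unique(pairs[keep, col], return_counts=True)
            tally[idx] += cnt
        failing = valid & (tally < min_interactions)
        if not failing.any():                           # converged: all survivors pass
            return valid
        valid[failing] = False                          # drop and re-check the rest
```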
Distance-dependence signal estimation
One feature of HiFive that is notably absent from nearly all other available analysis software is the ability to incorporate the effects of sequential distance into the normalization. One exception is HiTC [21], which uses a loess regression to approximate the distance-dependence relationship of 5C data to genomic distance. This method does not, however, allow for any other normalization of 5C data. Another is Fit-Hi-C [25], although this software assigns confidence estimates to mid-range contact bins rather than normalizing entire datasets. This feature is of particular importance for analysis of short-range interactions such as those in 5C data, or for making use of counts data rather than a binary observed/unobserved indicator. For 5C data, HiFive uses a linear regression to estimate parameters for the relationship between log-distance and log-counts (Additional file 1: Figure S5). HiC data require a more nuanced approximation because of the amount of data involved and the non-linear relationship over the range of distances queried. To achieve this, HiFive uses a linear piece-wise function to approximate the distance-dependent portion of the HiC signal, similar to but distinct from that used by Fit-Hi-C. HiFive partitions the total range of interactions into equally sized log-transformed distance bins with the exception of the smallest bin, whose upper bound is specified by the user. Mean counts and log-transformed distances are calculated for each bin and a line is used to connect each set of adjacent bin points (Additional file 1: Figure S6). For distances extending past the first and last bins, the line segment is simply extended from the last pair of bins on either end. Simultaneously, a similar distance-dependence function is constructed using a binary indicator of observed/unobserved instead of read counts for each fend pair. All distances are measured between fragment or fend midpoints.
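The piecewise approximation can be illustrated with a compact numpy sketch: counts are averaged within log-spaced distance bins and adjacent bin means are connected linearly. This mirrors the description above but simplifies the end behavior (np.interp clamps rather than extending the terminal segments), so it should be read as a sketch rather than HiFive's implementation:

```python
import numpy as np

def distance_dependence(dists, counts, n_bins=50, min_dist=1000.0):
    """Piecewise-linear estimate of mean log-count vs. log-distance.

    Bins are equal width in log space except the first, whose upper bound
    is set by min_dist (mirroring the user-specified smallest bin).
    Returns a function mapping inter-fend distance -> expected log signal.
    Assumes each nonempty bin has a positive mean count.
    """
    log_d = np.log(dists)
    inner = np.linspace(np.log(min_dist), log_d.max() + 1e-9, n_bins)
    edges = np.concatenate(([log_d.min() - 1e-9], inner))
    which = np.clip(np.searchsorted(edges, log_d, side="right") - 1, 0, n_bins - 1)
    bin_x = np.full(n_bins, np.nan)
    bin_y = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = which == b
        if mask.any():                        # skip empty bins entirely
            bin_x[b] = log_d[mask].mean()
            bin_y[b] = np.log(counts[mask].mean())
    ok = ~np.isnan(bin_x)
    # np.interp connects adjacent bin points with line segments.
    return lambda d: np.interp(np.log(d), bin_x[ok], bin_y[ok])
```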
HiFive normalization algorithms
HiFive offers three different normalization approaches. These include a combinatorial probability model based on HiCPipe's algorithm called 'Binning', a modified matrix-balancing approach called 'Express', and a multiplicative probability model called 'Probability'. In the Binning algorithm, learning is accomplished in an iterative fashion by maximizing each set of characteristic bin combinations independently each round using the Broyden–Fletcher–Goldfarb–Shanno algorithm for maximum likelihood estimation.
The Express algorithm is a generalized version of matrix balancing. While it can use the Knight-Ruiz algorithm [26] for extremely fast standard matrix balancing (ExpressKR), the Express algorithm also has the ability to take into account differing numbers of possible interactions and find corrections weighted by these numbers of interactions. The set of valid interactions is defined as the set A: interactions whose fends have both passed the above-described filtering process and whose inter-fend distance falls within user-specified limits. In addition, because counts are log-transformed for 5C normalization, only non-zero interactions are included in set A. For each interaction c between fends or fragments i and j (for HiC and 5C, respectively) in the set of valid interactions A, the correction parameter f_i is updated as in (1) for HiC and (2) for 5C.
$$ {f}_i^{\prime }={f}_i\sqrt{\frac{{\displaystyle \sum_{j\in {A}_i}\frac{c_{ij}}{E_{ij}}}}{{\displaystyle \sum_{j\in {A}_i}1}}} $$
$$ {f_i}^{\prime }={f}_i+\frac{{\displaystyle \sum_{c_{ij}\in {A}_i}\left[ \ln \left({c}_{ij}\right)-{E}_{ij}\right]}}{{\displaystyle \sum_{c_{ij}\in {A}_i}2}} $$
The expected value of each HiC interaction is simply the product of the exponential of the expected distance-dependent signal D(i,j) and the fend corrections (3).
$$ {E}_{ij}={e}^{D\left(i,j\right)}{f}_i{f}_j $$
5C interactions have expected values that correspond to the log-transformed count and are the sum of each signal component (4).
$$ {E}_{ij}=D\left(i,j\right)+{f}_i+{f}_j $$
By scaling the row sums based on the number of interactions, the weighted matrix balancing allows exclusion of interactions based on interaction size criteria without skewing correction factors due to non-uniform restriction site distribution, position along the chromosome, or fragments or fends filtered due to read coverage. Because it can incorporate the distance-dependent signal, the Express algorithm can operate on counts data, unlike most other matrix-balancing approaches, although it can also be run on binary data (observed vs. unobserved) or log-transformed counts for HiC and 5C, respectively. This algorithm allows for adjustment of counts based on the estimated distance-dependence signal prior to normalization in both weighted (1 and 2) and unweighted (Knight-Ruiz) versions.
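To make the update concrete, one round of the weighted HiC correction (1), with expected values computed as in (3), might look like the following sketch. The dense-matrix layout and variable names are our assumptions, and the sketch assumes every fend retains at least one valid interaction.

```python
import numpy as np

def express_update(counts, D, f, valid):
    """One weighted Express update for HiC fend corrections.

    counts: (n, n) observed counts; D: (n, n) estimated distance-dependence
    signal (log scale); f: length-n correction vector; valid: (n, n) boolean
    mask marking the interaction set A.
    """
    E = np.exp(D) * f[:, None] * f[None, :]          # expected values, Eq. (3)
    ratio = np.where(valid, counts / E, 0.0)         # c_ij / E_ij over A_i
    n_valid = valid.sum(axis=1)                      # |A_i| per fend
    return f * np.sqrt(ratio.sum(axis=1) / n_valid)  # update, Eq. (1)
```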
The multiplicative Probability algorithm models the data assuming some probability distribution with a prior equal to the estimated distance-dependent signal. HiC data can be modeled either with a Poisson or a binomial distribution (Additional file 1: Figure S7). In the case of the binomial distribution, counts are transformed into a binary indicator of observed/unobserved and the distance-dependence approximation function is based on this same binary data. 5C data are modeled using a lognormal distribution. In both cases, only counts in the interaction set A (described above) are modeled.
For both the Express and Probability algorithms, a backtracking-line gradient descent approach is used for learning correction parameters. This allows the learning rate r to be updated each iteration t to satisfy the Armijo criterion (5) based on the cost C, ensuring that parameter updates are appropriate.
$$ Armijo={C}_t-{C}_{t-1}+r{\displaystyle \sum_{i\in A}{\left(\nabla {f}_i\right)}^2} $$
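One way to read the criterion is that an update is accepted when this quantity is non-positive, i.e. when the cost drops by at least r times the sum of squared gradients. A generic backtracking sketch follows; the shrink factor and iteration cap are our choices, and `cost`/`grad` stand in for whichever algorithm's objective is being minimized.

```python
def backtracking_step(f, cost, grad, r=1.0, shrink=0.5, max_tries=20):
    """One gradient step with backtracking until the Armijo test passes.

    f: NumPy array of correction parameters; cost(f) returns a scalar;
    grad(f) returns the array of partial derivatives.
    """
    c_prev = cost(f)
    g = grad(f)
    for _ in range(max_tries):
        f_new = f - r * g
        # Accept when the cost drops by at least r * sum of squared gradients
        if cost(f_new) <= c_prev - r * (g ** 2).sum():
            return f_new, r
        r *= shrink  # otherwise backtrack the learning rate
    return f, 0.0  # no acceptable step found; keep the old parameters
```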
Filtering interactions by interaction size
Chromatin topology is organized around highly reproducible regions of frequent local interactions termed 'topological domains' [27]. Within these structures it has been observed that specific features, such as transcript start sites (TSSs) and CTCF-bound sites, can influence the frequency of interactions in a biased and differential way up- and downstream of them [14]. In order to account for systematic noise and bias without confounding normalization efforts with meaningful, biologically relevant structures, HiFive allows filtering out of interactions using interaction size cutoffs. In order to assess the effects of filtering out shorter-sized interactions, we analyzed data both with and without a lower interaction distance cutoff. For HiC data we analyzed two mouse embryonic stem cell (ESC) datasets with no lower limit and with a lower distance limit of 500 Kb using each of the described normalization algorithms. This size was chosen to eliminate all but the weakest interaction effects observed for TSSs and CTCF-bound sites [28]. HiC normalization performance was assessed using the inter-dataset correlations. For 5C data, there is a much smaller range of interactions; to handle this, we set a lower interaction size cutoff of 50 Kb. 5C normalization performance was assessed as the correlation between 5C data and HiC data of the same cell type [27], with the HiC data binned based on probed 5C fragments to create identically partitioned sets of interactions and normalized using HiFive's Probability algorithm.
The differences in HiC algorithm performances with and without the lower interaction size cutoff were varied, although the largest effects were seen when data were binned in 10 and 50 Kb bins for intra-chromosomal interactions and for overall inter-chromosomal interactions (Additional file 1: Figure S8). Overall, excluding short-range interactions made little difference for the Express algorithm but did improve the performance of the Probability and Binning algorithms. The 5C algorithms showed an opposite result, with an almost universal decrease in performance when short-range interactions were excluded (Additional file 1: Figure S9). As a result, learning HiC normalization parameters using HiFive algorithms was performed excluding interactions shorter than 500 Kb, and 5C analyses were performed using all interaction sizes. All analyses subsequent to normalization (for example, dataset correlations) were performed across all interactions.
Analyzing 5C data
To date, limited work has focused on processing of 5C data to remove technical biases [22–24, 29]. Of that, none has been formalized in published analysis software. In order to assess HiFive's performance in normalizing 5C data, we used two different 5C mouse ESC datasets [23, 24] and found correlations to HiC data of the same cell type [27], binned based on probed 5C fragments to create identically partitioned sets of interactions (Fig. 2, Additional file 1: Figures S9 and S10). HiC interactions were normalized using either HiFive's probability algorithm (Fig. 2) or HiCPipe (Additional file 1: Figure S10) and heatmaps were dynamically binned to account for sparse coverage (see Additional file 1: Methods: 5C-HiC data correlations). Correlations were found between all non-zero pairs of bins (fragment level resolution) following log-transformation. All of HiFive's 5C algorithms showed an improved correlation with HiC data compared to raw data, regardless of HiC normalization approach. The Binning algorithm showed the least improvement, likely due to the limits on the number of bins into which features could be partitioned and characteristics missing from the model, such as primer melting temperature. The standard matrix-balancing approach (ExpressKR) showed good improvement, although not quite as good as the Express and Probability algorithms. All of these normalizations were accomplished in less than one minute, proceeding from a BED file and a counts file to heatmaps.
5C analysis performance. HiFive normalization of 5C data and their correlation to corresponding HiC data. a Correlation of 5C data (intra-regional only) with the same cell type and bin-coordinates in HiC data, normalized using HiFive's probability algorithm for two different datasets and using each of HiFive's algorithms. b Heatmaps for a select region from each dataset, un-normalized, normalized using HiFive's probability algorithm, and the corresponding HiC data, normalized and dynamically binned
HiC analysis software comparison
Several algorithms have been proposed to handle interaction data normalization (Table 1). These analysis approaches can be divided into two varieties, probabilistic and matrix balancing. The probabilistic approach is further divided into combinatorial and multiplicative corrections. The combinatorial probability model is implemented in HiCPipe [14] and remains one of the most popular approaches. This approach uses one or more restriction fend characteristics partitioned into ranges of values and iteratively learns correction values for each combination of ranges based on a binomial distribution of observed versus unobserved fend interactions. A multiplicative modeling approach is used in the analysis software HiCNorm [16]. HiCNorm uses a Poisson regression model using binned counts instead of binary output and assuming that biases from fend characteristics are multiplicative between bin combinations. A different multiplicative approach is matrix balancing, which finds a value for each row/column of a symmetric matrix (or in this case, heatmap) such that after multiplication of each matrix value by its associated row and column values, the sum of each row and column is one. This has been described with at least four different implementations in the literature [15, 17, 20, 30], although only two software packages making use of it have been published (HiCLib [17], now included in the R package HiTC [21], and Hi-Corrector [20]). For this paper, we chose to use our own implementation of the algorithm described by Knight and Ruiz [26] for comparison due to speed and ease of use considerations.
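The core of matrix balancing is short to write down. The sketch below uses a plain symmetric Sinkhorn-style iteration for clarity; it is not the Knight-Ruiz algorithm used for the comparisons, which converges much faster, and it assumes a strictly positive symmetric matrix.

```python
import numpy as np

def balance(matrix, n_iter=1000, tol=1e-8):
    """Row/column corrections c so that diag(c) @ matrix @ diag(c) has unit
    row and column sums (matrix assumed symmetric and strictly positive)."""
    c = np.ones(matrix.shape[0])
    for _ in range(n_iter):
        row_sums = c * (matrix @ c)  # row sums of the corrected matrix
        if np.abs(row_sums - 1.0).max() < tol:
            break
        c /= np.sqrt(row_sums)       # symmetric update; rows equal columns
    return c
```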
Table 1. A comparison of HiC analysis software algorithms and features
Method performances
To assess HiC analysis method performances we used two different pairs of HiC datasets [15, 27, 31], finding interaction correlations across different restriction digests of the same cell type genomes. The Dixon et al. data [27] were produced using mouse ESCs digested with either HindIII or NcoI, yielding approximately 4 Kb fragments. The Selvaraj et al. data [31] were produced from human GM12878 cells using HindIII, while the Rao et al. data [15] were produced from human GM12878 cells using the 4 bp restriction enzyme MboI, producing approximately 250 bp fragments. This allowed assessment of method performance and data handling across a range of experimental resolutions. Correlations were calculated for 10 mutually exclusive intra-chromosomal (cis) interaction ranges and across all cis interactions simultaneously for four binning resolutions. Correlations were also calculated for inter-chromosomal interactions for two resolutions.
HiC analysis methods showed varied performances across interaction size ranges, resolutions, and datasets for intra-chromosomal interactions (Fig. 3a and b). For small interaction sizes, HiFive's Probability and Express algorithms performed consistently well regardless of resolution. At longer interaction distances the Express algorithm typically outperformed the Probability algorithm. HiCNorm showed a nearly opposite performance with poorer inter-dataset correlations for shorter-range interactions but higher correlations at longer ranges, relative to other methods. HiCPipe's performance appeared to depend on binning resolution. At higher resolutions (≤50 Kb), HiCPipe performed worse than the majority of methods. However at lower resolutions it tended to outperform other methods, regardless of interaction size range. HiFive's Binning algorithm had a more consistent performance around the middle of all of the methods across all binning resolutions, with the exception of the 1 Mb resolution for the human data where it performed the worst. Standard matrix balancing consistently performed at or near the bottom of the group regardless of the interaction size range or resolution.
HiC method comparison. Interaction correlations between datasets created with different restriction enzymes for multiple normalization schemes across different binning resolutions. Two datasets are shown, mouse and human. Each mouse dataset was produced using a six-base restriction enzyme. The human datasets were mixed, one produced with a six-base cutter and the other with a four-base cutter. a Data were normalized using several approaches and compared for correlation between two mouse HiC datasets. Interactions were broken down into 10 groups of non-overlapping cis interaction ranges for four resolution levels. b Correlations for 10 different non-overlapping cis interaction ranges at each resolution for each analysis approach. c Overall mouse dataset correlations for each resolution for intra-chromosomal (cis) and inter-chromosomal (trans) interactions. d Overall human dataset correlations for each resolution for intra-chromosomal (cis) and inter-chromosomal (trans) interactions
Correlations across all intra-chromosomal interactions showed much more consistency between analysis methodologies (Fig. 3c and d). This is primarily due to the fact that the main driver of inter-dataset correlation, the interaction distance-counts relationship, was present in all of the analyzed data. HiFive's Probability and Express algorithms were again top performers across almost every intra-chromosomal comparison, although the Probability algorithm showed a decreasing advantage with decreasing binning resolution. HiCNorm, HiCPipe, matrix balancing, and HiFive's Binning algorithm were highly consistent in terms of performance for the mouse datasets. For the human inter-dataset correlations, HiCPipe and matrix balancing showed slightly better performance than average, while HiCNorm fared worse. HiFive's Express algorithm was still the top performer.
Inter-chromosomal datasets showed a wider range of performances and were strongly dependent on which datasets were being analyzed (Fig. 3c and d). For mouse inter-chromosomal interactions, HiFive's Probability and Express algorithms performed much better than other methods at the 250 Kb binning resolution, but consistent with other methods at the 1 Mb resolution. HiCNorm showed worse performance at both bin sizes for the mouse datasets. HiCPipe showed the best performance at the 1 Mb resolution, slightly above other methods, but the second worst performance at the 250 Kb resolution. Results for the human datasets were more consistent across resolutions. HiCNorm, HiFive's Express algorithm, and matrix balancing performed best in both cases with Express doing slightly better at the 250 Kb resolution and HiCNorm at the 1 Mb resolution. The remaining methods showed similar performance to each other, although HiFive's Probability algorithm performed slightly worse than HiFive's Binning algorithm and HiCPipe.
The inconsistency between results for cis and trans interactions suggests that no approach is ideal for both types of interactions. To further explore this we looked at the effects of pseudo-counts in the Binning/HiCPipe normalization scheme and the effects of distance-dependence on normalization. Pseudo-counts are values added to both expected and observed reads to mitigate the impact of stochastic effects. HiCPipe showed a stronger performance compared to HiFive's Binning algorithm at longer ranges and at larger bin sizes. We determined that the primary difference was the inclusion of pseudo-counts in all feature bins prior to normalization. By progressively adding counts, we found that cis interaction correlations decreased at shorter interaction ranges and overall, although the correlations increased at longer ranges and for trans interactions (Additional file 1: Figure S11).
We also performed parallel analyses using our weighted matrix balancing algorithm, Express, with and without the estimated distance-dependence signal removed prior to normalization (Additional file 1: Figure S12). This showed a similar effect to the addition of pseudo-counts, such that leaving the distance-signal relationship intact resulted in stronger long-range interaction correlations in larger bin sizes, stronger 1 Mb binned trans correlations, and poorer overall cis interaction correlations across all bin sizes.
Computational requirements
In order to determine the computational requirements of each analysis method, we ran each analysis on an abbreviated dataset consisting of a single chromosome of cis interactions from the mouse NcoI dataset starting from loading data through producing a 10 Kb heatmap. All normalizations were run using a single processor and publicly available scripts/programs. The exception to this was binning the counts and fragment feature data for HiCNorm. No script was provided for this step so one was written in R to supplement HiCNorm's functionality.
Runtimes varied greatly between normalization methods, ranging from less than 7 min to approximately 12.5 h (Fig. 4). With the exception of HiFive's Probability algorithm, HiFive performed better in terms of runtime than all other algorithms. HiCPipe and HiCNorm both showed long runtimes at least an order of magnitude above other methods. The slowest approach, though, was HiFive's Probability algorithm. This was due to its modeling of every interaction combination across the chromosome. HiFive's implementation of the Knight-Ruiz matrix balancing algorithm, ExpressKR, showed a dramatically faster runtime than any other approach. This was the result of HiFive's fast data loading and efficient heatmapping without the need for distance-dependence parameter calculations.
Running time for HiC analysis methods. For each method, the runtime in minutes is partitioned into the time required for each stage of the processes. All times were determined running methods on an abbreviated dataset of chromosome 1 for the mouse HindIII dataset using a single processor. Note that because of several extremely long runtimes, the graph includes multiple splits
Because of the ever-increasing resolution of experiments and the corresponding size of interaction datasets, scalability is a primary concern for HiC data analysis. Although we compared methods on an even playing field, this does not reflect the complete performance picture when considering finer-scale resolution, processing a complete genome interaction dataset, and more available computational resources.
There are two approaches to determining analysis resolution: prior to or after normalization. Of the methods presented, only HiCNorm determines the resolution of analysis prior to normalization. While it performs well, this means that its processing time and memory requirements scale exponentially with resolution. We were unable to perform any analyses at bin sizes smaller than 10 Kb using this approach. The remaining methods all find correction values for individual fends, meaning that corrections are performed prior to binning interactions.
The increase in dataset size, either due to genome size itself or a finer-scale partitioning of the genome, can be offset by employing more processing power by means of parallelization. HiCLib and HiCNorm do not appear to have any such capability. HiCPipe does have the ability to parallelize calculation of model bin sizes prior to normalization and calculations for heatmap production, although a single processor performs all of the actual normalization calculations. HiFive, on the other hand, has the ability to run in parallel for nearly all phases of an analysis. The two exceptions are loading the initial data and filtering reads, although the latter is very fast already. All normalization algorithms, including the Knight-Ruiz algorithm implemented in HiFive, have been parallelized for HiC analysis using MPI. The parallelization is implemented in such a way that the additional memory overhead for each new process is minimal.
HiC analysis remains a challenging subject, as demonstrated by the varied performances across all methodologies discussed here. No single approach appears to be ideally suited for all cases, suggesting that the experimental goal should drive the choice of analysis software. It is unclear how best to assess HiC normalization performance as there is no 'gold standard' for determining the quality of a HiC dataset or how well systematic noise has been accounted for during an analysis. As seen in the differences in correlation between mouse and human datasets (Fig. 3), factors such as restriction fragment size distributions, cut site density, sequencing depth, and HiC protocol can dramatically impact the similarity of resulting datasets. Further, in order to detect biologically relevant features against the background of the distance-signal relationship, the data need to be transformed, typically using a log-transformation. This skews the resulting comparison by ignoring interactions for which no reads have been observed, an increasing problem as binning size decreases or interaction size increases. At longer ranges, non-zero bins are sparse and dominated by macro features (such as A-B compartments), a situation that can result in increasing correlations (Fig. 3a and b). Two observations suggest this is not an artifact. First, the long-range interaction correlation increase is seen in the human but not the mouse data, reflecting differences in genome organization. Second, the correlation increases are seen across all methodologies and algorithms.
Normalization software attempts to account for many of these confounding factors and allow direct comparison between datasets produced by different labs, protocols, and even across species, although what can reasonably be expected of this normalization process is unclear. This question depends on many factors and we may not have sufficient understanding of chromatin architecture variability across a cell population to answer it accurately. The resolution (bin size), similarity of datasets in terms of sequencing depth, restriction fragment size distributions, and protocol, as well as the size and similarity of the cell populations from which the HiC libraries were made, will all influence the correlation. At a low resolution, say 1 Mb, we should expect a nearly perfect correlation. However, at much higher resolution, differences in mappability and RE cut-site frequency will strongly influence the correlation. Further, we need to consider the distance dependence of the signal as this is the strongest driver of the correlation and can give a false impression of comparability between datasets.
To address these normalization challenges, we have created HiFive, an easy-to-use, fast, and efficient framework for working with a variety of chromatin conformation data types. Because of the modular storage scheme, re-analysis and downstream analysis is made easier without additional storage or processing requirements. We have included several different normalization approaches and made nearly all aspects of each algorithm adjustable, allowing users to tune parameters for a wide range of analysis goals. HiFive is parallelized via MPI, making it highly scalable for nearly every step of HiC data processing.
For 5C data, HiFive is the only analysis program available for normalization and allows easy management of 5C data. We have demonstrated that 5C data normalizations performed by HiFive greatly improve consistency between 5C data and corresponding HiC data across multiple datasets.
We have also shown HiFive's performance in handling HiC data. HiFive consistently performs at or above the level of other available methods as measured by inter-dataset correlations for cis interactions. In addition, we have demonstrated that HiFive is tunable to achieve superior trans performance if desired, albeit at the expense of performance across cis interactions. HiFive has also proved capable of handling very high-resolution data, making it useful for the next generation of HiC experimental data.
In terms of performance considerations, our analysis suggests that, out of all of the methods considered, the balance between speed and accuracy is best achieved by HiFive-Express or HiFive-ExpressKR. This appears to be true regardless of resolution or dataset size. To get this performance, it is crucial to apply the distance-dependence adjustment prior to normalizing, which requires pre-calculating the distance-dependence function. Because this requires iterating over every possible interaction, using multiple processors is highly recommended. If that is not possible, HiFive-ExpressKR without distance correction is a robust fallback. If computational resources are not a limiting factor, we recommend HiFive-Probability. With approximately 100 CPUs, the high-resolution human data were processed in about a day. At fine-scale binning, this approach yields the best results of all methods.
While HiFive allows for superior normalization of data compared to other available software under many conditions, it also provides users with alternative options for fast analysis with minimal computational requirements at only a slight accuracy cost, opening high-resolution HiC and 5C analysis to a much larger portion of the scientific community. HiFive is available at http://taylorlab.org/software/hifive/. Source code is provided under an MIT license and at https://github.com/bxlab/hifive or installed using pip from http://pypi.python.org.
Abbreviations
3C: chromosome conformation capture
5C: 3C carbon copy
bp: base pair
cis: intra-chromosomal
ESC: embryonic stem cell
Fend: fragment-end
Kb: kilobase
Mb: megabase
MPI: message passing interface
RE: restriction enzyme
trans: inter-chromosomal
Arnone MI, Davidson EH. The hardwiring of development: organization and function of genomic regulatory systems. Development. 1997;124:1851–64.
Zinzen RP, Girardot C, Gagneur J, Braun M, Furlong EE. Combinatorial binding predicts spatio-temporal cis-regulatory activity. Nature. 2009;462:65–70.
He A, Kong SW, Ma Q, Pu WT. Co-occupancy by multiple cardiac transcription factors identifies transcriptional enhancers active in heart. Proc Natl Acad Sci U S A. 2011;108:5632–7.
Cantone I, Fisher AG. Epigenetic programming and reprogramming during development. Nat Struct Mol Biol. 2013;20:282–9.
Varriale A. DNA Methylation, epigenetics, and evolution in vertebrates: facts and challenges. Int J Evol Biol. 2014;2014:475981.
Kimura H. Histone modifications for human epigenome analysis. J Hum Genet. 2013;58:439–45.
Fullwood MJ, Han Y, Wei CL, Ruan X, Ruan Y. Chromatin interaction analysis using paired-end tag sequencing. Curr Protoc Mol Biol. 2010;Chapter 21:Unit 21.15.21–25.
Kalhor R, Tjong H, Jayathilaka N, Alber F, Chen L. Genome architectures revealed by tethered chromosome conformation capture and population-based modeling. Nat Biotechnol. 2012;30:90–8.
Dekker J, Rippe K, Dekker M, Kleckner N. Capturing chromosome conformation. Science. 2002;295:1306–11.
Dostie J, Richmond TA, Arnaout RA, Selzer RR, Lee WL, Honan TA, et al. Chromosome Conformation Capture Carbon Copy (5C): a massively parallel solution for mapping interactions between genomic elements. Genome Res. 2006;16:1299–309.
Zhao Z, Tavoosidana G, Sjolinder M, Gondor A, Mariano P, Wang S, et al. Circular chromosome conformation capture (4C) uncovers extensive networks of epigenetically regulated intra- and interchromosomal interactions. Nat Genet. 2006;38:1341–7.
Lieberman-Aiden E, van Berkum NL, Williams L, Imakaev M, Ragoczy T, Telling A, et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science. 2009;326:289–93.
van Berkum NL, Dekker J. Determining spatial chromatin organization of large genomic regions using 5C technology. Methods Mol Biol. 2009;567:189–213.
Yaffe E, Tanay A. Probabilistic modeling of Hi-C contact maps eliminates systematic biases to characterize global chromosomal architecture. Nat Genet. 2011;43:1059–65.
Rao SS, Huntley MH, Durand NC, Stamenova EK, Bochkov ID, Robinson JT, et al. A 3D map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell. 2014;159:1665–80.
Hu M, Deng K, Selvaraj S, Qin Z, Ren B, Liu JS. HiCNorm: removing biases in Hi-C data via Poisson regression. Bioinformatics. 2012;28:3131–3.
Imakaev M, Fudenberg G, McCord RP, Naumova N, Goloborodko A, Lajoie BR, et al. Iterative correction of Hi-C data reveals hallmarks of chromosome organization. Nat Methods. 2012;9:999–1003.
Hu M, Deng K, Qin Z, Dixon J, Selvaraj S, Fang J, et al. Bayesian inference of spatial organizations of chromosomes. PLoS Comput Biol. 2013;9:e1002893.
Jin F, Li Y, Dixon JR, Selvaraj S, Ye Z, Lee AY, et al. A high-resolution map of the three-dimensional chromatin interactome in human cells. Nature. 2013;503:290–4.
Li W, Gong K, Li Q, Alber F, Zhou XJ. Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data. Bioinformatics. 2015;31:960–2.
Servant N, Lajoie BR, Nora EP, Giorgetti L, Chen CJ, Heard E, et al. HiTC: exploration of high-throughput 'C' experiments. Bioinformatics. 2012;28:2843–4.
Rousseau M, Fraser J, Ferraiuolo MA, Dostie J, Blanchette M. Three-dimensional modeling of chromatin structure from interaction frequency data using Markov chain Monte Carlo sampling. BMC Bioinformatics. 2011;12:414.
Nora EP, Lajoie BR, Schulz EG, Giorgetti L, Okamoto I, Servant N, et al. Spatial partitioning of the regulatory landscape of the X-inactivation centre. Nature. 2012;485:381–5.
Phillips-Cremins JE, Sauria ME, Sanyal A, Gerasimova TI, Lajoie BR, Bell JS, et al. Architectural protein subclasses shape 3D organization of genomes during lineage commitment. Cell. 2013;153:1281–95.
Ay F, Bailey TL, Noble WS. Statistical confidence estimation for Hi-C data reveals regulatory chromatin contacts. Genome Res. 2014;24:999–1011.
Knight PA, Ruiz D. A fast algorithm for matrix balancing. IMA J Numerical Anal. 2013;33:1029–47.
Dixon JR, Selvaraj S, Yue F, Kim A, Li Y, Shen Y, et al. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature. 2012;485:376–80.
Floyd SR, Pacold ME, Huang Q, Clarke SM, Lam FC, Cannell IG, et al. The bromodomain protein Brd4 insulates chromatin from DNA damage signalling. Nature. 2013;498:246–50.
Naumova N, Imakaev M, Fudenberg G, Zhan Y, Lajoie BR, Mirny LA, et al. Organization of the mitotic chromosome. Science. 2013;342:948–53.
Cournac A, Marie-Nelly H, Marbouty M, Koszul R, Mozziconacci J. Normalization of a chromosomal contact map. BMC Genomics. 2012;13:436.
Selvaraj S, Dixon JR, Bansal V, Ren B. Whole-genome haplotype reconstruction using proximity-ligation and shotgun sequencing. Nat Biotechnol. 2013;31:1111–8.
Research reported in this publication was supported by the National Institutes of Health under awards R01GM035463 to VC and R01DK065806 to JT and by American Recovery and Reinvestment Act (ARRA) funds through grant number RC2HG005542 to JT. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Departments of Biology and Computer Science, Johns Hopkins University, Baltimore, MD, 21218, USA
Michael EG Sauria & James Taylor
Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19103, USA
Jennifer E. Phillips-Cremins
Department of Biology, Emory University, Atlanta, GA, 30322, USA
Victor G. Corces
Michael EG Sauria
Correspondence to James Taylor.
MEGS, JEPC, VGC, and JT conceived the project and developed feature requirements. JEPC made significant contributions to the design of the 5C tools. MEGS developed all algorithms, designed and wrote all software, and wrote the manuscript. JT contributed to the manuscript and supported the project. All authors read and approved the final manuscript.
Additional file 1: A detailed methods section, a table listing the sources of datasets used, and supplemental figures. (PDF 2245 kb)
Additional file 2: A tar archive containing the HiFive software library. (BZ2 578 kb)
Additional file 3: A tar archive containing all scripts used to generate the data, analyses, and figures presented in this paper. (BZ2 20009 kb)
Additional file 4: The software documentation. (PDF 530 kb)
Sauria, M.E., Phillips-Cremins, J.E., Corces, V.G. et al. HiFive: a tool suite for easy and efficient HiC and 5C data analysis. Genome Biol 16, 237 (2015). https://doi.org/10.1186/s13059-015-0806-y
Chromatin conformation
Spatial organization
The three dimensional organization of the nucleus
Symplectic geometry and equivariant topology
Org: Thomas Baird (Memorial) and Derek Krepski (Manitoba)
MAXIME BERGERON, University of British Columbia
Kempf-Ness Theory and Character Varieties [PDF]
Let $V$ be a complex vector space equipped with an action of a reductive algebraic group $G\subset \mathrm{GL}(V)$. If $K\subset G$ is a maximal compact Lie subgroup then there is always a natural symplectic structure on $V$ for which the action of $K$ is Hamiltonian. In this setting we can consider two ``quotients'' of $V$: the geometric invariant theory quotient of $V$ by $G$ and the symplectic reduction of $V$ by $K$. Kempf-Ness theory describes striking connections between these two worlds and in this talk I will explain how these ideas can be adapted to understand the topology of $G-$character varieties of finitely generated nilpotent groups.
PETER CROOKS, University of Toronto
The Torus-Equivariant Cohomology of Nilpotent Orbits in Semisimple Lie Algebras [PDF]
Let $G$ be a connected, simply-connected complex semisimple group with Lie algebra $\mathfrak{g}$. An adjoint $G$-orbit is called nilpotent if it lies in the nilpotent cone of $\mathfrak{g}$. This talk aims to introduce nilpotent orbits in the context of equivariant topology and geometry. \newline We will begin by introducing nilpotent orbits as objects studied at the interface of symplectic geometry, representation theory, and algebraic geometry. We will subsequently restrict our attention to two distinguished nilpotent $G$-orbits, the regular and minimal orbits. I will present some recent work on the equivariant cohomology of each orbit for the action of a maximal torus of $G$.
JONATHAN FISHER, University of Toronto
Quivers and Higgs bundles [PDF]
Quiver representations may be used to construct a large class of holomorphic Poisson varieties. We show that varieties associated to star-shaped quivers may be mapped into Hitchin systems over $\mathbf{P}^1$, giving them the structure of algebraic completely integrable systems. In the case of rank 2 bundles, this gives an interesting relationship between Hitchin systems and the moduli space of polygons in $\mathbf{R}^3$. This is based on joint work with Steven Rayan.
SEAN FITZPATRICK, Western
Higher rank Boothby-Wang fibrations [PDF]
The Boothby-Wang theorem gives the conditions under which a contact manifold is a prequantum circle bundle over a symplectic manifold. Since there are many different ways to characterize contact geometry, it is perhaps not surprising that there are several inequivalent generalizations to distributions of higher corank. The definition of a contact metric structure provides one such generalization, known as an almost S-structure, to which a generalized Boothby-Wang theorem applies: I will show that a manifold equipped with a regular almost S-structure is a principal torus bundle over a symplectic manifold. Time permitting, I will describe other properties of such structures, including symplectization, associated Jacobi structures, and connections to CR geometry.
MATTHIAS FRANZ, University of Western Ontario
Big polygon spaces produce maximal syzygies in equivariant cohomology [PDF]
Let $T=(S^1)^r$ be a torus. We present a new class of compact orientable $T$-manifolds, called ``big polygon spaces''. Like polygon spaces, which appear as their fixed point sets, they depend on a length vector $\ell\in\mathbb{R}_{\ge0}^r$. Although the equivariant cohomology of a big polygon space $X(\ell)$ is never free over $H^*(BT)$, one can observe interesting phenomena for suitable $\ell$. In particular, $H_T^*(X(\ell))$ can be described by the ``GKM method'', and the equivariant Poincaré pairing for $X$ can be perfect. The existence of such $T$-manifolds was previously unknown. More generally, $H_T^*(X(\ell))$ can be a syzygy of any order less than $r/2$ over $H^*(BT)$, which shows that a bound on the syzygy order obtained by Allday--Franz--Puppe is sharp.
MARCO GUALTIERI, University of Toronto
Log affine manifolds and symplectic geometry [PDF]
I will describe a construction of log affine manifolds which is useful for studying symplectic and generalized complex structures.
ERIC HARPER, McMaster University
SU(N) Casson-Lin invariants for links in $S^3$ [PDF]
In 1992, X.-S. Lin introduced a Casson-type invariant $h(K)$ of knots $K \subset S^3$ via a signed count of conjugacy classes of irreducible $SU(2)$ representations of the knot group $\pi_1(S^3-K)$ where all meridians of $K$ are represented by trace-free $SU(2)$ matrices. Lin showed that $h(K)$ equals one-half the knot signature of $K$. With N. Saveliev, we defined an invariant of $2$-component links $L \subset S^3$ using a construction analogous to Lin's. The invariant $h(L)$ is a signed count of conjugacy classes of certain projective $SU(2)$ representations of the link group $\pi_1 (S^3-L)$. We showed that $h(L)$ equals the linking number. In a recent joint work with H.~U. Boden, we introduce invariants for $n$-component links $L$ in $S^3$ where $n \geq 2$. The invariants are denoted $h_{N,a}(L)$ where $a=(a_1,\ldots,a_n)$ is an $n$-tuple of integers and each $a_i$ labels the $i$-th component of the link. They are defined as a signed count of conjugacy classes of certain projective $SU(N)$ representations of $\pi_1 (S^3-L)$. In this talk, we will outline their construction, give a vanishing result for split links, and discuss some preliminary computations.
SHENGDA HU, Wilfrid Laurier University
Generalized holomorphic bundles, Part II [PDF]
This is the second part of a joint talk with Ruxandra Moraru.
We discuss an analogue of the Hermitian-Einstein equations for generalized K\"ahler manifolds. We also introduce a notion of stability for generalized holomorphic bundles on generalized K\"ahler manifolds, and establish a Kobayashi-Hitchin-type correspondence between stable bundles and solutions of the generalized Hermitian-Einstein equations.
LISA JEFFREY, University of Toronto
Intersection cohomology of universal imploded cross-section [PDF]
(Joint work with Nan-Kuo Ho)
If G is a compact Lie group, and G acts semifreely on a Hamiltonian G-space, then the preimage of the Lie algebra of the maximal torus contains only finitely many points in each orbit. More generally to get a space with this property we define the imploded cross-section of a Hamiltonian G-space by quotienting each orbit by the commutator subgroup of the stabilizer. The universal imploded cross-section is the imploded cross-section of the cotangent bundle of G -- it can be used to construct the imploded cross-section of a general Hamiltonian G-manifold.
For SU(2) the universal imploded cross-section is a complex vector space of dimension 2, so its topology is trivial. In general the universal imploded cross-section is singular, but topological invariants distinguishing it from a point are not known. We compute the intersection cohomology of the universal imploded cross-section of SU(3), and show that it is nontrivial.
RUXANDRA MORARU, University of Waterloo
Generalized holomorphic bundles, Part I [PDF]
This is the first part of a joint talk with Shengda Hu.
Generalized holomorphic bundles, introduced by Gualtieri in 2004, are the analogues of holomorphic vector bundles in the generalized geometry setting. For some generalized complex structures, these bundles correspond to co-Higgs bundles, flat bundles or Poisson modules. I will give an overview of what is known about generalized holomorphic bundles, and describe their moduli spaces in some specific examples.
BRENT PYM, University of Oxford
Categorified isomonodromic deformations via Lie groupoids [PDF]
Given a meromorphic connection on a Riemann surface, one can seek deformations of the connection in which the locations of the poles are varied but the monodromy and Stokes data are held fixed. The solutions of this ``isomonodromy problem'' are unique up to isomorphism and can often be written explicitly in terms of special functions, such as the Painlevé transcendents. I will describe joint work with Marco Gualtieri in which we categorify this picture, promoting the classical special functions to functors using the theory of Morita equivalence for Lie groupoids. The Morita equivalences in question are themselves the solutions of an isomonodromy problem---the one for which the initial condition is the meromorphic projective connection provided by the uniformization theorem.
DAN RAMRAS, Indiana University-Purdue University Indianapolis
Moduli spaces of representations [PDF]
I'll discuss recent work regarding the homotopy groups of moduli spaces of representations. For fundamental groups of Riemann surfaces, this work leads to a computation of the fundamental group of the GL(n) moduli space, as well as a complete understanding of the homotopy type of the stable moduli space of SU(n) representations. Connections to Goldman's symplectic form and the associated Ramadas-Singer-Weitsman line bundle will also be discussed.
SOUMEN SARKAR, University of Regina
Complex cobordism of quasitoric orbifolds [PDF]
We construct manifolds and orbifolds with quasitoric boundary. We show that these manifolds and orbifolds with boundary have a stable complex structure. These induce explicit (orbifold) complex cobordism relations among quasitoric manifolds and orbifolds. In particular, we show that a quasitoric orbifold is complex cobordant to some copies of fake weighted projective spaces. The main tool is the theory of toric topology.
YANLI SONG, University of Toronto
The Cubic Dirac Operator and Geometric Quantization [PDF]
In this talk, I will reformulate the quantization of Hamiltonian $G$-spaces as a push-forward of the Dirac element in $K$-homology of a crossed product of $C^{*}$-algebras. After localization, we can artificially construct a Dirac operator which is a mixture of the algebraic cubic Dirac operator and the geometric Spin$^{c}$-Dirac operator. This reduces the quantization-commutes-with-reduction theorem to an easy case. By a small calculation, we obtain a simplified proof of the theorem. I will also explain how to apply this method to quasi-Hamiltonian $G$-spaces.
JORDAN WATTS, University of Illinois at Urbana-Champaign
Coarse Moduli Spaces of Stacks over Manifolds [PDF]
Consider a Lie group acting properly on a manifold. In the literature, the orbit space of the action has been equipped with various definitions of "smooth structure" for the purpose of extending differential geometry/topology to this space. Examples include differential structures and diffeologies. However, these structures often forget certain invariants induced by the group action. Stacks, on the other hand, encode many of these invariants into the so-called quotient stack.
In this talk, I will show how any stack over manifolds has an underlying coarse moduli space equipped with a diffeology which, in the case of a geometric stack, corresponds to the orbit space of a representative Lie groupoid equipped with the quotient diffeology. Moreover, there is a fully faithful functor from diffeological spaces into stacks. This gives us a unifying category in which we can directly compare, in the case of a Lie group action for instance, information encoded by the diffeology versus information encoded by the quotient stack. Time permitting, I will give an example of one such invariant.
This is joint work with Seth Wolbert.
Understanding the Temperature Coefficient of a Voltage Reference
June 06, 2019 by Dr. Steve Arar
How does temperature affect the output of a voltage reference? What is a temperature coefficient specification?
Voltage references produce a stable voltage that's ideally independent of changes in supply voltage, temperature, load, and other external factors. They are widely used in data converters, power supplies, measurement and control systems. The accuracy of such systems can be directly affected by the accuracy of the employed voltage reference.
There are several specifications that allow us to characterize the various aspects of a voltage reference accuracy. This article looks at the temperature coefficient (tempco) specification that characterizes the temperature-induced variations in the output of a voltage reference.
What Is a Temperature Coefficient Specification?
While the output of a voltage reference should be ideally independent of temperature, a real-world voltage reference exhibits temperature-induced variations in the output. Figure 1 below shows the output of the LT1021-5. The nominal output voltage is 5 V but, as you can see, it's not 100% independent of temperature.
Figure 1. Image courtesy of Analog Devices.
The temperature coefficient (or temperature drift) of a voltage reference is the specification that characterizes the temperature-induced errors of the output. The common method (definition) is called the "Box method" and uses the following equation:
$$TCV_{O} = \frac{V_{max} - V_{min}}{V_{nominal}(T_{max} - T_{min})} \times 10^{6}$$
This method considers the error over a specified temperature range (Tmax - Tmin). In this temperature range, the maximum and minimum of the output are subtracted to find the maximum variation in the output (Vmax - Vmin). The maximum output variation is divided by the temperature range multiplied by the nominal output value (Vnominal).
The result is multiplied by 10^6 to specify the tempco in ppm/°C (part per million/°C). Figure 2 below shows the upper and lower limits of the output voltage along with the temperature limits for the LT1021-5 voltage reference.
The boundaries form a box where the box diagonal is proportional to the tempco given by the above equation. As you can see, Vmax and Vmin are about 5.001 V and 5 V, respectively. Considering the temperature range from -50°C to 125°C, we obtain:
$$TCV_{O} = \frac{5.001 -5}{5 \big(125 - (-50) \big)} \times 10^{6} = 1.14 \; ppm/^{\circ} C$$
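The box-method arithmetic is simple enough to script. Here is a small helper reproducing the LT1021-5 computation above; the voltage extremes are approximate values read off the datasheet plot.

```python
def tempco_ppm(v_max, v_min, v_nominal, t_max, t_min):
    """Box-method temperature coefficient in ppm/degC."""
    return (v_max - v_min) / (v_nominal * (t_max - t_min)) * 1e6

# LT1021-5 values from the text: prints ~1.14 (ppm/degC)
print(tempco_ppm(5.001, 5.000, 5.0, 125, -50))
```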
According to page 3 of the datasheet, the typical value for the LT1021-5 tempco is 2 ppm/°C. Note that Vmax and Vmin are not necessarily related to Tmax and Tmin. They just determine the maximum and minimum values of the output voltage in the temperature range from Tmin to Tmax.
The Drift Curve: Temperature Drift and Drift Error
The tempco specification doesn't give us the shape of the temperature-induced variations. Consider a voltage reference that has a nominal output of 5 V and a tempco of 1.14 ppm/°C. We saw that the LT1021-5 exhibits these specs (Figure 1); however, we can envision countless voltage references with the same specs. Two hypothetical examples are shown in Figures 3 and 4.
The unit of the tempco specification (ppm/°C) can mislead us into thinking that the error is linear, meaning that if we increase the temperature by 1°C, the output voltage will change by 1 ppm. However, we saw that the tempco is defined in a way that doesn't give us any information about the shape of the variations. It only gives us the maximum variation that we can expect in a specified temperature range.
Since the error is not linear, some manufacturers give the tempco of a device in more than one temperature range. For example, the MAX6025A is specified as a 20 ppm/°C device in the range -40°C to +85°C. However, in the range 0°C to +70°C, it exhibits a tempco of 15 ppm/°C. Hence, depending on the operating temperature range of an application, we can consider the MAX6025A as either a 20 ppm/°C or 15 ppm/°C device. Note that the tempco is given in a specified temperature range. We can use it to estimate the error only in the specified range. Estimating the error outside the specified range is inadvisable unless the temperature behavior of a given device is well understood.
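The same equation, rearranged, turns a datasheet tempco into a worst-case output error over an operating range. A quick sketch using the MAX6025A figures above, assuming its 2.5 V nominal output:

```python
def max_drift_volts(tempco_ppm, v_nominal, t_max, t_min):
    """Worst-case output change implied by a box-method tempco."""
    return tempco_ppm * 1e-6 * v_nominal * (t_max - t_min)

# MAX6025A, 15 ppm/degC over 0..70 degC, 2.5 V nominal: ~2.6 mV
print(max_drift_volts(15, 2.5, 70, 0))
```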
How to Calculate the Temperature Coefficient
Let's see how we can determine the required tempco for a system. As an example, assume that we have a 10-bit ADC and the voltage reference is used to set the ADC full-scale value. Suppose that we want the temperature-induced error to be less than half LSB of the system.
If we assume that the nominal output of the voltage reference is VFS, the LSB of our 10-bit system will be $$\frac{V_{FS}}{2^{10}}$$. Hence, the total variation of the voltage reference output should be less than $$\frac{V_{FS}}{2^{11}}$$. With a temperature range of -25°C to 75°C, we obtain:
$$TCV_{O} = \frac{\frac{V_{FS}}{2^{11}}}{V_{FS} \big(75 - (-25) \big)} \times 10^{6} = 4.88 \; ppm/^{\circ} C$$
Hence, we need a voltage reference with a tempco less than 4.88 ppm/°C. For the above calculation, we only aimed to satisfy one condition: keeping the total variation of the reference voltage below half LSB. With a tempco of 4.88 ppm/°C, we know that the total variation of the reference voltage is less than half LSB. What can we conclude about the absolute value of the reference voltage? We can consider two extreme cases:
The minimum value of the reference voltage is its nominal value (VFS) and its maximum value is VFS + 0.5 LSB. In this case, the variation shape is similar to that depicted in Figure 3.
The maximum value of the reference voltage is its nominal value (VFS) and its minimum value is VFS - 0.5 LSB. This case is similar to that depicted in Figure 4.
As you can see, a tempco of 4.88 ppm/°C guarantees that the variation is less than half LSB (regardless of the shape of the variations). However, depending on the voltage drift characteristics of a given device, the absolute value can be somewhere between VFS - 0.5 LSB to VFS + 0.5 LSB. Hence, if a particular application mandates keeping the absolute value below half LSB, we can simply choose a voltage reference that keeps the variation below ¼ LSB. The lower the drift, the more costly the product will be. Therefore, we need to consider the design requirements carefully to avoid overdesigning.
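The half-LSB budget calculation generalizes directly to any converter resolution, temperature range, and error budget; note how the full-scale voltage cancels out of the box-method equation. A minimal sketch:

```python
def required_tempco_ppm(bits, t_max, t_min, error_lsb=0.5):
    """Largest allowable reference tempco keeping drift under error_lsb LSBs."""
    return (error_lsb / 2 ** bits) / (t_max - t_min) * 1e6

# 10-bit ADC, -25..75 degC, half-LSB budget: prints ~4.88 (ppm/degC)
print(required_tempco_ppm(10, 75, -25))

# Tighter quarter-LSB budget for the same system: ~2.44 ppm/degC
print(required_tempco_ppm(10, 75, -25, error_lsb=0.25))
```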
Moreover, note that Figures 3 and 4 depict hypothetical voltage drift characteristics. Many practical voltage references, especially the compensated bandgap devices, have an S-shaped curve (See Figure 5).
Figure 5. Image courtesy of Analog Devices.
Self-Heating of a Voltage Reference
The temperature range used to specify the tempco of a device refers to the die temperature. The power dissipated in a device can lead to a difference between the die temperature and the ambient temperature. In this case, we should estimate the die temperature and calculate the drift error based on the die temperature range. For more information, please refer to this application note from Maxim.
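A first-order die-temperature estimate uses the standard thermal model T_die = T_ambient + P × θJA. The sketch below is generic; the θJA and operating values are placeholders, not figures for any particular part.

```python
def die_temperature(t_ambient, v_in, v_out, i_load, i_quiescent, theta_ja):
    """Estimate die temperature from dissipated power and thermal resistance.

    Power = series drop times load current plus quiescent dissipation;
    theta_ja is the junction-to-ambient thermal resistance in degC/W.
    """
    power = (v_in - v_out) * i_load + v_in * i_quiescent
    return t_ambient + power * theta_ja

# Placeholder example: 12 V in, 5 V out, 10 mA load, 1 mA quiescent,
# theta_JA = 150 degC/W -> roughly 37 degC die at 25 degC ambient
print(die_temperature(25.0, 12.0, 5.0, 0.010, 0.001, 150.0))
```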
Review on the Basics of the Tempco of a Voltage Reference
Voltage references are widely used in data converters, power supplies, measurement and control systems. The temperature coefficient (tempco) of a voltage reference allows us to characterize the temperature-induced errors in the reference output.
The common method (definition) for calculating the temperature coefficient is the "Box method". It's important to note that the tempco specification doesn't give us any information about the shape of the temperature-induced variations. It only allows us to calculate the maximum error that can occur in a specified temperature range.
February 2014, 34(2): 663-676. doi: 10.3934/dcds.2014.34.663
Non-normal numbers with respect to Markov partitions
Manfred G. Madritsch 1,
Université de Lorraine, Institut Elie Cartan de Lorraine, UMR 7502, Vandoeuvre-lès-Nancy, F-54506, France
Received: December 2012. Revised: December 2012. Published: August 2013.
We call a real number normal if for any block of digits the asymptotic frequency of this block in the $N$-adic expansion equals the expected one. In the present paper we consider non-normal numbers and, in particular, essentially and extremely non-normal numbers. We call a real number essentially non-normal if for each single digit there exists no asymptotic frequency of its occurrence. Furthermore we call a real number extremely non-normal if all possible probability vectors are accumulation points of the sequence of frequency vectors. Our aim now is to extend and generalize these results to Markov partitions.
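For concreteness, the base-$N$ definition being generalized here can be stated as follows (a standard formulation, not quoted from the paper): a real number with digit sequence $(a_i)$ in base $N$ is normal if, for every block $B = b_1 b_2 \cdots b_k$ of digits,
$$\lim_{n\to\infty} \frac{1}{n}\,\#\left\{\,1 \le i \le n \;:\; a_i a_{i+1} \cdots a_{i+k-1} = B\,\right\} = \frac{1}{N^k}.$$
Essential non-normality then asks that this limit fail to exist already for single digits ($k=1$), while extreme non-normality asks that the single-digit frequency vectors accumulate at every probability vector.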
Keywords: Non-normal numbers, symbolic dynamical system, Markov partitions, Baire category.
Mathematics Subject Classification: Primary: 11K16, 37B10; Secondary: 11A63, 54H2.
Citation: Manfred G. Madritsch. Non-normal numbers with respect to Markov partitions. Discrete & Continuous Dynamical Systems - A, 2014, 34 (2) : 663-676. doi: 10.3934/dcds.2014.34.663
In-Soo Baek, Lars Olsen. Baire category and extremely non-normal points of invariant sets of IFS's. Discrete & Continuous Dynamical Systems - A, 2010, 27 (3) : 935-943. doi: 10.3934/dcds.2010.27.935
Manfred G. Madritsch, Izabela Petrykiewicz. Non-normal numbers in dynamical systems fulfilling the specification property. Discrete & Continuous Dynamical Systems - A, 2014, 34 (11) : 4751-4764. doi: 10.3934/dcds.2014.34.4751
Michael Jakobson, Lucia D. Simonelli. Countable Markov partitions suitable for thermodynamic formalism. Journal of Modern Dynamics, 2018, 13: 199-219. doi: 10.3934/jmd.2018018
Thomas Ward, Yuki Yayama. Markov partitions reflecting the geometry of $\times2$, $\times3$. Discrete & Continuous Dynamical Systems - A, 2009, 24 (2) : 613-624. doi: 10.3934/dcds.2009.24.613
Patrik Nystedt, Johan Öinert. Simple skew category algebras associated with minimal partially defined dynamical systems. Discrete & Continuous Dynamical Systems - A, 2013, 33 (9) : 4157-4171. doi: 10.3934/dcds.2013.33.4157
Annibale Magni, Matteo Novaga. A note on non lower semicontinuous perimeter functionals on partitions. Networks & Heterogeneous Media, 2016, 11 (3) : 501-508. doi: 10.3934/nhm.2016006
Olof Heden, Faina I. Solov'eva. Partitions of $\mathbb F$n into non-parallel Hamming codes. Advances in Mathematics of Communications, 2009, 3 (4) : 385-397. doi: 10.3934/amc.2009.3.385
Wen-Guei Hu, Song-Sun Lin. On spatial entropy of multi-dimensional symbolic dynamical systems. Discrete & Continuous Dynamical Systems - A, 2016, 36 (7) : 3705-3717. doi: 10.3934/dcds.2016.36.3705
H. M. Hastings, S. Silberger, M. T. Weiss, Y. Wu. A twisted tensor product on symbolic dynamical systems and the Ashley's problem. Discrete & Continuous Dynamical Systems - A, 2003, 9 (3) : 549-558. doi: 10.3934/dcds.2003.9.549
Boris Kalinin, Victoria Sadovskaya. Normal forms for non-uniform contractions. Journal of Modern Dynamics, 2017, 11: 341-368. doi: 10.3934/jmd.2017014
Snir Ben Ovadia. Symbolic dynamics for non-uniformly hyperbolic diffeomorphisms of compact smooth manifolds. Journal of Modern Dynamics, 2018, 13: 43-113. doi: 10.3934/jmd.2018013
Ricardo Miranda Martins. Formal equivalence between normal forms of reversible and hamiltonian dynamical systems. Communications on Pure & Applied Analysis, 2014, 13 (2) : 703-713. doi: 10.3934/cpaa.2014.13.703
Yunping Jiang, Yuan-Ling Ye. Convergence speed of a Ruelle operator associated with a non-uniformly expanding conformal dynamical system and a Dini potential. Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4693-4713. doi: 10.3934/dcds.2018206
Xiaoyue Li, Xuerong Mao. Population dynamical behavior of non-autonomous Lotka-Volterra competitive system with random perturbation. Discrete & Continuous Dynamical Systems - A, 2009, 24 (2) : 523-545. doi: 10.3934/dcds.2009.24.523
Zhong-Jie Han, Gen-Qi Xu. Dynamical behavior of networks of non-uniform Timoshenko beams system with boundary time-delay inputs. Networks & Heterogeneous Media, 2011, 6 (2) : 297-327. doi: 10.3934/nhm.2011.6.297
Zhaoquan Xu, Jiying Ma. Monotonicity, asymptotics and uniqueness of travelling wave solution of a non-local delayed lattice dynamical system. Discrete & Continuous Dynamical Systems - A, 2015, 35 (10) : 5107-5131. doi: 10.3934/dcds.2015.35.5107
Aline Cerqueira, Carlos Matheus, Carlos Gustavo Moreira. Continuity of Hausdorff dimension across generic dynamical Lagrange and Markov spectra. Journal of Modern Dynamics, 2018, 12: 151-174. doi: 10.3934/jmd.2018006
David Burguet, Todd Fisher. Symbolic extensionsfor partially hyperbolic dynamical systems with 2-dimensional center bundle. Discrete & Continuous Dynamical Systems - A, 2013, 33 (6) : 2253-2270. doi: 10.3934/dcds.2013.33.2253
P.K. Newton. The dipole dynamical system. Conference Publications, 2005, 2005 (Special) : 692-699. doi: 10.3934/proc.2005.2005.692
Santiago Cañez. Double groupoids and the symplectic category. Journal of Geometric Mechanics, 2018, 10 (2) : 217-250. doi: 10.3934/jgm.2018009
Manfred G. Madritsch | CommonCrawl |
A geometric Hall-type theorem
by Andreas F. Holmsen, Leonardo Martinez-Sandoval and Luis Montejano
Proc. Amer. Math. Soc. 144 (2016), 503-511
We introduce a geometric generalization of Hall's marriage theorem. For any family $F = \{X_1, \dots , X_m\}$ of finite sets in $\mathbb {R}^d$, we give conditions under which it is possible to choose a point $x_i\in X_i$ for every $1\leq i \leq m$ in such a way that the points $\{x_1,\dots ,x_m\}\subset \mathbb {R}^d$ are in general position. We give two proofs, one elementary proof requiring slightly stronger conditions, and one proof using topological techniques in the spirit of Aharoni and Haxell's celebrated generalization of Hall's theorem.
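For context, the classical (combinatorial) Hall condition that this result generalizes can be stated as follows; this is the standard marriage theorem, included here for background and not part of the paper's contribution. A family $F=\{X_1,\dots,X_m\}$ of finite sets admits a system of distinct representatives, i.e. points $x_i\in X_i$ with all $x_i$ pairwise distinct, if and only if
$$\Bigl|\bigcup_{i \in S} X_i\Bigr| \;\geq\; |S| \qquad \text{for every } S \subseteq \{1,\dots,m\}.$$
The geometric version replaces "pairwise distinct" by the much stronger requirement of general position in $\mathbb{R}^d$, which forces correspondingly stronger hypotheses on $F$.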
Ron Aharoni, Ryser's conjecture for tripartite 3-graphs, Combinatorica 21 (2001), no. 1, 1–4. MR 1805710, DOI 10.1007/s004930170001
Ron Aharoni and Eli Berger, The intersection of a matroid and a simplicial complex, Trans. Amer. Math. Soc. 358 (2006), no. 11, 4895–4917. MR 2231877, DOI 10.1090/S0002-9947-06-03833-5
Ron Aharoni, Eli Berger, and Ran Ziv, Independent systems of representatives in weighted graphs, Combinatorica 27 (2007), no. 3, 253–267. MR 2345810, DOI 10.1007/s00493-007-2086-y
R. Aharoni, E. Berger, and R. Meshulam, Eigenvalues and homology of flag complexes and vector representations of graphs, Geom. Funct. Anal. 15 (2005), no. 3, 555–566. MR 2221142, DOI 10.1007/s00039-005-0516-9
Ron Aharoni, Maria Chudnovsky, and Andreĭ Kotlov, Triangulated spheres and colored cliques, Discrete Comput. Geom. 28 (2002), no. 2, 223–229. MR 1920141, DOI 10.1007/s00454-002-2792-6
Ron Aharoni and Penny Haxell, Hall's theorem for hypergraphs, J. Graph Theory 35 (2000), no. 2, 83–88. MR 1781189, DOI 10.1002/1097-0118(200010)35:2<83::AID-JGT2>3.0.CO;2-V
A. Björner, Topological methods, Handbook of combinatorics, Vol. 1, 2, Elsevier Sci. B. V., Amsterdam, 1995, pp. 1819–1872. MR 1373690
Anders Björner, Nerves, fibers and homotopy groups, J. Combin. Theory Ser. A 102 (2003), no. 1, 88–93. MR 1970978, DOI 10.1016/S0097-3165(03)00015-3
Anders Björner, Bernhard Korte, and László Lovász, Homotopy properties of greedoids, Adv. in Appl. Math. 6 (1985), no. 4, 447–494. MR 826593, DOI 10.1016/0196-8858(85)90021-1
Jack Edmonds, Submodular functions, matroids, and certain polyhedra, Combinatorial Structures and their Applications (Proc. Calgary Internat. Conf., Calgary, Alta., 1969) Gordon and Breach, New York, 1970, pp. 69–87. MR 0270945
P. Hall, On representatives of subsets, J. London Math. Soc. 10 (1935), 26–30.
P. Haxell, On forming committees, Amer. Math. Monthly 118 (2011), no. 9, 777–788. MR 2854000, DOI 10.4169/amer.math.monthly.118.09.777
Matthew Kahle, Topology of random clique complexes, Discrete Math. 309 (2009), no. 6, 1658–1671. MR 2510573, DOI 10.1016/j.disc.2008.02.037
Gil Kalai and Roy Meshulam, A topological colorful Helly theorem, Adv. Math. 191 (2005), no. 2, 305–311. MR 2103215, DOI 10.1016/j.aim.2004.03.009
Roy Meshulam, The clique complex and hypergraph matching, Combinatorica 21 (2001), no. 1, 89–94. MR 1805715, DOI 10.1007/s004930170006
Roy Meshulam, Domination numbers and homology, J. Combin. Theory Ser. A 102 (2003), no. 2, 321–330. MR 1979537, DOI 10.1016/S0097-3165(03)00045-1
Andreas F. Holmsen
Affiliation: Department of Mathematical Sciences, KAIST, Daejeon, South Korea
Email: andreash@kaist.edu
Leonardo Martinez-Sandoval
Affiliation: Instituto de Matemáticas, National University of Mexico at Querétaro, Juriquilla, Querétaro 76230, Mexico – and – Institut de Mathématiques et de Modélisation de Montpellier, Université de Montpellier, Place Eugène Bataillon, 34095 Montpellier Cedex, France
MR Author ID: 1004558
Email: leomtz@im.unam.mx
Luis Montejano
Affiliation: Instituto de Matemáticas, National University of Mexico at Querétaro, Juriquilla, Querétaro 76230, Mexico
Email: luis@matem.unam.mx
Received by editor(s): December 20, 2014
Received by editor(s) in revised form: January 8, 2015, and January 14, 2015
Published electronically: June 26, 2015
Additional Notes: The first author would like to thank the Instituto de Matemáticas, UNAM at Querétaro for their hospitality and support during his visit. The second and third authors wish to acknowledge support from CONACyT under Project 166306, support from PAPIIT–UNAM under Project IN112614 and support from ECOS Nord project M13M01. The third author was supported by CONACyT Scholarship 277462
Communicated by: Patricia L. Hersh
Journal: Proc. Amer. Math. Soc. 144 (2016), 503-511
MSC (2010): Primary 05D15, 52C35
DOI: https://doi.org/10.1090/proc12733
Turkish Journal of Chemistry
Volume 44, Number 3 (2020)
Cover and Contents
Phosphorus-nitrogen compounds (Part 50): correlations between structural parameters for cyclophosphazene derivatives containing ferrocenyl pendant arm(s)
NURAN ASMAFİLİZ, GAMZE ELMAS, AYTUĞ OKUMUŞ, SELEN BİLGE KOÇAK, and ZEYNEL KILIÇ
Phosphorus-nitrogen compounds (Part 51): the relationship between spectroscopic and crystallographic data of mono- and di-spirocyclophosphazene derivatives with 4-fluoro/nitrophenylmethyl pendant arm/arms
AYTUĞ OKUMUŞ, GAMZE ELMAS, NURAN ASMAFİLİZ, SELEN BİLGE KOÇAK, and ZEYNEL KILIÇ
Combined ligand and structure-based virtual screening approaches for identification of novel AChE inhibitors
KADER ŞAHİN and SERDAR DURDAĞI
Syntheses and antibacterial activities of 4 linear nonphenolic diarylheptanoids
ŞEMSİ BETÜL DEMİR, HATİCE SEÇİNTİ, NESLİHAN ÇELEBİOĞLU, MURAT ÖZDAL, ALEV SEZEN, ÖZLEM GÜLMEZ, ÖMER FARUK ALGUR, and HASAN SEÇEN
Analysis of electrochemical impedance spectroscopy response for commercial lithium-ion batteries: modeling of equivalent circuit elements
UĞUR MORALI and SALİM EROL
Mesoporous starch aerogels production as drug delivery matrices: synthesis optimization, ibuprofen loading, and release property
AKBAR MOHAMMADI and JAFAR SADEGH MOGHADDAS
Effect of polyelectrolyte complex formation on the antibacterial activity of copolymer of alkylated 4-vinylpyridine
MURAT TOPUZOĞULLARI
Development and validation of an HPLC method for determination of rofecoxib in bovine serum albumin microspheres
ESRA DEMİRTÜRK, EMİRHAN NEMUTLU, SELMA ŞAHİN, and LEVENT ÖNER
A novel fluorimetric method for determination of pseudoephedrine hydrochloride in pharmaceutical formulations and blood serum
NAZLI FARAJZADEH and NASRIN RANJBAR NADER
Allylphenoxypiperidinium halides as corrosion inhibitors of carbon steel and biocides
GUNAY MEHDIYEVA MUZAKIR, MUSA BAIRAMOV RZA, SHAHNAZ HOSSEINZADEH BAHADOR, and GULNARA HASANOVA MUSA
Synthesis and effect of substituent position, metal type on the electrochemical properties of (3-morpholin-4-ylpropoxy) groups substituted cobalt, manganese phthalocyanines
ZEKERİYA BIYIKLIOĞLU and HÜSEYİN BAŞ
Gas phase polymerization of ethylene towards UHMWPE
GÖZDE GEÇİM and ERTUĞRUL ERKOÇ
Extraction of heavy metal complexes from a biofilm colony for biomonitoring the pollution
SEDAT SÜRDEM and HACI MEHMET DOĞAN
Synthesis and liquid crystalline properties of new triazine-based $\pi $-conjugated macromolecules with chiral side groups
NİHAT AKKURT, MOHAMMED HADI ALI AL-JUMAILI, HALE OCAK, FATİH ÇAKAR, and LOKMAN TORUN
Synthesis, characterization and crystal structures of platinum(II) saccharinate complexes with 1,5-cyclooctadiene
CEYDA İÇSEL
2-[(3-Aminoalkyl-(alkaryl-, aryl-))-1H-1,2,4-triazol-5-yl]anilines: synthesis and anticonvulsant activity
YULYA MARTYNENKO, GALINA BEREST, NINA BUKHTIAYROVA, IGOR BELENICHEV, OLEKSIY VOSKOBOINIK, and SERGIY KOVALENKO
Box-Behnken design for removal of uranium(VI) from aqueous solution using poly(ethylene glycol) based dicationic ionic liquid impregnated chitosan
SÜLEYMAN İNAN, TAŞKIN MUMCU, and SERAP SEYHAN BOZKURT
Improvement of the adhesion of conductive poly(m-toluidine) onto chemically reduced-wool fabrics
MERYEM KALKAN ERDOĞAN, MERAL KARAKIŞLA, and MEHMET SAÇAK
Colorimetric cadmium ion detection in aqueous solutions by newly synthesized Schiff bases
ZİYA AYDIN and MUSTAFA KELEŞ
Corrosion and wear behaviour of highly porous Ti-TiB-TiN$_{\mathbf{x}}$ in situ composites in simulated physiological solution
FATİH TOPTAN
Comparative modelling of a novel enzyme: Mus musculus leucine decarboxylase
ARİF SERCAN ŞAHUTOĞLU
Isothermal compressibility and isobaric thermal shrinkage of a porous $\alpha$-alumina compact: thermodynamic calculations
YÜKSEL SARIKAYA, MÜŞERREF ÖNAL, and ABDULLAH DEVRİM PEKDEMİR
One-step synthesis of hierarchical [B]-ZSM-5 using cetyltrimethylammonium bromide as mesoporogen
BÜŞRA KARAKAYA YALÇIN and BAHAR İPEK
Editor-in-Chief
Prof. Dr. Ahmet GÜL
İstanbul Technical University
Associate Editors-in-Chief
Mustafa Kemal SEZGİNTÜRK
Çanakkale Onsekiz Mart University, Turkey
Nurettin Mengeş
Van Yüzüncü Yıl University, Turkey
Önder Metin
Koç University, Turkey
Dynamic iteration stopping algorithm for non-binary LDPC-coded high-order PRCPM in the Rayleigh fading channel
Rui Xue, Yanbo Sun & Qiang Wei
EURASIP Journal on Wireless Communications and Networking volume 2016, Article number: 62 (2016)
This paper studies the association between non-binary low-density parity-check codes and high-order partial response continuous phase modulation, which prevents information loss in the mutual conversion of bit and symbol probabilities. Although the iterative detection and decoding technique applied in this system can achieve a good performance/complexity tradeoff, the iterative process still suffers from positive feedback and relatively large decoding delay, as other iterative coded modulation systems do. To address this issue, the inhibitory effects of different extrinsic information exchange methods on positive feedback under different signal-to-noise ratio (SNR) conditions are investigated in this work. Two dynamic iterative stopping algorithms, namely cross entropy and hard decision aided, combined with weighted extrinsic information exchange for cases with medium and high SNRs, are then proposed. In both algorithms, the extrinsic information exchanged between the demodulator and the decoder is weighted, and iterative detection is performed subject to the two dynamic stopping criteria. Theoretical analysis and simulation results for the Rayleigh fading channel show that the combination of weighted extrinsic information exchange and the two dynamic iterative stopping algorithms effectively resists positive feedback and improves the convergence of iterative detection and the bit error rate performance. The combination also reduces the average iteration number, improving the real-time performance of iterative detection and decoding.
Power efficiency and bandwidth efficiency are the two most important design criteria in a wireless communication system. Continuous phase modulation (CPM) is a general class of constant envelope modulation that achieves high spectral efficiency with low spectral sidelobes by requiring a smooth phase transition between adjacent symbols [1]. These characteristics make CPM an ideal choice for stringent communication systems employing nonlinear power amplifiers, such as satellite communication [2, 3], satellite mesh networks [4, 5] and satellite navigation [6, 7]. Rimoldi [8] showed that a CPM modulator can be decomposed into a cascade of a time-invariant convolutional encoder (continuous phase encoder, CPE) operating on a ring of integers and a time-invariant memoryless modulator (MM). To further improve power efficiency, a forward error correcting (FEC) code is employed as an outer code combined with CPM; this combination is the so-called serially concatenated CPM (SCCPM). Much work on the choice of outer code has been carried out recently. Among the most attractive options to emerge so far are convolutional codes (CC), short binary low-density parity-check (LDPC) codes, parallel turbo codes (PTCs) and extended Bose-Chaudhuri-Hocquenghem (eBCH) codes with soft-decision decoding [9].
Several binary FEC codes, such as turbo [10] and LDPC codes [11, 12], combined with CPM can improve power efficiency and attain high error correcting capacity. However, the association between high-order CPM and binary codes suffers from significant information loss during the conversion from symbol probabilities to bit probabilities, and its inverse, in the process of transferring extrinsic information [13]. In addition, the convergence threshold of binary LDPC-coded CPM is higher than the Shannon limit. In view of these problems in binary-coded CPM, q-ary (q > 2) LDPC codes are introduced as the outer code scheme in this paper.
NB-LDPC codes outperform their binary counterparts in a block fading channel model, under which the channel varies periodically within a codeword [14]. A comparison with binary LDPC codes, PTCs and eBCH codes over jamming channels disturbed by pulsed, continuous wave or pseudo-noise jamming shows that NB-LDPC codes are the best solution for long code lengths [15]. Applying NB-LDPC codes allows the design of efficient transmission schemes with high spectral efficiency [16]. Moreover, NB-LDPC-coded modulation exhibits lower receiver latency than binary LDPC-coded modulation, especially when complex equalization schemes are employed [17]. Our previous work [18] showed that the combination of LDPC codes over GF(q) and high-order PRCPM in the additive white Gaussian noise (AWGN) channel achieves a better tradeoff between power efficiency and bandwidth efficiency than binary LDPC-coded CPM.
Reference [19] reported that CPM is a promising solution for future satellite communication systems in the Ka band because it yields a constant envelope signal that enables the nonlinear power amplifier to operate near saturation. From the power efficiency point of view, a serially coded CPM can provide a high power gain when the iterative decoding process is performed. The complete analysis in [2] proves that SCCPM is a valid alternative scheme for the uplink of satellite communications, regardless of whether broadband or narrowband transmissions are present. In view of the information loss in the combination of high-order CPM and binary codes, the NB-LDPC-coded high-order PRCPM scheme is considered a potential alternative for the uplink of satellite communications. Unfortunately, the scheme with the iterative detection and decoding technique exhibits positive feedback and a relatively large decoding delay during iterative detection, similar to other iterative coded modulation systems. To address these problems, two dynamic iterative stopping algorithms, namely cross entropy (CE) and hard decision aided (HDA), based on weighted extrinsic information exchange, are proposed in this paper for cases with medium and high SNRs.
The rest of the paper is organized as follows. Section 2 introduces the system of the q-ary LDPC-coded PRCPM and the modified maximum a posteriori (MAP) algorithm for CPM in the Rayleigh fading channel. Section 3 investigates the convergence speed in the iteration and the performance of different extrinsic information exchange methods at various SNRs. Section 4 studies the technique of weighed extrinsic information exchange and its parameter setting. Section 5 expounds the CE and HDA dynamic iteration stopping algorithms. Section 6 discusses the simulations. Finally, we conclude the paper in Section 7.
System of NB-LDPC-coded PRCPM
The M-ary CPM signal in [20] is described as follows:
$$s(t,\boldsymbol{\chi})=\sqrt{2E_{s}/T}\cos(2\pi f_{0}t+\psi(t,\boldsymbol{\chi})+\psi_{0}),\quad t\geq0,$$
where $f_{0}$, $\psi_{0}$ and $\boldsymbol{\chi}$ denote the carrier frequency, the initial phase shift and the sequence of $M$-ary symbols ($\chi_{i}\in\{\pm1,\pm3,\dots,\pm(M-1)\}$), respectively. $T$ and $E_{s}$ denote the symbol period and energy, respectively. $\psi(t,\boldsymbol{\chi})$ denotes the information-carrying phase defined as
$$\psi(t,\boldsymbol{\chi})=2\pi h\sum_{i=0}^{\infty}\chi_{i}\,q(t-iT),$$
where $h=k/p$ ($k$ and $p$ are relatively prime positive integers) is the modulation index; $q(t)$ is the integral of a positive normalized frequency pulse $g(t)$; and $g(t)$ is non-zero for $L$ symbol periods, with a full response for $L=1$ and a partial response for $L>1$. The physical tilted phase is given by [8] as
$$\phi(\tau+nT,\mathbf{U})=R_{2\pi}\!\left[2\pi h R_{p}\!\left[\sum_{i=0}^{n-L}U_{i}\right]+4\pi h\sum_{i=0}^{L-1}U_{n-i}\,q(\tau+iT)+W(\tau)\right],\quad 0\leq\tau<T,$$
where $U_{i}=(\chi_{i}+(M-1))/2$, $R_{x}[\cdot]$ is a modulo-$x$ function and $W(\tau)$ is a data-independent function expressed as follows:
$$W(\tau)=\pi h(M-1)\tau/T-2\pi h(M-1)\sum_{i=0}^{L-1}q(\tau+iT)+\pi h(L-1)(M-1),\quad 0\leq\tau<T.$$
Combining Eqs. (1) to (4) yields the transmitted signal
$$s(t,\mathbf{U})=\sqrt{2E_{s}/T}\cos(2\pi f_{1}t+\phi(t,\mathbf{U})+\psi_{0}),$$
where $f_{1}=f_{0}-h(M-1)/2T$. The transmitted signal is completely specified by the current symbol $U_{n}$, the $L-1$ previous data symbols $[U_{n-1},\cdots,U_{n-L+1}]$, and the accumulated value $V_{n}=R_{p}\left(\sum_{i=0}^{n-L}U_{i}\right)$. Therefore, CPM decomposes into a CPE followed by an MM.
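To make the decomposition concrete, the following small Python sketch (ours, not from the paper) evaluates the tilted phase of Eqs. (3) and (4) for an LRC frequency pulse. The function names, default arguments and the example symbol sequence are illustrative assumptions; it presumes data symbols $U_i\in\{0,\dots,M-1\}$ and $n\geq L-1$.

```python
import numpy as np

def q_lrc(t, L, T=1.0):
    """Phase-smoothing function q(t) for an LRC pulse:
    q(t) = 0 for t <= 0 and q(t) = 1/2 for t >= L*T."""
    t = np.clip(t, 0.0, L * T)
    return t / (2 * L * T) - np.sin(2 * np.pi * t / (L * T)) / (4 * np.pi)

def tilted_phase(U, n, tau, k=1, p=2, M=8, L=2, T=1.0):
    """Tilted phase of Eqs. (3)-(4) at time tau + n*T, for U[i] in {0,...,M-1}."""
    h = k / p
    V = sum(U[: n - L + 1]) % p          # modulo-p accumulated phase state
    partial = sum(U[n - i] * q_lrc(tau + i * T, L, T) for i in range(L))
    W = (np.pi * h * (M - 1) * tau / T   # data-independent tilt, Eq. (4)
         - 2 * np.pi * h * (M - 1) * sum(q_lrc(tau + i * T, L, T) for i in range(L))
         + np.pi * h * (L - 1) * (M - 1))
    return (2 * np.pi * h * V + 4 * np.pi * h * partial + W) % (2 * np.pi)

# Example: mid-symbol phase for a short 8M2RC (h = 1/2) symbol run.
print(tilted_phase([3, 0, 5, 7, 2], n=4, tau=0.5, k=1, p=2, M=8, L=2))
```

Note how the CPE state of the text appears directly in the code: the modulo-$p$ accumulator $V_n$ plus the $L-1$ most recent symbols inside the pulse determine the phase.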
CPE can be regarded as a convolutional code with a code rate of "1" because it has memory and is recursive. Viewing the CPE as an inner code serially concatenated with the q-ary LDPC encoder yields the model of q-ary LDPC-coded M-ary CPM over a Rayleigh fading channel with iterative decoding (Fig. 1).
System model of q-ary LDPC-coded M-ary PRCPM
For a particular code rate $R$ and codeword length $N$, a sparse parity check matrix $\mathbf{H}=[h_{i,j}]_{P\times N}$ ($P=N(1-R)$) with $h_{i,j}\in\mathrm{GF}(q=2^{b})$ ($b$ a positive integer) must be constructed. In the transmitter, the input information symbol vector $\mathbf{U}^{o}=(U_{0}^{o},U_{1}^{o},\cdots,U^{o}_{K-1})$ of size $K=N-P$ is encoded by the q-ary LDPC encoder into the codeword $\mathbf{C}^{o}=(C_{0},C_{1},\cdots,C_{N-1})$. The codeword $\mathbf{C}^{o}$ is then sent to the interleaver, whose output is mapped to $\mathbf{U}^{i}$ (e.g. by Gray mapping) if the finite field size of the NB-LDPC encoder is not equal to the alphabet size of CPM. The CPE subsequently produces the inner code symbol vector $\mathbf{C}^{i}$, which goes through the MM to form the modulated signal vector $s(t,\mathbf{U})$. In this paper, we always assume that $q$ is equal to $M$, so the symbol mapping module is unnecessary. For convenience, we use the notation "LRC" for a raised cosine pulse of length $L$ symbol intervals. The complex signal vector $s(t,\mathbf{U})$ is finally transmitted over a Rayleigh fading channel.
The iterative receiver mainly consists of two soft-input soft-output (SISO) decoders, one for the inner CPM and one for the outer q-ary LDPC code. Demodulation and decoding are performed by iterating (the "outer iteration") between the CPM-SISO and the LDPC-SISO. The CPM-SISO subsystem employs the Log-MAP algorithm. The LDPC-SISO subsystem performs a log-domain belief propagation iterative decoding algorithm (the "inner iteration") combined with the fast Fourier transform, abbreviated Log-FFT-BP [21]. In the investigated system, the LDPC-SISO runs five inner iterations per outer iteration. The SISO algorithm is a generalization of the BCJR algorithm: it takes a priori probabilities of both information and code symbols and computes extrinsic a posteriori probabilities (APPs) of both information and code symbols [22], as shown in Fig. 1. The decision device finally selects the information symbol with the maximum APP in the last iteration.
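A hypothetical skeleton of this outer loop is sketched below. The SISO stages, interleavers and array shapes are stand-in assumptions of ours; real Log-MAP and Log-FFT-BP implementations are beyond the scope of this sketch, so trivial stubs keep it executable.

```python
import numpy as np

# Stand-ins for the two SISO stages (not the paper's implementations).
def cpm_siso(r, prior):
    return np.zeros_like(prior)            # extrinsic info on code symbols

def ldpc_siso(ext_in, n_inner=5):
    return np.zeros_like(ext_in), np.argmax(ext_in, axis=0)

def interleave(x):   return x              # identity permutations for the sketch
def deinterleave(x): return x

def iterative_receiver(r, q=8, N=256, n_outer=16):
    """Outer demodulation/decoding loop of Fig. 1 with direct extrinsic
    exchange: each SISO's extrinsic output becomes the other's a priori input."""
    prior = np.full((q, N), -np.log(q))    # uniform a priori log-probabilities
    hard = None
    for _ in range(n_outer):
        ext_cpm = cpm_siso(r, prior)                       # inner (CPM) stage
        ext_dec, hard = ldpc_siso(deinterleave(ext_cpm))   # outer (LDPC) stage
        prior = interleave(ext_dec)                        # feed back
    return hard
```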
Modified MAP algorithm for CPM in the Rayleigh fading channel
The CPE can be represented by a trellis when the modulation index $h=k/p$ is a rational number with $k$ and $p$ relatively prime positive integers. The SISO algorithm can be applied to the trellis provided that the proper transition probability distribution $p(\underline{r}_{n}|\underline{c}_{n})$ is used in the branch metric $\gamma_{n}=p(\underline{r}_{n}|\underline{c}_{n}=C)\cdot\Pr(U_{n}=U)$ [22]. Here, $\underline{r}_{n}$ is a sufficient statistic based on channel observations during symbol interval $n$, $U\in\mathbf{U}^{i}$ and $C\in\mathbf{C}^{i}$. The transition probability distribution $p(\underline{r}_{n}|\underline{c}_{n})$ is a joint probability density function (PDF) conditioned on the CPE code symbol $\underline{c}_{n}$. A bank of $pM^{L}$ complex filters is matched to the CPM signals in each symbol interval and sampled once every symbol interval to produce a sufficient statistic.

We let the output of these filters sampled at time $t=(n+1)T$ constitute the sufficient statistic (vector) $\underline{r}_{n}$ with mean vector $\underline{m}_{n}$ and covariance matrix $\underline{\Lambda}$. Given that the components of $\underline{r}_{n}$ are Gaussian and partially correlated under all hypotheses, their joint PDF conditioned on the transmission of $\underline{c}_{n}$ is [23]
$$p(\underline{r}_{n}|\underline{c}_{n})\propto\exp\left\{-(\underline{r}_{n}-\underline{m}_{n})^{H}\Lambda^{-1}(\underline{r}_{n}-\underline{m}_{n})\right\},$$
where $(\cdot)^{H}$ is the conjugate transposition operator.
A Rayleigh fading channel can be regarded as a memoryless channel if interleaving is sufficient. We adjust the calculation of the transition probability distribution over a slow Rayleigh fading channel as follows. If the channel state information (CSI) $\underline{a}_{n}$ is known to the receiver, $p(\underline{r}_{n}|\underline{c}_{n})$ is modified as
$$p(\underline{r}_{n}|\underline{c}_{n})\propto\exp\left\{-(\underline{r}_{n}-\underline{a}_{n}\underline{m}_{n})^{H}\Lambda(\underline{a}_{n})^{-1}(\underline{r}_{n}-\underline{a}_{n}\underline{m}_{n})\right\},$$
where $\underline{a}_{n}\underline{m}_{n}$ is the mean vector of the matched filter output. If the CSI is unknown to the receiver, the mean of the Rayleigh fading amplitude is used and $\underline{r}_{n}$ is assumed to be jointly Gaussian, so that $p(\underline{r}_{n}|\underline{c}_{n})$ is computed as
$$p(\underline{r}_{n}|\underline{c}_{n})\propto\exp\left\{-(\underline{r}_{n}-E_{A}(a)\underline{m}_{n})^{H}\Lambda(\underline{a}_{n})^{-1}(\underline{r}_{n}-E_{A}(a)\underline{m}_{n})\right\},$$
where $E_{A}(a)$ is the mean of the Rayleigh fading amplitude.
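The two branch-metric variants can be sketched as follows. This is our own illustration, not the paper's code: the dependence of the covariance on the fading gain is abstracted into a caller-supplied inverse covariance matrix, and the value $E_A(a)=0.8862$ is the one quoted later in Section 3.

```python
import numpy as np

def log_transition_metric(r, m, a, Lambda_inv):
    """Log of Eq. (7) up to an additive constant: branch metric for one
    trellis transition when the fading gain a (the CSI) is known.
    r, m: complex observation and hypothesis mean vectors from the matched
    filter bank; Lambda_inv: inverse covariance of the filter outputs."""
    d = r - a * m
    return -np.real(np.conj(d) @ Lambda_inv @ d)

def log_transition_metric_no_csi(r, m, Lambda_inv, Ea=0.8862):
    """Unknown-CSI variant of Eq. (8): the fading gain is replaced by the
    mean Rayleigh amplitude E_A(a)."""
    return log_transition_metric(r, m, Ea, Lambda_inv)
```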
Analysis of the iterative detection process
Convergence of iteration
Although maximum likelihood sequence detection over all the concatenated elements is the optimum detector, the process is often too complex to realize. Turbo decoding provides a sub-optimum but often realizable approximation [24]. The iterative detection of a non-binary LDPC-coded high-order CPM with interleaving is examined in this study. The channel encoding and CPM modulation operations for coded data over narrowband channels are viewed as a serial concatenation of two finite-state machines separated by an interleaver. Joint demodulation and decoding are iteratively performed using the extrinsic information exchange between a CPM-SISO demodulator and a SISO channel decoder [25].
The above analysis shows that the iterative detection mechanism for demodulation and decoding is established on the basis of interchanging and transferring extrinsic information. The extrinsic information in each iteration is the extra information that an individual SISO decoder gains during that iteration; it is independent of the systematic information and the a priori information. The extrinsic information is delivered to the other sub-decoder as a priori information through interleaving (or deinterleaving). Decoding performance is improved by exchanging extrinsic information, which is therefore a key factor in iterative decoding.
In the course of this research, we have observed that iterative detection in q-ary LDPC-CPM exhibits positive feedback; in other words, the bit error rate (BER) performance degrades with increasing iterations. Figure 2 reports the convergence speed of the 8-ary LDPC-coded PRCPM in a slow flat Rayleigh fading channel. The information frame length in this simulation is 384 bits, and the code rate of the 8-ary LDPC code with variable node degree distribution $\lambda(x)=0.38354x+0.04237x^{2}+0.57409x^{3}$ is 1/2. A random interleaver and Gray mapping are employed. The inner iteration number is set to 5. No CSI is available, and $E_{A}(a)=0.8862$. An excellent implementation scheme denoted by 8M2RC ($h=0.5$) is considered for the PRCPM modulator; it provides a good tradeoff between power efficiency and bandwidth efficiency [11]. 8M2RC indicates that the alphabet size $M$ is equal to 8 and the frequency pulse is a raised cosine (RC) of length $L=2$. Figure 2 a clearly shows that the positive feedback phenomenon is pronounced in the investigated system. The BER curves tend to increase after a certain number of iterations for $E_b/N_0$ of 1.4 and 2 dB. Moreover, the positive feedback is no longer prominent after approximately 10 iterations, and the curves then fluctuate around a poor BER level. However, BER performance improves with increasing iteration number (Fig. 2 b) when $E_b/N_0$ is varied from 3 to 4 dB. The analysis shows that the positive feedback phenomenon becomes increasingly serious in the low-SNR region.
Iterative convergence speed of the 8-ary LDPC-8M2RC over the Rayleigh fading channel for different SNRs. a $E_b/N_0=$ 1.4 and 2 dB. b $E_b/N_0=$ 3 and 4 dB
The phenomenon existing in q-ary LDPC-CPM can be explained by the theory of discrete-time dynamical system defined on a continuous set [26]. Similar to SCCPM systems, the phase trajectories of q-ary LDPC-CPM mainly fall into one of the following three modes: (1) convergence to an unequivocal fixed point; (2) convergence to an indecisive fixed point; and (3) convergence to a fixed set. The majority of frames belong to mode 1 or 2. Mode 3 mostly occurs in the waterfall region. The interleaving for the short frame q-ary LDPC-CPM system is insufficient. Furthermore, the possibility of burst error remains high, which makes BER oscillation more serious than in the long frame ones. Positive feedback in the BER of the system arises when the number of oscillation frames is sufficiently large.
Exchange methods of extrinsic information
Iterative detection is a key technology to improve BER performance and reduce realization complexity in turbo-like receivers. Iteration between the demodulator and the decoder is established by transferring extrinsic information. Current research reveals the following three main exchange methods of extrinsic information (i.e. direct exchange, average exchange and weighted exchange).
Direct exchange, also called simple exchange, is the original method. The extrinsic information provided by one SISO is interleaved or de-interleaved and then directly transmitted to the other SISO decoder, without any processing, as a priori information (Fig. 1). This method is expressed as
$$\boldsymbol{\pi}'(\mathbf{u}^{j};O)=\boldsymbol{\pi}(\mathbf{u}^{j};O),$$
$$\boldsymbol{\pi}'(\mathbf{c}^{j};O)=\boldsymbol{\pi}(\mathbf{c}^{j};O),$$
where $\boldsymbol{\pi}(\mathbf{u}^{j};O)$ and $\boldsymbol{\pi}(\mathbf{c}^{j};O)$ denote the log-domain probability distributions of information and code symbols, respectively.

Average exchange takes the mean of the extrinsic information exported by one SISO decoder in the previous iterations as the a priori information in the current iteration. This method is expressed as
$$\boldsymbol{\pi}'^{(l)}(\mathbf{u}^{j};O)=\log\left(\frac{1}{l}\sum_{i=1}^{l}\exp\left(\boldsymbol{\pi}^{(i)}(\mathbf{u}^{j};O)\right)\right),$$
$$\boldsymbol{\pi}'^{(l)}(\mathbf{c}^{j};O)=\log\left(\frac{1}{l}\sum_{i=1}^{l}\exp\left(\boldsymbol{\pi}^{(i)}(\mathbf{c}^{j};O)\right)\right),$$
where $\boldsymbol{\pi}^{(i)}(\mathbf{u}^{j};O)$ and $\boldsymbol{\pi}^{(i)}(\mathbf{c}^{j};O)$ are the log-domain probability distributions of information and code symbols in the $i$th iteration, and $l$ denotes the present iteration number.
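Because the averaging of Eqs. (11) and (12) operates on log-domain quantities, a numerically stable log-sum-exp evaluation is advisable. The sketch below is our illustration of this, with assumed array shapes.

```python
import numpy as np

def averaged_extrinsic(history):
    """Eqs. (11)-(12): log of the arithmetic mean of the per-iteration
    extrinsic probabilities, evaluated stably in the log domain.
    history: array of shape (l, q, N) with the log-domain extrinsic
    information from outer iterations 1..l."""
    l = history.shape[0]
    m = history.max(axis=0)                              # log-sum-exp shift
    return m + np.log(np.exp(history - m).sum(axis=0)) - np.log(l)

# Example: with two identical iterations the average equals either input.
h = np.log(np.full((2, 8, 4), 1.0 / 8.0))
assert np.allclose(averaged_extrinsic(h), h[0])
```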
In weighted exchange, the extrinsic information exported by the former decoder is delivered to the weighting function instead of being directly transmitted to the next decoder. More details on the weighted exchange method are discussed in Section 4.
Both the average and the weighted method inhibit the positive feedback phenomenon and effectively improve BER performance in convolutional coded CPM systems [27, 28]. Hence, in this study, we also combine the average method with the weighted method, i.e. the mean of the extrinsic information is weighted before being transmitted to the next decoder. A comparative study of the different extrinsic information exchange methods in q-ary LDPC-CPM systems is one of the main contributions of this paper.
Figure 3 compares the iterative convergence speed in the 8-ary coded 8M2RC system with the direct, average, weighted and combined methods when $E_b/N_0$ is set to 0.4 and 4.4 dB. All the other simulation parameters are the same as in Fig. 2. Figure 3 a shows that the average and combined methods both inhibit positive feedback to some extent at $E_b/N_0=0.4$ dB, avoid violent BER oscillation and maintain the BER at a stable level as the iteration number increases. All four methods effectively inhibit positive feedback, and the BER continuously decreases with increasing iteration number (Fig. 3 b). The weighted method exhibits the best iterative convergence and the lowest BER at any given iteration compared with the other methods at $E_b/N_0=4.4$ dB. For instance, a BER of $8.26\times10^{-4}$ is attainable for the weighted exchange method at the seventh iteration, whereas the BER for the direct method is only $2.91\times10^{-3}$. Notably, the BER performances of the average and combined methods are worse than that of the direct exchange method at high SNR. Figure 4 compares the BER of the four methods when the outer iteration number is uniformly fixed at 8; all other parameters are as before. When $E_b/N_0$ is varied from 2.8 to 4.8 dB (Fig. 4), the BER performance of the methods is ordered as follows: weighted method > direct method > combined method > average method. Thus, weighted exchange is an effective method to improve the power efficiency of the investigated scheme.
Iterative convergence speed of the four exchange methods based on 8-ary LDPC-coded 8M2RC over the Rayleigh fading channel. a $E_b/N_0=$ 0.4 dB. b $E_b/N_0=$ 4.4 dB
BER performance of the four exchange methods
Weighted exchange method of extrinsic information
The simulation results in our previous work [12] indicated that weighted exchange of extrinsic information in binary LDPC-coded CPM over a Rayleigh fading channel effectively inhibits the positive feedback phenomenon and improves the BER performance under low-SNR conditions. Compared with the direct exchange method, the weighted exchange method attains a 0.2 ∼ 0.3 dB gain at BER $=10^{-3}$ with an information block length of 384 bits and a modulation index of 1/2 when the iteration number is varied from 5 to 15. Inspired by [12], we design a method based on weighted extrinsic information probabilities for q-ary LDPC-CPM systems.
The CPM and LDPC decoders compute the probability distributions of information and code symbols according to the additive SISO (A-SISO) based on the Log-MAP algorithm presented in [29] as follows:
$$\pi_{k,j}(u^{j};O)=\log\left[\sum_{\mathbf{u}:U_{k}^{j}=u^{j}}\exp\left\{\boldsymbol{\pi}_{k}(\mathbf{u};O)+\sum_{i=1,i\neq j}^{k_{0}}\pi_{k,i}(u^{i};I)\right\}\right],$$
$$\pi_{k,j}(c^{j};O)=\log\left[\sum_{\mathbf{c}:C_{k}^{j}=c^{j}}\exp\left\{\boldsymbol{\pi}_{k}(\mathbf{c};O)+\sum_{i=1,i\neq j}^{n_{0}}\pi_{k,i}(c^{i};I)\right\}\right],$$
where $\boldsymbol{\pi}_{k}(\mathbf{u};O)$ and $\boldsymbol{\pi}_{k}(\mathbf{c};O)$ indicate the probabilities of information and code symbols at time $k$, respectively, and $k_{0}$ and $n_{0}$ denote the lengths of a single information symbol and a single code symbol, respectively.
Figure 1 shows that the extrinsic information probabilities of the inner SISO output are sent to the weighted function module before being delivered to the outer SISO. Similarly, the extrinsic information probabilities of the outer SISO output are sent to the weighted function module before being delivered to the inner SISO.
$$\boldsymbol{\pi}'(\mathbf{u}^{j};O)=\psi(\boldsymbol{\pi}(\mathbf{u}^{j};O)),$$
$$\boldsymbol{\pi}'(\mathbf{c}^{j};O)=\psi(\boldsymbol{\pi}(\mathbf{c}^{j};O)),$$
where $\psi(\cdot)$ denotes the weighting function applied to the $q\times N$ array of extrinsic information and $.*$ below indicates element-wise (point) multiplication. For convenience, $\psi(\cdot)$ acts directly on the extrinsic information probabilities and adaptively adjusts the extrinsic information according to its magnitude:
$$\psi(\boldsymbol{\pi})=\alpha\,\boldsymbol{\pi}\,.*\,\exp(-\beta|\boldsymbol{\pi}|),$$
where $\alpha$ and $\beta$ are weighting parameters determined by the interleaver length, the SNR and other system parameters through simulation experiments. The iterative decoding scheme based on the weighted extrinsic information method is shown in Fig. 5.
Principle diagram of iterative detection based on the weighted method
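A one-line realization of Eq. (17) in this spirit might look as follows; the $(\alpha,\beta)$ defaults are the values selected by the simulations later in this section, and the array shape is an assumption of ours.

```python
import numpy as np

def weighted_extrinsic(pi, alpha=0.7, beta=0.01):
    """Eq. (17) applied element-wise to a (q x N) array of log-domain
    extrinsic information; (alpha, beta) = (0.7, 0.01) per Section 4."""
    return alpha * pi * np.exp(-beta * np.abs(pi))
```

The exponential factor damps large-magnitude (over-confident) extrinsic values more strongly than small ones, which is precisely the mechanism that counters positive feedback.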
A comparative analysis of the computational complexity of the four methods is provided in Table 1, together with the computational complexities of the CPM-SISO and LDPC-SISO in the receiver; here $w_{c}$ is the uniform column weight of the sparse parity check matrix $\mathbf{H}$ and $n_{\text{inner}}$ denotes the number of inner iterations for the LDPC-SISO. The weighted method clearly has a moderate computational complexity, comparable to that of the average method and much lower than that of the combined method. Compared with the direct method, the computational complexity of the weighted method increases by only $6M$ multiplications and $2M$ log & exp operations, far less than that of the CPM-SISO. Therefore, the overall computational complexity of coded CPM schemes mainly depends on the CPM demodulator, and it increases linearly with the iteration number. A preferable tradeoff between computational complexity and BER performance should therefore be sought. Two dynamic iterative stopping algorithms based on weighted extrinsic information exchange are proposed in this study to address this problem; more details are given in Section 5.
Table 1 Computational complexity of q-ary LDPC-coded PRCPM using various extrinsic information exchange methods
Eq. (17) shows that the weighting parameters are the key factors affecting performance. Excellent performance has been obtained with values of $\alpha\in[0.6,0.9]$ and $\beta\in[0.001,0.01]$ in similar coded CPM systems [10, 12]. However, the specific $\alpha$ and $\beta$ values for the system we investigate need to be determined by simulation experiments. Figure 6 reports the BER performance for various combinations of the weighting parameters $\alpha$ and $\beta$. The information frame length in this simulation is 768 bits. The code rate of the 8-ary LDPC code with variable node degree distribution $\lambda(x)=0.1290x+0.4839x^{2}+0.3871x^{5}$ is 2/3. The other simulation parameters are the same as before. Figure 6 shows that the BER curves for $\alpha\in[0.6,0.9]$ and $\beta\in[0.001,0.01]$ almost coincide when $E_b/N_0$ is in the 3.2 ∼ 4.4 dB region. The BER performance of the weighted method with these parameters is much better than that of the direct method. The combination of $\alpha=0.7$ and $\beta=0.01$ is selected because it achieves the smallest BER, $8.68\times10^{-6}$, at $E_b/N_0$ of 4.4 dB.
BER versus $E_b/N_0$ as parameterized by the weighting parameters. a $\alpha=0.6$, b $\alpha=0.7$, c $\alpha=0.8$ and d $\alpha=0.9$
Iterative stopping algorithm
The number of outer iterations in the iterative detection process of q-ary LDPC-CPM systems is usually set to a fixed positive integer. However, not all received sequences achieve their best decoding result at the same number of iterations. For some sequences, error-free decoding is achieved after only a few iterations, and continuing to iterate merely increases computational complexity and decoding delay. Introducing stopping criteria into the detection process makes the iteration dynamic, which can improve detection efficiency and reduce decoding delay. In this study, the cross entropy (CE) [30] and hard decision aided (HDA) [31] stopping criteria are incorporated into the iterative process, and two dynamic iteration stopping algorithms based on weighted extrinsic information exchange are developed. The application of the two criteria to signal detection in q-ary LDPC-CPM systems is derived below.
CE stopping criterion
The CE stopping criterion, first proposed for turbo codes by J. Hagenauer, has been widely applied in iterative decoding. S. Zhang introduced the criterion into bit-interleaved coded modulation with iterative decoding [32]. A variety of improved stopping criteria based on CE (e.g. the sign change ratio [33] and HDA) have been introduced since then.
We utilize the probability matrix of information symbols $[P(u;O)_{m,k}]_{M\times K}$ exported by the M-ary LDPC-SISO module in two adjacent iterations to compute the mean cross entropy $T(i)$ directly, avoiding the mutual conversion between likelihood ratios and probabilities:
$$T(i)=\frac{1}{K}\sum_{k=0}^{K-1}\sum_{m=0}^{M-1}P^{i}(u;O)_{m,k}\log\frac{P^{i}(u;O)_{m,k}}{P^{i-1}(u;O)_{m,k}},\quad i\geq2,$$
where $M$ is the alphabet size of CPM, $K$ is the length of the information symbol vector, and $P^{i}(u;O)_{m,k}$ denotes the probability that the $k$th element of the information symbol vector equals $(m-1)$ at iteration $i$. The threshold of the CE stopping criterion is expressed as follows:
$$\begin{array}{@{}rcl@{}} T(i)\le wT(1), \end{array} $$
where $w$ is an adjusting parameter. Table 2 shows the effect of the adjusting parameter on system performance with the CE stopping criterion at $E_b/N_0=4$ dB. The combination of $\alpha=0.7$ and $\beta=0.01$ is selected, and the other simulation parameters are the same as in Fig. 6. Table 2 shows that the average iteration number gradually increases as $w$ decreases, while the BER is continuously reduced. The improvement in BER is no longer significant once $w$ reaches a fairly small order of magnitude. Therefore, the value of $w$ should be determined by the specific requirements of each system.
Table 2 BER performance of different adjusting parameters
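A direct transcription of Eqs. (18) and (19) might look as follows; this is our sketch, and the clipping constant is our own guard against zero probabilities. The default $w=10^{-4}$ is the value used for the CE-weighted results in Section 6.

```python
import numpy as np

def cross_entropy_T(P_curr, P_prev, eps=1e-30):
    """Eq. (18): mean cross entropy between the (M x K) information-symbol
    probability matrices of two consecutive outer iterations."""
    P_curr = np.clip(P_curr, eps, 1.0)
    P_prev = np.clip(P_prev, eps, 1.0)
    return np.sum(P_curr * np.log(P_curr / P_prev)) / P_curr.shape[1]

def ce_stop(T_i, T_1, w=1e-4):
    """Eq. (19): stop iterating once T(i) <= w * T(1)."""
    return T_i <= w * T_1
```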
HDA stopping criterion
In each iteration, the information bit sequence $[\hat{b}_{k}]$, namely the output of the demapper whose input is the hard-decision information symbol sequence exported by the q-ary LDPC-SISO module, can be used to predict convergence during the iterative decoding process. The iteration process is considered convergent if the information bit sequence does not change between two adjacent iterations:
$$[\hat{b}_{k}]^{i-1}\oplus[\hat{b}_{k}]^{i}=0.$$
The HDA stopping criterion is introduced into the q-ary LDPC-CPM system, and the stopping condition is simplified by counting the number $D(i)$ of positions at which the information bit sequences of two adjacent iterations differ. The iteration is stopped if $D(i)\leq Q\times K\times\log_{2}M$; otherwise, it continues. Here, $Q$ is a constant with a typical range of $[0.001,0.01]$, and $K$ is the length of the information symbol block.
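The simplified test can be transcribed as follows; this is our sketch, with the bit sequences assumed to be NumPy-compatible arrays and $Q=0.01$ as used in Section 6.

```python
import numpy as np

def hda_stop(bits_prev, bits_curr, K, M, Q=0.01):
    """Simplified HDA test: D(i) counts the positions where the
    hard-decision bit sequences of two consecutive iterations differ
    (Eq. (20)); iteration stops when D(i) <= Q * K * log2(M)."""
    D = np.count_nonzero(np.asarray(bits_prev) != np.asarray(bits_curr))
    return D <= Q * K * np.log2(M)
```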
The validity of the dynamic iterative stopping algorithms combined with weighted extrinsic information exchange is then tested. Monte Carlo simulations based on MATLAB are performed to evaluate the performance of the proposed systems in a slow flat Rayleigh fading channel with $E_{A}(a)=0.8862$ and no CSI available. The other simulation parameters are the same as in Fig. 6; more details on the parameter settings are given in Section 4.
Figure 7 shows the BER performance of the CE and HDA iterative stopping algorithms combined with the direct and weighted methods in the 8-ary LDPC-8M2RC system. The outer iteration number is set to 16, and $w=10^{-4}$ and $Q=0.01$ are employed in the CE and HDA iterative stopping algorithms, respectively. The BER performance of the CE and HDA algorithms based on the direct method (abbreviated CE-direct and HDA-direct) exhibits a negligible degradation with respect to the direct method alone. Similarly, the CE and HDA algorithms based on the weighted method (abbreviated CE-weighted and HDA-weighted) exhibit minor BER degradation with respect to the weighted method alone. Notably, the BER of the CE-direct and HDA-direct algorithms declines only gradually when $E_b/N_0$ exceeds 4.4 dB and begins to enter the error floor region. Compared with the direct method, the weighted method achieves about a 0.2 dB gain at BER $=10^{-3}$. The BER of the CE-weighted and HDA-weighted algorithms continues to decrease when $E_b/N_0$ exceeds 4.4 dB and effectively avoids the error floor region. The reason is that the weighted method prevents the positive feedback phenomenon from occurring, and the CE and HDA stopping algorithms terminate the iterations in time, before positive feedback sets in. The weighted method improves the iterative convergence, reaching a low error rate as the SNR increases.
BER performance of the CE-weighted and HDA-weighted algorithms
Figure 8 shows the average iteration number of the CE-weighted and HDA-weighted algorithms, which gradually decreases as the SNR increases. At $E_b/N_0$ of 4.4 dB, averages of 4.175 and 3.665 iterations are attained by the CE-direct and HDA-direct algorithms, respectively, whereas 3.88 and 3.38 iterations suffice for the CE-weighted and HDA-weighted algorithms at the same SNR. Analysis of Figs. 7 and 8 shows that the CE-weighted and HDA-weighted algorithms reduce the average iteration number and computational complexity and significantly improve the BER and real-time performance of iterative detection compared with the direct method alone. Therefore, the combination of weighted extrinsic information exchange and the two iterative stopping algorithms offers double insurance in enhancing reliability and reducing decoding delay.
Average iteration number of the CE-weighted and HDA-weighted algorithms
BER performance of the investigated scheme and several alternative schemes
For a fair comparison, several alternative schemes from the literature are selected from similar coded CPM systems to benchmark the BER performance of the 8-ary LDPC-coded 8M2RC. The turbo-CPM scheme is discussed in [10], where turbo employs a constituent encoder with the generator polynomial (10, 04, 15) in octal representation. Convolutional coded CPM (CC-CPM) schemes are studied in [23] and [34], where the generator polynomials of the CC are (13, 06, 16) [23] and (11, 06, 16) [34], respectively. The binary LDPC-coded CPM (BLDPC-CPM) scheme is considered in [35] with the same variable node degree distribution $\lambda(x)=0.1290x+0.4839x^{2}+0.3871x^{5}$ as the 8-ary LDPC code used in Fig. 6. The weighting parameters in the above schemes were optimized by simulation experiments, as described in Section 4. Figure 9 shows that CC-CPM and turbo-CPM obtain a better BER performance than the investigated scheme when $E_b/N_0$ is varied from 3.2 to 4 dB. However, the BER of the investigated scheme has the highest convergence speed with increasing SNR, which helps it reach low BER orders of magnitude. The investigated scheme gains at least 0.3, 0.95 and 1.05 dB at BER $=10^{-5}$ against CC-CPM with (11, 06, 16), CC-CPM with (13, 06, 16) and BLDPC-CPM, respectively. At BER $=10^{-4}$, an approximately 1.5 dB gain is attained by the investigated scheme compared with turbo-CPM. Thus, the association between NB-LDPC codes and CPM is superior to the other candidates in terms of power efficiency.
To avoid the information loss in the mutual conversion of bit and symbol probabilities when binary LDPC codes are combined with high-order CPM, the combination of NB-LDPC codes and high-order PRCPM with the same number of levels is considered a possible candidate for the uplink of satellite communications. In view of the positive feedback and relatively large decoding delay in iterative detection, we propose the CE and HDA iterative stopping algorithms based on weighted extrinsic information exchange for cases with medium and high SNRs. Extensive simulation results in the slow flat Rayleigh channel demonstrate that the NB-LDPC-coded CPM scheme has a better BER performance than CC-CPM, turbo-CPM and BLDPC-CPM in medium- and high-SNR scenarios. Compared with the direct method, the two proposed algorithms inhibit positive feedback and improve the power efficiency and reliability of the investigated system; they also significantly reduce the average iteration number, improving the real-time performance of iterative detection.
Abbreviations
NB-LDPC: non-binary low-density parity-check
PRCPM: partial response continuous phase modulation
SNR: signal-to-noise ratio
CE: cross entropy
HDA: hard decision aided
CPE: continuous phase encoder
MM: memoryless modulator
SCCPM: serially concatenated continuous phase modulation
MAP: maximum a posteriori
PEG: progressive edge growth
SISO: soft-input soft-output
CSI: channel state information
BICM-ID: bit-interleaved coded modulation with iterative decoding
SCR: sign change ratio
CE Sundberg, Continuous phase modulation. IEEE Commun. Mag. 24(4), 25–38 (1986).
AG Amat, CA Nour, C Douillard, Serially concatenated continuous phase modulation for satellite communications. IEEE Trans. Wirel. Commun.8(6), 3260–3269 (2009).
BF Beidas, S Cioni, UD Bie, A Ginesi, R Iyer-Seshadri, P Kim, LN Lee, D Oh, A Noerpel, M Papaleo, A Vanelli-Coralli, Continuous phase modulation for broadband satellite communications: design and trade-offs. Int. J. Satell. Commun. Netw.31(5), 249–262 (2013).
Compact Design of a Lightweight Rehabilitative Exoskeleton for Restoring Grasping Function in Patients with Hand Paralysis
Vaheh Nazari, Majid Pouladian, Yong-Ping Zheng, Monzurul Alam
Millions of individuals suffer from upper extremity paralysis caused by neurological disorders including stroke, traumatic brain injury, spinal cord injury, and other medical conditions. In order to restore motor control and enhance the quality of life of these patients, daily exercises and strengthening training are necessary. Robotic hand exoskeletons can substitute for the missing motor control and help restore the hand functions needed for daily activities. They can also facilitate neuroplasticity, helping to rehabilitate hand function through routine use. However, most existing hand exoskeletons are bulky, stationary, and cumbersome to use.
We have utilized a recent design of a hand exoskeleton (Tenoexo) and modified the design to prototype a motorized, lightweight, fully wearable rehabilitative hand exoskeleton by combining rigid parts with a soft mechanism capable of producing various grasps needed for the execution of daily tasks. We have tested the performance of our developed hand exoskeleton in restoring hand functions in two quadriplegics with chronic cervical cord injury.
Mechanical evaluation of our exoskeleton showed that it can produce fingertip forces up to 8 N and can cover 91.5 degrees of range of motion in just 3 seconds. We further tested the robot in two quadriplegics with chronic hand paralysis and observed immediate success in independent grasping of different daily objects.
The results suggest that our exoskeleton is a viable option for hand function assistance, allowing patients to regain lost finger control for everyday activities.
Keywords: assistive device, three-layered sliding spring mechanism, hand paralysis, quadriplegia
Many people around the world suffer from hand function impairment caused by neurological disorders such as stroke [1], traumatic brain injury [2], and spinal cord injury [3], which limits their ability to perform basic daily activities. Because these individuals typically reach a rehabilitation plateau, the remaining ability of their hands is not expected to improve further, despite conventional procedures to regain hand function such as orthopedic surgery, medication, or physical and occupational therapy [4]. Therefore, these individuals live with their remaining abilities and use compensatory techniques to complete everyday activities. Additionally, assistive tools such as feeding utensils, key turners, and writing devices are often used by these individuals to improve independence and safety in activities of daily living (ADL) [5].
By enhancing practical gripping capability, wearable robotic hand exoskeletons increase the user's independence [6]. In recent years, robotic technology has been adopted for physical rehabilitation to provide enhanced treatment and more comprehensive recovery [7]. Different robotic systems for the upper limb have recently been introduced, especially for acute and chronic stroke survivors. By powering the hand movements needed to accomplish everyday activities, assistive exoskeletons have been shown to improve the quality of life of patients with cervical cord injury [8]. However, robotic systems such as Hand of Hope [9], FESTO (FESTO, Esslingen, Germany), MileBot (MileBot Hand Rehabilitation Exoskeleton Robot, Shenzhen, China) and Handy Rehab (HandyRehab, Hong Kong, China) are very bulky and cumbersome to use.
Over the last two decades, significant research has been conducted to design and develop upper-limb wearable exoskeletons for rehabilitation purposes [10]. Despite their demonstrated efficacy and a growing market, upper-limb exoskeletons still pose challenges in mechanism design, sensing, and human–robot interaction. Mechanical architecture and kinematic analysis are among the most important aspects of designing an ergonomic exoskeleton device [11]. Current state-of-the-art assistive hand exoskeletons often use rigid connection mechanisms. In linkage-based devices, mechanical links create finger-flexion-like motions through kinematic chains [12]. Since forces, and especially force directions, can be precisely controlled, this is advantageous for safe interaction. However, rigid link structures have a low degree of conformity and a high form factor by their very nature [5]. One of the most common actuation mechanisms embedded in soft exosuits is the tendon/cable-driven mechanism, which typically involves several actuators [13, 14]. Such mechanisms are naturally lightweight and low-profile; however, the applied forces, and especially their directions, are difficult to manage correctly, posing a danger to the user [5]. A pneumatic actuator can save significant weight while producing high torque, but this type of actuator complicates the controller's design. Furthermore, heavy pumps and/or compressed gas tanks can compromise the system's portability, oil/lubricant contamination may occur, and downtime/maintenance increases [5, 15]. Hydraulic actuators may meet the need for even more torque, especially for augmenting human capabilities; however, like pneumatic actuators, their control is less accurate than that of electric motors, and incompressible liquid from a pump may contaminate the whole device, compromising safety [15]. Because of the complexity and flexibility of the human hand, choosing the mechanism and actuator type for a robotic exoskeleton that assists hand movements remains a major challenge.
In the present study, by modifying the design of a recent exoskeleton developed by Bützer et al. (2021), we prototyped a compact, cost-effective, lightweight, fully wearable rehabilitative hand exoskeleton. First, we designed the finger mechanism with a strong focus on safety, convenience, and usability in everyday life; we then completed the exoskeleton and tested the performance of our system in terms of grip types, range of motion (ROM), fingertip force and weight. We next assessed usability in everyday life, including convenience, safety, and weight, and examined the immediate impact on the functional ability of two individuals with neuromotor hand impairments due to chronic cervical spinal cord injury (SCI). Finally, we compared our exoskeleton to similar works, highlighting its benefits and limitations.
I. Design
A. Design Requirements for exoskeleton
Among patients with various neuromotor disorders (e.g., SCI, stroke, and brachial plexus injury), the form and level of assistance required for everyday activities varies significantly with the presence of spasticity, contractures, muscle tone, and joint stiffness in the hand [5]. Hence, in the present study, we tried to design the exoskeleton in a way that most individuals can use it in daily activities. In this section, we extract detailed design criteria from the literature and from functional tests of previous designs in patients with neuromotor hand impairments. By considering the following requirements, we propose a useful device for patients.
Types of functional grasping: Vergara et al. [16] and Bullock et al. [17] found that four grasping types (palmar pinch, medium wrap, parallel extension, and lateral pinch; denominations of Feix et al. [18]) plus a flat hand suffice to perform over 80% of all grasping tasks in everyday life.
Range of motion (ROM): Bain et al. [19] found that the functional ranges of motion of the fingers needed to perform 90% of activities are 19°–71°, 23°–87° and 10°–64° at the metacarpophalangeal (MCP), proximal interphalangeal (PIP) and distal interphalangeal (DIP) joints, respectively. Feix et al. [18] examined existing human grasp taxonomies and combined them into a new taxonomy known as "The Grasp Taxonomy". By rearranging grasps according to the thumb's adduction-abduction motion, they demonstrated the thumb's important role in performing different grasping types [20].
Grasping force: The functional use of the human hand is needed for a wide range of daily tasks such as grasping objects, feeding, dressing, and washing. Bützer et al. [5] found that 10 N of fingertip force is needed to lift items weighing up to 1 kg, such as water bottles (to drink).
Weight: A lightweight exoskeleton is important for the user's comfort when wearing it. Other hand exoskeletons typically weigh between 300 g and 5 kg [5, 21].
Safety: At all times, a hand exoskeleton must ensure the user's safety. The exoskeleton's mechanical and control mechanisms must account for normal finger joint motions and hand size. Furthermore, mechanical limitations must ensure that finger joints are not subjected to excessive pressures [22].
Comfort: Since the user must wear the brace during activity, a hand exoskeleton must be convenient for the user. The device's kinematics and ergonomic nature must ensure that it does not induce discomfort or exhaustion [12].
B. Three-layered sliding spring mechanism
The main mechanism for gripping movements and for providing the necessary fingertip force is the flexion/extension of the fingers, and it is challenging to develop a mechanism that can mimic this motion. Inspired by the exoskeleton developed by Bützer et al. [5], and aiming for a lightweight design, we used three-layered sliding springs (Fig. 1) to imitate human finger flexion and extension.
The mechanism is composed of two main parts: blades and solid bodies (Fig. 1A). Two sliding springs are placed on top of a fixed spring blade. The relative length of the springs changes as the sliding springs are moved, resulting in spring bending. Using rigid elements linking the springs, the bending can be localized in three sections, resulting in a final motion that mimics the flexion/extension of a human finger (Fig. 1B-C).
We have designed a V-shape configuration (Fig. 2) with two angled sliding springs, to produce the desired fingertip force with the three-layered sliding spring mechanism.
For a given fingertip force, the required torque in the joints of the three-layered sliding spring system increases with finger length. A higher torque, and hence adequate fingertip force, can be achieved by increasing the moment of inertia Ix of the rectangular profile of the springs:
$$I_x = \frac{w t^3}{12}$$
where $t$ is the thickness and $w$ is the width of the blade.
By increasing t or w, Ix increases, which allows the mechanism to produce more fingertip force. The cross-section of the sliding spring blades is rotated by an angle θ = 35°, so that the moment of inertia about the spring blade axis Ix′ remains constant while the moment of inertia perpendicular to the finger flexion/extension plane increases. The blades also sit at a distance d from the axis of rotation (x″-y″), which we account for in the final equation for Ix″ (Fig. 2C):
$$I_{x''} = \cos^2(\theta)\,\frac{w t^3}{12} + \sin^2(\theta)\,\frac{w^3 t}{12} + w t\, d^2$$
We utilized cold-rolled stainless steel strips (grade 301, Jiangyin Transens Metal Products Co., Ltd., Jiangsu, China), with a tensile strength above 1700 MPa and a hardness between 557 and 600 HV, for the spring blades, and used 3D printing to produce the rigid bodies (black nylon material, VPrint 3D, Hong Kong). In the finger mechanism, we used two blades of 4 mm width and 0.3 mm thickness as sliding blades and a 6.5 mm wide, 0.2 mm thick stainless steel strip as the fixed blade.
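To make these relations concrete, the short Python sketch below evaluates $I_x$ and $I_{x''}$ for the sliding-blade dimensions quoted above. The offset $d$ and the resulting numbers are illustrative assumptions only, since $d$ is not stated in the text.

```python
import math

# Sliding-blade cross-section from the text: w = 4 mm wide, t = 0.3 mm thick
w = 4.0e-3                  # blade width (m)
t = 0.3e-3                  # blade thickness (m)
theta = math.radians(35.0)  # rotation angle of the blade cross-section
d = 2.0e-3                  # offset from the rotation axis (m) -- assumed value

# Flat (unrotated) rectangular section: Ix = w*t^3/12
Ix = w * t**3 / 12.0

# Rotated section plus parallel-axis (offset) term:
# Ix'' = cos^2(theta)*w*t^3/12 + sin^2(theta)*w^3*t/12 + w*t*d^2
Ix2 = (math.cos(theta) ** 2 * w * t**3 / 12.0
       + math.sin(theta) ** 2 * w**3 * t / 12.0
       + w * t * d**2)

print(f"Ix   = {Ix:.3e} m^4")   # ~9.0e-15 m^4 for the flat blade
print(f"Ix'' = {Ix2:.3e} m^4")  # the rotation and offset terms dominate
```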
C. Finger mechanism
To assist the users with finger flexion and extension, we designed a finger mechanism for each finger, using a lead-screw mechanism to push and pull the sliding blades (Fig. 3). This mechanism consists of a motor with an M3 screw on its shaft, a lead, 3D-printed parts, and the blades (Fig. 3A). We connected the blades to the lead and installed a brass threaded insert (Shenzhen Huaxianglian Hardware Co., Ltd., Guangdong, China) into the lead so that it moves forward and backward as the motor shaft rotates (Fig. 3B).
According to a previous evaluation [5] of maximum fingertip force as a function of input force, we assumed that the input force required to slide the blades and produce the necessary fingertip force is about 60 N. To identify a suitable motor for our mechanism, we used the following equation:
$$T = F\,\frac{D_m}{2}\left(\frac{L + \mu \pi D_m}{\pi D_m - \mu L}\right) + \frac{F D_m \mu}{2}$$
where $T$ is the torque, $D_m$ the pitch diameter of the screw, $L$ the lead, and $\mu$ the coefficient of friction.
Based on the equation above, we utilized a 12 V DC motor (Shenzhen Sinlianwei Technology Co., Ltd., Shenzhen, China) with a stall torque of 1.2 kg·cm and an angular speed of 800 rpm to move the blades and make the mechanism bend.
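As a rough plausibility check of this motor choice, the sketch below evaluates the power-screw equation for the stated 60 N input force. The M3 screw geometry (pitch diameter of roughly 2.675 mm, 0.5 mm lead) and the friction coefficient are typical assumed values, not figures given in the text.

```python
import math

F = 60.0       # required input force on the sliding blades (N), from the text
Dm = 2.675e-3  # pitch diameter of an M3 screw (m) -- assumed standard value
L = 0.5e-3     # lead of a single-start M3 screw, 0.5 mm per turn -- assumed
mu = 0.2       # coefficient of friction, steel screw in brass insert -- assumed

# Raising torque of a power screw, as in the equation above
T = (F * Dm / 2.0 * (L + mu * math.pi * Dm) / (math.pi * Dm - mu * L)
     + F * Dm * mu / 2.0)

T_kgcm = T / 9.81e-2  # convert N*m to kg*cm (1 kg*cm ~= 0.0981 N*m)
print(f"Required torque: {T:.4f} N*m = {T_kgcm:.2f} kg*cm")
# ~0.37 kg*cm under these assumptions, well below the motor's 1.2 kg*cm stall torque
```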
D. Thumb abduction and adduction
The function of the thumb is crucial in hand activity, especially in ADL that require gripping or pinching. To perform the most commonly used grip forms, the thumb must be able to abduct and adduct, and to act in pad opposition (e.g., precision pinch) or side opposition (e.g., lateral pinch). The thumb mechanism of our exoskeleton provides two main motions. To perform flexion and extension of the thumb, we used the same three-layered mechanism, whereas to execute abduction and adduction, we connected the thumb to the main body in such a way that it can rotate about the carpometacarpal (CMC) joint (Fig. 4A). Using a spring blade that can rotate around its attachment point on the thumb (Fig. 4B) and a slider moved by a small geared motor (Fig. 4C) with a stall torque of 1.3 kg·cm and a rotational speed of 148 rpm (Fuzhou Bringsmart Intelligent Tech. Co., Ltd, Fuzhou, China), we produced a force on the rigid body of the mechanism, near the MCP joint, that makes the thumb mechanism rotate around the CMC joint to mimic abduction/adduction motion (Fig. 4D).
To move the slider, we used two strong fishing wires connected to the slider and the motor; the wires were routed through grooves created in the main body. The spring blade was almost fully within the main module while the thumb was adducted (Fig. 4D, situation I). When the slider was moved by the motor, the spring blade was pushed out of the main body and abducted the thumb (Fig. 4D, situations II and III).
E. Ring and little finger mechanism
We removed the finger mechanism for the little finger in order to make space in our exoskeleton for the small motor that moves the slider for thumb abduction and adduction. Instead, we created an extra part connected to the ring finger mechanism, allowing the little finger to bend alongside the ring finger (Fig. 5).
F. Hand fixation
To apply as little pressure as possible to the intrinsic hand muscles while wearing the robot, and to secure the user's hand and fingers, we used straps for each finger (Fig. 6I) and one wide strap across the palm, parallel to the abductor pollicis brevis muscle (Fig. 6II). We also recommended that patients wear cotton gloves underneath the robot for more comfort.
G. EMG control
Control commands for the actuators of our hand exoskeleton are derived from surface EMG signals. The EMG signals can be recorded by surface electrodes placed on different arm, hand and shoulder muscles, depending on each individual's residual motor function after a cervical cord injury [23]. For instance, a C5 injury preserves innervation of the shoulder and elbow flexors, while a C6 injury spares the wrist extensors and a C7 injury spares the elbow extensors.
EMG electrodes are interfaced with a low-noise instrumentation amplifier (INA128, Texas Instruments Inc., Dallas, USA). The EMG signals are then filtered (10–500 Hz bandpass) and amplified (×1000) by an operational amplifier (OPA188, Texas Instruments Inc., Dallas, USA) before being digitized by a microcontroller (STM32F103, STMicroelectronics, Geneva, Switzerland) for real-time bio-signal processing (Fig. 7) to identify the most probable intended hand motion (i.e., hand opening or closing). For bio-signal processing, a linear envelope detection strategy is applied in which the EMG signal is first rectified (|Xi|) and then smoothed using the following moving average:
$$MA_n = \frac{\sum_{i=1}^{n} D_i}{n}$$
where $n$ is the number of periods in the moving average and $D_i$ is the rectified signal value in period $i$.
The control strategy for grasping is based on maximum voluntary contraction (MVC) signals and is triggered by an adjustable threshold. When the EMG amplitude crosses the preset MVC threshold, a trigger is sent to the driver circuit (DRV8833, Texas Instruments Inc., Dallas, USA) to run the motors and execute a grasping or hand-opening function (Supplementary video 1).
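A minimal sketch of this envelope-plus-threshold strategy is shown below in Python. The window length, the 30% MVC threshold and the synthetic signal are illustrative assumptions; the actual implementation runs in real time on the STM32 microcontroller described above.

```python
import numpy as np

def emg_envelope(raw_emg, window=100):
    """Linear envelope: full-wave rectification followed by a moving average."""
    rectified = np.abs(raw_emg)            # |Xi|
    kernel = np.ones(window) / window      # MA_n = sum(D_i) / n
    return np.convolve(rectified, kernel, mode="same")

def grasp_trigger(envelope, mvc, threshold=0.3):
    """True wherever the envelope crosses the preset fraction of MVC."""
    return envelope > threshold * mvc

# Toy usage: 1 s of synthetic EMG at 1 kHz with a burst of muscle activity
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.05, 1000)
signal[400:700] += rng.normal(0.0, 0.5, 300)  # simulated voluntary contraction

env = emg_envelope(signal)
mvc = env.max()  # stand-in for a calibrated maximum voluntary contraction
trigger = grasp_trigger(env, mvc)
print("Trigger active on", int(trigger.sum()), "of 1000 samples")
```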
II. Experimental methods
A. Measuring the types of grasping and the ROM of hand exoskeleton
Each joint of our hand exoskeleton is designed to flex to a maximum of 70 degrees in order to achieve the necessary range of motion; however, the length of the sliding blades limits the total flexion. Hence, we measured the average finger flexion/extension angle to assess the ROM of the fingers. To evaluate the finger mechanism, we first tested it on a healthy individual (male, 25 years old, right-handed). The participant's finger was relaxed while the mechanism performed flexion and extension from the original (extended) position to the flexed position. Next, we evaluated the exoskeleton for different grasping types by asking the study participant to grasp a number of objects with the assistance of the exoskeleton. We chose objects used in daily activities, such as a spoon, a bottle of water, a paper cup, a pen, a cellphone, and a key (Fig. 8).
B. Fingertip force measurement
To evaluate the output force produced by the finger mechanism, we tested it in a custom benchtop setup (Fig. 9). After the finger mechanism of the exoskeleton was completely assembled, we fixed the finger mechanism and a load cell (Hunan Tech Electronic Co., Ltd., Changsha, China) on the test bench with two plates and an interface board (Arduino Uno, Arduino LLC, Italy). To flex the mechanism and measure the output force, we attached a power supply to the motor of the finger mechanism and measured the fingertip force for different input voltages (5–12 V).
C. Measuring the dimensions and specifications of the finger mechanism and the whole exoskeleton robot
To be portable and comfortable, the robot should be small and lightweight. Therefore, after evaluating the fingertip force and ROM, we measured the size and weight of the exoskeleton.
D. Test on end users
Two chronic quadriplegics with cervical cord injury evaluated the hand exoskeleton device for normal ADL and functional tasks. Both participants were male (average age 32.5 ± 10.6 years, injured for over 2 years), with lesions at around the C5-C6 cervical level (American Spinal Injury Association Impairment Scale: A and C). Both participants had severe hand impairments and could not actively flex and extend their fingers.
To enable the study participants to control the robot, we first evaluated their forearm EMG signals. We attached the EMG electrodes to the participants' forearms and recorded the EMG signals using an oscilloscope. We then programmed the microcontroller based on their EMG and assessed the robot's ability to assist grasping functions. In the test, five objects (Fig. 8) were used to emulate daily activities such as picking up a key, self-feeding, and holding objects such as a bottle, pen, cup and spoon. The participants were first asked to grasp the objects without the help of the exoskeleton and then with its assistance.
A. ROM and types of grasping
We designed the finger mechanism so that each joint is able to flex up to 70 degrees in order to accomplish the needed range of motion; however, the total flexion is limited by the length of the actively moving spring. The bending motion with and without the finger mechanism was measured in the same experimental setup for comparison with the natural bending motion of the human finger. On the human finger, the maximum angles observed when grasping a key were 60 ± 3° at the MCP, 35 ± 3° at the PIP and 25 ± 3° at the DIP. We then measured the overall finger flexion/extension angle of the mechanism and found maximum flexion of 50, 32.5, and 9 degrees at the MCP, PIP, and DIP joints, respectively (Fig. 10).
B. Fingertip force
The force produced by the finger mechanism is very important, since the robot should produce enough force to help patients grasp and lift objects. To hold and lift a bottle of water weighing 1 kg for self-feeding, the robot should produce at least 10 N of fingertip force. To evaluate the force produced by the exoskeleton, we measured the maximum fingertip force of the index and middle finger mechanisms. Figure 11 shows that the maximum force produced by the mechanism at 12 V was around 8 N.
C. Size and weight of the exoskeleton
After assembling the exoskeleton, we measured its size and weight. Since we used 3D printing technology and the three-layered sliding blade mechanism to mimic finger flexion and extension, the final exoskeleton weighs 228 g. The size of the main body, including the index, middle, and ring finger mechanisms, is 190 × 85 × 25 mm, and the size of the thumb mechanism is 130 × 17 × 15 mm.
D. Users' performance
Both of our study participants had chronic hand paralysis, were unable to perform ADL independently, and hence required significant assistance for daily living. We found that their flexor digitorum superficialis and extensor digitorum muscles still showed residual EMG activity during volitional intent of finger flexion and extension, even though they could not move their fingers significantly. Both participants were asked to clench their fists for 3 seconds (Fig. 12, gray area) and then relax their hands. Figure 12 shows the forearm muscle activity of these individuals during intended opening and closing of the hand. We used these forearm EMG signals to control the hand exoskeleton.
We then evaluated these users on daily tasks such as self-feeding, operating a key and holding objects of different shapes and sizes. We found that both users were unable to grasp and hold most objects regardless of their size or weight (Table 1). However, when fitted with the robotic exoskeleton, both participants succeeded in holding and operating all the objects, including those that they could hold without the hand exoskeleton (Table 1). Furthermore, our tests indicated that the exoskeleton could assist in performing the four most used grip types: palmar pinch, medium wrap, parallel extension, and lateral pinch.
Similar to the robot developed by Bützer and colleagues [5], our design uses a V-shaped three-layered spring blade mechanism. However, we eliminated the cumbersome cable mechanism to keep the robot compact and lightweight. In addition, we included a versatile mechanism to perform thumb abduction and adduction movements, assisting users in executing the most frequently utilized grasp types. Our design is comparable to other existing hand exoskeletons (Table 2). The mechanical evaluation of the finger mechanism showed that our design can provide a functional range of motion by bending the user's finger up to 91.5 degrees in 3 seconds. The finger mechanism can also produce up to 8 N of fingertip force, which can help the user grasp and lift objects such as keys, a paper cup, a spoon, or a full 500 mL water bottle (Supplementary video 2). We also showed that the exoskeleton presented in this study can assist users, especially individuals with cervical SCI, in daily activities immediately after wearing it; hence, no pre-training is needed.
Table 2. Comparison of recent hand exoskeletons with our exoskeleton, covering actuation mechanism, number of actuators, type of control, range of motion, maximum fingertip force (N) and weight (g); N/A: not available. Our design: EMG sensor control, ROM up to 91.5 degrees, up to 8 N fingertip force, 228 g. RELab tenoexo: ROM up to 105 degrees. Yun Mini: N/A. Hand of Hope: linear DC motor, rigid links. Flexo-glove: tendon-driven, ATmega2560 control, 22 N pinch force and 48 N power grasp force. Bowden-cable glove: 70% of normal hand ROM, Arduino Mega 2560 Rev3 control. Exo-Glove Poly: ~164, microcontroller (TMS320F2808), analog switch. HandMATE: linear actuator, ROM ~190 degrees, Teensy 3.6 microcontroller with a custom Android app, ~2.45.
However, one of the main limitations of our design is the control system. It uses a simple linear envelope of the surface EMG signal, which can vary between users and thus needs individual adjustment. In the future, more advanced EMG classification, such as artificial-intelligence-based approaches, could be implemented to allow more reliable control of the robot's individual fingers. Another limitation may be the fingertip force, which is expected to be 10 N to lift items weighing up to 1 kg [5], whereas the finger mechanism of our robot currently produces up to 8 N.
In this article, we presented a modified design of a lightweight wearable hand exoskeleton to improve the grasping function of patients with hand paralysis. We tested the exoskeleton on two participants with severe hand impairments and evaluated the functionality and usability of the robot in ADL. The results strongly support the robot's ability to restore grasping function and its usability in daily activities.
ADL: activities of daily living; ROM: range of motion; SCI: spinal cord injury; MCP: metacarpophalangeal; PIP: proximal interphalangeal; DIP: distal interphalangeal; CMC: carpometacarpal; MVC: maximum voluntary contraction.
All experimental procedures in the current study were in accordance with the guidelines of, and approved by, the Human Subjects Ethics Sub-committee of The Hong Kong Polytechnic University.
Availability of data and materials
The datasets of the experiments in the current study are available from the first author on request.
This research study was supported by the Hong Kong Polytechnic University (UAKB).
VN and MA designed the study. MP, YP and MA conceived the experiments. VN performed the experiments. MA supervised the project. VN and MA analyzed the data and wrote the manuscript. All authors read and approved the final manuscript.
We would like to thank the study participants for their patience and support.
[1] E. S. Lawrence et al., "Estimates of the Prevalence of Acute Stroke Impairments and Disability in a Multiethnic Population," Stroke, vol. 32, no. 6, pp. 1279-1284, 2001, doi: 10.1161/01.STR.32.6.1279.
[2] A. R. Rabinowitz and H. S. Levin, "Cognitive Sequelae of Traumatic Brain Injury," Psychiatric Clinics of North America, vol. 37, no. 1, pp. 1-11, 2014/03/01/ 2014, doi: 10.1016/j.psc.2013.11.004.
[3] D. G. Kamper, "Restoration of Hand Function in Stroke and Spinal Cord Injury," in Neurorehabilitation Technology, D. J. Reinkensmeyer and V. Dietz Eds. Cham: Springer International Publishing, 2016, pp. 311-331.
[4] K. S. Beekhuizen, "New perspectives on improving upper extremity function after spinal cord injury," (in eng), J Neurol Phys Ther, vol. 29, no. 3, pp. 157-62, Sep 2005, doi: 10.1097/01.npt.0000282248.15911.38.
[5] T. Bützer, O. Lambercy, J. Arata, and R. Gassert, "Fully Wearable Actuated Soft Exoskeleton for Grasping Assistance in Everyday Activities," Soft Robotics, vol. 8, no. 2, pp. 128-143, 2021, doi: 10.1089/soro.2019.0135.
[6] R. A. Bos et al., "A structured overview of trends and technologies used in dynamic hand orthoses," Journal of NeuroEngineering and Rehabilitation, vol. 13, no. 1, p. 62, 2016/06/29 2016, doi: 10.1186/s12984-016-0168-z.
[7] J. Arata, K. Ohmoto, R. Gassert, O. Lambercy, H. Fujimoto, and I. Wada, "A new hand exoskeleton device for rehabilitation using a three-layered sliding spring mechanism," in 2013 IEEE International Conference on Robotics and Automation, 6-10 May 2013 2013, pp. 3902-3907, doi: 10.1109/ICRA.2013.6631126.
[8] Y. Yun et al., "Improvement of hand functions of spinal cord injury patients with electromyography-driven hand exoskeleton: A feasibility study," Wearable Technologies, vol. 1, p. e8, 2020, Art no. e8, doi: 10.1017/wtc.2020.9.
[9] K. Y. Tong et al., "An intention driven hand functions task training robotic system," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, 31 Aug.-4 Sept. 2010 2010, pp. 3406-3409, doi: 10.1109/IEMBS.2010.5627930.
[10] M. R. Islam, C. Spiewak, M. Rahman, and R. Fareh, "A Brief Review on Robotic Exoskeletons for Upper Extremity Rehabilitation to Find the Gap between Research Porotype and Commercial Type," Advances in Robotics & Automation, vol. 06, 01/01 2017, doi: 10.4172/2168-9695.1000177.
[11] M. A. Gull, S. Bai, and T. Bak, "A Review on Design of Upper Limb Exoskeletons," Robotics, vol. 9, no. 1, p. 16, 2020, doi: 10.3390/robotics9010016.
[12] M. Sarac, M. Solazzi, and A. Frisoli, "Design Requirements of Generic Hand Exoskeletons and Survey of Hand Exoskeletons for Rehabilitation, Assistive, or Haptic Use," IEEE Transactions on Haptics, vol. 12, no. 4, pp. 400-413, 2019, doi: 10.1109/TOH.2019.2924881.
[13] B. B. Kang, H. In, and K. Cho, "Modeling of tendon driven soft wearable robot for the finger," in 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 30 Oct.-2 Nov. 2013 2013, pp. 459-460, doi: 10.1109/URAI.2013.6677311.
[14] T. Shahid, D. Gouwanda, S. G. Nurzaman, and A. A. Gopalai, "Moving toward Soft Robotics: A Decade Review of the Design of Hand Exoskeletons," Biomimetics, vol. 3, no. 3, p. 17, 2018, doi: 10.3390/biomimetics3030017.
[15] Y. Shen, P. W. Ferguson, and J. Rosen, "Chapter 1 - Upper Limb Exoskeleton Systems—Overview," in Wearable Robotics, J. Rosen and P. W. Ferguson Eds.: Academic Press, 2020, pp. 1-22.
[16] M. Vergara, J. L. Sancho-Bru, V. Gracia-Ibáñez, and A. Pérez-González, "An introductory study of common grasps used by adults during performance of activities of daily living," (in eng), J Hand Ther, vol. 27, no. 3, pp. 225-33; quiz 234, Jul-Sep 2014, doi: 10.1016/j.jht.2014.04.002.
[17] I. M. Bullock, J. Z. Zheng, S. D. L. Rosa, C. Guertler, and A. M. Dollar, "Grasp Frequency and Usage in Daily Household and Machine Shop Tasks," IEEE Transactions on Haptics, vol. 6, no. 3, pp. 296-308, 2013, doi: 10.1109/TOH.2013.6.
[18] T. Feix, J. Romero, H. B. Schmiedmayer, A. M. Dollar, and D. Kragic, "The GRASP Taxonomy of Human Grasp Types," IEEE Transactions on Human-Machine Systems, vol. 46, no. 1, pp. 66-77, 2016, doi: 10.1109/THMS.2015.2470657.
[19] G. I. Bain, N. Polites, B. G. Higgs, R. J. Heptinstall, and A. M. McGrath, "The functional range of motion of the finger joints," Journal of Hand Surgery (European Volume), vol. 40, no. 4, pp. 406-411, 2015/05/01 2014, doi: 10.1177/1753193414533754.
[20] V. K. Nanayakkara, G. Cotugno, N. Vitzilaios, D. Venetsanos, T. Nanayakkara, and M. N. Sahinkaya, "The Role of Morphology of the Thumb in Anthropomorphic Grasping: A Review," (in English), Frontiers in Mechanical Engineering, Review vol. 3, no. 5, 2017-June-30 2017, doi: 10.3389/fmech.2017.00005.
[21] M. Li et al., "An Attention-Controlled Hand Exoskeleton for the Rehabilitation of Finger Extension and Flexion Using a Rigid-Soft Combined Mechanism," (in English), Frontiers in Neurorobotics, Original Research vol. 13, no. 34, 2019-May-29 2019, doi: 10.3389/fnbot.2019.00034.
[22] B. Buchholz and T. J. Armstrong, "A kinematic model of the human hand to evaluate its prehensile capabilities," Journal of Biomechanics, vol. 25, no. 2, pp. 149-162, 1992/02/01/ 1992, doi: 10.1016/0021-9290(92)90272-3.
[23] S. Mateo, A. Roby-Brami, K. T. Reilly, Y. Rossetti, C. Collet, and G. Rode, "Upper limb kinematics after cervical spinal cord injury: a review," Journal of NeuroEngineering and Rehabilitation, vol. 12, no. 1, p. 9, 2015/01/30 2015, doi: 10.1186/1743-0003-12-9.
[24] A. Mohammadi, J. Lavranos, P. Choong, and D. Oetomo, "Flexo-glove: A 3D Printed Soft Exoskeleton Robotic Glove for Impaired Hand Rehabilitation and Assistance," in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 18-21 July 2018 2018, pp. 2120-2123, doi: 10.1109/EMBC.2018.8512617.
[25] L. Randazzo, I. Iturrate, S. Perdikis, and J. d. R. Millán, "mano: A Wearable Hand Exoskeleton for Activities of Daily Living and Neurorehabilitation," IEEE Robotics and Automation Letters, vol. 3, no. 1, pp. 500-507, 2018, doi: 10.1109/LRA.2017.2771329.
[26] B. B. Kang, H. Choi, H. Lee, and K.-J. Cho, "Exo-Glove Poly II: A Polymer-Based Soft Wearable Robot for the Hand with a Tendon-Driven Actuation System," Soft Robotics, vol. 6, no. 2, pp. 214-227, 2019, doi: 10.1089/soro.2018.0006.
[27] M. Sandison et al., "HandMATE: Wearable Robotic Hand Exoskeleton and Integrated Android App for At Home Stroke Rehabilitation," in 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 20-24 July 2020 2020, pp. 4867-4872, doi: 10.1109/EMBC44109.2020.9175332.
Predicting mammalian hosts in which novel coronaviruses can be generated
Maya Wardeh ORCID: orcid.org/0000-0002-2316-54601,2,
Matthew Baylis ORCID: orcid.org/0000-0003-0335-187X1,3 &
Marcus S. C. Blagrove ORCID: orcid.org/0000-0002-7510-167X4
Nature Communications volume 12, Article number: 780 (2021) Cite this article
Subjects: Ecological epidemiology, Viral reservoirs
Matters Arising to this article was published on 12 September 2022
Novel pathogenic coronaviruses – such as SARS-CoV and probably SARS-CoV-2 – arise by homologous recombination between co-infecting viruses in a single cell. Identifying possible sources of novel coronaviruses therefore requires identifying hosts of multiple coronaviruses; however, most coronavirus-host interactions remain unknown. Here, by deploying a meta-ensemble of similarity learners from three complementary perspectives (viral, mammalian and network), we predict which mammals are hosts of multiple coronaviruses. We predict that there are 11.5-fold more coronavirus-host associations, over 30-fold more potential SARS-CoV-2 recombination hosts, and over 40-fold more host species with four or more different subgenera of coronaviruses than have been observed to date at >0.5 mean probability cut-off (2.4-, 4.25- and 9-fold, respectively, at >0.9821). Our results demonstrate the large underappreciation of the potential scale of novel coronavirus generation in wild and domesticated animals. We identify high-risk species for coronavirus surveillance.
The generation and emergence of three novel respiratory coronaviruses from mammalian reservoirs into human populations in the last 20 years, including one which has achieved pandemic status, suggests that one of the most pressing current research questions is: in which reservoirs could the next novel coronaviruses be generated, and from which could they emerge, in the future? Armed with this knowledge, we may be able to reduce the chance of emergence into human populations, for example through strict monitoring and enforced separation of the identified hosts in live animal markets, farms, and other close-quarters environments; or we may be able to develop potential mitigations in advance.
Coronaviridae are a family of positive-sense RNA viruses, which can cause an array of diseases. In humans, these range from mild cold-like illnesses to lethal respiratory tract infections. Seven coronaviruses are known to infect humans1: SARS-CoV, MERS-CoV and SARS-CoV-2 cause severe disease, while HKU1, NL63, OC43 and 229E tend towards milder symptoms in most patients2.
Coronaviruses undergo frequent host-shifting events between non-human animal species, or non-human animals and humans3,4,5, a process that may involve changes to the cells or tissues that the viruses infect (virus tropism). Such shifts have resulted in new animal diseases (such as bovine coronavirus disease6 and canine coronavirus disease7), and human diseases (such as OC438 and 229E9). The aetiological agent of COVID-19, SARS-CoV-2, is proposed to have originated in bats10 and shifted to humans via an intermediate reservoir host, likely a species of pangolin11.
Comparison of the genetic sequences of bat and human coronaviruses has revealed five potentially important genetic regions involved in host specificity and shifting, with the Spike receptor binding domain believed to be the most important3,12. Homologous recombination is a natural process, which brings together new combinations of genetic material, and hence new viral strains, from two similar non-identical parent strains of virus. This recombination occurs when different strains co-infect an individual animal, with sequences from each parent strain in the genetic make-up of progeny virus. Homologous recombination has previously been demonstrated in many important viruses such as human immunodeficiency virus (HIV)13, classical swine fever virus14 and throughout the Coronaviridae12,15. Homologous recombination in Spike has been implicated in the generation of SARS-CoV-215, although investigations are still ongoing.
As well as instigating host-shifting, homologous recombination in other regions of the virus genome could also introduce novel phenotypes to coronavirus strains already infectious to humans. There are at least seven potential regions for homologous recombination in the replicase and Spike regions of the SARS-CoV genome alone, with possible recombination partner viruses from a range of other mammalian and human coronaviruses16. Recombination events between two compatible partner strains in a shared host could thus lead to future novel coronaviruses, either by enabling pre-existing mammalian strains to infect humans, or by adding new phenotypes arising from different alleles to pre-existing human-affecting strains.
The most fundamental requirement for homologous recombination to take place is the co-infection of a single host with multiple coronaviruses. However, our understanding of which hosts are permissive to which coronaviruses, the prerequisite to identifying which hosts are potential sites for this recombination (henceforth termed 'recombination hosts'), remains extremely limited. Here, we utilise a similarity-based machine-learning pipeline to address this significant knowledge gap. Our approach predicts associations between coronaviruses and their potential mammalian hosts by integrating three perspectives or points of view encompassing: (1) genomic features depicting different aspects of coronaviruses (e.g., secondary structure, codon usage bias) extracted from complete genomes (sequences = 3271, virus strains = 411); (2) ecological, phylogenetic and geospatial traits of potential mammalian hosts (n = 876); and (3) characteristics of the network that describes the linkage of coronaviruses to their observed hosts, which expresses our current knowledge of sharing of coronaviruses between various hosts and host groups.
Topological features of ecological networks have been successfully utilised to enhance our understanding of pathogen sharing17,18, disease emergence and spill-over events19, and as means to predict missing links in host–pathogen networks20,21,22. Here, we capture this topology, and relations between coronaviruses and hosts in our network, by means of node (coronaviruses and hosts) embeddings using DeepWalk23, a deep learning method that has been successfully used to predict drug-target24 and lncRNA-disease associations25.
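A minimal sketch of DeepWalk-style embedding on a toy virus–host network is given below, using networkx and the skip-gram model from gensim. The walk counts, walk length and embedding dimension are illustrative assumptions, not the hyperparameters used in this study.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def deepwalk_embeddings(graph, walks_per_node=10, walk_length=20, dim=64):
    """Learn node embeddings from truncated random walks over the
    virus-host network, fed to a skip-gram model (the DeepWalk recipe)."""
    walks = []
    nodes = list(graph.nodes())
    for _ in range(walks_per_node):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbours = list(graph.neighbors(walk[-1]))
                if not neighbours:
                    break
                walk.append(random.choice(neighbours))
            walks.append([str(n) for n in walk])
    model = Word2Vec(walks, vector_size=dim, window=5, sg=1,
                     min_count=1, workers=1, seed=42)
    return {n: model.wv[str(n)] for n in graph.nodes()}

# Toy bipartite network of observed virus-host associations
G = nx.Graph()
G.add_edges_from([("SARS-CoV-2", "Rhinolophus affinis"),
                  ("SARS-CoV-2", "Manis javanica"),
                  ("SARS-CoV", "Paradoxurus hermaphroditus"),
                  ("SARS-CoV", "Rhinolophus ferrumequinum")])
emb = deepwalk_embeddings(G)
print(len(emb), "nodes embedded, dimension", len(emb["SARS-CoV-2"]))
```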
Our pipeline transforms the above features into similarities (between viruses and between hosts) and uses them to give scores to virus–mammal associations indicating how likely they are to occur. Our framework then ensembles its constituent learners to produce testable predictions of mammalian hosts of multiple coronaviruses, in order to answer the following questions: (1) which species may be unidentified mammalian reservoirs of coronaviruses? (2) What are the most probable mammalian host species in which coronavirus homologous recombination could occur? And (3) which coronaviruses are most likely to co-infect hosts, and thus act as sources for future novel viruses?
In the following work, we deploy a meta-ensemble of similarity learners from the three complementary perspectives (viral, mammalian and network) and use it to predict which mammals are hosts of multiple coronaviruses. Using this pipeline, we demonstrate that there is currently a large underappreciation of the potential scale of novel coronavirus generation in wild and domesticated animals. Specifically, we predict there are 11.5-fold more coronavirus–host associations, over 30-fold more potential SARS-CoV-2 recombination hosts, and over 40-fold more host species with four or more different subgenera of coronaviruses than have been observed to date at >0.5 mean probability cut-off (2.4-, 4.25- and 9-fold, respectively, at >0.9821). We use these data to identify potential high-risk species, which we recommend for coronavirus surveillance.
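To illustrate the final scoring step, the sketch below trains a gradient boosted model on synthetic similarity features for virus–mammal pairs and applies the probability cut-offs used throughout this paper. The features, labels and model settings are stand-ins for the real meta-ensemble of similarity learners.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each row is a candidate virus-mammal pair; the three columns stand in for
# similarity scores from the viral, mammalian and network perspectives.
rng = np.random.default_rng(42)
X_train = rng.random((500, 3))
y_train = (X_train.mean(axis=1) + rng.normal(0, 0.1, 500) > 0.5).astype(int)

gbm = GradientBoostingClassifier(n_estimators=100, random_state=42)
gbm.fit(X_train, y_train)

# Score unobserved pairs and count predictions above each probability cut-off
X_unknown = rng.random((1000, 3))
probs = gbm.predict_proba(X_unknown)[:, 1]
for cutoff in (0.5, 0.75, 0.9821):
    print(f"cut-off > {cutoff}: {int((probs > cutoff).sum())} predicted associations")
```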
Predicted recombination hosts of SARS-CoV-2
Our pipeline to predict associations between coronaviruses and their mammalian hosts indicated a total of 126 non-human species in which SARS-CoV-2 could be found (mean probability cut-off > 0.5); subtracting (adding) the standard deviation (SD) from the mean gives 85 (169) predicted hosts. For simplicity, we hereafter report SD as −/+ from the predicted values at the reported probability cut-offs, here: SD = −41/+43. The numbers of predicted SARS-CoV-2 associations at cut-offs >0.75 and ≥0.9821 were 103 (−40/+141) and 17 (−8/+126), respectively. The breakdown of these hosts by order is shown in Table 1. Figure 1 illustrates these predicted hosts, the probability of their association with SARS-CoV-2, and the numbers of known and unobserved (predicted) coronaviruses that could be found in each potential reservoir of SARS-CoV-2 (Supplementary Data 1 lists the full predictions).
Table 1 Observed and predicted number of hosts of SARS-CoV-2 (by mammalian order), and observed and predicted number of hosts with ten or more coronaviruses (from our set of 411 species or strains).
Fig. 1: Model predictions for potential hosts of SARS-Cov-2.
Predicted hosts are grouped by order (inner circle). Middle circle presents probability of association between host and SARS-CoV-2 (grey scale indicates predicted associations with probability in range > 0.5 to ≤0.75. Red scale indicates predicted associations with probability in range > 0.75 to <0.9821. Blue to purple scale present indicates associations with probability ≥ 0.9821). Yellow bars represent number of coronaviruses (species or strains) observed to be found in each host. Blue stacked bars represent other coronaviruses predicted to be found in each host by our model. Predicted coronaviruses per host are grouped by prediction probability into three categories (from inside to outside): ≥0.9821, >0.75 to <0.9821 and >0.5 to ≤0.75. Results for humans and lab rodents are not shown to prevent the scale from contracting and making other comparisons difficult. Supplementary Fig. 14 illustrates full results including these hosts. Full results are listed in Supplementary Data 1.
Summary of predictions for all coronaviruses
Overall, our pipeline predicted 4438 (SD = −1903/+2256, cut-off > 0.5) previously unobserved associations that potentially exist between 300 (SD = 0/+3) mammals and 204 coronaviruses (species or strains, SD = −60/+13). The number of unobserved associations at probability cut-offs >0.75 and ≥0.9821 was: 3087 (−1747/+2391) between 300 (−16/+0) mammals and 181 (−127/+26) coronaviruses and 601 (−412/+3723) between 224 (−91/+76) mammals and 31 (−7/+171) coronaviruses, respectively. Our model predicts there are 115 (0/+3) [115 (−4/+0), 96 (−31/+19), at cut-offs > 0.75 and ≥0.9821] mammalian species with no previously observed associations with the 411 input viruses (hereafter we display results derived from >0.5 cut-off; results obtained at >0.75 and ≥0.9821 cut-offs are presented in square brackets).
On average, each coronavirus (species or strain, complete genome available, n = 411) is predicted to have 12.56 (−4.92/+5.83) mammalian hosts [9.06 (−4.51/+6.18); 2.64 (−1.06/+9.62)]. Similarly, each mammalian species (n = 876, known hosts = 185, predicted hosts = 300 (−0/+3) [300 (−4/+0); 281 (−0/+19)]) is host to, on average, 5.55 (−2.17/+2.58) coronaviruses [9.06 (−4.51/+6.18); 1.17 (−0.47/+4.25)]. Supplementary Data 2 and 3 provide results for coronaviruses and mammalian hosts, respectively.
Figure 2 presents 50 potential mammalian recombination hosts of coronaviruses. Our model predicts 231 (−115/+58) [140 (−104/+128); 13 (−7/+217)] mammalian species (excluding humans and lab rodents) that could host 10 or more of the 411 coronavirus species or strains for which complete genome sequences were available. The breakdowns of these hosts by order are shown in Table 1.
Fig. 2: Observed and predicted mammalian hosts for coronaviruses.
Columns present mammalian hosts in four categories: Artiodactyla and Perissodactyla (top 10 hosts by number of predicted coronaviruses that could be found in each host), Carnivora (top 15 hosts), Chiroptera (top 15 hosts), Rodentia (top 5 hosts) and others (top 5 hosts). Rows present viruses ordered into five taxonomic groups: alphacoronaviruses, betacoronaviruses, deltacoronaviruses, gammacoronaviruses and unclassified Coronavirinae. Yellow cells represent observed associations between the host and the coronavirus. Grey/red/blue cells indicate the probability of predicted associations in three increasing probability ranges. White cells indicate no known or predicted association between host and virus (beneath cut-off probability of 0.5). Supplementary Data 4 lists full results. These results exclude humans and lab rodents. Supplementary Data 5 lists predictions for humans. Supplementary Fig. 15 illustrates full results including these hosts.
Coronavirus–mammalian networks
The addition of predicted associations increased the diversity (mean phylogenetic distance) of mammalian hosts per coronavirus, as well as the diversity (mean genetic distance) of coronaviruses per mammalian host (Table 2 lists these changes). Furthermore, we captured the changes in the structure of the bipartite network linking coronaviruses with their mammalian hosts (Fig. 3). On the one hand, the nestedness of the network increased (ranging from 4.06-fold at the 0.9821 cut-off to 10.17-fold at the 0.50 cut-off, Table 2). On the other hand, the non-independence (checkerboard score, C-score) of coronaviruses and mammalian hosts decreased with the addition of new links. Larger values of the C-score suggest that viral and host communities have little or no overlap in host or virus preferences (e.g., tendencies of coronavirus types to be clustered amongst certain host communities), as visualised in Fig. 3.
Table 2 Bipartite network metrics calculated for original and predicted networks at three probability cut-offs: ≥0.9821, >0.75, and >0.50.
Fig. 3: Bipartite networks linking coronaviruses with mammalian hosts.
Panel (A): original bipartite network based on known/observed virus–host associations extracted from meta-data accompanying genomic sequences and supplemented with publications data from the ENHanCEd Infectious Diseases Database (EID2). Panels (B–D) show predicted bipartite networks using our predicted virus–host associations at different cut-offs: ≥0.9821, >0.75 and >0.5, respectively, for mean probability of associations.
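One common formulation of the checkerboard score used in such analyses is sketched below; the toy virus-by-host matrix is illustrative only.

```python
import numpy as np
from itertools import combinations

def c_score(presence):
    """Mean checkerboard score over all pairs of rows (viruses) of a
    binary presence/absence matrix (rows: viruses, columns: hosts)."""
    scores = []
    for i, j in combinations(range(presence.shape[0]), 2):
        shared = int(np.sum(presence[i] & presence[j]))  # hosts carrying both
        ri, rj = int(presence[i].sum()), int(presence[j].sum())
        scores.append((ri - shared) * (rj - shared))
    return float(np.mean(scores))

# Toy virus-by-host matrix: 1 where the virus has been found in the host
M = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print("C-score:", c_score(M))  # higher = less overlap in host preferences
```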
We validated our analytical pipeline externally against 20 held-out test sets (as described in the Methods section below). On average, our GBM ensemble achieved AUC = 0.948 (±0.029 SD), 0.944 (±0.024) and 0.843 (±0.045); true skill statistic (TSS) = 0.832 (±0.057), 0.887 (±0.048) and 0.687 (±0.091); and F-score = 0.102 (±0.049), 0.141 (±0.055) and 0.283 (±0.062), at probability cut-offs >0.5, >0.75 and ≥0.9821, respectively (Supplementary Figs. 7–12).
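These metrics can be reproduced from predicted probabilities and held-out labels as sketched below; the toy test set is illustrative only, with the true skill statistic computed as sensitivity + specificity - 1.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def tss(y_true, y_prob, cutoff):
    """True skill statistic at a given probability cut-off."""
    y_pred = (np.asarray(y_prob) > cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) + tn / (tn + fp) - 1

# Toy held-out test set: true labels and predicted association probabilities
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])
y_prob = np.array([0.91, 0.2, 0.85, 0.6, 0.4, 0.1, 0.99, 0.55, 0.3, 0.05])

print("AUC:", roc_auc_score(y_true, y_prob))
for c in (0.5, 0.75):
    y_pred = (y_prob > c).astype(int)
    print(f"cut-off > {c}: TSS = {tss(y_true, y_prob, c):.3f}, "
          f"F-score = {f1_score(y_true, y_pred):.3f}")
```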
In this study, we deployed a meta-ensemble of similarity learners from three complementary perspectives (viral, mammalian and network) to predict the occurrence of associations between 411 known coronaviruses and 876 mammal species. We predict an 11.54-fold increase in virus–host associations at a prediction cut-off of >0.5 [8.33-fold and 2.43-fold at cut-offs of >0.75 and ≥0.9821, respectively; cut-offs are presented in this format hereafter], leading to the prediction that there are many more mammalian species than currently known in which more than one coronavirus can occur. These hosts of multiple coronaviruses are potential sources of new coronavirus strains by homologous recombination. Here, we discuss the large number of candidate hosts in which homologous recombination of coronaviruses could result in the generation of novel pathogenic strains, as well as the substantial underestimation of the range of viruses which could recombine based on observed data. Our results are also discussed in terms of which host species are high-priority targets for surveillance, both short and long term.
Given that coronaviruses frequently undergo homologous recombination when they co-infect a host, and that SARS-CoV-2 is highly infectious to humans, the most immediate threat to public health is recombination of other coronaviruses with SARS-CoV-2. Such recombination could readily produce further novel viruses with both the infectivity of SARS-CoV-2 and additional pathogenicity or viral tropism from elsewhere in the Coronaviridae (see Supplementary Data 1 and 3 for a comprehensive list of mammals predicted to be hosts of SARS-CoV-2 as well as several other coronaviruses).
Taking only observed data, there are four non-human mammalian hosts known to associate with SARS-CoV-2 and at least one other coronavirus, and a total of 504 different unique interactions between SARS-CoV-2 and other coronaviruses (counting all combinations of virus and host individually). Any of these SARS-CoV-2 hosts that are also hosts of other coronaviruses are potential recombination hosts in which novel coronaviruses derived from SARS-CoV-2 could be generated in the future. However, when we add our model's predicted interactions, this becomes 126 SARS-CoV-2 hosts and 2544 total unique interactions [103 hosts and 1898 unique interactions; 17 hosts and 563 interactions], indicating that the predicted number of recombination hosts is 31.5-fold [25.75-fold; 4.25-fold] higher, and the predicted number of unique associations 5.05-fold [3.77-fold; 1.12-fold] higher, than observed. These large fold increases demonstrate that the potential for homologous recombination between SARS-CoV-2 and other coronaviruses, which could lead to new pathogenic strains, is highly underestimated, both in terms of the range of hosts and the number of interactions within known hosts.
Our model has successfully highlighted known important recombination hosts of coronaviruses, adding confidence to our methodology. The Asian palm civet (Paradoxurus hermaphroditus), a viverrid native to south and southeast Asia, was predicted by our model as a potential host of 32 [26; 10] different coronaviruses (in addition to SARS-CoV-2) (vs. 6 observed). Genetic evolution analysis has shown that SARS-CoV-2 is closely related to coronaviruses derived from P. hermaphroditus26 and has also highlighted its role as a reservoir for SARS-CoV27, strongly supporting our findings that it is an important host in coronavirus recombination. This, together with the close association of P. hermaphroditus26 with humans, for example, via bushmeat and the pet trade28 and in 'battery cages' for the production of Kopi luwak coffee, highlights both the ability and opportunity of this species to act as a recombination host, with significantly more coronaviruses than have been observed. Furthermore, our model highlights both the greater horseshoe bat (Rhinolophus ferrumequinum), which is a known recombination host of SARS-CoV29,30, as well as the intermediate horseshoe bat (Rhinolophus affinis), which is believed to be recombination host of SARS-CoV-210,31. Our model predicts R. ferrumequinum to be a host to 68 [47; 19] different coronaviruses (including SARS-CoV-2) (vs. 13 observed); and for R. affinis to host 45 [32; 14] (vs. 9 observed). Our model also highlights the pangolin (Manis javanica), a suspected intermediate host for SARS-CoV-211 as a predicted host of an additional 14 [11; 2] different coronaviruses (vs. 1 observed).
The successful highlighting of speculated hosts for SARS-CoV and SARS-CoV-2 homologous recombination adds substantial confidence that our model is identifying the most important potential recombination hosts. Furthermore, our results suggest that the number of viruses that could potentially recombine even within these known hosts has been significantly under-ascertained, indicating that significant potential remains for further novel coronavirus generation in the future from currently known recombination hosts.
Our pipeline also identifies a diverse range of species not yet associated with SARS-CoV-2 recombination, but which are predicted to host both SARS-CoV-2 and other coronaviruses. These hosts represent new targets for surveillance of novel human pathogenic coronaviruses. Amongst the highest priority is the lesser Asiatic yellow bat (Scotophilus kuhlii), a known coronavirus host32, common in east Asia but not well studied, which features prominently with a large number of predicted interactions (48 [29; 12]). Our results also implicate the common hedgehog (Erinaceus europaeus), the European rabbit (Oryctolagus cuniculus) and the domestic cat (Felis catus) as predicted hosts for SARS-CoV-2 (confirmed for the cat33) and for large numbers of other coronaviruses (20, 23 and 65 [19, 18, 48; 7, 9, 24] for the hedgehog, rabbit and cat, respectively). The hedgehog and rabbit have previously been confirmed as hosts for other betacoronaviruses34,35, which have no appreciable significance to human health. Our prediction of these species' potential interaction with SARS-CoV-2 and considerable numbers of other coronaviruses, as well as the latter three species' close association with humans, identifies them as high-priority, underestimated risks. In addition to these human-associated species, both the chimpanzee (Pan troglodytes) and the African green monkey (Chlorocebus aethiops) have large numbers of predicted associations (51 and 46 [47, 22; 3, 4]), and given their relatedness to humans and their importance in the emergence of viruses such as DENV36 and HIV37, they also serve as high-priority species for surveillance.
The most prominent result for a SARS-CoV-2 recombination host is the domestic pig (Sus scrofa), which has the most predicted associations of all included non-human mammals (121 [95; 38] additional coronaviruses). The pig is a major known mammalian coronavirus host, harbouring both a large number (26) of observed coronaviruses and a wide diversity of them (listed in Supplementary Data 4). Given the large number of predicted viral associations presented here, the pig's close association with humans, its known reservoir status for many other zoonotic viruses and its involvement in genetic recombination of some of these viruses38, the pig is predicted to be one of the foremost candidates for an important recombination host.
As an example of using our model to anticipate likely future viral homologous recombination events, Banerjee et al.39 bioinformatically identified potential genomic regions of homologous recombination between MERS-CoV and SARS-CoV-2. They highlighted a significant risk of the highly human-to-human transmissible SARS-CoV-2 acquiring the considerably more pathogenic (i.e., in terms of case-fatality rate) phenotypes of MERS-CoV. The work presented here identifies 102 [75; 4] potential recombination hosts (excluding humans and laboratory rodents) of the two viruses. Together, our work and that of Banerjee et al.39 provide evidence for the possible production of a potentially severe future recombinant coronavirus and identify the hosts in which this threat is most likely to be generated (see Supplementary Data 6). We recommend monitoring for this event.
Alongside the more immediate threat of homologous recombination directly with SARS-CoV-2, we also present our predicted associations between all mammals and all coronaviruses. These associations represent the longer-term potential for background viral evolution via homologous recombination in all species. These data also indicate an 11.54-fold underestimation in the number of associations, with 421 observed associations and 4438 predicted [3087 (8.33-fold); 601 (2.43-fold)]. This is visually represented in Fig. 3, which illustrates the bipartite network of virus and host for observed associations (A) and predicted associations (B–D), with a marked increase in connectivity between mammalian hosts and coronaviruses, even at the most stringent probability cut-off. This indicates that the potential for homologous recombination between coronaviruses is substantially underestimated when using observed data alone.
Furthermore, our model predicts that associations between more diverse coronaviruses are also underestimated; for example, the number of included host species with four or more different subgenera of coronaviruses increases 41.57-fold, from 7 observed to 291 predicted [39.57-fold, 277; 9.00-fold, 63] (Table 2 shows the degree of diversity of coronaviruses in the mammalian host species highlighted in Fig. 2). The high degree of potential co-infection across different subgenera and genera seen in our results emphasises the level of new genetic diversity possible via homologous recombination in these host species. A similar array of host species is highlighted for total associations as for SARS-CoV-2 potential recombination hosts, including the common pig, the lesser Asiatic yellow bat, and both the greater and intermediate horseshoe bats; a notable addition is the dromedary camel (Camelus dromedarius). The camel is a known host of multiple coronaviruses and the primary route of transmission of MERS-CoV to humans40. Our results suggest that monitoring for background viral evolution via homologous recombination would focus on a similar array of hosts, with a few additions, as monitoring for SARS-CoV-2 recombination. Again, our results strongly suggest that the potential array of viruses that could recombine in hosts is substantially underestimated, reinforcing the message that continued monitoring is essential.
Methodologically, the novelty of our approach lies in integrating three points of view: that of the coronaviruses, that of their potential mammalian hosts and that of the network summarising our knowledge to date of the sharing of coronaviruses among their hosts. Additionally, the incorporation of similarity-based learners in our three-perspective approach enabled us to capture new hosts (i.e., those with no known association with coronaviruses), thus avoiding a main limitation of approaches that rely only on networks and their topology. By constructing a comprehensive set of similarity learners in each point of view and combining these learners non-linearly (via a GBM meta-ensemble), our analytical pipeline is able to predict potential recombination hosts of coronaviruses without any prerequisite knowledge or assumptions. Our method makes no assumptions about which parts of the coronavirus or host genomes are important, does not require the integration of receptor (e.g., ACE2) information, and does not focus on particular groups of hosts (e.g., bats or primates). This 'no-preconceptions' approach enables us to analyse without being restricted by our current incomplete knowledge of the specific biological and molecular mechanisms that govern host–virus permissibility. Current restrictions include the lack of sequencing, annotation and expression analysis of receptors (e.g., ACE2) in the vast majority of hosts, uncertainty over the receptor(s) utilised by many coronaviruses, and incomplete knowledge of the other factors leading to successful replication once the virus has entered the host cell. Whilst some of these details are known for a very limited number of well-studied hosts and coronaviruses, they are not for the vast majority; consequently, a study aiming for breadth of understanding across all mammalian hosts and coronaviruses is unable to utilise these limited data. Although our 'no-preconceptions' approach has this distinct advantage, it also limits the predictions. As discussed in the next section, our predictions are consequently reliant on a more restricted set of information owing to the breadth of the work. Where some data are available for a small subset of coronaviruses or their hosts (e.g., pathogenicity, virus titre), these data are not usable in this study as they do not exist for the vast majority of hosts/viruses.
We acknowledge certain limitations in our methodology, primarily pertaining to the currently incomplete data sets in this rapidly developing but still understudied field:
The inclusion only of coronaviruses for which complete genomes could be found limited the number of coronaviruses (species or strains) for which we could compute meaningful similarities and therefore predict potential hosts. The same applies to our mammalian species: we only included mammalian hosts for which phylogenetic, ecological and geospatial data were available. As more data on sequenced coronaviruses or mammals become available in the future, our model can be re-run to further improve predictions and to validate predictions from earlier iterations.
Virological knowledge of understudied coronaviruses and their host interactions is limited. For the vast majority of observed virus–host associations, it is unknown whether these hosts are natural, intermediate or 'dead-end' hosts. Also unknown, for the overwhelming majority of associations, are the more clinical traits of the infections, such as pathogenicity, likelihood of infection, virus titre during infection and duration of infection. Knowledge of all of these factors would greatly add to our ability to assess the likelihood of homologous recombination; however, the available data are too limited for a study with the breadth of interactions we characterise here, and hence could not be included.
Research effort, centring mainly on coronaviruses found in humans and their domesticated animals, can lead to overestimation of the potential of coronaviruses to recombine in frequently studied mammals, such as lab rodents (which, as in previous work17, were excluded from the results reported here) and, significantly, the domesticated pigs and cats that we have found to be important recombination host species of coronaviruses. We believe that this limitation is partially mitigated: first, methodologically, the effect of research effort has been limited by capturing similarities from our three points of view (virus, host and network) and multiple characteristics therein. Second, this mitigation is evident in our results: other 'overstudied' mammals, such as cows and sheep, were not highlighted by our model, consistent with their being considered less important hosts of coronaviruses, whereas certain understudied bats were highlighted as major potential hosts. Together, these observations indicate that research effort is not a substantial driver of our results.
Recent testing of potential mammalian hosts for their susceptibility to SARS-CoV-2 has confirmed a number of our predictions, for example: Nyctereutes procyonoides41,42; Bovines (e.g., Bison bonasus, Bos taurus, Bos indicus, Bubalus bubalis), Capra hircus, Equus caballus, Lama (Vicugna) pacos, Manis javanica, Oryctolagus cuniculus, Panthera leo, Rousettus leschenaultii, Sus scrofa and Vulpes vulpes42; Chlorocebus aethiops, Neovison vison, Macaca mulatta and Rousettus aegyptiacus43. While limited in number, these post hoc confirmations add confidence to our framework and its predictions. As more host screening is performed in future, it will enable further evaluation of our predictions.
To follow on from this work, we are investigating coronavirus–host interactions in two separate directions. The first is to expand our host range to include avian species, thereby covering the full range of important coronavirus hosts, and to inform our model with a species-level contact network for all hosts (indicating the likelihood of a direct interaction). This will give a broader overview of potential coronavirus associations. Second, we are focusing our predictions on a subset of clinically important associations to study in more depth. This will allow us to utilise more specific information, such as receptor data and clinical data on viraemia, which are currently available only for well-studied interactions.
In this study, we provide evidence that the potential for homologous recombination in mammalian hosts of coronaviruses is highly underestimated. The ability of the large number of hosts presented here to harbour multiple coronaviruses, including SARS-CoV-2, could provide the capacity for homologous recombination and hence the potential production of further novel coronaviruses. Our methods deployed a meta-ensemble of similarity learners from three complementary perspectives (viral, mammalian and network) to predict each potential coronavirus–mammal association.
The current consensus is that SARS-CoV-2 was generated by homologous recombination; it was originally derived from coronaviruses in bats10 and then shifted to humans via an intermediate reservoir host, likely a species of pangolin11. Importantly, the lineage of SARS-CoV-2 was deduced only after the outbreak in humans. With the greater understanding of the extent of mammalian host reservoirs and the potential recombination hosts we identify here, a targeted surveillance programme is now possible that would allow such generation to be observed as it happens, before a major outbreak. Such information could help inform prevention and mitigation strategies and provide a vital early-warning system for future novel coronaviruses.
Viruses and mammalian data
Viral genomic data
Complete sequences of coronaviruses were downloaded from GenBank44. Sequences labelled with the terms 'vaccine', 'construct', 'vector' or 'recombinant' were removed from the analyses. In addition, we removed those associated with experimental infections where possible. This resulted in a total of 3264 sequences for 411 coronavirus species or strains (i.e., viruses below species level on the NCBI taxonomy tree). Of those, 88 were sequences of coronavirus species and 307 were sequences of strains (in 25 coronavirus species, with the total number of species included = 92). Of our included species, six in total were unclassified Coronavirinae (unclassified coronaviruses).
Selection of potential mammalian hosts of coronaviruses
We processed the metadata accompanying all sequences (including partial sequences but excluding vaccination and experimental infections) of coronaviruses uploaded to GenBank to extract information on the hosts (to species level) of these coronaviruses. We supplemented these data with species-level hosts of coronaviruses extracted from scientific publications via the ENHanCEd Infectious Diseases Database (EID2)45. This resulted in the identification of 313 known terrestrial mammalian hosts of coronaviruses (regardless of whether a complete genome was available; n = 185 mammalian species for which an association with a coronavirus with a complete genome was identified). We expanded this set of potential hosts by including terrestrial mammalian species in genera containing at least one known host of a coronavirus and known to host one or more other virus species (excluding coronaviruses; information on whether the host is associated with a virus was obtained from EID2). This resulted in a total of 876 mammalian species being selected.
Quantification of viral similarities
We computed three types of similarities between each pair of viral genomes, as summarised below.
Biases and codon usage
We calculated the proportion of each nucleotide in the total coding sequence length. We computed dinucleotide and codon biases46 and codon-pair bias, measured as the codon-pair score46,47, in each of the above sequences. This enabled us to produce, for each genome sequence (n = 3264), the following feature vectors: nucleotide bias, dinucleotide bias, codon bias and codon-pair bias.
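As an illustration of these feature vectors, here is a minimal Python sketch computing nucleotide and dinucleotide biases for one coding sequence (the exact bias conventions follow refs. 46,47; the observed/expected ratio used below is one common convention and is our assumption, not the authors' code):

```python
from collections import Counter
from itertools import product

def nucleotide_bias(seq):
    """Proportion of each nucleotide in the coding sequence."""
    counts = Counter(seq.upper())
    total = sum(counts[b] for b in "ACGT")
    return {b: counts[b] / total for b in "ACGT"}

def dinucleotide_bias(seq):
    """Observed/expected ratio for each overlapping dinucleotide."""
    seq = seq.upper()
    mono = nucleotide_bias(seq)  # assumes all four bases occur in seq
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1)
                    if set(seq[i:i + 2]) <= set("ACGT"))
    total = sum(pairs.values())
    return {a + b: (pairs[a + b] / total) / (mono[a] * mono[b])
            for a, b in product("ACGT", repeat=2)}
```

Codon and codon-pair biases extend the same counting logic to in-frame windows of length 3 and 6.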
Secondary structure
Following alignment of the sequences (using the AlignSeqs function in the R package DECIPHER48), we predicted the secondary structure of each sequence using the PredictHEC function in the R package DECIPHER48. We obtained both the states (final prediction) and the probabilities of the secondary structures for each sequence. We then computed, for each 1% of the genome length, both the coverage (the number of times a structure was predicted) and the mean probability of the structure (within the per cent of the genome considered). This enabled us to generate six vectors (length = 100) for each genome, representing the mean probability and the coverage for each of the three possible structures: Helix (H), Beta-Sheet (E) or Coil (C).
Genome dissimilarity (distance)
We calculated the pairwise dissimilarity (in effect a Hamming distance) between each pair of sequences in our set using the function DistanceMatrix in the R package DECIPHER48. We set this function to penalise gap-to-gap and gap-to-letter mismatches.
Similarity quantification
We transformed the feature (trait) vectors described above into similarity matrices between coronaviruses (species or strains). This was achieved by computing the cosine similarity between these vectors in each category (e.g., codon-pair usage, H coverage, E probability). Formally, for each genomic feature (n = 10) represented by a vector as described above, this similarity was calculated as follows:
$${\mathrm{sim}}_{{\mathrm{genomic}}_{\mathrm{l}}}\left( {s_m,s_n} \right) = {\mathrm{sim}}_{{\mathrm{genomic}}_{\mathrm{l}}}\left( {{\mathbf{V}}_m^{f_l},{\mathbf{V}}_n^{f_l}} \right) = \frac{{\mathop {\sum}\nolimits_{i = 1}^d {\left( {{\mathbf{V}}_m^{f_l}[i] \times {\mathbf{V}}_n^{f_l}[i]} \right)} }}{{\sqrt {\mathop {\sum}\nolimits_{i = 1}^d {{\mathbf{V}}_m^{f_l}[i]^2} } \times \sqrt {\mathop {\sum}\nolimits_{i = 1}^d {{\mathbf{V}}_n^{f_l}[i]^2} } }}$$
where sm and sn are two sequences represented by the two feature vectors \({\mathbf{V}}_m^{f_l}\) and \({\mathbf{V}}_n^{f_l}\) from the genomic feature space fl (e.g., codon-pair bias) of dimension d (e.g., d = 100 for H coverage).
We then calculated the similarity between each pair of virus strains or species (in each category) as the mean of the similarities between the genomic sequences of the two virus strains or species (e.g., the mean nucleotide bias similarity between all sequences of SARS-CoV-2 and all sequences of MERS-CoV represented the final nucleotide bias similarity between SARS-CoV-2 and MERS-CoV). This enabled us to generate 11 genomic feature similarity matrices (the above 10 features represented by vectors, plus the genomic similarity matrix) between our input coronaviruses. Supplementary Fig. 1 illustrates the process.
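For concreteness, the cosine-similarity step and the species-level averaging can be sketched as follows (Python; the function and variable names are ours, for illustration only):

```python
import numpy as np

def cosine_similarity(v, w):
    """Cosine similarity between two genomic feature vectors."""
    v, w = np.asarray(v, float), np.asarray(w, float)
    return float(np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w)))

def species_similarity(vectors_m, vectors_n):
    """Mean pairwise similarity over all sequences of two viruses,
    e.g., all SARS-CoV-2 sequences vs. all MERS-CoV sequences."""
    return float(np.mean([cosine_similarity(v, w)
                          for v in vectors_m for w in vectors_n]))
```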
Similarity network fusion (SNF)
We applied SNF49 to integrate the following similarities in order to reduce our viral genomic feature space: (1) nucleotide, dinucleotide, codon and codon-pair usage biases were combined into one similarity matrix (genome bias similarity); and (2) Helix (H), Beta-Sheet (E) and Coil (C) mean probability and coverage similarities (six in total) were combined into one similarity matrix (secondary structure similarity).
SNF applies an iterative nonlinear method that updates every similarity matrix according to the other matrices via a nearest-neighbour approach; it is scalable and robust to noise and data heterogeneity. The integrated matrix captures both shared and complementary information from the multiple similarities.
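A much-simplified two-matrix sketch of the cross-diffusion update at the heart of SNF is given below (after Wang et al.49; the published algorithm includes additional normalisation details omitted here, so treat this as illustrative only):

```python
import numpy as np

def snf_two(W1, W2, k=20, iterations=20):
    """Fuse two similarity matrices by iterative cross-diffusion."""
    def full_kernel(W):
        P = W / (2.0 * W.sum(axis=1, keepdims=True))  # spread row mass
        np.fill_diagonal(P, 0.5)
        return P
    def local_kernel(W):
        S = np.zeros_like(W)
        for i, row in enumerate(W):
            nn = np.argsort(row)[-k:]          # k most similar neighbours
            S[i, nn] = row[nn] / row[nn].sum()
        return S
    P1, P2 = full_kernel(W1), full_kernel(W2)
    S1, S2 = local_kernel(W1), local_kernel(W2)
    for _ in range(iterations):
        # each matrix is updated through the other (old values on the right)
        P1, P2 = S1 @ P2 @ S1.T, S2 @ P1 @ S2.T
    return (P1 + P2) / 2.0                     # fused similarity matrix
```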
Quantification of mammalian similarities
We calculated a comprehensive set of mammalian similarities. Table 3 summarises these similarities and provides justification for inclusion. Supplementary Note 1 provides full details.
Table 3 Mammalian phylogenetic, ecological and geospatial similarities.
Quantification of network similarities
Network construction
We processed the metadata accompanying all sequences (including partial genomes but excluding vaccination and experimental infections) of coronaviruses uploaded to GenBank44 (accessed 4 May 2020) to extract information on the hosts (to species level) of these coronaviruses. We supplemented these data with virus–host associations extracted from publications via the EID2 database45. This resulted in 1669 associations between 1108 coronaviruses and 545 hosts (including non-mammalian hosts). We transformed these associations into a bipartite network linking species and strains of coronaviruses with their hosts.
Quantification of topological features
The network constructed above summarises our knowledge to date of associations between coronaviruses and their hosts, and its topology expresses the patterns of sharing of these viruses between various hosts and host groups. Our analytical pipeline captures this topology, and the relations between nodes in our network, by means of node embeddings. This approach encodes each node (here, either a coronavirus or a host) with its own vector representation in a continuous vector space, which, in turn, enables us to calculate similarities between two nodes based on this representation.
We adopted DeepWalk23 to compute vectorised representations of our coronaviruses and hosts from the network connecting them. DeepWalk23 uses truncated random walks to capture the latent topological information of the network and obtains the vector representation of its nodes (in our case, coronaviruses and their hosts) by maximising the probability of reaching the next node (i.e., the probability of a virus–host association) given the previous nodes in these walks (Supplementary Note 2 gives further details).
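A minimal sketch of the DeepWalk idea (truncated random walks fed to a skip-gram model) is shown below, assuming networkx and gensim (≥ 4 API) are available; the authors' actual implementation and hyperparameters may differ:

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def deepwalk_embeddings(G, num_walks=10, walk_length=40, dim=64):
    """Vectorised node representations from truncated random walks."""
    walks = []
    nodes = list(G.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(n) for n in walk])   # treat walks as sentences
    model = Word2Vec(walks, vector_size=dim, window=5, min_count=0, sg=1)
    return {n: model.wv[str(n)] for n in G.nodes()}
```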
Similarity calculations
Following the application of DeepWalk to compute the latent topological representation of our nodes, we calculated the similarity between two nodes in our network, n (vectorised as N) and m (vectorised as M), using cosine similarity as follows24,25:
$${\mathrm{sim}}_{{\mathrm{network}}}\left( {n,m} \right) = {\mathrm{sim}}_{{\mathrm{network}}}\left( {{\mathbf{M}},{\mathbf{N}}} \right) = \frac{{\mathop {\sum}\nolimits_{i = 1}^d {\left( {m_i \times n_i} \right)} }}{{\sqrt {\mathop {\sum}\nolimits_{i = 1}^d {m_i^2} } \times \sqrt {\mathop {\sum}\nolimits_{i = 1}^d {n_i^2} } }}$$
where d is the dimension of the vectorised representation of our nodes: M, N; and mi and ni are the components of vectors M and N, respectively.
Similarity learning meta-ensemble—a multi-perspective approach
Our analytical pipeline stacks 12 similarity learners into testable meta-ensembles. The constituent learners can be categorised by the following three 'points of view' (see also Supplementary Fig. 4 for a visual description):
Coronaviruses—the virus point of view
We assembled three models derived from (a) genome similarity, (b) genome biases and (c) genome secondary structure. Each of these learners gave each coronavirus–mammalian association \(( {v_i \to m_j} )\) a score, termed confidence, based on how similar the coronavirus vi is to the known coronaviruses of the mammalian species mj, compared with how similar vi is to all included coronaviruses. In other words, if vi is more similar (e.g., based on genome secondary structure) to coronaviruses observed in host mj than to all coronaviruses (both observed in mj and not), then the association \(v_i \to m_j\) is given a higher confidence score. If vi is somewhat similar to coronaviruses observed in mj, and also somewhat similar to viruses not known to infect this particular mammal, then the association \(v_i \to m_j\) is given a medium confidence score. The association \(v_i \to m_j\) is given a lower confidence score if vi is more similar to coronaviruses not known to infect mj than to coronaviruses observed in this host.
Formally, given an adjacency matrix A of dimensions \(\left| {\mathbf{V}} \right| \times \left| {\mathbf{M}} \right|\), where \(\left| {\mathbf{V}} \right|\) is the number of coronaviruses included in this study (those for which a complete genome could be found) and \(\left| {\mathbf{M}} \right|\) is the number of included mammals, such that for each \(v_i \in {\mathbf{V}}\) and \(m_j \in {\mathbf{M}}\), aij = 1 if an association is known to exist between the virus and the mammal and 0 otherwise, then for a similarity matrix simviral corresponding to each of the similarity matrices calculated above, a learner from the viral point of view is defined as follows24,25:
$${\mathrm{confidence}}_{{\mathrm{viral}}}( {v_i \to m_j} ) = \frac{{\mathop {\sum}\nolimits_{l = 1,\,l \ne i}^{\left| {\mathbf{V}} \right|} {( {{\mathrm{sim}}_{{\mathrm{viral}}}( {v_i,\,v_l} ) \times a_{lj}} )} }}{{\mathop {\sum}\nolimits_{l = 1,\,l \ne i}^{\left| {\mathbf{V}} \right|} {{\mathrm{sim}}_{{\mathrm{viral}}}\left( {v_i,\,v_l} \right)} }}$$
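In matrix form, this learner is a similarity-weighted vote of the known hosts of each virus's neighbours; a minimal sketch (our own naming, not the authors' code):

```python
import numpy as np

def viral_confidence(sim_viral, A):
    """confidence_viral(v_i -> m_j) for all i, j at once.

    sim_viral: |V| x |V| virus-virus similarity matrix.
    A:         |V| x |M| binary virus-host adjacency matrix.
    """
    S = sim_viral.copy()
    np.fill_diagonal(S, 0.0)                 # exclude l = i from both sums
    return (S @ A) / S.sum(axis=1, keepdims=True)
```

The mammalian and network learners defined below take the same form, with the virus–virus similarity matrix replaced by the mammal–mammal (or network) similarity matrix and the adjacency matrix transposed accordingly.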
Mammals—the host point of view
We constructed seven learners from the similarities summarised in Table 3. Each of these learners calculated, for every coronavirus–mammalian association \(( {v_i \to m_j})\), a confidence score based on how similar the mammalian species mj is to the known hosts of the coronavirus vi, compared with how similar mj is to mammals not associated with vi. For instance, if mj is phylogenetically close to known hosts of vi, and phylogenetically distant from mammalian species not known to be associated with this coronavirus, then the phylogenetic similarity learner will assign \(v_i \to m_j\) a higher confidence score. However, if mj does not overlap geographically with known hosts of vi, then the geographical overlap learner will assign it a low (in effect 0) confidence score.
Formally, given the above-defined adjacency matrix A, and a similarity matrix simmammalian corresponding to each of the similarity matrices summarised in Table 3, a learner from the mammalian point of view is defined as follows24,25:
$${\mathrm{confidence}}_{{\mathrm{mammalian}}}( {v_i \to m_j} ) = \frac{{\mathop {\sum}\nolimits_{l = 1,\,l \ne j}^{\left| {\mathbf{M}} \right|} {( {{\mathrm{sim}}_{{\mathrm{mammalian}}}( {m_j,\,m_l} ) \times a_{il}} )} }}{{\mathop {\sum}\nolimits_{l = 1,\,l \ne j}^{\left| {\mathbf{M}} \right|} {{\mathrm{sim}}_{{\mathrm{mammalian}}}( {m_j,\,m_l} )} }}$$
Network—the network point of view
We integrated two learners based on network similarities, one for mammals and one for coronaviruses. Formally, given the adjacency matrix A, our two learners from the network point of view are defined as follows24:
$${\mathrm{confidence}}_{{\mathrm{network}}_{\mathbf{V}}}( {v_i \to m_j} ) = \frac{{\mathop {\sum}\nolimits_{l = 1,\,l \ne i}^{\left| {\mathbf{V}} \right|} {( {{\mathrm{sim}}_{{\mathrm{network}}}( {v_i,\,v_l} ) \times a_{lj}} )} }}{{\mathop {\sum}\nolimits_{l = 1,\,l \ne i}^{\left| {\mathbf{V}} \right|} {{\mathrm{sim}}_{{\mathrm{network}}}( {v_i,\,v_l} )} }}\;\;$$
$${\mathrm{confidence}}_{{\mathrm{network}}_{\mathbf{M}}}( {v_i \to m_j} ) = \frac{{\mathop {\sum}\nolimits_{l = 1,\,l \ne j}^{\left| {\mathbf{M}} \right|} {( {{\mathrm{sim}}_{{\mathrm{network}}}( {m_j,\,m_l} ) \times a_{il}} )} }}{{\mathop {\sum}\nolimits_{l = 1,\,l \ne j}^{\left| {\mathbf{M}} \right|} {{\mathrm{sim}}_{{\mathrm{network}}}( {m_j,\,m_l} )} }}$$
Ensemble construction
We combined the learners described above by stacking them into ensembles (meta-ensembles) using Stochastic Gradient Boosting (GBM). The purpose of this combination is to incorporate the three points of view, as well as varied aspects of the coronaviruses and their potential mammalian hosts, into a generalisable, robust model50. We selected GBM as our stacking algorithm following an assessment of seven machine-learning algorithms using held-out test sets (20% of known associations randomly selected, N = 5; Supplementary Fig. 14). In addition, GBM is known for its ability to handle non-linearity and high-order interactions between constituent learners51, and it has been used to predict reservoirs of viruses46 and zoonotic hot-spots51. We performed the training and optimisation (tuning) of these ensembles using the caret R package52.
Our GBM ensembles comprised 100 replicate models. Each model was trained with balanced random samples using tenfold cross-validation (Supplementary Fig. 4). Final ensembles were generated by taking the mean predictions (probabilities) of the constituent models. Predictions were calculated from the mean probability at three cut-offs: >0.5 (standard), >0.75 and ≥0.9821. The SD of the mean probability was also computed, and its values were subtracted from/added to the predictions to illustrate the variation in the underlying replicate models.
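The balanced-resampling meta-ensemble logic can be sketched as follows (Python with scikit-learn for brevity; the authors used GBM via the caret R package with tenfold cross-validation, and the default hyperparameters and the assumption of more negatives than positives below are ours):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def gbm_meta_ensemble(X, y, n_models=100, seed=0):
    """Train replicate GBMs on balanced samples; return a predictor."""
    rng = np.random.default_rng(seed)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    models = []
    for _ in range(n_models):
        # balanced sample: all positives + an equal number of negatives
        idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
        models.append(GradientBoostingClassifier().fit(X[idx], y[idx]))
    def predict(X_new):
        p = np.column_stack([m.predict_proba(X_new)[:, 1] for m in models])
        return p.mean(axis=1), p.std(axis=1)   # mean probability and its SD
    return predict
```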
Validation and performance estimation
We validated the performance of our analytical pipeline externally against 20 held-out test sets. Each test set was generated by splitting the set of observed associations between coronaviruses and their hosts into two random sets: a training set comprising 85% of all known associations and a test set comprising the remaining 15%. These test sets were held out throughout the processes of generating similarity matrices, similarity learning and assembling our learners, and were used only for estimating the performance metrics of our analytical pipeline. This resulted in 20 runs in which our ensemble learnt using only 85% of the observed associations between our coronaviruses and their mammalian hosts. For each run, we calculated three performance metrics based on the mean probability across each set of 100 replicate models of the GBM meta-ensembles: AUC, true skill statistic (TSS) and F-score.
AUC is a threshold-independent measure of model predictive performance that is commonly used as a validation metric for host–pathogen predictive models21,46. The use of AUC has been criticised for its insensitivity to absolute predicted probability and its inclusion of a priori untenable predictions51,53, so we also calculated the TSS (TSS = sensitivity + specificity − 1)54. The F-score captures the harmonic mean of precision and recall and is often used with uneven class distributions. Our approach is relaxed with respect to false positives (unobserved associations), hence the low F-score recorded overall.
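For concreteness, the three metrics can be computed from held-out labels and ensemble probabilities as follows (a sketch assuming scikit-learn and binary labels):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def performance_metrics(y_true, y_score, cutoff=0.5):
    """AUC (threshold-independent), plus TSS and F-score at a cut-off."""
    y_pred = (np.asarray(y_score) >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {"AUC": roc_auc_score(y_true, y_score),
            "TSS": sensitivity + specificity - 1,
            "F-score": f1_score(y_true, y_pred)}
```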
We selected three probability cut-offs for our meta-ensemble: 0.50, 0.75 and 0.9821. One extreme of our cut-off range (0.5) maximises the ability of our ensemble to detect known associations (higher AUC, lower F-score). The other (0.9821) is calculated so that 90% of known positives are captured by our ensemble, while reducing the number of additional associations predicted (higher F-score, lower AUC).
Changes in network structure
We quantified the diversity of the mammalian hosts of each coronavirus in our input by computing the mean phylogenetic distance between these hosts. Similarly, we captured the diversity of the coronaviruses associated with each mammalian species by calculating the mean (Hamming) distance between the genomes of these coronaviruses. We termed these two metrics mammalian diversity per virus and viral diversity per mammal, respectively. We aggregated both metrics at the network level by means of a simple average. This enabled us to quantify changes in these diversity metrics, at the level of the network, with the addition of predicted links at three probability cut-offs: >0.5, >0.75 and ≥0.9821.
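Both diversity metrics reduce to a mean pairwise distance over the partners of a node; a sketch (our naming; D would be the host phylogenetic distance matrix for mammalian diversity per virus, or the genomic Hamming distance matrix for viral diversity per mammal):

```python
import numpy as np

def mean_pairwise_distance(D, members):
    """Mean distance among the partners (hosts or viruses) of one node."""
    if len(members) < 2:
        return float("nan")                    # diversity undefined
    sub = D[np.ix_(members, members)]
    iu = np.triu_indices(len(members), k=1)    # each unordered pair once
    return float(sub[iu].mean())
```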
In addition, we captured changes in the structure of the bipartite network linking CoVs with their mammalian hosts upon the addition of predicted associations by computing a comprehensive set of structural properties (Supplementary Note 3) at the probability cut-offs mentioned above and comparing the results with our original network. Here we ignore properties that deterministically change with the addition of links (e.g., degree centrality, connectance; Supplementary Table 2 lists all computed metrics and the changes in their values). Instead, we focus on non-trivial structural properties. Specifically, we capture changes in network stability by measuring its nestedness55,56,57, and we quantify non-independence in interaction patterns by means of the C-score58. Supplementary Note 3 provides full definitions of these concepts, as well as the other metrics we computed for our networks.
Genomic sequences of coronaviruses were obtained from NCBI GenBank, accession codes are listed in Supplementary Data 7. Coronaviruses–hosts associations were obtained from the ENHanCEd Infectious Diseases Database (EID2: https://eid2.liverpool.ac.uk/). Mammalian and geospatial data were obtained from open-access data sources. These sources are listed in detail, and their DOIs are provided in the Supplementary Information file. Data used can be found here: https://doi.org/10.6084/m9.figshare.13110896, with the exception of mammalian presence shapefiles and raw climate data (due to their large size)—these data can be obtained from the authors or directly from the sources listed in the Supplementary Information file.
All codes used in our analyses are made available via figshare (https://doi.org/10.6084/m9.figshare.13110896).
Andersen, K. G., Rambaut, A., Lipkin, W. I., Holmes, E. C. & Garry, R. F. The proximal origin of SARS-CoV-2. Nat. Med. 26, 450–452 (2020).
Corman, V. M., Muth, D., Niemeyer, D. & Drosten, C. Hosts and sources of endemic human coronaviruses. Adv. Virus Res. 100, 163–188 (2018).
He, J. F. et al. Molecular evolution of the SARS coronavirus, during the course of the SARS epidemic in China. Science 303, 1666–1669 (2004).
Guan, Y. et al. Isolation and characterization of viruses related to the SARS coronavirus from animals in Southern China. Science 302, 276–278 (2003).
Rota, P. A. et al. Characterization of a novel coronavirus associated with severe acute respiratory syndrome. Science 300, 1394–1399 (2003).
Alekseev, K. P. et al. Bovine-like coronaviruses isolated from four species of captive wild ruminants are homologous to bovine coronaviruses, based on complete genomic sequences. J. Virol. 82, 12422–12431 (2008).
Lorusso, A. et al. Molecular characterization of a canine respiratory coronavirus strain detected in Italy. Virus Res. 141, 96–100 (2009).
Vijgen, L. et al. Evolutionary history of the closely related group 2 coronaviruses: porcine hemagglutinating encephalomyelitis virus, bovine coronavirus, and human coronavirus OC43. J. Virol. 80, 7270–7274 (2006).
Pfefferle, S. et al. Distant relatives of severe acute respiratory syndrome coronavirus and close relatives of human coronavirus 229E in bats, Ghana. Emerg. Infect. Dis. 15, 1377–1384 (2009).
Zhou, P. et al. A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature 579, 270–273 (2020).
Lam, T. T. Y. et al. Identifying SARS-CoV-2 related coronaviruses in Malayan pangolins. Nature https://doi.org/10.1038/s41586-020-2169-0 (2020).
Graham, R. L. & Baric, R. S. Recombination, reservoirs, and the modular spike: mechanisms of coronavirus cross-species transmission. J. Virol. 84, 3134–3146 (2010).
Clavel, F. et al. Genetic recombination of human immunodeficiency virus. J. Virol. 63, 1455–1459 (1989).
Ji, W., Niu, D. D., Si, H. L., Ding, N. Z. & He, C. Q. Vaccination influences the evolution of classical swine fever virus. Infect. Genet. Evol. 25, 69–77 (2014).
Ji, W., Wang, W., Zhao, X., Zai, J. & Li, X. Cross-species transmission of the newly identified coronavirus 2019-nCoV. J. Med. Virol. 92, 433–440 (2020).
Zhang, X. W., Yap, Y. L. & Danchin, A. Testing the hypothesis of a recombinant origin of the SARS-associated coronavirus. Arch. Virol. 150, 1–20 (2005).
Wardeh, M., Sharkey, K. J. & Baylis, M. Integration of shared-pathogen networks and machine learning reveals the key aspects of zoonoses and predicts mammalian reservoirs. Proc. R. Soc. B Biol. Sci. 287, 20192882 (2020).
Luis, A. D. et al. Network analysis of host-virus communities in bats and rodents reveals determinants of cross-species transmission. Ecol. Lett. 18, 1153–1162 (2015).
Bogich, T. L. et al. Using network theory to identify the causes of disease outbreaks of unknown origin. J. R. Soc. Interface 10, 20120904 (2013).
Elmasri, M., Farrell, M. J., Davies, T. J. & Stephens, D. A. A hierarchical bayesian model for predicting ecological interactions using scaled evolutionary relationships. Ann. Appl. Stat. 14, 221–240 (2020).
Dallas, T., Park, A. W. & Drake, J. M. Predicting cryptic links in host-parasite networks. PLoS Comput. Biol. 13, e1005557 (2017).
Carlson, C. J., Zipfel, C. M., Garnier, R. & Bansal, S. Global estimates of mammalian viral diversity accounting for host sharing. Nat. Ecol. Evol. 3, 1070–1075 (2019).
Perozzi, B., Al-Rfou, R. & Skiena, S. DeepWalk: online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 701–710 (2014). https://doi.org/10.1145/2623330.2623732.
Zong, N., Kim, H., Ngo, V. & Harismendy, O. Deep mining heterogeneous networks of biomedical linked data to predict novel drug-target associations. Bioinformatics 33, 2337–2344 (2017).
Zhang, H. et al. Predicting lncRNA-disease associations using network topological similarity based on deep mining heterogeneous networks. Math. Biosci. 315, 108229 (2019).
Li, C., Yang, Y. & Ren, L. Genetic evolution analysis of 2019 novel coronavirus and coronavirus from other species. Infect. Genet. Evol. 82, 104285 (2020).
Wang, L. F. & Eaton, B. T. Bats, civets and the emergence of SARS. Curr. Top. Microbiol. Immunol. 315, 325–344 (2007).
Nijman, V. et al. Trade in common palm civet Paradoxurus hermaphroditus in Javan and Balinese markets, Indonesia. Small Carniv. Conserv. 51, 11–17 (2014).
Lau, S. K. P. et al. Severe acute respiratory syndrome (SARS) coronavirus ORF8 protein is acquired from SARS-related coronavirus from greater horseshoe bats through recombination. J. Virol. 89, 10532–10547 (2015).
Li, W. et al. Bats are natural reservoirs of SARS-like coronaviruses. Science 310, 676–679 (2005).
Ceraolo, C. & Giorgi, F. M. Genomic variance of the 2019-nCoV coronavirus. J. Med. Virol. 92, 522–528 (2020).
Cui, J. et al. Evolutionary relationships between bat coronaviruses and their hosts. Emerg. Infect. Dis. 13, 1526–1532 (2007).
Shi, J. et al. Susceptibility of ferrets, cats, dogs, and other domesticated animals to SARS-coronavirus 2. Science 368, 1016–1020 (2020).
Saldanha, I. F. et al. Extension of the known distribution of a novel clade C betacoronavirus in a wildlife host. Epidemiol. Infect. 147, e169 (2019).
Lau, S. K. P. et al. Isolation and characterization of a novel betacoronavirus subgroup A coronavirus, rabbit coronavirus HKU14, from domestic rabbits. J. Virol. 86, 5481–5496 (2012).
Vasilakis, N. & Weaver, S. C. Chapter 1 the history and evolution of human dengue emergence. Adv. Virus Res. 72, 1–76 (2008).
Keele, B. F. et al. Chimpanzee reservoirs of pandemic and nonpandemic HIV-1. Science 313, 523–526 (2006).
Brown, I. H. The epidemiology and evolution of influenza viruses in pigs. Vet. Microbiol. 74, 29–46 (2000).
Banerjee, A. et al. Predicting the recombination potential of severe acute respiratory syndrome coronavirus 2 and Middle East respiratory syndrome coronavirus. J. Gen. Virol. jgv001491 https://doi.org/10.1099/jgv.0.001491 (2020).
Hui, D. S. et al. Middle East respiratory syndrome coronavirus: risk factors and determinants of primary, household, and nosocomial transmission. Lancet Infect. Dis. 18, e217–e227 (2018).
Freuling, C. M. et al. Susceptibility of raccoon dogs for experimental SARS-CoV-2 infection. Emerg. Infect. Dis. 26, 2982–2985 (2020).
Wu, L. et al. Broad host range of SARS-CoV-2 and the molecular basis for SARS-CoV-2 binding to cat ACE2. Cell Discov. 6, 68 (2020).
Hobbs, E. C. & Reid, T. J. Animals and SARS‐CoV‐2: species susceptibility and viral transmission in experimental and natural conditions, and the potential implications for community transmission. Transbound. Emerg. Dis. tbed.13885 https://doi.org/10.1111/tbed.13885 (2020).
Benson, D. A. et al. GenBank. Nucleic Acids Res. 41, D36–D42 (2013).
Wardeh, M., Risley, C., Mcintyre, M. K., Setzkorn, C. & Baylis, M. Database of host-pathogen and related species interactions, and their global distribution. Sci. Data 2, 150049 (2015).
Babayan, S. A., Orton, R. J. & Streicker, D. G. Predicting reservoir hosts and arthropod vectors from evolutionary signatures in RNA virus genomes. Science 362, 577–580 (2018).
Coleman, J. R. et al. Virus attenuation by genome-scale changes in codon pair bias. Science 320, 1784–1787 (2008).
Wright, E. S. DECIPHER: harnessing local sequence context to improve protein multiple sequence alignment. BMC Bioinform. 16, 322 (2015).
Wang, B. et al. Similarity network fusion for aggregating data types on a genomic scale. Nat. Methods 11, 333–337 (2014).
Zhang, W. et al. Predicting potential drug-drug interactions by integrating chemical, biological, phenotypic and network data. BMC Bioinform. 18, 18 (2017).
Allen, T. et al. Global hotspots and correlates of emerging zoonotic diseases. Nat. Commun. 8, 1124 (2017).
Kuhn, M. Building predictive models in R using the caret package. J. Stat. Softw. 28, 1–26 (2008).
Lobo, J. M., Jiménez-Valverde, A. & Real, R. AUC: a misleading measure of the performance of predictive distribution models. Glob. Ecol. Biogeogr. 17, 145–151 (2008).
Barbet-Massin, M., Jiguet, F., Albert, C. H. & Thuiller, W. Selecting pseudo-absences for species distribution models: how, where and how many? Methods. Ecol. Evol. 3, 327–338 (2012).
Staniczenko, P. P. A., Kopp, J. C. & Allesina, S. The ghost of nestedness in ecological networks. Nat. Commun. 4, 1–6 (2013).
Thébault, E. & Fontaine, C. Stability of ecological communities and the architecture of mutualistic and trophic networks. Science 329, 853–856 (2010).
Almeida-Neto, M., Guimarães, P., Guimarães, P. R., Loyola, R. D. & Ulrich, W. A consistent metric for nestedness analysis in ecological systems: reconciling concept and measurement. Oikos 117, 1227–1239 (2008).
Connor, E. F., Collins, M. D. & Simberloff, D. The checkered history of checkerboard distributions. Ecology 94, 2403–2414 (2013).
Gower, J. C. A general coefficient of similarity and some of its properties. Biometrics 27, 857–871 (1971).
M.W. acknowledges the support from BBSRC and MRC for the National Productivity Investment Fund (NPIF) fellowship (MR/R024898/1). M.W. and M.S.C.B acknowledge support from BBSRC IAA COVID - 168478. Establishment of the EID2 database was funded by a UK Research Council Grant (NE/G002827/1) to M.B., as part of an ERANET Environmental Health award to M.B.; subsequently, it has been further developed and maintained by BBSRC Tools and Resources Development Fund awards (BB/K003798/1; BB/N02320X/1) to M.B., and the National Institute for Health Research Health Protection Research Unit (NIHR HPRU) in Emerging and Zoonotic Infections at the University of Liverpool in partnership with Public Health England and Liverpool School of Tropical Medicine.
Department of Livestock and One Health, Institute of Infection, Veterinary & Ecological Sciences, University of Liverpool, Liverpool, UK
Maya Wardeh & Matthew Baylis
Department of Mathematical Sciences, University of Liverpool, Liverpool, UK
Maya Wardeh
Health Protection Research Unit in Emerging and Zoonotic Infections, University of Liverpool, Liverpool, UK
Matthew Baylis
Department of Evolution, Ecology and Behaviour, Institute of Infection, Veterinary & Ecological Sciences, University of Liverpool, Liverpool, UK
Marcus S. C. Blagrove
Conceived and designed the study: M.W. and M.S.C.B. Compiled the data and designed and implemented analytical pipeline: M.W. Analysed and interpreted the data: M.W. and M.S.C.B. Established the EID2 database: M.W. and M.B. Wrote the paper: M.W., M.B. and M.S.C.B.
Correspondence to Maya Wardeh or Marcus S. C. Blagrove.
Peer review information Nature Communications thanks Jie Cui, Rachel Graham, and Nicole Wheeler for their contribution to the peer review of this work. Peer reviewer reports are available.
Wardeh, M., Baylis, M. & Blagrove, M.S.C. Predicting mammalian hosts in which novel coronaviruses can be generated. Nat Commun 12, 780 (2021). https://doi.org/10.1038/s41467-021-21034-5
How to compute the monstrous $ \int_0^{\frac{e-1}{e}}{\frac{x(2-x)}{(1-x)}\frac{\log\left(\log\left(1+\frac{x^2}{2-2x}\right)\right)}{2-2x+x^2}dx} $
A friend told me that he found a closed form for the following integral: $$ \int_0^{\frac{e-1}{e}}{\frac{x(2-x)}{(1-x)}\frac{\log\left(\log\left(1+\frac{x^2}{2-2x}\right)\right)}{\left(2-2x+x^2\right)}dx} $$ I don't know if he's just messing with me, but I wonder whether this integral admits a closed form. I tried to expand the $\log(\log)$ term into a power series, but things got worse. Any help will be appreciated!
calculus integration definite-integrals closed-form
Harish Chandra Rajpoot
Redundant Aunt
$\begingroup$ Wolfram Alpha can do it. Let $u=\ln\left(1 + \frac{x^2}{2-2x}\right)$ and then do integration by parts. $\endgroup$ – Christopher Carl Heckman Aug 21 '15 at 22:40
$\begingroup$ Observing $\log\left(\log\left(1+\frac{x^2}{2-2x}\right)\right) = \log(\log(2-2x+x^2) - \log(2-2x))$ could be useful $\endgroup$ – Blex Aug 21 '15 at 22:43
$\begingroup$ @Blex: How could it be useful? Can you explain? I don't understand. $\endgroup$ – Bhaskara-III Sep 21 '16 at 10:39
Notice, we have $$\int_{0}^{\frac{e-1}{e}}\frac{x(2-x)}{1-x}\frac{\log\left(\log\left(1+\frac{x^2}{2-2x}\right)\right)}{2-2x+x^2}dx$$ $$=\int_{0}^{\frac{e-1}{e}}\frac{x(2-x)}{1-x}\frac{\log\left(\log\left(\frac{2-2x+x^2}{2-2x}\right)\right)}{2-2x+x^2}dx$$
Let, $$\log\left(\frac{2-2x+x^2}{2-2x}\right)=u$$ $$\implies \frac{d}{dx}\left(\log\left(\frac{2-2x+x^2}{2-2x}\right)\right)=\frac{d}{dx}(u)$$ $$\frac{1}{\left(\frac{2-2x+x^2}{2-2x}\right)}\cdot \left(\frac{(2-2x)(-2+2x)-(2-2x+x^2)(-2)}{(2-2x)^2} \right)=\frac{du}{dx}$$ $$\left(\frac{2-2x}{2-2x+x^2}\right)\cdot \left(\frac{2x(2-x)}{(2-2x)^2} \right)=\frac{du}{dx}$$ $$\frac{x(2-x)}{(1-x)}\frac{1}{(2-2x+x^2)}dx=du$$ Now, we have $$\int_{0}^{\log\left(\frac{e^2+1}{2e}\right)}\log(u)du$$
$$=\left[u\log(u)-u\right]_{0}^{\log\left(\frac{e^2+1}{2e}\right)}$$ $$=\left[u\log\left(\frac{u}{e}\right)\right]_{0}^{\log\left(\frac{e^2+1}{2e}\right)}$$
$$=\log\left(\frac{e^2+1}{2e}\right)\cdot\log\left(\frac{1}{e}\log\left(\frac{e^2+1}{2e}\right)\right)-\lim_{u\to 0}u\log\left(\frac{u}{e}\right)$$
$$=\log\left(\frac{e^2+1}{2e}\right)\cdot\log\left(\frac{1}{e}\log\left(\frac{e^2+1}{2e}\right)\right)-0$$
Hence, we get
$$\bbox[5px, border:2px solid #C0A000]{\color{red}{\int_{0}^{\frac{e-1}{e}}\frac{x(2-x)}{1-x}\frac{\log\left(\log\left(1+\frac{x^2}{2-2x}\right)\right)}{2-2x+x^2}dx}=\color{blue}{\log\left(\frac{e^2+1}{2e}\right)\cdot \log\left(\frac{1}{e}\log\left(\frac{e^2+1}{2e}\right)\right)}}$$
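The result is easy to check numerically, e.g. with scipy (a quick sketch; both values come out ≈ −0.7961):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (x*(2 - x)/(1 - x)) * np.log(np.log(1 + x**2/(2 - 2*x))) / (2 - 2*x + x**2)
numeric, _ = quad(f, 0, (np.e - 1)/np.e)

L = np.log((np.e**2 + 1)/(2*np.e))
closed = L * np.log(L/np.e)          # = L*(log(L) - 1)
print(numeric, closed)               # both approx. -0.7961
```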
Harish Chandra Rajpoot
$\begingroup$ Finally, someone ended their answer with flavour! $\endgroup$ – Mr Pie Feb 18 '18 at 13:52
Here comes the help! $$\mathcal{I}=(\varphi-1)\left(\ln(\varphi-1)-1\right)$$ $$\text{with}\qquad \varphi=\ln\left(\frac{1+e^{2}}{2}\right)$$ Namagiri is on fire today.
Bhaskara-III
Start wearing purple
$\begingroup$ What/ who is Namagiri? $\endgroup$ – mysatellite Aug 21 '15 at 23:49
$\begingroup$ How did you obtain this? I would recommend posting a development of the solution rather than the end result, which apparently is available via Maple. $\endgroup$ – Mark Viola Aug 22 '15 at 6:15
$\begingroup$ @user109899 Compute and simplify the derivative of $\ln\left(1+\frac{x^2}{2-2x}\right)$. $\endgroup$ – Start wearing purple Aug 22 '15 at 10:04
$\begingroup$ @L.G. Two examples make up a very small sample on which to base a general conclusion ... especially when that person is so highly mathematically inclined as you are. $\endgroup$ – Mark Viola Aug 22 '15 at 16:55
$\begingroup$ @Sky: en.wikipedia.org/wiki/Namagiri_Thayar $\endgroup$ – Eric Stucky Aug 23 '15 at 0:11
Combining 3D single molecule localization strategies for reproducible bioimaging
Clément Cabriel ORCID: orcid.org/0000-0002-0316-03121,
Nicolas Bourg1,
Pierre Jouchet1,
Guillaume Dupuis2,
Christophe Leterrier ORCID: orcid.org/0000-0002-2957-20323,
Aurélie Baron4,
Marie-Ange Badet-Denisot4,
Boris Vauzeilles4,5,
Emmanuel Fort6 &
Sandrine Lévêque-Fort ORCID: orcid.org/0000-0002-9218-33631
Nature Communications volume 10, Article number: 1980 (2019)
Here, we present a 3D localization-based super-resolution technique providing a slowly varying localization precision over a 1 μm range, with precisions down to 15 nm. The axial localization is performed through a combination of point spread function (PSF) shaping and supercritical angle fluorescence (SAF), which yields absolute axial information. Using a dual-view scheme, the axial detection is decoupled from the lateral detection and optimized independently to provide a weakly anisotropic 3D resolution over the imaging range. This method can be readily implemented on most homemade PSF-shaping setups and provides drift-free, tilt-insensitive and achromatic results. Its insensitivity to these unavoidable experimental biases makes it especially suitable for multicolor 3D super-resolution microscopy, as we demonstrate by imaging the cell cytoskeleton, living bacteria membranes and axonal periodic submembrane scaffolds. We further illustrate the interest of the technique for biological multicolor imaging over a several-μm range by directly merging multiple acquisitions at different depths.
Despite recent advances in localization-based super-resolution techniques, nanoscale 3D fluorescence imaging of biological samples remains a major challenge, mostly because of its lack of versatility. While photoactivated localization microscopy (PALM) and (direct) stochastic optical reconstruction microscopy ((d)STORM) can easily provide a lateral localization precision (i.e., the standard deviation of the position estimates) down to 5–10 nm1,2,3,4, a great deal of effort is being made to develop quantitative and reproducible 3D super-localization methods. The most widely used 3D Single Molecule Localization Microscopy (SMLM) technique is astigmatic imaging, which relies on the use of a cylindrical lens to apply an astigmatic aberration in the detection path to encode the axial information in the shape of the spots, achieving an axial localization precision (standard deviation) down to 20–25 nm5—though the precision sharply varies with the axial position: 300 nm away from the focus, the precision is typically around 60 nm (see Supplementary Fig. 1a). Other Point Spread Function (PSF) shaping methods are also available6,7,8, but their implementations are not as inexpensive and straightforward. Still, all PSF shaping methods including astigmatic imaging suffer from several bias sources such as axial drifts, chromatic aberrations, field-varying geometrical aberrations, and sample tilts. These sources of biases often degrade the resolution or hinder colocalization and experiment reproducibility. Axial measurements can also be performed thanks to intensity-based techniques like Supercritical Angle Fluorescence (SAF)9,10,11,12,13,14, which relies on the detection of the near-field emission of fluorophores coupled into propagative waves at the sample/glass coverslip interface due to the index mismatch. Combined with SMLM, this technique, called Direct Optical Nanoscopy with Axially Localized Detection (DONALD) or Supercritical Angle Localization Microscopy (SALM), yields absolute axial positions (i.e., independent of the focus position) in the first 500 nm beyond the coverslip with a precision down to 15 nm15,16. The principle relies on the comparison between the SAF and the Undercritical Angle Fluorescence (UAF) components to extract the absolute axial position.
By combining complementary SAF and astigmatism axial information sources, we achieve a slowly varying localization precision over the capture range. Besides, as the SAF detection is insensitive to most axial detection biases inherent in PSF shaping, it provides an absolute reference used to correct the biases of the astigmatic detection. This method, which we call Dual-view Astigmatic Imaging with SAF Yield (DAISY), thus enables reliable and reproducible 3D super-localization imaging of biological samples. It is especially suited for multicolor studies and achieves precisions down to 15 nm.
Principle of DAISY and experimental setup
Starting from the efficient and straightforward astigmatic imaging, we propose to push back its previously mentioned limits thanks to a novel approach based on a dual-view setup (Fig. 1a) that combines two features. First, it decouples the lateral and axial detections so that the 3D localization precision can be optimized, and second, it uses two different sources of axial information: a strong astigmatism-based PSF measurement is merged with complementary SAF information that provides an absolute reference. This reference is crucial to render the axial detection insensitive to axial drifts and sample tilts, as well as to chromatic aberrations: unlike most other techniques, which use fiducial markers17 or structure correlation5 to provide these corrections, here we use the fluorophores themselves as absolute and bias-insensitive references. Besides, by applying a large astigmatic aberration on one fluorescence path only, this technique optimizes the axial precision for the collected photon number (Supplementary Fig. 1b) and maintains a slowly varying localization precision over the imaging depth (Supplementary Fig. 1a). Unlike most PSF-shaping implementations found in the literature, which use moderate aberrations5,18,19 to preserve the lateral resolution, the dual-path detection allows one to fully benefit from the astigmatism capabilities. Indeed, as the lateral detection is mostly provided by the aberration-free path, the strong PSF shaping does not compromise the lateral detection. In order to merge the axial and lateral information sources, each is assigned a relative weight according to its localization precision (see Fig. 1b and Methods section). Such a setup exhibits a major improvement in terms of both axial precision and precision-curve flatness, despite only half of the photons being used for the axial localization far from the coverslip compared with a standard single-view PSF measurement microscope. As a result, DAISY exhibits a weakly anisotropic resolution over the whole capture range.
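If the merging weights are taken as inverse variances, which is consistent with "a relative weight according to its localization precision" (an assumption of this sketch; see the Methods section for the exact formulas), the combination of two estimates reads:

```python
def merge_estimates(z1, sigma1, z2, sigma2):
    """Inverse-variance weighted merge of two position estimates.

    Returns the combined position and its expected precision (std).
    Applies equally to SAF + astigmatic z, or UAF + EPI lateral positions.
    """
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    return z, (w1 + w2) ** -0.5
```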
Description of the principle of DAISY and characterization of the precision. a Schematic of the setup. The DAISY module is placed between the microscope and the camera. After the beam splitter cube (BS), the Undercritical Angle Fluorescence (UAF) path contains a cylindrical lens, as well as a physical mask in a relay plane of the back focal plane of the objective to block the SAF photons. These two elements are not present in the epifluorescence (EPI) detection path, which comprises both the UAF and SAF components. The images are formed on the two halves of the same camera. UAF and EPI frames recorded by the camera on a given field (COS-7 cells, α-tubulin immunolabeling, Alexa Fluor 647) are also displayed (top right corner). For each PSF, the x and y widths are measured to obtain the astigmatic axial information, and the numbers of UAF and EPI photons are used to retrieve the SAF axial information. Finally, the axial astigmatic and SAF positions are merged together. Similarly, lateral positions are obtained by merging the lateral positions from the UAF and EPI paths. b Relative weights of the SAF and astigmatic axial detections (top) and of the UAF and EPI lateral positions (bottom) used to merge the positions in DAISY (see Methods section, Position merging section for the exact formulas). c Axial (top) and lateral (bottom) precisions of DAISY. The experimental data was taken on dark red 40-nm fluorescent beads distributed at various depths, each emitting a number of photons similar to Alexa Fluor 647. Five-hundred frames were acquired and the precisions were evaluated from the dispersion of the results for each bead. The CRLB contributions of each detection modality are also displayed, as well as the CRLB of DAISY for typical experimental conditions. d 3D (color-coded depth) DAISY image of actin (COS-7 cell, AF647-phalloidin labeling). e Zoom on the boxed region displayed in d. Scale bars: 5 μm (a) and (d), 2 μm (e)
DAISY localization precision measurement
We first performed the calibration of the astigmatism-based axial detection using 15 μm diameter latex microspheres coated with Alexa Fluor (AF) 647, as described in ref. 20, in order to account for the influence of the optical aberrations on the PSFs and thus eliminate this source of axial bias (see Methods section). Then, to evaluate the localization precision of DAISY, we imaged dark red 40-nm diameter fluorescent beads located at randomly distributed heights with a weak 637 nm excitation so that their emission level matched that of AF647 in typical dSTORM conditions, i.e., 2750 UAF photons and 2750–5100 EPI photons (depending on the depth) per bead per frame on average (Fig. 1c). As it takes advantage of the good performance of the SAF detection near the coverslip, DAISY exhibits a resolution that varies slowly with depth: the lateral and axial precisions reach values as low as 8 nm and 12 nm, respectively (standard deviations), and both remain better than 20 nm in the first 600 nm. Such precision is sufficient to resolve the hollowness of immunolabeled microtubules, as displayed in Supplementary Fig. 2. This feature is rather uncommon with astigmatic imaging implementations, which typically provide at best 20–25 nm axial precision5 and only in a limited axial range of ~300 nm according to Cramér-Rao Lower Bound (CRLB) calculations (Supplementary Figs. 1a and 3); only the dual-objective implementation achieves better precisions, at the cost of a much increased complexity21. It is worth noticing that the experimental precisions are slightly worse than the CRLB, which represents a theoretical ideal. This discrepancy is most likely due to optical aberrations, which are not taken into account by the CRLB, and to the use of centroid detection (see Methods section), which is not expected to reach the lower limit.
Insensitivity to axial detection biases
Our technique thus provides precise 3D super-resolution images (Fig. 1d, e); still, at this precision level, any experimental uncertainty or bias can have devastating effects on the quality of the obtained data. The first source of error to deal with is drift, which typically comes from poor mechanical stability of the stage or from thermal effects. Lateral drifts are well known and can often be corrected directly from the localized data using cross-correlation algorithms22. However, accounting for axial drifts can be much more demanding, since 3D cross-correlation algorithms require long calculation times unless they sacrifice precision. Tracking fiducial markers is also possible, but since it requires a specific sample preparation and is sensitive to photobleaching (unless a dedicated detection channel at a different wavelength is used17), it is not very practical. It is worth noticing that most commercially available focus-locking systems typically stabilize the focus position to ±30 nm at best (Supplementary Fig. 4), which is hardly sufficient for high-resolution imaging. As positions are measured relative to the focal plane with PSF shape measurement methods, axial drifts induce large losses of resolution. On the contrary, SAF detection yields absolute results and is thus not sensitive to drifts. We use this feature to provide a reliable drift correction algorithm: the axial positions detected with the SAF and astigmatic modalities are cross-correlated, which allows us to monitor the focus drift and consequently correct the astigmatism results with an accuracy typically below 6 nm (see Methods section). To highlight the importance of this correction, we plotted the x–z and y–z profiles of a microtubule labeled with AF647 as a function of time with both an astigmatism-based detection and DAISY (Fig. 2a–c): unlike the DAISY profiles, the astigmatism profiles exhibit a clear temporal shift, which results in a dramatic apparent broadening of the filament.
Characterization of the performance of DAISY. a–c Illustration of the effect of axial drifts. a Depth map of microtubules (COS-7 cells, α-tubulin labeled with AF647). The x–z (b) and y–z (c) profiles of the boxed microtubule are plotted for both standard astigmatic imaging and DAISY. The time is color-coded over 1 h to highlight the effect of the temporal drift. d–f Effect of the chromatic aberration. d 2D localization image of microtubules (COS-7 cells, α-tubulin labeled with AF555 and β-tubulin labeled with AF647) sequentially imaged in two different colors (red: AF647, cyan: AF555). The x–z (e) and y–z (f) profiles of the boxed microtubule are plotted for both standard astigmatic imaging and DAISY. g Dual-color depth map of actin (cyan-blue) and tubulin (yellow-red) in COS-7 cells (actin labeled with AF647-phalloidin and α-tubulin labeled with a 560-nm excitable DNA-PAINT imager). h Influence of the sample tilt on the axial detection. The same field of 20-nm dark red fluorescent beads deposited on a coverslip was imaged with both standard astigmatic imaging and DAISY and the results were averaged over 500 frames to suppress the influence of the localization precision. The detected depth profile is plotted along the tilt axis. i Illustration of the image lateral deformation induced by the astigmatism. For the same acquisition (COS-7 cells, α-tubulin labeled with AF647), 2D images were reconstructed from the lateral positions measured on both the astigmatic UAF (in cyan) and the unastigmatic EPI (in red) detection paths of our setup, before the deformation correction (left) and after (right). The whole field and zooms on the boxed regions are both displayed. Scale bars: 2 μm (a) and (d), 5 μm (g) and (i) left, 1 μm (i) right insets
In the framework of quantitative biological studies, the axial detection can furthermore be hampered by the axial chromatic aberration due to dispersion by the lenses, including the objective lens. If uncorrected, such a chromatic shift biases the results of multicolor sequential acquisitions, thus hindering colocalization. However, as DAISY provides absolute axial information thanks to the SAF measurement, it is not sensitive to this chromatic aberration. We performed a two-color sequential acquisition on microtubules labeled with AF647 and AF555 (Fig. 2d–f). It illustrates the chromatic dependence inherent in standard PSF shaping detection (which exhibits chromatic shifts as large as 70 nm) and the insensitivity of DAISY to this effect (residual chromatic shift below 5 nm). Because of the chromatic shift, the uncorrected astigmatism results appear somewhat inconsistent, whereas the colocalization is much more obvious with DAISY. Consequently, unbiased dual-color 3D images of biological samples can be obtained through sequential acquisitions: we illustrate this on a sample with the actin and the tubulin labeled with AF647 and a 560-nm-excitable DNA-PAINT fluorophore, respectively (Fig. 2g).
It is well known that axial biases in PSF shaping measurements can further stem from tilts of the stage or sample holder, as well as from field-dependent geometrical optical aberrations. These issues were thoroughly studied by von Diezmann et al., who reported discrepancies larger than 100 nm over one field of view23. Although assessing tilts on biological samples is difficult with PSF measurement methods, DAISY makes this measurement straightforward: the absolute reference provided by the SAF detection can be used to measure the astigmatic axial positions detected for molecules at the coverslip as a function of their lateral positions, and then to correct the tilt. We performed DAISY acquisitions on 20-nm diameter fluorescent beads at the coverslip and displayed the z values obtained with both an astigmatism-based detection and DAISY. While the former exhibits a clear tilt ranging from −30 to +30 nm over a 30 μm wide field, the latter is insensitive to the tilt, with less than 2 nm axial discrepancy between the two sides of the field (Fig. 2h).
Aside from tilt effects, field-dependent aberrations also induce PSF shape deformations, leading to axial biases. Although we do not actually perform corrections, DAISY is less sensitive to that effect compared with standard astigmatism imaging: on the one hand, the SAF detection relies on intensity measurement, and on the other hand, as DAISY uses a high astigmatism, i.e., strongly aberrated PSFs, it exhibits little sensitivity to remaining field aberrations. To illustrate this phenomenon, we compared tilt-corrected axial positions obtained with 20-nm diameter fluorescent beads deposited on a coverslip between a standard weaker astigmatic detection (350 nm between the two focal lines, close to the values commonly found in the literature) and DAISY. We got rid of the dispersion due to the localization precision by averaging the results over time for each bead and we plotted the corresponding detected depth histograms over one 25-μm wide field of view (Supplementary Fig. 5). The widths of the distributions evidence a much lower impact on the DAISY detection (standard deviation equal to 21 nm) than on the standard astigmatic detection (standard deviation equal to 45 nm). In other words, the strong astigmatism is less sensitive to aberrations than a conventional astigmatism, and the biases are even further mitigated by the coupling with the SAF detection, which relies on photon counting, and is thus weakly sensitive to PSF shapes.
To illustrate the accuracy of the axial correction of the astigmatism data using the SAF measurement, we performed measurements on 40-nm fluorescent beads, both at the coverslip and distributed in the volume (Supplementary Fig. 6). In both cases, the axial correction algorithm proves very accurate (1 nm average discrepancy at the coverslip and 3 nm in the volume, which is well below the localization precision). The dispersion of the values increases for beads in the volume; this can be attributed either to the decay of the SAF signal in the volume, which causes the SAF localization precision to become non-negligible, or to the influence of the previously mentioned field-dependent aberrations, which bias the astigmatic positions depending on the position in the field. This effect is present in conventional single-view PSF shaping imaging too, but it is difficult to detect unless a specifically designed calibration sample is used. The dispersion due to field-dependent aberrations could be mitigated by using a spatially resolved PSF calibration, as in ref. 23.
Lastly, the optical aberrations applied in PSF shaping-based setups not only deform the PSFs, but they may also distort the field itself laterally. For instance, when astigmatism is used, the system has two different focal lengths in x and y, which implies that the magnification differs between x and y. While this effect is of the order of a few percent, it definitely biases the results whenever lateral distances must be measured precisely, unless this magnification discrepancy is duly calibrated. With DAISY, evaluating this image distortion is straightforward thanks to the non-astigmatic detection path: a cross-correlation performed between the astigmatic (UAF) and the unaberrated (EPI) 2D SMLM images gives the optimal affine transformation to be applied to the astigmatic image. This combination of translation, rotation, and magnification directly provides the magnification difference between the x and y axes, which amounts to approximately 3.5% in our case (Fig. 2i). By applying the optimal affine transformation, the deformation is then corrected: the final lateral discrepancy between the two images was found to be below 6 nm over the whole 25 μm-wide field in Fig. 2i (see Supplementary Fig. 7 for a more detailed measurement of the registration error). It should be noted, however, that placing the cylindrical lens in the Fourier plane would avoid such a deformation, although most reported PSF shaping setups do not use this optical configuration. Also, more complex PSF shapes might induce complex field distortions, potentially making the correction more difficult.
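To make the registration step concrete, here is a minimal Python sketch of a least-squares affine estimation between the two channels. It assumes the localizations have already been paired between the UAF and EPI images (the published method obtains the transform by cross-correlating the two reconstructed 2D images); function and variable names are ours, not the authors':

```python
import numpy as np

def fit_affine(uaf_xy, epi_xy):
    """Least-squares affine transform mapping UAF onto EPI
    localizations (both (n, 2) arrays of matched positions).

    Returns A (2x2) and t (2,) such that epi ~ uaf @ A.T + t.
    The anisotropy of A reflects the x/y magnification
    mismatch introduced by the cylindrical lens.
    """
    n = uaf_xy.shape[0]
    M = np.hstack([uaf_xy, np.ones((n, 1))])             # (n, 3)
    params, *_ = np.linalg.lstsq(M, epi_xy, rcond=None)  # (3, 2)
    A, t = params[:2].T, params[2]
    return A, t
```

The corrected astigmatic positions are then obtained by applying this transform to the UAF localizations.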
Multicolor 3D super-resolution imaging of biological samples
To evidence the performance of DAISY for unbiased, reproducible, and quantitative experiments, we used it to image biological samples. We illustrate the performance in terms of resolution by performing acquisitions on living E. coli bacteria adhered to a coverslip. The envelope of the bacteria was labeled with both AF647 and AF555 using a click chemistry process (see Methods section)24,25. Since the lipopolysaccharide (LPS) layer is thin in Gram-negative bacteria, this is a good sample to observe the influence of the localization precision. We present in Fig. 3a, b 2D and 3D images of a region of interest and in Fig. 3c an x–z slice along the line displayed in Fig. 3a. The measured diameter of the bacterium is around 1 μm, yet the image does not exhibit a strong loss of resolution at its edges. To evidence this, we also plotted the lateral and axial histograms in the boxed regions (Fig. 3c). The axial standard deviations were found to be around 30 nm and 45 nm at the bottom and at the top of the cell, respectively, while lateral standard deviations were around 27 nm in both colors. Taking into account the thickness of the LPS layer (<10 nm), the size of the label, i.e., the DBCO-sulfo-biotin and streptavidin-AF construction (10 nm), and the effect of the curvature of the bacterium over the width of the area used for the analysis (10 nm), these values are consistent with the localization precision curves plotted in Fig. 1c. As a comparison, the results obtained on the same sample with uncorrected astigmatism and with DONALD are provided in Supplementary Fig. 8. Like DAISY, DONALD features an absolute detection, insensitive to both chromatic aberration and axial drift. However, the axial precision of DONALD deteriorates sharply with depth due to the decay of the SAF signal; thus the top half of the sample (beyond 500 nm) is hardly visible, whereas DAISY clearly permits imaging up to 1 μm. Uncorrected astigmatism has the same capture range as DAISY, but since it lacks the absolute information, it exhibits an axial shift between the two colors, as well as a broadening of the histogram widths due to the axial drift.
DAISY results obtained from biological samples. a 2D SMLM image of living E. coli bacteria labeled with both AF647 (red) and AF555 (cyan) at the membrane. b 3D view of the field displayed in a. The depth is color-coded (a single colormap is used for both AF647 and AF555). c x–z slice along the line displayed in a and axial and lateral profiles in the boxed regions. The σ values stand for the standard deviations of the distributions. d–f 2D dual-color images of rat hippocampal neurons in which adducin and β2-spectrin were labeled with AF647 and AF555, respectively. g Lateral profile along the axis of the yellow box displayed in e. h x–z slice along the green box displayed in f. Scale bars: 2 μm (a) and (e), 5 μm (d), 1 μm (f)
We then used DAISY to visualize the periodic submembrane scaffold present along the axon of cultured neurons26. We imaged the 3D organization of two proteins within this scaffold: adducin (labeled with AF647), which associates with the periodic actin rings, and β2-spectrin (labeled with AF555), which connects the actin rings (Fig. 3d–f). The lateral resolution allowed us to easily resolve the alternating patterns of adducin rings and β2-spectrin epitopes and their 190 nm periodicity (Fig. 3g)27. Thanks to the axial resolution of DAISY, we were also able to resolve the submembrane localization of both proteins across the whole diameter of the axon at 600 nm depth (Fig. 3h).
Extended depth imaging
Taking advantage of the features of DAISY for unbiased sequential imaging, we propose an implementation allowing single-color and multicolor imaging over wider depth ranges by stacking the results of multiple acquisitions on the same field at different heights. Although PSF measurement methods also allow this type of acquisition, DAISY is especially suited to it thanks to its previously described intrinsic bias correction features. Since the SAF signal quickly decays with depth in the first 500 nm above the coverslip, the absolute reference is accessible only in the first stack. Still, as it provides unbiased results, the top of this first stack serves as an absolute reference for the next stack, which is matched to the previous one using an axial position cross-correlation algorithm. In other words, the first 1 μm unbiased slice is interlaced with the following one, which contains the positions between 600 nm and 1.6 μm (as described in the schematic in Fig. 4a). The absolute reference is thus transferred from the first slice onto the second, which becomes insensitive to axial detection biases. Similarly, the third slice, containing positions from 1.2 to 2.2 μm, is intertwined with the second by position cross-correlation, and thus it also benefits from the absolute reference and the bias insensitivity that it brings. Several slices can be recorded and merged together to obtain an extended depth image; still, this is limited by photobleaching (although this can be mitigated by using (DNA-)PAINT labeling), as well as by the aberrations inherent in depth imaging, which cause the axial and lateral precisions to deteriorate away from the coverslip. Moreover, registration errors are likely to accumulate as the number of slices increases, so using fiducial markers might be necessary to merge more slices. We illustrate the method with a single-color acquisition series (COS-7 cells, α-tubulin and β-tubulin labeled with AF647) in Fig. 4b–d: the stack of the three slices (Fig. 4e) clearly shows information in deep regions (beyond 1 μm) that would not be accessible with a single acquisition. We then imaged a dual-label tubulin-clathrin sample (COS-7 cells, light chain and heavy chain clathrin labeled with AF647, α-tubulin and β-tubulin labeled with a 560-nm-excitable DNA-PAINT imager) in three sequential acquisitions while shifting the focus by 600 nm between each of them to obtain a 3D dual-color dataset with a 2 μm imaging range (Fig. 4f). Aside from the fact that no axial mismatch between the subsequent acquisitions is observed, the localization precision remains satisfactory beyond 1.5 μm, as it is limited only by the effect of the spherical aberration and sample-induced aberrations. To evidence this, we measured the dispersion of the localizations on two clathrin spheres located close to the ventral membrane (200 nm depth) and the dorsal membrane (1500 nm depth), respectively (Fig. 4g–h, Supplementary Fig. 9). The lateral and axial standard deviations were found to be 16 nm in xy and 17 nm in z at 200 nm depth, and 20 nm in xy and 27 nm in z at 1500 nm depth; as expected, the axial precision is more affected by the aberrations in the volume than the lateral precision.
Extended depth imaging principle and results. a Description of the acquisition protocol: several sequential acquisitions are performed at different focus positions with a sufficient overlap between them to enable the stitching of the different slices (the focus is typically shifted by 600 nm between successive acquisitions, while the capture range is around 1 μm for each acquisition). b–d 3D images reconstructed from single-color tubulin acquisitions performed at three different focus positions (COS-7 cells, α-tubulin and β-tubulin labeled with AF647). e Final 3D image obtained by stitching the three consecutive acquisitions. The total range is around 2.2 μm. f 3D extended range dual-color image of clathrin (red-yellow) and tubulin (blue-green) obtained from three sequential acquisitions (for each color) at different heights (COS-7 cells, heavy chain and light chain clathrin labeled with AF647, α-tubulin and β-tubulin labeled with a 560-nm excitable DNA-PAINT imager). g, h x–y and x–z slices of two clathrin spheres taken from (f) at two different depths (200 and 1500 nm). The axial histograms of the x–z images are displayed on the right. Scale bars: 5 μm (b–f), 250 nm (g, h)
Thanks to the decoupling of the axial and lateral detections and to the combination of two axial SMLM techniques yielding complementary information, we could achieve reliable and unbiased imaging that enables quantitative studies on biological samples. DAISY offers a slowly varying, weakly anisotropic resolution over the whole micron-wide capture range, with a localization precision down to 15 nm. Thanks to both the SAF and the astigmatic detections, DAISY provides absolute axial results that prove to be insensitive to axial drifts and sample tilts, as well as to chromatic aberration. These features make it especially suited for imaging biological samples near the coverslip, with applications in studies of cell adhesion, motility processes, bacteria, or neuronal axons and dendrites. Moreover, stacking acquisitions performed at different heights also enables reproducible and reliable studies at greater depths, up to a few micrometers. Finally, as the implementation of the dual-view detection scheme we use is straightforward, it would also benefit any PSF measurement method other than astigmatism, such as the double-helix PSF6, self-bending PSF7, saddle-point PSF8, and tetrapod28, which offer better performances in terms of localization precision and capture range.
Optical setup
A schematic of the optical setup used is presented in Fig. 1a. We used a Nikon Eclipse Ti inverted microscope with a Nikon Perfect Focus System. The excitation was performed using five different lasers: 637 nm (Obis 637LX, 140 mW, Coherent), 561 nm (Genesis MX 561 STM, 500 mW, Coherent), 532 nm (Verdi G5, 5 W, Coherent), 488 nm (Genesis MX 488 STM, 500 mW, Coherent), and 405 nm (Obis 405LX, 100 mW, Coherent). The corresponding 390/482/532/640 or 390/482/561/640 multiband filters (LF405/488/532/635-A-000 and LF405/488/561/635-A-000, Semrock) were used. The fluorescence was collected through a Nikon APO TIRF ×100 1.49 NA oil immersion objective lens, sent into the DAISY module and recorded on the two halves of a 512 × 512-pixel EMCCD camera (iXon3, Andor). The camera was placed at the focal plane of the module (magnification 1.67) and the optical pixel size was ~100 nm. Finally, the imaging paths were calibrated in intensity to compensate for the non-ideality of the 50–50 beam splitter, as well as for the reflection on the cylindrical lens surface (this measurement was performed for each fluorescence wavelength). The object focal plane of the EPI path was typically at the coverslip (z = 0 nm) and the UAF path had two focal lines, at z = 0 nm and z = 800 nm for the y and x axes, respectively.
COS-7 cells were grown in DMEM with 10% FBS, 1% L-glutamine, and 1% penicillin/streptomycin (Life Technologies) at 37 °C and 5% CO2 in a cell culture incubator. Several days later, they were plated at low confluency on cleaned round 25 mm diameter high-resolution 1.5H glass coverslips (Marienfeld, VWR). After 24 h, the cells were washed three times with PHEM solution (60 mM PIPES, 25 mM HEPES, 5 mM EGTA, and 2 mM Mg acetate adjusted to pH 6.9 with 1 M KOH) and fixed for 12 min in 4% PFA, 0.2% glutaraldehyde and 0.5% Triton; they were then washed three times in PBS (Invitrogen, 003000). Up to this fixation step, all chemical reagents were pre-warmed to 37 °C. The cells were post-fixed for 10 min with PBS + 0.1% Triton X-100, reduced twice for 10 min with NaBH4, and washed in PBS three times before being blocked for 15 min in PBS + 1% BSA.
The labeling step varied according to the required sample: in the case of actin labeling, the cells were incubated for 20 min with 3.3 nM phalloidin-AF647 (Thermo Fisher, A22287) in the dSTORM imaging buffer (Abbelight) before starting the acquisition, without removing the dSTORM buffer containing the phalloidin-AF647. In contrast, immunolabeling of tubulin and clathrin required more preparation steps.
For AF647 α-tubulin, the cells were incubated for 1 h at 37 °C with 1:300 mouse anti-α-tubulin antibody (Sigma Aldrich, T6199) in PBS + 1% BSA. This was followed by three washing steps in PBS + 1% BSA, incubation for 45 min at 37 °C with 1:300 goat anti-mouse AF647 antibody (Life Technologies, A21237) diluted in PBS + 1% BSA, and three more washes in PBS.
For AF647 β-tubulin and AF555 α-tubulin, the cells were incubated for 1 h at 37 °C with 1:300 rabbit anti-β-tubulin antibody (Sigma Aldrich, T5293) in PBS + 1% BSA. This was followed by three washing steps in PBS + 1% BSA, incubation for 45 min at 37 °C with 1:300 goat anti-rabbit AF555 antibody (Life Technologies, A21430) diluted in PBS + 1% BSA, and three more washes in PBS + 1% BSA. Then they were incubated again for 1 h at 37 °C with 1:300 mouse anti-α-tubulin antibody (Sigma Aldrich, T6199) in PBS + 1% BSA, washed three times, incubated for 45 min at 37 °C with 1:300 goat anti-mouse AF647 antibody (Life Technologies, A21237) diluted in PBS + 1% BSA, and washed three more times in PBS.
For AF647 α-tubulin and β-tubulin, the cells were incubated for 1 h at room temperature with 1:300 mouse anti-β-tubulin antibody (Sigma Aldrich, T5293) in PBS + 1% BSA. This was followed by three washing steps in PBS + 1% BSA, incubation for 1 h at 37 °C with 1:300 mouse anti-α-tubulin antibody (Sigma Aldrich, T6199) diluted in PBS + 1% BSA, three more washes in PBS + 1% BSA, incubation for 45 min at 37 °C with 1:300 goat anti-mouse AF647 antibody (Life Technologies, A21237) diluted in PBS + 1% BSA, and three more washes in PBS.
For AF647 heavy chain and light chain clathrin and DNA-PAINT α-tubulin and β-tubulin, the cells were incubated for 1 h at 37 °C with 1:400 mouse anti-light chain clathrin antibody (Sigma Aldrich, C1985) in PBS + 1% BSA and washed three times with PBS + 1% BSA, then incubated again for 1 h at 37 °C with 1:400 mouse anti-heavy chain clathrin antibody (Sigma Aldrich, C1860) in PBS + 1% BSA and washed three times with PBS + 1% BSA. Then, they were incubated for 45 min at 37 °C with 1:400 anti-mouse AF647 antibody (Life Technologies, A21237) in PBS + 1% BSA, washed three times with PBS + 1% BSA, and incubated again for 1 h at room temperature with 1:400 mouse anti-β-tubulin antibody (Sigma Aldrich, T5293) in PBS + 1% BSA. This was followed by three washing steps in PBS + 1% BSA, incubation for 1 h at 37 °C with 1:400 mouse anti-α-tubulin antibody (Sigma Aldrich, T6199) diluted in PBS + 1% BSA, three more washes in PBS + 1% BSA, incubation for 2 h at 37 °C with 1:100 anti-mouse-D1 Ultivue secondary antibody diluted in antibody dilution buffer (Ultivue-2 kit, Ultivue), and three more washes in PBS.
In all cases, after the immunolabeling of tubulin and/or clathrin, a post-fixation step was performed using PBS with 3.6% formaldehyde for 15 min. The cells were washed in PBS three times and then quenched for 10 min with 50 mM NH4Cl (Sigma Aldrich, 254134), followed by three additional washes in PBS.
To prepare the neuron samples, rat hippocampal neurons from E18 pups were cultured on 18 mm coverslips at a density of 6000 cm−2 according to previously published protocols29, following the guidelines established by the European Animal Care and Use Committee (86/609/CEE) and with the approval of the local ethics committee (agreement D18-055-8). After 16 days in culture, neurons were fixed using 4% PFA in PEM (80 mM PIPES, 5 mM EGTA, and 2 mM MgCl2, pH 6.8) for 10 min. After rinsing in 0.1 M phosphate buffer (PB), neurons were blocked for 60 min at room temperature in immunocytochemistry buffer (ICC: 0.22% gelatin, 0.1% Triton X-100 in PB). Following this, neurons were incubated overnight at 4 °C with a chicken primary antibody against map2 (abcam, ab5392), a mouse primary antibody against β2-spectrin (BD Bioscience, 612563), and a rabbit primary antibody against adducin (abcam, ab51130) diluted in ICC, then, after ICC rinses, with AF488-, AF555-, and AF647-conjugated secondary antibodies for 1 h at 23 °C.
The E. coli K12 (MG1655) cells were grown in 2YT medium (Sigma; tryptone 16.0 g.L−1, yeast extract 10.0 g.L−1, NaCl 5.0 g.L−1) at 37 °C under agitation (180 rpm). Overnight cultures were diluted 100 times in fresh medium (final volume 300 μL) containing Kdo-N3 (1.0 mM). Bacteria were incubated at 37 °C for 9 h under agitation (180 rpm). Then 200 μL of the obtained suspension were washed three times with PBS buffer (200 μL, 9700 × g, 1 min, room temperature). The pellet was re-suspended in 200 μL of a solution of DBCO-Sulfo-Biotin (JenaBioscience, CLK-A116) (0.50 mM in PBS buffer) and the suspension was vigorously agitated for 90 min at room temperature. Bacteria were washed three times with PBS buffer (200 μL, 9700 × g, 1 min, room temperature). The pellet was re-suspended in a solution of Streptavidin-AF647/Streptavidin-AF555 (20 μg.mL−1 each) (Invitrogen, Thermo Fisher Scientific, S21374 and S32355) in PBS containing BSA (1.0 mg.mL−1, 200 μL) and the suspension was agitated at room temperature for 90 min in the dark. Bacteria were then washed three times with PBS buffer (200 μL, 9700 × g, 1 min, room temperature). The pellet was re-suspended in PBS buffer (400 μL) and stored at 4 °C until analysis.
Fluorescent beads sample preparation
Twenty-nanometer dark red fluorescent bead samples (Fig. 2h, Supplementary Fig. 5) were prepared using a 5 × 10−7 dilution of the initial solution (F8783, Thermo Fisher). We performed the dilution in PBS + 5% glucose to match the refractive index of the dSTORM imaging buffer, and we waited 5 min before starting the acquisition so that the beads had time to deposit on the coverslip.
One-hundred-nanometer diameter TetraSpeck fluorescent bead samples (Supplementary Fig. 4) were prepared by diluting the initial solution (T7279, Thermo Fisher) at 5 × 10−4 in PBS + 5% glucose, and we waited 5 min before starting the acquisition to let the beads deposit on the coverslip.
The samples of 40-nm diameter dark red fluorescent beads deposited on a coverslip (Supplementary Figs. 6a and 7a–c) were obtained by diluting the initial solution (10720, Thermo Fisher) at 5 × 10−7 in PBS + 5% glucose, and we waited 5 min before starting the acquisition to let the beads deposit on the coverslip.
The samples of 40-nm diameter dark red fluorescent beads randomly distributed in the imaging volume (Fig. 1c, Supplementary Fig. 6b) were obtained by taking fixed, unlabeled COS-7 cells and adding 500 μL of bead solution (10720, Thermo Fisher) diluted at 5 × 10−7 in PBS for 5 min to let the beads deposit, before removing the solution and replacing it with PBS + 5% glucose. Beads stuck on the upper side of the membrane were thus located at random heights.
dSTORM and DNA-PAINT imaging on biological samples was performed using an oblique epifluorescence illumination configuration. To switch most of the molecules into a dark state, we used a dSTORM buffer (Abbelight Smart kit). The sample was illuminated with an irradiance of ~4 kW.cm−2 until a sufficient density of molecules was obtained, typically below one molecule per 4 μm2 (see Supplementary Note 1 for a study of the influence of the molecule density per frame on the localization performance). We then started the data acquisition with a 50-ms (for AF647) or 100-ms (for AF555) exposure time and an EMCCD gain of 150. The total number of acquired frames was typically between 15,000 and 30,000 per acquisition.
For sequential dSTORM and DNA-PAINT acquisitions, the dSTORM acquisition was first performed as described above. Then, we removed the dSTORM buffer and added a 0.5 nM dilution of DNA-PAINT imagers in imaging buffer (I1-560, Ultivue-2 kit, Ultivue). To achieve a single-molecule regime, the sample was illuminated with an irradiance of ~4 kW.cm−2, and we then started the data acquisition with a 100-ms exposure time and an EMCCD gain of 150. The total number of acquired frames was around 50,000.
Performance measurements on fluorescent beads were done at low illumination irradiances (0.15 kW.cm−2 for 20-nm diameter dark red beads and 0.025 kW.cm−2 for TetraSpeck beads and 40-nm diameter dark red beads). The beads were immersed in PBS + 5% glucose, and the exposure time and EMCCD gain were 50 ms and 150, respectively. Except for the long-term axial drift tracking experiment, 500 frames were recorded for each performance characterization acquisition.
The acquisition was performed using the Nemo software (Abbelight).
Each 512 × 512-pixel frame was pre-processed by subtracting the pixel-by-pixel temporal median of the previous 10 frames in order to remove the slowly varying background without altering the number of photons in the PSFs. The filtered frames were then split into two parts corresponding to the UAF and EPI paths of the DAISY module, respectively. On the 512 × 256-pixel sub-frames, the PSFs were detected using a second-order wavelet filtering associated with an intensity threshold (typically 1.0 for the EPI channel, 0.8 for the UAF channel). Each PSF was characterized using a center-of-mass detection to retrieve the lateral positions xEPI, yEPI, xUAF, and yUAF, and a Gaussian fitting to assess the PSF widths \(w_x^{{\mathrm{UAF}}}\), \(w_y^{{\mathrm{UAF}}}\), \(w_x^{{\mathrm{EPI}}}\), and \(w_y^{{\mathrm{EPI}}}\). A photon count was also performed over a 2 × 2 μm square area centered on the PSF to determine the numbers of photons NEPI and NUAF. A filtering step based on photon numbers (typically 500 photons minimum for AF647), EPI PSF widths (\(80\,{\mathrm{nm}}\, < \,\sqrt {w_x^{{\mathrm{EPI}}}w_y^{{\mathrm{EPI}}}}\, < \,180\,{\mathrm{nm}}\)) and EPI PSF anisotropy (\(0.67\, < \,w_x^{{\mathrm{EPI}}}/w_y^{{\mathrm{EPI}}}\, < \,1.5\)) was then applied to remove false positive detections. Furthermore, pairs of localizations closer than 2 μm were discarded to avoid biases due to the signal from neighboring PSFs. Corrections were applied to the photon numbers (as mentioned in the Optical setup section) and to the lateral positions xUAF and yUAF (to compensate for the image deformation induced by the astigmatism, as illustrated in Fig. 2i and Supplementary Fig. 7). Afterwards, the axial positions were calculated: the values of zSAF were obtained using the theoretical curve provided in ref. 15, whereas those of zastigmatic were retrieved by fitting \(w_x^{{\mathrm{UAF}}} - w_y^{{\mathrm{UAF}}}\) to the calibration curve (see the Astigmatism calibration section) using a least-squares calculation. Lateral drifts were then corrected using a temporal cross-correlation algorithm. Furthermore, the zastigmatic positions were corrected using the SAF reference (see Astigmatism correction algorithm section).
Finally, the values of zSAF and zastigmatic were merged together, as well as the values of xEPI and xUAF, yEPI and yUAF (as described in the Position merging section).
All this processing was performed using home-written Python code.
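As an illustration of the first pre-processing step, the following minimal Python sketch implements a rolling temporal-median background subtraction. It is a simplified stand-in for the home-written code, with the array layout and function names being our own assumptions:

```python
import numpy as np

def subtract_temporal_median(frames, window=10):
    """Background removal for an SMLM stack.

    frames: (T, H, W) array of raw camera frames.
    For each frame, the pixel-wise median of the previous
    `window` frames is subtracted; this removes the slowly
    varying background while leaving blinking PSFs intact.
    The first frame is left uncorrected (no history yet).
    """
    filtered = np.empty(frames.shape, dtype=float)
    for t in range(frames.shape[0]):
        if t == 0:
            background = np.zeros(frames.shape[1:])
        else:
            background = np.median(frames[max(0, t - window):t], axis=0)
        filtered[t] = frames[t] - background
    return filtered
```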
Astigmatism calibration
Although in the literature the calibration of axial detection methods is often performed by using fluorescent beads deposited on a coverslip and defocusing the objective, this method is biased since it does not take into account the effect of the spherical aberration, which affects both the position of the focal plane (the so-called focal shift) and the shapes of the PSFs. While the former can be compensated using a calculated correction factor depending on several experimental parameters, there is, to our knowledge, no simple way to correct the latter. Thus, we chose to perform the calibration of the astigmatic detection using a sample of known geometry in the nominal acquisition conditions, i.e., with a fixed focal plane and dSTORM fluorophores. More specifically, we used a sample of 15 μm microspheres decorated with fluorophores (either AF647 or AF555), as described in ref. 20. By measuring the position of the center and the radius of each sphere, it is possible to calculate the expected axial position of each molecule from the measurement of its lateral position. Such an acquisition provides the lookup table giving the correspondence between PSF widths and axial positions.
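The geometry underlying this calibration can be summarized in a few lines of Python. This sketch assumes a sphere resting on the coverslip at z = 0 with molecules located on its lower surface; the function and variable names are ours:

```python
import numpy as np

def expected_z_on_sphere(x, y, xc, yc, radius):
    """Expected axial position of a fluorophore on the lower
    surface of a microsphere in contact with the coverslip
    (z = 0), given its lateral position and the sphere center
    (xc, yc). Valid for lateral distances below the radius.
    """
    rho = np.hypot(x - xc, y - yc)   # lateral distance from the center
    return radius - np.sqrt(radius**2 - rho**2)
```

Pairing these expected axial positions with the measured PSF width differences then yields the calibration lookup table.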
Astigmatism correction algorithm
Before combining the two sources of axial information, the astigmatic positions were corrected in order to make them benefit from the SAF absolute detection. This was accomplished using a cross-correlation algorithm between the SAF and astigmatic positions measured for each molecule. As the SAF detection is efficient mostly close to the coverslip, we restricted the data to the subset of molecules satisfying zSAF ∈ [−50 nm, 300 nm] in order to perform the cross-correlation in the domain where both axial information sources are precise and reliable.
First, we removed the tilt: the zSAF − zastigmatic axial discrepancy was calculated for each molecule from the data satisfying zSAF ∈ [−50 nm, 300 nm]. This spatially resolved axial discrepancy information was used to calculate the tilt by fitting a plane to the data, which provided the tilt direction and amplitude. The astigmatic positions were corrected accordingly.
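A plane fit of this kind can be written compactly as a linear least-squares solve; the following Python sketch (our naming, not the authors' exact code) returns the tilt parameters:

```python
import numpy as np

def fit_tilt_plane(x, y, dz):
    """Fit dz = a*x + b*y + c to the zSAF - zastigmatic
    discrepancies of molecules near the coverslip; (a, b)
    give the tilt direction and amplitude, c a constant offset.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, dz, rcond=None)
    return a, b, c

# The astigmatic positions are then corrected as
# z_astig_corrected = z_astig + a * x + b * y + c
```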
The data were then divided into subsets of 1000 frames and distributed into series of 3D images with 100 nm lateral and 15 nm axial pixel sizes, each of them corresponding to a 1000-frame subset. For each subset, the SAF and astigmatism 3D images were cross-correlated, allowing only axial displacements to maximize the overlap, which yielded the correction to be applied to the astigmatic positions for the subset. Then, the results obtained for all the subsets were pooled and interpolated to generate the axial drift curve. Thanks to this correction, the astigmatic results were made absolute (i.e., referenced to the coverslip) and insensitive to both the chromatic aberration and the axial drift.
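To illustrate the per-subset axial registration, here is a simplified one-dimensional version in Python: it cross-correlates axial histograms instead of the full 3D images described above, and all parameter values are assumptions on our part:

```python
import numpy as np

def axial_offset(z_saf, z_astig, bin_nm=15.0, z_range=(-200.0, 500.0)):
    """Axial shift between the SAF and astigmatic z distributions
    of one 1000-frame subset, from the peak of their
    cross-correlation. A positive output means the astigmatic
    positions must be shifted up by that amount to match the
    SAF reference.
    """
    bins = np.arange(z_range[0], z_range[1] + bin_nm, bin_nm)
    h_saf, _ = np.histogram(z_saf, bins=bins)
    h_ast, _ = np.histogram(z_astig, bins=bins)
    corr = np.correlate(h_saf - h_saf.mean(),
                        h_ast - h_ast.mean(), mode="full")
    lag = int(np.argmax(corr)) - (len(h_ast) - 1)
    return lag * bin_nm
```

Interpolating the per-subset offsets over time then gives the axial drift curve used for the correction.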
It is worth noting that the 1000-frame division corresponds to a 50-s sampling of the axial drift (with a 50-ms exposure time). This value seems reasonable given the slow evolution of the drift: it results from a compromise between the bandwidth of the correction (a finer sampling allows a better correction of higher drift frequencies) and the robustness of the algorithm (if the amount of data is too low, the algorithm may not converge adequately or may yield a wrong value). Shorter slices might be used with higher density samples. Similarly, acquisitions featuring a lower SNR or photon number would require larger pixels or larger slices to compensate for the worsening of the localization precision. The final accuracy of the correction appears to be typically better than 3 nm (this was obtained by measuring the height of fluorophores deposited at the coverslip outside of cells).
Position merging
In DAISY acquisitions, the lateral positions were obtained by combining the two sources of lateral information according to their uncertainties (the CRLB values were used for that purpose). The exact formula follows the normal distribution combination law:
$$x^{{\mathrm{DAISY}}} = \left( {\frac{{x^{{\mathrm{UAF}}}}}{{(\sigma _x^{{\mathrm{UAF}}})^2}} + \frac{{x^{{\mathrm{EPI}}}}}{{(\sigma _x^{{\mathrm{EPI}}})^2}}} \right)\Bigg/\left( {\frac{1}{{(\sigma _x^{{\mathrm{UAF}}})^2}} + \frac{1}{{(\sigma _x^{{\mathrm{EPI}}})^2}}} \right)$$
$$y^{{\mathrm{DAISY}}} = \left( {\frac{{y^{{\mathrm{UAF}}}}}{{(\sigma _y^{{\mathrm{UAF}}})^2}} + \frac{{y^{{\mathrm{EPI}}}}}{{(\sigma _y^{{\mathrm{EPI}}})^2}}} \right)\Bigg/\left( {\frac{1}{{(\sigma _y^{{\mathrm{UAF}}})^2}} + \frac{1}{{(\sigma _y^{{\mathrm{EPI}}})^2}}} \right)$$
where \(\sigma _i^{{\mathrm{UAF}}}\) and \(\sigma _i^{{\mathrm{EPI}}}\) are the localization precisions in the direction i for the UAF and EPI detections, respectively (i.e., the standard deviations of the positions).
Similarly, the two sources of axial information were merged according to their uncertainties:
$$z^{{\mathrm{DAISY}}} = \left( {\frac{{z^{{\mathrm{SAF}}}}}{{(\sigma _z^{{\mathrm{SAF}}})^2}} + \frac{{z^{{\mathrm{astigmatic}}}}}{{(\sigma _z^{{\mathrm{astigmatic}}})^2}}} \right)\Bigg/\left( {\frac{1}{{(\sigma _z^{{\mathrm{SAF}}})^2}} + \frac{1}{{(\sigma _z^{{\mathrm{astigmatic}}})^2}}} \right)$$
where \(\sigma _z^{{\mathrm{SAF}}}\) and \(\sigma _z^{{\mathrm{astigmatic}}}\) are the axial localization precisions of the SAF and the astigmatic detections, respectively.
This combination optimizes the final precision, i.e., it provides the best precision attainable from the two sources given their respective uncertainties.
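Since the combination law is given explicitly above, it translates directly into code; this minimal Python sketch (function and variable names are ours) merges two estimates of the same coordinate and returns the resulting precision:

```python
import numpy as np

def merge_positions(p1, sigma1, p2, sigma2):
    """Inverse-variance weighted combination of two estimates
    of the same coordinate (e.g. z from SAF and astigmatism).
    Returns the merged position and its standard deviation.
    """
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    merged = (w1 * p1 + w2 * p2) / (w1 + w2)
    return merged, np.sqrt(1.0 / (w1 + w2))

# Example with assumed (hypothetical) precisions of 15 and 25 nm:
# z_daisy, sigma_z = merge_positions(z_saf, 15.0, z_astig, 25.0)
```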
The relative weights used for DAISY are shown in Fig. 1b. It is worth noting that since the localization precisions vary with depth, the corresponding weights vary accordingly. Notably, the weight of the SAF detection is larger than that of the axial astigmatic detection at the coverslip, but it quickly dwindles to almost zero beyond 500 nm. Similarly, the (unastigmatic) EPI detection is more precise within the first depth of field, whereas the (astigmatic) UAF detection dominates beyond 600 nm, where the EPI PSFs are too defocused to be detected.
Localization precision measurement
To obtain the localization precisions displayed in Fig. 1c, we prepared a sample of 40-nm dark red fluorescent beads randomly distributed in the imaging volume (see Fluorescent beads sample preparation section). The results of several 500-frame acquisitions were pooled and for each of them, the lateral drift was corrected. The average axial position was measured for each bead, as well as the standard deviations on the lateral and axial measured positions, which gave the localization precisions. The laser power was adjusted so that the photon numbers emitted by the beads matched those of AF647 (2750 UAF photons per PSF and 2750–5100 EPI photons, depending on the depth of the bead).
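For reference, this dispersion-based precision estimate amounts to a per-bead standard deviation; a minimal Python sketch (array names are ours) could read:

```python
import numpy as np

def per_bead_precision(positions, bead_ids):
    """Localization precision from the dispersion of repeated
    localizations of fixed beads.
    positions: (N, 3) array of x, y, z; bead_ids: (N,) labels.
    Returns {bead: (sigma_x, sigma_y, sigma_z)}.
    """
    return {b: positions[bead_ids == b].std(axis=0, ddof=1)
            for b in np.unique(bead_ids)}
```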
Using fluorescent beads seems to be a more reliable way to measure the localization precisions than using biological samples: unlike fluorescent beads, biological samples require many assumptions about the size and geometry of the labeled target, the label (which is typically around 10–15 nm in the case of immunolabeling), the fluorophore itself, as well as the motional freedom of the label.
Cramér-Rao lower bound calculation
To derive the CRLB for DAISY, we first estimated the lower bounds associated with the astigmatic and SAF detections separately. To this end, we assumed elliptical Gaussian PSFs for the UAF image and circular Gaussian PSFs for the EPI image. We used a realistic set of parameters corresponding to typical experimental conditions with AF647, i.e., 100 background photons per pixel on each path and a number of photons per PSF equal to 2750 for the UAF path and 2750–5100 for the EPI path (depending on the axial position). The CRLB of the SAF detection was adapted from ref. 30 and that of the astigmatism was derived from ref. 31. Finally, the DAISY axial CRLB was obtained from the previous results using Eq. (3). Similarly, the lateral CRLB for the UAF and EPI paths were obtained from ref. 32 and the lateral lower bound of DAISY was calculated from these results using Eqs. (1) and (2). See Supplementary Note 2 for a more exhaustive description of the CRLB calculations. These results were used to plot the curves displayed in Fig. 1c and Supplementary Figs. 1 and 3.
Note that the CRLB values are somewhat optimistic and that they are not necessarily expected to be reached in real experimental conditions, because they do not account for optical aberrations, for polarization effects on the PSF shape, or for the ability of the localization algorithm to actually extract the best possible information.
The 3D view in Fig. 3b was obtained using the Nemo software (Abbelight).
A filter based on the local density of molecules combined with a threshold was applied to the data in Fig. 4f–h to remove false positive detections.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Several localization datasets (data filtered, lateral drift corrected, DAISY axial correction not applied) are available on GitHub as test samples for the DAISY correction code: https://github.com/ClementCabriel/DAISYcorrection. The authors also uploaded one clathrin-AF647 dataset obtained with DAISY (all corrections performed, data not filtered) on the ShareLoc platform: https://shareloc.xyz/#/view?u=z2Dig7bFraDdSHkXwg7Zhv. The authors will keep uploading datasets, both on GitHub and ShareLoc. Other data are available from the corresponding authors upon reasonable request.
Code availability
The localization and the lateral drift correction may be performed with any localization software. The DAISY correction code is available on GitHub at this address: https://github.com/ClementCabriel/DAISYcorrection.
References
1. Betzig, E. et al. Imaging intracellular fluorescent proteins at nanometer resolution. Science 313, 1642–1645 (2006).
2. Hess, S. T., Girirajan, T. P. K. & Mason, M. D. Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys. J. 91, 4258–4272 (2006).
3. Rust, M. J., Bates, M. & Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–796 (2006).
4. van de Linde, S. et al. Direct stochastic optical reconstruction microscopy with standard fluorescent probes. Nat. Protoc. 6, 991–1009 (2011).
5. Huang, B., Wang, W., Bates, M. & Zhuang, X. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 319, 810–813 (2008).
6. Pavani, S. R. P. et al. Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function. Proc. Natl Acad. Sci. USA 106, 2995–2999 (2009).
7. Jia, S., Vaughan, J. C. & Zhuang, X. Isotropic three-dimensional super-resolution imaging with a self-bending point spread function. Nat. Photonics 8, 302–306 (2014).
8. Shechtman, Y., Sahl, S. J., Backer, A. S. & Moerner, W. E. Optimal point spread function design for 3D imaging. Phys. Rev. Lett. 113, 133902 (2014).
9. Ruckstuhl, T., Enderlein, J., Jung, S. & Seeger, S. Forbidden light detection from single molecules. Anal. Chem. 72, 2117–2123 (2000).
10. Winterflood, C. M., Ruckstuhl, T., Verdes, D. & Seeger, S. Nanometer axial resolution by three-dimensional supercritical angle fluorescence microscopy. Phys. Rev. Lett. 105, 108103 (2010).
11. Enderlein, J., Gregor, I. & Ruckstuhl, T. Imaging properties of supercritical angle fluorescence optics. Opt. Express 19, 8011–8018 (2011).
12. Barroca, T., Balaa, K., Delahaye, J., Lévêque-Fort, S. & Fort, E. Full-field supercritical angle fluorescence microscopy for live cell imaging. Opt. Lett. 36, 3051–3053 (2011).
13. Barroca, T., Balaa, K., Lévêque-Fort, S. & Fort, E. Full-field near-field optical microscope for cell imaging. Phys. Rev. Lett. 108, 218101 (2012).
14. Axelrod, D. Evanescent excitation and emission in fluorescence microscopy. Biophys. J. 104, 1401–1409 (2013).
15. Bourg, N. et al. Direct optical nanoscopy with axially localized detection. Nat. Photonics 9, 587–593 (2015).
16. Deschamps, J., Mund, M. & Ries, J. 3D superresolution microscopy by supercritical angle detection. Opt. Express 22, 29081 (2014).
17. Gustavsson, A.-K., Petrov, P. N., Lee, M. Y., Shechtman, Y. & Moerner, W. E. 3D single-molecule super-resolution microscopy with a tilted light sheet. Nat. Commun. 9, 123 (2018).
18. McGorty, R., Schnitzbauer, J., Zhang, W. & Huang, B. Correction of depth-dependent aberrations in 3D single-molecule localization and super-resolution microscopy. Opt. Lett. 39, 275 (2014).
19. Li, Y. et al. Real-time 3D single-molecule localization using experimental point spread functions. Nat. Methods 15, 367–369 (2018).
20. Cabriel, C., Bourg, N., Dupuis, G. & Lévêque-Fort, S. Aberration-accounting calibration for 3D single-molecule localization microscopy. Opt. Lett. 43, 174 (2018).
21. Xu, K., Babcock, H. P. & Zhuang, X. Dual-objective STORM reveals three-dimensional filament organization in the actin cytoskeleton. Nat. Methods 9, 185–188 (2012).
22. Wang, Y. et al. Localization events-based sample drift correction for localization microscopy with redundant cross-correlation algorithm. Opt. Express 22, 15982 (2014).
23. von Diezmann, A., Lee, M. Y., Lew, M. D. & Moerner, W. E. Correcting field-dependent aberrations with nanoscale accuracy in three-dimensional single-molecule localization microscopy. Optica 2, 985 (2015).
24. Dumont, A., Malleron, A., Awwad, M., Dukan, S. & Vauzeilles, B. Click-mediated labeling of bacterial membranes through metabolic modification of the lipopolysaccharide inner core. Angew. Chem. Int. Ed. 51, 3143–3146 (2012).
25. Fugier, E. et al. Rapid and specific enrichment of culturable gram negative bacteria using non-lethal copper-free click chemistry coupled with magnetic beads separation. PLoS ONE 10, e0127700 (2015).
26. Papandréou, M.-J. & Leterrier, C. The functional architecture of axonal actin. Mol. Cell. Neurosci. 91, 151–159 (2018).
27. Xu, K., Zhong, G. & Zhuang, X. Actin, spectrin, and associated proteins form a periodic cytoskeletal structure in axons. Science 339, 452–456 (2013).
28. Shechtman, Y., Weiss, L. E., Backer, A. S., Sahl, S. J. & Moerner, W. E. Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions. Nano Lett. 15, 4194–4199 (2015).
29. Kaech, S. & Banker, G. Culturing hippocampal neurons. Nat. Protoc. 1, 2406–2415 (2006).
30. Balzarotti, F. et al. Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes. Science 355, 606–612 (2016).
31. Rieger, B. & Stallinga, S. The lateral and axial localization uncertainty in super-resolution light microscopy. ChemPhysChem 15, 664–670 (2014).
32. Stallinga, S. & Rieger, B. The effect of background on localization uncertainty in single emitter imaging. In 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI) 988–991 (IEEE, Barcelona, Spain, 2012).
We thank Ultivue for gifts of consumables and Abbelight for gifts of software and buffers. We acknowledge the contribution of the Centre de Photonique BioMédicale to cell culture and labeling. We also acknowledge the help of Marion Bardou with cell culture. We thank Rym Boudjemaa for her contribution to the bacteria labeling project. Finally, we thank Caroline Schou and Yann Kergutuil for their help regarding software analysis. This work was supported by the AXA research fund, the ANR (LABEX WIFI, ANR-10-LABX-24), the DIM CNANO Île-de-France, the IRS Bioprobe, the Mission interdisciplinarité of the CNRS, and LaserLab-Europe (EU H2020 654148). P.J. acknowledges master's funding from GDR ImaBio and PhD funding from IDEX Paris-Saclay (ANR-11-IDEX-0003-02).
Institut des Sciences Moléculaires d'Orsay, CNRS, Univ. Paris-Sud, Université Paris-Saclay, bâtiment 520, rue André Rivière, 91405, Orsay Cedex, France
Clément Cabriel, Nicolas Bourg, Pierre Jouchet & Sandrine Lévêque-Fort
Centre de Photonique BioMédicale, Univ. Paris-Sud, Université Paris-Saclay, CNRS, Fédération LUMAT, bâtiment 520, rue André Rivière, 91405, Orsay Cedex, France
Guillaume Dupuis
Aix-Marseille Université, CNRS, INP, NeuroCyto, 13284, Marseille, France
Christophe Leterrier
Centre de Recherche de Gif, Institut de Chimie des Substances Naturelles du CNRS, 91190, Gif-sur-Yvette, France
Aurélie Baron, Marie-Ange Badet-Denisot & Boris Vauzeilles
Laboratoire de Synthèse de Biomolécules, Institut de Chimie Moléculaire et des Matériaux d'Orsay, Univ. Paris-Sud, Université Paris-Saclay, CNRS, 91405, Orsay, France
Boris Vauzeilles
Institut Langevin, ESPCI Paris, PSL University, CNRS, 1 rue Jussieu, 75005, Paris, France
Emmanuel Fort
C.C., N.B., P.J., G.D., E.F. and S.L.F. conceived the project. C.C. designed the optical setup and performed the acquisitions. C.C. and N.B. carried out simulations and data analysis. P.J. and C.C. performed the CRLB calculations. N.B. developed the dSTORM buffer. N.B., C.C. and P.J. optimized the immunofluorescence protocol. P.J. and C.C. prepared the COS-7 cell samples, C.L. prepared the neuron samples, and A.B., M.-A.B.-D. and B.V. prepared the bacteria samples. All authors contributed to writing the manuscript.
Correspondence to Clément Cabriel or Sandrine Lévêque-Fort.
N.B., E.F., and S.L.F. are shareholders in Abbelight. The remaining authors declare no competing interests.
Journal peer review information: Nature Communications thanks Matthew Lew and other anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Cabriel, C., Bourg, N., Jouchet, P. et al. Combining 3D single molecule localization strategies for reproducible bioimaging. Nat Commun 10, 1980 (2019). https://doi.org/10.1038/s41467-019-09901-8
Average distance between two random points in a square
A square with side $a$ is given. What is the average distance between two uniformly distributed random points inside the square?
For the more general "rectangle" case, see here. The proof found there is fairly complex, and I am looking for a simpler proof for this special case; I expect it could be significantly simpler.
See also "line" case.
VividD
We just have to compute: $$ I=\int_{[0,1]^4}\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}\,d\mu. \tag{1}$$ Assuming that $X_1$ and $X_2$ are two independent random variables, uniformly distributed over $[0,1]$, the pdf of their difference $\Delta X=X_1-X_2$ is given by: $$ f_{\Delta X}(x) = \left(1-|x|\right)\cdot\mathbb{1}_{[-1,1]}(x)\tag{2}$$ hence: $$\begin{eqnarray*} I &=& \iint_{[-1,1]^2}(1-|x|)(1-|y|)\sqrt{x^2+y^2}\,dx\,dy \\&=&4\iint_{[0,1]^2}xy\sqrt{(1-x)^2+(1-y)^2}\,dx\,dy\tag{3}\end{eqnarray*}$$ that is tedious to compute but still possible; we have:
$$ I = \frac{2+\sqrt{2}+5\operatorname{arcsinh}(1)}{15}=\frac{2+\sqrt{2}+5\log(1+\sqrt{2})}{15}=0.52140543316472\ldots$$
(OEIS A091505)
hence the average distance between two random points in $[0,a]^2$ is around $52.14\%$ of $a$.
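As a quick numerical sanity check (not part of the original answer), a short Monte Carlo simulation in Python reproduces this constant:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
p = rng.random((n, 2))   # first point, uniform in the unit square
q = rng.random((n, 2))   # second point
print(np.hypot(*(p - q).T).mean())                # ~0.5214 (Monte Carlo)
print((2 + np.sqrt(2) + 5 * np.arcsinh(1)) / 15)  # 0.5214054331647...
```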
Jack D'Aurizio
$\begingroup$ Does this variable $I$ have a name? I believe I have seen it before used as a named constant. $\endgroup$
– esote
$\begingroup$ 3D version is called Robbins constant, not sure 2D version has a name. en.wikipedia.org/wiki/Robbins_constant $\endgroup$
– karakfa
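A quick numerical check of the closed form above is easy to run. The following Python sketch (assuming NumPy is available) estimates $I$ by Monte Carlo and compares it with the exact value; the two should agree to roughly three decimal places at this sample size.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2_000_000  # number of random point pairs in the unit square

    # Draw both points uniformly in [0,1]^2 and average the pairwise distances.
    p = rng.random((n, 2))
    q = rng.random((n, 2))
    estimate = np.hypot(p[:, 0] - q[:, 0], p[:, 1] - q[:, 1]).mean()

    exact = (2 + np.sqrt(2) + 5 * np.arcsinh(1.0)) / 15
    print(estimate, exact)  # both close to 0.52140543316472...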
Average distance between two points on a unit square.
What is the average distance of two points chosen uniformly on a unit square?
Unexpected examples of natural logarithm
Average Distance Between Random Points on a Line Segment
Average distance between two randomly chosen points in unit square (without calculus)
Average Distance Between Random Points in a Rectangle
How is the distance of two random points in a unit hypercube distributed?
Average minimum distance between $n$ points generate i.i.d. uniformly in the ball
Evaluate $\int _0^1\int _0^1\int _0^1\int _0^1\sqrt{(z-w)^2+(x-y)^2} \, dw \, dz \, dy \, dx$
Mean distance between matrix entries
Average euclidean distance between M normally distributed points
Average distance between random points inside a cube
Average shortest distance between some random points in a box
Average Distance Between Two Points on a Line
two random points on a unit square
$n$ points at random on line segment, average distance between two consecutive points | CommonCrawl |
In Underweight Women, Insufficient Gestational Weight Gain is Associated with Adverse Obstetric Outcomes
Alizee Montvignier Monnet, Delphine Savoy, Lise Preaubert, Pascale Hoffmann, Cécile Bétry
Subject: Medicine & Pharmacology, Nutrition Keywords: pregnancy; newborn; obstetric outcome; birth weight; foetal growth restriction; thinness
Pre-pregnancy BMI and gestational weight gain are two important determinants of pregnancy outcomes. The aim of this study was to determine the obstetric outcomes associated with insufficient gestational weight gain in women with a pre-pregnancy BMI < 18.5 kg/m2. This study was based on observational, routinely collected data from a university hospital maternity unit. Participants were allocated to either the sufficient (≥ 12.5 kg) or insufficient (< 12.5 kg) gestational weight gain group. Primary outcomes were the adjusted birth weight in percentiles (%) and the proportion of small-for-gestational-age (SGA) newborns. Secondary outcomes were obstetric and perinatal outcomes. A total of 135 participants with a median age of 28±8 years were included. The adjusted birth weight in percentiles was significantly lower in the insufficient gestational weight gain group (27.2±45.4 vs 42.6±48.8 %; P<0.001). Moreover, insufficient gestational weight gain was associated with a higher risk of SGA (28.1% vs 11.3%; P=0.017). Our study also showed increased risks of premature rupture of membranes, anaemia and intrauterine growth restriction in women with insufficient weight gain. Future studies should explore the risk factors associated with insufficient weight gain in order to develop specific care for underweight pregnant women.
Maternal Body Mass Index and Gestational Weight Gain and Their Association with Pregnancy Complications and Perinatal Conditions
Martin Simko, Adrian Totka, Diana Vondrova, Martin Samohyl, Jana Jurkovicova, Michal Trnka, Anna Cibulkova, Juraj Stofko, Lubica Argalasova
Subject: Medicine & Pharmacology, Obstetrics & Gynaecology Keywords: retrospective hospital-based study, overweight, obesity, pregnancy pathologies, caesarean section, weight gain
Online: 10 April 2019 (12:34:37 CEST)
This study aimed to evaluate the statistical dependence of selected pregnancy pathologies on overweight/obesity and excessive maternal weight gain during pregnancy in women who gave birth in the years 2013–2015 at the Second Department of Gynecology and Obstetrics at the University Hospital in Bratislava, Slovakia. In a retrospective study, we analyzed data gathered from a sample of 7,122 women. Our results indicate a positive statistical association between overweight/obesity and gestational hypertension (adjusted odds ratio [AOR]=15.3; 95% CI 9.0−25.8 for obesity), preeclampsia (AOR=3.4; 95% CI 1.9−6.0 for overweight and AOR=13.2; 95% CI 7.7−22.5 for obesity), and gestational diabetes mellitus (AOR=1.9; 95% CI 1.2−2.9 for overweight and AOR=2.4; 95% CI 1.4−4.0 for obesity). A higher incidence of pregnancies terminated by cesarean section was observed in the group of obese women. Gestational weight gain above the IOM (Institute of Medicine) recommendations was associated with a higher risk of pregnancy terminated by C-section (AOR=1.2; 95% CI 1.0−1.3), gestational hypertension (AOR=1.7; 95% CI 1.0−2.7), and infant macrosomia (AOR=1.7; 95% CI 1.3−2.1). Overweight and obesity during pregnancy significantly contribute to the development of pregnancy pathologies and an increased incidence of cesarean section. Systematic efforts to reduce weight before pregnancy through pre-pregnancy dietary counseling, regular physical activity, and a healthy lifestyle should be the primary goal.
Lifestyle Variations During and After the COVID-19 Pandemic: A Cross-Sectional Study of Dietary, Physical Activities and Weight Gain Among the Adult Population
Hanan Hammouri, Fidaa Almomani, Ruwa Abdel Muhsen, Aysha Abughazzi, Rawand Daghmash, Alaa Abudayah, Inas Hasan
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: COVID-19 Pandemic; Dietary patterns; BMI; Nutrition; Vitamins; Healthy Food; Dietary Supplements; Factor analyses; Internal Consistency; weight gain
Since its onset in 2019, COVID-19 has been associated with significant changes in lifestyle-related behavior, including physical activity, diet, and sleep, which are vital to maintaining our well-being. This study measures lifestyle-related behavior during the COVID-19 pandemic lockdown using a 21-item questionnaire. Responses were collected from March 2021 to September 2021, and 467 participants were engaged in assessing the changes caused by the pandemic and their effect on BMI. The validity and reliability of the questionnaire were tested on 71 participants. Cronbach's alpha values for the questionnaire all exceeded 0.7, demonstrating good validity and internal consistency. The effect of each question regarding physical activity and dietary habits on the BMI difference was studied using ANOVA. The study shows that more than half of the participants reported snacking more between meals and increased sitting and screen time, while 74% felt more stressed and anxious. These changes were associated with the increase in BMI among individuals during the lockdown. In contrast, 62% of the participants showed more awareness of their health by increasing their intake of immunity-boosting foods, and 56% increased their consumption of nutrition supplements. Females and married individuals tended to be healthier, so their BMI remained more stable than that of individuals of other gender and marital status.
Effectiveness of Adherence to a Mediterranean Diet in the Management of Overweight Women: A Prospective Interventional Cohort Study
Jana Poráčová, Ivan Uher, Hedviga Vašková, Tatiana Kimáková, Milena Švedová, Mária Konečná, Marta Mydlárová Blaščáková, Vincent Sedlák
Subject: Biology, Physiology Keywords: Mediterranean Diet; weight loss; determinants of health; healthy lifestyle; clinically significant weight loss
Online: 29 July 2022 (09:52:36 CEST)
Evidence indicates that unhealthy eating habits constitute multilevel obstacles threatening our health and well-being, with studies suggesting that consumer choices are turning irreversibly towards Western diets. The Mediterranean diet (MD) has been identified as one of the most effective for preventing and treating overweight and obesity. Given this scientific substantiation in prevention and treatment, the purpose of this investigation is to verify that evidence. In our prospective interventional study, we examined the effect of the MD on body weight in a female cohort. The analyzed group consisted of females (n=181) divided into three distinct age groups (tricenarians, quadragenarians, and quinquagenarians). Anthropometric examinations (weight, BMI, FATP, VFATL, FFM, TBW, and BMR), biochemical examinations (urea, creatinine, uric acid, ALT, AST, GGT, CHOL, HDL-CH, non-HDL, LDL-CH, TAG, GLU, and CRP) and a comprehensive, personalized three-month MD program were completed on the examined subjects. We did not establish convincing evidence of an MD effect on weight reduction, or of a meaningful correlation with the selected health determinants, in any of the groups. The challenge remains to construct more robust prospective cohort studies that incorporate additional critical components suitable for monitoring, evaluating, and predicting weight management.
Low-Intensity Whole-Body Vibration: A Useful Adjuvant in Managing Obesity? A Pilot Study
Michele Gobbi, Cristina Ferrario, Marco Tarabini, Giuseppe Annino, Nicola Cau, Matteo Zago, Paolo Marzullo, Stefania Mai, Manuela Galli, Paolo Capodaglio
Subject: Medicine & Pharmacology, Allergology Keywords: obesity; irisin; whole-body vibration; exercise; weight loss; rehabilitation; weight management; muscle strength
The use of whole-body vibration (WBV) for therapeutic purposes is far from standardized, and only very recently has an empirical foundation for reporting guidelines for human WBV studies been published. Controversies about safety and therapeutic dosage still exist. The present study aimed to investigate the metabolic and mechanical effects of low-intensity WBV, in accordance with the ISO 2631 standard, on subjects with obesity. 41 obese subjects (BMI ≥ 35 kg/m2) were recruited to participate in a 3-week multidisciplinary inpatient rehabilitation program including fitness training and WBV training. During WBV, posture was monitored with an optoelectronic system with 6 infrared cameras (Vicon, Vicon Motion System, Oxford, UK). The primary endpoints were: variation in body composition, factors of the metabolic syndrome, functional activity (sit-to-stand and 6-min walking test), muscle strength, and quality of life. Secondary endpoints were: modification of irisin, testosterone, growth hormone, and IGF1 levels. We observed significant changes in salivary irisin levels in Group 2 (p<0.01) compared with the control group, while muscle strength, function, and other metabolic and hormonal factors did not change after the 3-week low-intensity WBV training relative to the control group. Future studies are needed to investigate more deeply the potential metabolic effect of low-intensity WBV in weight management.
The Effect of Reducing Food Waste (Organic Waste) on the Weight of Cats (Felis catus)
Farid Rahimi
Subject: Biology, Animal Sciences & Zoology Keywords: cat; Tehran; weight loss; food access; waste reduction
This study aimed to investigate the effect of reducing the amount of organic waste on the weight of cats in Tehran. The weight of 4192 cats was measured from spring 2016 to the end of winter 2020. They were classified into 6 age groups, 2 gender groups, and 13 geographical areas, and their weight was measured for 48 months (16 seasons). Statistical analysis showed no weight loss in 2017, but from 2018 onwards the cats lost weight every year. They lost about 178g of their weight in 2018. The sharpest annual decrease was observed in 2019, when about 301g of weight loss was recorded, and in the winter of 2020 a further 115g of weight loss took place. In the spring of 2017, no weight change was observed, but in the spring of 2018 the cats lost 155g of their weight; the loss intensified in the spring of 2019, when about 299g was observed. In the summer of 2017, as in the spring of that year, no weight loss was recorded, but in the summer of 2018 the loss was evident, with about 205g of the cats' weight lost; in the summer of 2019 the weight loss not only continued but intensified, with about 304g recorded. No weight change was observed in the fall of 2017, as in the spring and summer of that year; in the fall of 2018 a loss of about 324g was recorded, followed by a loss of about 218g in the fall of 2019. As in the spring, summer, and autumn of 2017, no weight loss was observed in the winter of 2018, but in the winter of 2019 the cats faced the most severe seasonal weight loss, losing about 401g; in the winter of 2020, about 186g of weight loss was observed. The results showed that female cats did not lose weight in 2017 but lost 181g in 2018; their loss intensified in 2019, when 294g was recorded, and they lost 186g of their weight in the winter of 2020. Male cats, like females, did not lose weight in 2017, but in 2018 a loss of 166g was observed; this continued in 2019, with 311g recorded, although in 2020, unlike females, no weight loss was observed in male cats. Both sexes lost more weight in the winter of 2019 than in any other season. In 2017, weight loss was observed only in region 10, and in the same year weight gain was recorded in region 15; in 2018, weight loss was observed in all regions except regions 3, 4, 15, and 19; in 2019, the weight loss spread, being observed in all regions except region 12; and in winter 2020, weight loss was recorded only in region 4. It can be concluded that the weight of cats has decreased since the spring of 2018, because the decrease in the amount of organic waste began in the winter of 2018; there is thus a direct relationship between the weight of cats and the amount of organic waste (access to food). The amount of garbage has been decreasing since the winter of 2018, and the average weight of cats has been decreasing since the spring of 2018 due to reduced access to food.
Psychosocial Resources and Diet-Related Lifestyle in Overweight and Obesity: A Cluster-Based Study
Débora Godoy-Izquierdo, Raquel Lara, Adelaida Ogallar, Alejandra Rodríguez-Tadeo, María J. Ramírez, Estefanía Navarrón, Félix Arbinaga
Subject: Behavioral Sciences, Applied Psychology Keywords: body image; healthy diet; weight-related stigma; subjective well-being; excessive weight; cluster analysis
Online: 31 March 2021 (17:35:50 CEST)
This study explored intraindividual multidimensional profiles integrating psychosocial factors, namely, body image and satisfaction, weight-related self-stigma, positivity, and happiness, and behavioural-lifestyle factors, namely, adherence to a healthy diet, among Spanish adults with overweight or obesity. We further aimed to investigate the association of excess weight (i.e., measured body mass index, BMI) with the abovementioned multidimensional configurations. A convenience sample of adult individuals with excessive weight completed self-reports regarding the study variables, and their weight and height were measured. With a perspective centered on the individual, a cluster analysis established three distinct intraindividual psychosocial and diet-related profiles: a group of healthy individuals with excess weight; a group of individuals who were negatively affected by their excessive weight and showed the most distressed profile; and a group of dysfunctional individuals who seemed to be excessively unrealistic and optimistic regarding their excessive weight and unhealthy lifestyles. Furthermore, individuals in the affected cluster had higher obesity. The results showed that there are specific psychosocial and lifestyle profiles in the adult population with excess weight and that there are relationships among psychological, behavioural, and body-composition factors. For clinical application purposes, it is important to account for the heterogeneity within individuals who are obese and to individualize the interventions, with a focus from weight change to individual's overall well-being.
Psychosocial and Diet-Related Lifestyle Clusters in Overweight and Obesity
Online: 5 March 2021 (21:27:00 CET)
Attenuated Kinetic and Kinematic Properties During Slow Versus Traditional Velocity Resistance Exercise
Patricia Dietz, Andrew Fry, Trent Herda, Dimitrije Cabarkapa, Michael Lane, Matthew Andre
Subject: Life Sciences, Other Keywords: force, power, velocity, impulse, weight training
Online: 31 October 2018 (09:01:43 CET)
Purposely slow velocity resistance exercise (i.e., 10 s concentric and eccentric phases) is a popular training method, but it limits the loads that can be lifted (e.g., <50% 1 RM). This study compared the biomechanical properties of purposely slow velocity (SLOW) and traditional resistance exercise (TRAD), which uses maximal lifting velocities. Healthy resistance-trained men (n=5) performed two testing sessions (barbell squat and bench press) in random order: a SLOW session (1 set x 10 repetitions at 28% 1 RM, 10 s concentric and eccentric), and a TRAD session (3 x 10 at 70% 1 RM, controlled eccentric and maximal concentric). A force plate and linear position transducer were used to collect kinetic and kinematic data for every repetition of both protocols (α = 0.05). For both exercises, both concentric and eccentric mean force (N) and power (W) for each repetition were greater for TRAD. When the entire training session (squat + bench press) was examined, SLOW exhibited greater time under tension, while TRAD produced greater work (J) and impulse (N·s). Contrary to suggestions in both the lay and scientific literature, purposely slow resistance exercise produced less force, power, and work than traditional velocity resistance exercise.
Calculating Hamming Distance with the IBM Q Experience
José Manuel Bravo
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: quantum algorithm; Hamming weight; Hamming distance
In this brief paper, a quantum algorithm to calculate the Hamming distance of two binary strings of equal length (or messages, in information theory) is presented. The algorithm calculates the Hamming weight of two binary strings in one query of an oracle. To calculate the Hamming distance of these two strings, we only have to calculate the Hamming weight of the XOR of both strings. To test the algorithm, the quantum computer prototype that IBM has given open access to on the cloud has been used.
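The quantity the algorithm targets is easy to verify classically: the Hamming distance of two equal-length binary strings is the Hamming weight of their bitwise XOR. A minimal Python sketch of that classical identity (not of the quantum circuit itself):

    def hamming_weight(bits: int) -> int:
        # Number of 1-bits in the binary representation.
        return bin(bits).count("1")

    def hamming_distance(a: int, b: int) -> int:
        # XOR marks exactly the positions where the two strings differ,
        # so its Hamming weight equals the Hamming distance.
        return hamming_weight(a ^ b)

    assert hamming_distance(0b10110, 0b11100) == 2  # strings differ in 2 positions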
Prenatal Exposure to Polycyclic Aromatic Hydrocarbons and Growth Parameters
Radim Sram, Ivo Solansky, Anna Pastorkova, Milos Veleminsky, Jr., Milos Veleminsky, Katerina Urbancova, Darina Dvorakova, Jana Pulkrabova
Subject: Medicine & Pharmacology, Obstetrics & Gynaecology Keywords: birth weight, birth length, head circumference, placenta weight, growth parameters, polycyclic aromatic hydrocarbons, monohydroxylated PAH metabolites
Online: 6 October 2022 (14:39:33 CEST)
Background and objectives: The impact of prenatal exposure to polycyclic aromatic hydrocarbons (PAHs) on birth outcomes such as weight, length, head circumference, placenta weight, and Apgar score. Materials and Methods: Two cohorts of children born in the years 2013 and 2014 in Karvina (Northern Moravia, N=144) and Ceske Budejovice (Southern Bohemia, N=198) were studied for the relationship between prenatal exposure to PAHs and growth parameters up to two years of age. PAH exposure was evaluated according to the concentration of benzo[a]pyrene (B[a]P) in polluted air and of monohydroxylated PAH metabolites (OH-PAHs) in the urine of newborns as well as their mothers. Data on growth parameters were obtained from pediatric questionnaires up to 24 months. Results: Concentrations of B[a]P were significantly higher in Karvina (p<0.001). OH-PAH metabolites were significantly higher in the mothers' as well as in the newborns' urine in Karvina. Length was shorter in newborns in Karvina at birth (p<0.001), but this difference evened out during the next 3 to 24 months. Birth weight at delivery did not differ between newborns in Karvina and Ceske Budejovice. Newborns in both locations significantly decreased their weight gain between birth and 3 months after delivery. OH-PAH metabolites in the mothers' or newborns' urine did not affect birth weight. Concentrations in the top 25%, above the median, of 2-OH-FLUO, 1-OH-NAP, 2-OH-NAP, 1-OH-PHEN, 2-OH-PHEN, 3-OH-PHEN, 4-OH-PHEN, and the sum of all OH-PAHs in the newborns' urine decreased their length. Top 25% concentrations of 2-OH-PHEN in the newborns' urine decreased their head circumference; 2-OH-FLUO, 1-OH-NAP, 2-OH-NAP, 1-OH-PHEN, 2-OH-PHEN, 3-OH-PHEN, 4-OH-PHEN, 9-OH-PHEN, 1-OH-PYR, and all OH-PAHs decreased placenta weight; and 2-OH-FLUO, 1-OH-NAP, 2-OH-NAP, 1-OH-PHEN, 2-OH-PHEN, 3-OH-PHEN, 4-OH-PHEN, and all OH-PAHs decreased Apgar 5'. Conclusions: We observed that higher concentrations of PAHs, determined as OH-PAH metabolites in newborns' urine, decreased their length, head circumference, placenta weight, and Apgar 5', but did not affect birth weight.
Long-Term Effects of Vitamin D Supplementation in Obese Children During an Integrated Weight-Loss Programme: A Double-Blind Randomized Placebo-Controlled Trial
Michał Brzeziński, Agnieszka Jankowska, Magdalena Słomińska-Frączek, Paulina Metelska, Piotr Wiśniewski, Piotr Socha, Agnieszka Szlagatys-Sidorkiewicz
Subject: Medicine & Pharmacology, Nutrition Keywords: vitamin D; obesity; weight-loss; body composition
Background: Vitamin D has been studied with regard to its possible impact on body mass reduction and metabolic changes in adults and children with obesity, yet there have been no studies assessing the impact of vitamin D supplementation during a weight management programme in children and adolescents. The aim of our study was to assess the influence of 26 weeks of vitamin D supplementation in overweight and obese children undergoing an integrated 12-month weight loss programme on body mass reduction, body composition and bone mineral density. Methods: A double-blind randomized placebo-controlled trial. Vitamin D deficient patients (vitamin D level <30 ng/ml) aged 6-14, participating in a multidisciplinary weight management programme, were randomly allocated to receive vitamin D (1200 IU) or placebo for the first 26 weeks of the intervention. Results: Of the 152 qualified patients, 109 (72%) completed the full cycle of four visits scheduled in the programme. There was no difference in the level of BMI change. Although the reduction was greater in the vitamin D vs. placebo group (-4.28 ± 8.43 vs. -2.53 ± 6.10), the difference was not statistically significant (p=0.319). Similarly, a reduction in fat mass (assessed using both bioimpedance and DEXA) was achieved, yet the differences between the groups were not statistically significant. Conclusions: Our study adds substantial results to support the thesis that vitamin D supplementation has no effect on body weight reduction in children and adolescents with vitamin D insufficiency undergoing a weight management programme. Trial registration no: NCT 02828228; trial registration date: 8 June 2016; registered in: ClinicalTrials.gov.
Size-Dependent Rheological Variability of Levan Produced by Gluconobacter Albidus TMW 2.1191
Christoph Hundschell, Andre Braun, Daniel Wefers, Rudi Vogel, Frank Jakob
Subject: Materials Science, Biomaterials Keywords: levan; gluconobacter; exopolysaccharide; hydrocolloid; molecular weight; rheology
Online: 8 January 2020 (07:52:55 CET)
Levan is a fructan-type exopolysaccharide produced by many microbes from sucrose via extracellular levansucrases. The hydrocolloid properties of levan depend on its molecular weight, yet it is unknown why, and to what extent, levan is functionally diverse depending on its size. The aim of our study was to gain deeper insights into the size-dependent functional variability of levan. For this purpose, levans of different sizes were produced using the water kefir isolate Gluconobacter albidus TMW 2.1191 and subsequently characterized rheologically. Three levan types could be identified, which are similarly branched but differ significantly in their molecular size and rheological properties. The smallest levan (< 10^7 Da), produced without adjustment of the pH, exhibited Newton-like flow behavior up to a specific concentration of 25% (w/v). In contrast, larger levans (> 10^8 Da) produced at pH ≥ 4.5 were shear-thinning and showed gel-like behavior at ≥ 5% (w/v). A third (intermediate) levan variant was obtained via production in buffers at pH 4.0 and exhibited the properties of a viscoelastic fluid at ≥ 5% (w/v). Our study reveals that the size and composition of levan are controllable and more decisive for its functionality than the amount of levan produced.
Litter Survival Differences between Divergently Selected Lines for Environmental Sensitivity in Rabbits
Ivan Agea, María-Luz García, Agustín Blasco, María-José Argente
Subject: Biology, Animal Sciences & Zoology Keywords: correlated response; pre-weaning; survival; weight; welfare
A divergent selection experiment on environmental sensitivity was performed in rabbits. The aim of this study was to estimate the correlated response in kit weight and survival, and in weight distance, from birth to weaning. Weight distance was calculated as the absolute value of the difference between an individual kit's weight and the mean weight of its litter. The relationship between the probability of survival at 4 d of age and weight at birth was also studied. Environmental sensitivity was measured as litter size variability. A total of 2484 kits from 127 does of the low line (selected for reduced litter size variability) and 1916 kits from 114 does of the high line (selected for increased litter size variability) of the 12th generation were weighed. Bayesian methodology was used to estimate the correlated response to selection, and the LOGISTIC procedure of SAS was used to estimate the relationship between weight and probability of survival. Both lines showed similar individual weight at birth and at weaning, and similar survival at birth and at 4 d of age. Survival at weaning was higher in the low line than in the high line (0.67 and 0.62; P=0.93). Weight distance was higher at birth but lower at weaning in the low line (47.8 g and 54.1 g; P=0.98). A kit's weight at birth affected its survival. In conclusion, selection for environmental sensitivity showed a correlated response in kit survival and in the homogeneity of litter weight at weaning.
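Since the weight distance defined in this abstract is just each kit's absolute deviation from its litter mean, it is straightforward to compute; a minimal Python sketch with hypothetical litter weights (in grams):

    def weight_distances(litter_weights):
        # Absolute difference between each kit's weight and its litter's mean weight.
        mean = sum(litter_weights) / len(litter_weights)
        return [abs(w - mean) for w in litter_weights]

    print(weight_distances([55.0, 60.0, 48.0, 62.0]))  # hypothetical kits from one litter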
The Role of Body Weight and Growth in Body Height in Nonspecific Musculoskeletal Pain in a Cohort of Bosnia and Herzegovina Schoolchildren
Nurka Pranjic, Selma Azabagic
Subject: Life Sciences, Biochemistry Keywords: musculoskeletal pain; body height; body weight; schoolchildren
Online: 8 November 2018 (11:07:54 CET)
Background: Children often suffer nonspecific musculoskeletal pain, as reported in the literature. Aim: To determine the relationship between body weight and the development of musculoskeletal pain, and to determine whether growth in body height is associated with musculoskeletal pain in schoolchildren. Subjects/Methods: A prospective longitudinal study included 1315 schoolchildren aged 7-14 years (652 boys and 663 girls) and was performed in 13 elementary schools in B&H. Child body height and body weight were measured. The survey of perceived musculoskeletal pain in different body regions was conducted with an adjusted Nordic Musculoskeletal Questionnaire (NMQ). Results: The highest prevalence of overweight and obesity, 35.7%, was found at age 10, and the lowest, 17.8%, at age 14; at age 14, no obesity was found. Boys had a higher prevalence of overweight. Using a logistic regression model, we found that increased body height protected schoolchildren with normal BMI against acute lower back pain (β= -0.089, 95%CI, -9.730- -0.023, P< 0.049), and protected obese schoolchildren against acute upper back pain (β= -0.356, 95%CI, -14.077- -3.878, P< 0.001) and chronic lower back pain (β= -0.356, 95%CI, -14.077- -3.878, P< 0.001). Conclusion: Schoolchildren with normal weight more often had musculoskeletal pain than those with overweight or obesity. This may be associated with periods of intense physical growth in height. The assumption is that the increase in height changes the relationship between excessive BMI and musculoskeletal pain in children of school age.
Bottled vs. Canned Beer: Do They Really Taste Different?
Andrew Barnett, Carlos Velasco, Charles Spence
Subject: Behavioral Sciences, Applied Psychology Keywords: packaging; beer; image mold; packaging weight; taste
People often say that beer tastes better from a bottle than from a can. However, one can ask whether this perceived difference is reliable across consumers; and, if so, whether it is purely a psychological phenomenon (associated with the influence of packaging on taste perception), or whether instead it reflects some more mundane physico-chemical interaction between the packaging material (or packing procedure/process) and the contents. We conducted two experiments in order to address these important questions. In the main experiment, 151 participants at the 2016 Edinburgh Science Festival were served a beer in a plastic cup. The beer was either poured from a bottle or a can (i.e., a between-participants experimental design was used) and the participants were encouraged to pick up the packaging in order to inspect the label before tasting the beer. The participants rated the perceived taste, quality, and freshness of the beer, as well as their likelihood of purchase, and their estimate of the price. All of the beer came from the same batch (from Barney's Brewery in Edinburgh). Nevertheless, those who evaluated the bottled beer rated it as tasting better than those who rated the beer that had been served from a can. Having demonstrated such a perceptual difference in taste, we then went on to investigate whether people would prefer one packaging format over the other when the beer from bottle and can was served to a new group of participants blind (i.e., when the participants did not know the packaging material). The participants in this control study (N = 29) were asked which beer they preferred, or could state that the two samples tasted the same. No sign of preference was obtained under such conditions. Explanations for the psychological impact of the packaging format, in terms of differences in packaging weight (between tin and glass) and/or prior associations of quality with specific packaging materials/formats (what some have chosen to call 'image molds'), are discussed.
Analysis of Mechanical Behaviors of Waterbomb Thin-Shell Structures Under Quasi-Static Load
Lijuan Zhao, Zuen Shang, Tianyi Zhang, Zhan Liu, Liguo Han, and Chongwang Wang
Subject: Engineering, Automotive Engineering Keywords: Waterbomb structure; Origami pattern; Quasi-static load; Critical axial buckling load-to-weight ratio; Radial stiffness-to-weight ratio
Waterbomb structures are origami-inspired deformable structural components used in new types of robots. They have a unique radially deployable ability that enables robots to better adapt to their environment. In this paper, we propose a series of new waterbomb structures with square, rectangle, and parallelogram base units. Through quasi-static axial and radial compression experiments and numerical simulations, we prove that the parallelogram waterbomb structure has a twist displacement mode along the axial direction. Compared with the square waterbomb structure, the proposed optimal design of the parallelogram waterbomb structure reduces the critical axial buckling load-to-weight ratio by 55.4% and increases the radial stiffness-to-weight ratio by 67.6%. The significant increase in the radial stiffness-to-weight ratio of the waterbomb structure and decrease in the critical axial buckling load-to-weight ratio make the proposed origami pattern attractive for practical robotics applications.
UniStArt: A 12-Month Prospective Observational Study of Body Weight, Dietary Intake, and Physical Activity Levels in Australian First-Year University Students
Nina Wilson, Anthony Villani, Sze-Yen Tan, Evangeline Mantzioris
Subject: Medicine & Pharmacology, Nutrition Keywords: freshman; weight gain; body composition; diet; physical activity
Background: Students in the United States gain weight significantly during their first year of university; however, limited data are available for Australian students. Methods: This 12-month observational study was conducted to monitor monthly body weight and composition, as well as quarterly eating behaviours, dietary intake, physical activity, sedentary behaviours, and basal metabolic rate changes among first-year Australian university students. Participants were first-year university students over 18 years of age. Results: Twenty-two first-year university students (5 males and 17 females) completed the study. Female students gained weight significantly at two, three, and four months (+0.9 kg; +1.5 kg; +1.1 kg, p <0.05). Female waist circumference also increased (2.5 cm at three months, p = 0.012), as did body fat (+0.9% at three months, p = 0.026). Intakes of sugar and saturated fat (both >10% of total energy) and of sodium (>2000 mg) exceeded recommended levels at 12 months. Greater sedentary behaviours were observed among male students throughout the study (p <0.05). Conclusions: Female students are at risk of unfavourable changes in body composition during the first year of university, while males are at risk of increased sedentary behaviours. High intakes of saturated fat, sugars, and sodium warrant future interventions in such a vulnerable group.
Locating and Location Number
Henry Garrett
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Special Set; Set's Weight; Special Number; Number's Position.
In this article, some notions concerning sets, the weight of a set, numbers, a number's position, and special vertices are introduced. Some classes of graphs are studied under these new notions, with attention to the special attributes that arise when the notions act on one another. Internal and external relations among these new notions are obtained, and some classes of graphs are characterized in terms of them.
Big Sets Of Vertices
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Special Set, Set's Weight, Special Number, Number's Position.
A Regularized Raking Estimator for Small Area Mapping from Forest Inventory Surveys
Nicholas N. Nagle, Todd A. Schroeder, Brooke Rose
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: fia; forest inventory; small area estimation; survey weight
We propose a new estimator for creating expansion factors for survey plots in the USDA Forest Inventory and Analysis program. This is a regularized version of the raking estimator widely used in sample surveys. The regularized raking method differs from other predictive modeling methods for integrating survey and ancillary data in that it produces a single set of expansion factors that can be used for general purposes, such as producing small area estimates and wall-to-wall maps of any plot characteristic. This method also differs from other, more widely used survey techniques, such as GREG estimation, in that it is guaranteed to produce positive expansion factors. We extend the previous method here to include cross-validation, and compare the expansion factors produced by regularized raking and ridge GREG survey calibration.
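For readers unfamiliar with raking: the unregularized estimator this work builds on is classical iterative proportional fitting of the survey weights to known population margins. A minimal Python sketch of that baseline, offered as an illustration under that assumption and not as the authors' regularized version:

    import numpy as np

    def rake(weights, categories, targets, iters=100, tol=1e-10):
        # Adjust survey weights so weighted counts match known category totals.
        # weights: initial expansion factors, shape (n,)
        # categories: list of integer-coded variables, each shape (n,)
        # targets: list of known population totals per category level
        w = weights.astype(float).copy()
        for _ in range(iters):
            w_prev = w.copy()
            for cats, tgt in zip(categories, targets):
                cell_totals = np.bincount(cats, weights=w, minlength=len(tgt))
                w *= (tgt / cell_totals)[cats]  # scale each plot by its cell's ratio
            if np.max(np.abs(w - w_prev)) < tol:
                break
        return w

    # Hypothetical example: 6 plots, one 2-level variable, known totals 30 and 70.
    w = rake(np.ones(6), [np.array([0, 0, 1, 1, 1, 1])], [np.array([30.0, 70.0])])
    print(w)  # [15. 15. 17.5 17.5 17.5 17.5]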
Effects of Long-Term Walnut Supplementation on Body Weight in Free-living Elderly: Results of a Randomized Controlled Trial
Edward Bitok, Sujatha Rajaram, Karen Jaceldo-Siegl, Keiji Oda, Aleix Sala-Vila, Mercè Serra-Mir, Emilio Ros, Joan Sabaté
Subject: Medicine & Pharmacology, Nutrition Keywords: nuts; walnuts; body weight; adiposity; obesity; elderly; energy
Objective: To assess the effects of chronic walnut consumption on body weight and adiposity in elderly individuals. Methods: The Walnuts And Healthy Aging study is a dual-center (Barcelona, Spain and Loma Linda University [LLU]), 2-year randomized parallel trial. This report concerns only the LLU cohort. Healthy elders (mean age 69 y, 67% women) were randomly assigned to walnut (n = 183) or control diets (n = 173). Subjects in the walnut group received packaged walnuts (28–56 g/d), equivalent to ≈15% of daily energy requirements, to incorporate into their habitual diet, while those in the control group abstained from walnuts. Adiposity was measured periodically, and data were adjusted for in-trial changes in self-reported physical activity. Results: After 2 years, body weight significantly decreased (P = 0.031), while body fat significantly increased (P = 0.0001). However, no significant differences were observed between the control and walnut groups regarding body weight (−0.6 kg and −0.4 kg, respectively, P = 0.67) or body fat (+0.9% and +1.3%, respectively, P = 0.53). Lean body mass, waist circumference and waist-to-hip ratio remained essentially unchanged. Sensitivity analyses were consistent with the findings of primary analysis. Conclusion: Our findings indicate that walnuts can be incorporated into the daily diet of healthy elders without concern for adverse effects on body weight or body composition.
PVDF Membrane Morphology - Influence of Polymer Molecular Weight and Preparation Temperature
Monika Haponska, Anna Trojanowska, Adrianna Nogalska, Renata Jastrzab, Tania Gumi, Bartosz Tylkowski
Subject: Materials Science, Surfaces, Coatings & Films Keywords: PVDF membrane; coagulation bath temperature; polymer molecular weight
The global polyvinylidene fluoride market is estimated to reach $937,278.5 thousand by 2019; therefore, to develop new membranes and generate pioneering ideas that could create innovative business opportunities, fundamental knowledge of the properties of membranes fabricated from recent commercially available PVDF polymers is essential. In this study, we successfully prepared nine non-woven supported PVDF membranes using a phase inversion precipitation method, starting from a 15 wt% PVDF solution in N-methyl-2-pyrrolidone. Various membrane morphologies were obtained by using (1) PVDF polymers with molecular weights ranging from 300,000 Da to 700,000 Da and (2) different temperatures of the coagulation bath (20, 40, and 60 ±2°C) used for film precipitation. An environmental scanning electron microscope (ESEM) was used to characterize the surface and cross-section morphologies. An atomic force microscope (AFM) was employed to investigate surface roughness, while a contact angle (CA) instrument was used for membrane wettability studies. Fourier transform infrared spectroscopy (FTIR) results show that the fabricated membranes are formed by a mixture of TGTG' chains in α-phase crystalline domains and all-TTTT trans planar zigzag chains characteristic of the β phase. Moreover, the results indicate that the phase content and membrane morphologies depend on the polymer molecular weight and the conditions used for membrane preparation. The diversity of fabricated membranes could be exploited by end-user industries for different applications.
Antioxidant and Antidiabetic Activity of Algae
Atef Mohamed Abo-Shady, Saly Farouk Gheda, Gehan Ahmed Ismail, João Cotas, Leonel Pereira, Omnia Hamdy Abdel-Karim
Subject: Medicine & Pharmacology, Nutrition Keywords: diabetes; antioxidant; antihyperglycemic; lipid profile; body weight; algal treatments
Currently, algae attract growing interest in the pharmaceutical and cosmetic fields because they offer a great diversity of bioactive compounds with potential for pharmacological, cosmetic, and nutraceutical applications. Many of these bioactive compounds are secondary metabolites whose amounts in the algae vary with environmental conditions. Free radicals and other active oxygen derivatives are recognized as natural by-products of aerobic metabolism. However, reactive oxygen species directly participate in mechanisms related to various pathological states such as cancer, diabetes, atherosclerosis, Alzheimer's, and Parkinson's, among others. Diabetes mellitus (DM) is a metabolic disease resulting from changes in glucose metabolism and/or deficient production or action of insulin. The main objective of this review is to present the potential antioxidant and antidiabetic capacity of algae extracts.
Does Losing Weight Lower the Risk of Cancer? A Systematic Review and Meta-Analysis
Nikolaos Tzenios, Mary Tazanios, Omasyarifa Binti Jamal Poh, Mohamed Chahine
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: cancer, losing weight, interventions, physical activity, dietary restrictions, hormones.
(1) Background: Weight loss is one of the practices identified as key to reducing the risk of various forms of cancer. This study is therefore a systematic review and meta-analysis of studies related to weight loss and cancer risk, addressing the question, 'does losing weight reduce the risk of cancer?' Its purpose is to identify current high-quality evidence on this question, synthesize that evidence, and summarize it with attention to specific data attributes in order to improve decision-making in cancer management. (2) Methods: Research studies were identified from four main databases: PubMed, Science Direct, Google Scholar, and Medline. A systematic review and meta-analysis of these studies was then conducted to reveal the most current evidence on the research topic. (3) Results: The studies showed that losing weight reduces cancer risk. Nonetheless, such an intervention is not necessarily effective, especially where patients may be at risk of developing cancer due to other risk factors. (4) Conclusions: The current study concludes that there is a need to implement effective interventions, such as physical exercise, dietary restriction, or both, that reduce weight and thereby the risk of cancer.
The Impact of BMI on Ovarian Cancer: An Updated Systematic Review and Meta-Analysis
Nikolaos Tzenios, Mary Tazanios, Mohamed Chahine
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: ovarian cancer; BMI; obesity; overweight; normal weight; statistical significance
A significant number of research studies have focused on determining whether BMI influences various types of cancer. The findings of these studies suggest that people should manage their BMI to decrease their risk of developing various types of cancer, one of which is ovarian cancer. The PRISMA guidelines for systematic review and meta-analysis were used to identify 20 research studies related to the topic in order to establish the truth or falsity of these findings, and their results were then synthesized. The synthesis suggests that overweight and obesity increase an individual's risk of developing ovarian cancer and of experiencing severe symptoms of the disease. The current study therefore concludes that effective management of BMI is necessary for decreasing the prevalence and mortality rates associated with ovarian cancer.
Chronic Positive Mass Balance is the Actual Etiology of Obesity: A Living Review
Anssi Manninen
Subject: Medicine & Pharmacology, Nutrition Keywords: obesity; body weight regulation; macronutrients; energy balance theory; mass balance model; paradigm shift; living review
According to known laws of physics, chronic positive mass balance is the actual etiology of obesity, not positive energy balance. The relevant physical law in terms of body mass regulation is the Law of Conservation of Mass, not the Law of Conservation of Energy. A recently proposed mass balance model (MBM) describes the temporal evolution of body weight and body composition under a wide variety of feeding experiments, and it seems to provide a highly accurate description of the very best experimental human feeding data. By shifting to a mass balance paradigm of obesity, a deeper understanding of this disease may follow in the near future. The purpose of this living review is to present the core issues of the upcoming paradigm shift as well as some practical applications related to the topic.
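For illustration only, and not the authors' MBM, a toy mass balance can make the idea concrete: the rate of change of body mass equals the mass entering (food and drink) minus the mass leaving (excretion, exhaled CO2, water vapor). A hypothetical Python sketch with made-up parameters:

    def body_mass_trajectory(m0_kg, intake_kg_per_day, loss_rate_per_day, days):
        # Toy mass balance: dM/dt = intake - k*M, where the k*M term lumps
        # together all mass leaving the body. Parameters are hypothetical,
        # not fitted values from the published MBM.
        m, trajectory = m0_kg, [m0_kg]
        for _ in range(days):
            m += intake_kg_per_day - loss_rate_per_day * m
            trajectory.append(m)
        return trajectory

    # An 80 kg person taking in 2.0 kg/day while losing 2.5% of body mass per day
    # is exactly at steady state (2.0 / 0.025 = 80 kg), so the weight holds.
    print(body_mass_trajectory(80.0, 2.0, 0.025, 5))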
Research on Yak Body Size and Weight Measurement Method Based on Deep Learning and Binocular Vision
Wenzhi Wang, Yuan Zhang, Jie He, Zhanqi Chen, Dan Li, Chong Ma, Yang Ba, Qiucuo Baima, Xiaoqin Li, Rende Song
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: yak; semantic segmentation; binocular vision; body size; weight estimation
To address the labor-intensive and time-consuming process of measuring yak body size and weight in the yak breeding industry of Qinghai Province, a non-contact method for measuring yak body size and weight was proposed in this work, and key technologies based on semantic segmentation, binocular ranging, and neural network algorithms were studied to boost the development of the industry. Main conclusions: (1) A yak foreground image extraction model based on the U-Net algorithm was implemented; 2263 yak images were selected for the experiment, verifying that the accuracy of the model in yak image extraction exceeds 97%. (2) An algorithm for estimating yak body size based on binocular vision was developed; measurement points related to body size were extracted and combined with the depth image to estimate the body dimensions. The final test shows that the average estimation error for body height and body oblique length is 2.6%, and for chest depth 5.94%. (3) A yak weight prediction model was studied: body height, body oblique length, and chest depth obtained by binocular vision were selected to estimate yak weight, and two algorithms were used to establish the prediction model, with verified average weight estimation errors of 10.7% and 13.01%, respectively.
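The binocular ranging step mentioned above reduces, for a rectified stereo pair, to the standard triangulation relation Z = f * B / d (depth = focal length x baseline / disparity). A minimal Python sketch with hypothetical calibration values (the paper's actual camera parameters are not given here):

    def depth_from_disparity(disparity_px, focal_px=1200.0, baseline_m=0.12):
        # Rectified stereo triangulation: Z = f * B / d.
        # focal_px and baseline_m are hypothetical calibration values.
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    print(depth_from_disparity(48.0))  # -> 3.0 m to the matched measurement point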
Maternal Serum and Cord Blood Leptin Concentrations at Delivery in Normal Pregnancies and in Pregnancies Complicated by Intrauterine Growth Restriction
Małgorzata Stefaniak, Ewa Dmoch-Gajzlerska
Subject: Medicine & Pharmacology, Allergology Keywords: leptin; cord leptin; pregnancy; intrauterine growth restriction; birth weight
Introduction: Leptin is a polypeptide hormone, and in pregnancy it is secreted by the placenta and by maternal and fetal adipose tissues. The expression of leptin and its specific receptors is observed in the uterine endometrium, which indicates leptin involvement in the implantation process and embryonic/fetal development. Normal leptin production is a factor responsible for uncomplicated gestation, embryo development and fetal growth. Objective: To compare, at delivery, maternal serum and cord blood leptin concentrations in normal pregnancies and in pregnancies complicated by intrauterine growth restriction (IUGR). Material and methods: The study was performed in 25 pregnant women with isolated IUGR diagnosed by ultrasonography (study subjects) and in 194 pregnant women without any comorbid health conditions (controls). Leptin concentrations in maternal serum and in cord blood samples collected at delivery were measured by ELISA and subsequently analyzed by maternal body mass index (BMI), mode of delivery, and infant gender and birth weight. For comparative analyses of normally distributed variables, parametric tests were used, i.e. the Student t-test, under the assumption of homogeneity or non-homogeneity of variance, and one-way ANOVA when more than two groups were compared. The non-parametric Mann-Whitney test was used when the distribution was not normal. The Pearson correlation coefficient was calculated to assess the correlation between normally distributed variables (p<0.05). Results: In pregnancies complicated by IUGR, the mean maternal serum leptin concentration at delivery was significantly higher (52.73 ± 30.49 ng/mL) than in normal pregnancies (37.17 ± 28.07 ng/mL) (p=0.01). The mean cord blood leptin concentration in pregnancies complicated by IUGR was 7.97 ± 4.46 ng/mL, significantly lower than in normal pregnancies (14.78 ± 15.97 ng/mL) (p=0.04). In normal pregnancies, but not in pregnancies complicated by IUGR, a statistically significant correlation was established between maternal serum leptin concentrations and maternal BMI at delivery (r=0.22; p=0.00). No statistically significant correlation was found between cord blood leptin concentrations and maternal BMI in either study subjects or controls. In normal pregnancies, but not in pregnancies complicated by IUGR, a strong correlation was observed between cord blood leptin concentrations and birth weight (r=0.23; p=0.00). In both study subjects and controls, there were no correlations between leptin concentrations in maternal serum and cord blood and infant gender or mode of delivery. Conclusions: Elevated maternal blood leptin concentrations in pregnancies complicated by IUGR may indicate a significant adverse effect of elevated leptin on fetal growth. Enhanced leptin production by the placenta suggests leptin as a candidate marker of placental insufficiency. The differences in leptin concentrations, measured in maternal serum and in cord blood, between the study subjects and controls suggest that deregulated leptin levels may increase the risk of obstetric complications associated with placental insufficiency.
Corrosion Inhibition Effect of 1,10-Phenanthroline-5,6-diamine on Mild Steel in Hydrochloric Acid Solution
Ahmed A. Al-Amiery
Subject: Materials Science, Surfaces, Coatings & Films Keywords: 1,10-Phenanthroline-5,6-diamine; corrosion inhibitor; weight loss method
The inhibition effects of 1,10-Phenanthroline-5,6-diamine (PTDA) on mild steel in 1 M HCl solution were investigated through the weight loss method. The inhibition efficiency of PTDA increases with increasing PTDA concentration at a temperature of 303 K. The weight loss measurements indicate that PTDA is an excellent inhibitor, with an inhibition efficiency of 81.5% at the maximum PTDA concentration of 0.5 g/L at 303 K.
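The weight loss method quantifies inhibition by comparing the mass lost by steel coupons with and without the inhibitor, conventionally IE% = (W0 - W) / W0 x 100, where W0 and W are the weight losses in uninhibited and inhibited acid. A small Python sketch with hypothetical coupon losses:

    def inhibition_efficiency(loss_blank_mg, loss_inhibited_mg):
        # IE% = (W0 - W) / W0 * 100, computed from coupon weight losses.
        return (loss_blank_mg - loss_inhibited_mg) / loss_blank_mg * 100.0

    # Hypothetical losses chosen to reproduce the reported 81.5% efficiency.
    print(inhibition_efficiency(100.0, 18.5))  # -> 81.5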
Fetal Macrosomia and Factors Associated with Adverse Perinatal Outcomes in Yaounde, Cameroon: A Case-Control Study
Anne Esther Njom Nlend, Josepha Gwodog, Arsene Brunelle Sandie
Subject: Medicine & Pharmacology, Pediatrics Keywords: fetal macrosomia; gestational diabetes; maternal obesity; birth weight; fetal growth
Objective: To identify risk factors for perinatal complications among macrosomic babies in a reference hospital. Methods: We conducted a case-control institution-based study. Cases and controls of singleton live births were extracted from the maternity registry from January 2017 to December 2019. The case population consisted of mother-and-child macrosomic couples with perinatal complications; the control group consisted of couples without perinatal complications. Matching was done on age and sex. The main primary endpoint was the risk factors for complications. Data were analyzed using R software, version 3.0, in adjusted and unadjusted analyses, with p<0.05 considered statistically significant. Results: Out of 362 couples, we had 186 cases and 176 controls. Maternal age ≥30 years (p=0.024); non-screening for gestational diabetes (p=0.027); history of caesarean section (p=0.041); weight gain ≥16 kg (p<0.001); maternal HIV (p=0.047); birth weight ≥4500g (p=0.015) and birth height ≥52.7 ±1.7cm (p=0.026) were risk factors. Conclusion: The delivery of a macrosomic baby remains problematic in this setting. Improving the maternal-fetal prognosis requires quality prenatal surveillance and management by a multidisciplinary perinatal team involving obstetricians, endocrinologists, and neonatal pediatricians.
Gypsum Supplies Calcium to Ultisol Soil and Its Effect on Pineapple Growth, Yield and Fruit Quality in a Lower Single Bed under Climate Change
Supriyono Loekito, Afandi Afandi, Auliana Afandi, Nasomasa Nishimura, Hiroyuki Koyama, Masateru Senge
Subject: Biology, Horticulture Keywords: Lower bed single row; plant weight; fruit texture; crop growth
Abstract: A lower bed single row for pineapple cultivation can protect pineapple from soil erosion in the rainy season and during drought; however, disease problems can arise due to waterlogging. Two experiments using a lower bed single row were done to assess the ability of gypsum to provide plant-available soil calcium (Ca), confer resistance to heart rot disease, and improve crop growth and fruit quality of pineapple in Ultisol soil. In the first trial, four dose levels of gypsum (0, 1.0, 1.5, 2.0 Mg ha-1) and dolomite at 2 Mg ha-1 were applied by spreading and incorporated into soil that had been saturated with inoculum of Phytophthora nicotianae. In the second trial, gypsum treatments (0, 1.0, 1.5, 2.0, 2.5 Mg ha-1) were applied in the row between the single-row beds as a basic fertilizer. The results showed that P. nicotianae attacked the pineapple plants in all treatments at 6 weeks after planting (WAP), and at 10 WAP the mortality of the dolomite treatment reached 63.8%, significantly different from that of the gypsum treatments (3.3-14.3%). In the second experiment, gypsum increased plant weight significantly at 3 to 9 months after planting, especially when applied at 1.5-2.5 Mg ha-1. Fruit texture, total soluble solids (TSS) and titratable acidity (TA) were not significantly different among the treatments, but all met the standards for grades of canned pineapple. The results showed that gypsum applied to the soil before planting provides calcium, meets the plant Ca requirement during the early, fast-growth period, and is safe with respect to heart rot disease.
Perceived Ideal Body Weight Exacerbates Bulimia and Dieting in Bodybuilding Athletes
Dimitris Efthymiou, Lampros Kokokiris, Christina Mesiari, Emilia Vassilopoulou
Subject: Behavioral Sciences, Applied Psychology Keywords: athletes; eating disorders; weight loss; body dissatisfaction; body image disorders
Online: 27 May 2021 (08:50:10 CEST)
The passion of bodybuilding athletes for a symmetric, lean, heavily muscled body leads them to carry out exhausting exercise programs and restrictive eating regimens, sometimes resulting in disordered eating behaviors. This study investigates potential exacerbators of the development of disordered eating in bodybuilding and strength athletes. The study involved 103 Cypriot bodybuilding athletes of both sexes, performing at three levels: professional, recreational and strength athletes. The Eating Attitude Test 26 (EAT-26) and the Three Factor Eating Questionnaire (TFEQ-R21) were used to evaluate disordered eating and eating behaviors, respectively. The study was performed under the auspices of the Hellenic Center of Education & Treatment of Eating Disorders (KEADD). The degree of deviation between the perceived ideal body weight and the actual body weight was associated with increased risk of eating disorder. Athletes who desired a lower body weight recorded higher scores on EAT-26 overall (p=0.001) and on the subscales of dieting (p=0.01) and bulimia (p=0.001). The cognitive restraint and emotional eating scales of the TFEQ-R21 were more pronounced in the non-professional athletes (p=0.01). The emotional eating score was higher in women. There is a need for appropriate sport-specific, gender-specific preventive intervention to deescalate the risk of eating disorder in both professional and non-professional bodybuilding athletes.
Preprint BRIEF REPORT | doi:10.20944/preprints202105.0402.v2
I Can Achieve Intrauterine Growth Rate if You Give Me Enough Nutrition: Preterm Infants Born at ≤29 Weeks Gestation
Angela Hoyos
Subject: Medicine & Pharmacology, Allergology Keywords: Very preterm infants; Z-score on weight; neonatal nutrition; appropriate intrauterine neonatal growth
Introduction: In general, everyone believes that the smallest preterm infants should achieve normal intrauterine growth rates, but many think that this is not possible with current nutrition guidelines. There is resistance to giving enough nutrition for fear of "toxicity". The difference in weight Z-score between birth and a given corrected gestational age (CGA) at discharge (WZP) is used to assess postnatal growth in our unit. Material and methods: An observational study was conducted between January 2018 and December 2020 including all cases born at ≤29 weeks of GA that survived to 36 weeks corrected GA or were discharged home. An aggressive nutrition protocol including parenteral as well as enteral nutrition was followed. Each patient's weight trajectory was plotted on the Fenton 2013 growth curve. The patients who had a smaller WZP difference were also plotted. Results: A total of 32 cases were found. The median change in Z-score between birth and discharge for the whole group was -0.52 (IQR 0.53). Six of 32 (19%) had a WZP difference of more than one, all of whom had severe pathologies. The median decline in Z-score for this group with poor growth was 1.24 (IQR 0.22). There were 26 cases with a WZP difference < 1 (81%), with a median Z-score fall of 0.39 (IQR 0.55). No important complications secondary to the ingested volumes or parenteral nutrition were reported. Conclusion: The group of cases with a WZP drop > 1 had severe pathologies. All the other cases had adequate growth parallel to normal weight growth charts, and a few cases had some catch-up growth. The study showed that it is possible for many preterm infants to achieve normal intrauterine growth rates if they are given enough nutrition, but larger multicenter studies are needed to confirm these findings.
Exacerbating Factors of Eating Disorder Risk in Bodybuilding Athletes
The passion of bodybuilding athletes for a symmetric, lean, heavily muscled body leads them to carry out exhausting exercise programs and restrictive eating regimens, sometimes resulting in disordered eating behaviors. This study investigates potential exacerbators on the development of disordered eating in bodybuilding and strength athletes. The study involved 103 Cypriot bodybuilding athletes of both sexes, performing at three levels: professional, recreational and strength athletes. The Eating Attitude Test 26 (EAT-26) and The Three Factor Eating Questionnaire (TFEQ-R21) were used to evaluate disordered eating and eating behaviors respectively. The current study was performed under the auspices of the Hellenic Center of Education & Treatment of Eating Disorders (KEADD). The degree of deviation between the perceived ideal body weight and the actual body weight was associated with increased risk of eating disorder. Athletes who desired a lower body weight recorded higher scores on EAT-26 overall, and the subscales of dieting and bulimia. Cognitive restraint and emotional eating scales of TFEQ-R21 were more pronounced in the non-professional athletes. The emotional eating score was higher in women. There is a need for appropriate sport-specific, gender-specific preventive intervention to deescalate the risk of eating disorder, in both professional and non-professional bodybuilding athletes.
Is a Mediterranean Diet Associated with Subjective Well-Being among Adults with Overweight and Obesity? The Key Role of Fruit and Vegetable Consumption and Body Satisfaction
Débora Godoy-Izquierdo, Adelaida Ogallar, Raquel Lara, Alejandra Rodriguez-Tadeo, Félix Arbinaga
Subject: Behavioral Sciences, Applied Psychology Keywords: healthy diet; fruits and vegetables; body image; happiness; excessive weight
Online: 16 March 2021 (11:57:13 CET)
Recent evidence suggests that among behavioral-lifestyle factors, adherence to a healthy dietary pattern such as the Mediterranean Diet (MedDiet) is linked not only to better psychological health and positive mental status but also to increased subjective well-being (SWB). Nevertheless, this association remains unexplored among individuals with excessive weight. This study explored whether adherence to the MedDiet and the intake of healthy foods such as fruits and vegetables (FV) are associated with increased happiness and life satisfaction among Spanish adults with overweight or obesity when weight, body image, and body satisfaction are also considered. A convenience sample of adult individuals with excessive weight completed self-reports on the study variables, and weight and BMI were measured by bioimpedance. No evidence of a relationship with SWB indicators was obtained for global MedDiet indicators, probably due to the low adherence to a healthy diet by these individuals. In contrast, FV intake, as a powerful indicator of healthy eating, was associated with life satisfaction when BMI and body image dimensions were considered, among which body satisfaction also had a key role. Moreover, life satisfaction fully mediated the relationship between FV consumption and happiness. Our findings are expected to make a relevant contribution to knowledge on the positive correlates and protective factors for overall well-being in obesity, including dietary habits and body appreciation. Our results may inform obesity management actions focused on inclusive, positive aesthetic models and on promoting a healthy lifestyle for happiness in obesity.
Dementia Patients' Meal Monitoring Systems Using Weight and Temperature Sensors
Ji-Eun Joo, Haewon Hwang, Yujin Jeon, Jaewon Jung, Yu Hu, Sung Min Park
Subject: Medicine & Pharmacology, Allergology Keywords: Arduino; Bluetooth; load cell; monitoring system; temperature sensor; weight sensor
This paper presents a pair of meal monitoring systems for senile dementia patients based on electronic weight and temperature sensors. These monitoring systems convey information about the amount of food eaten by the patients in real time, via wireless communication networks, to the mobile phones of their families or of the nurses in charge. Thereby, the nurses can easily spot the patients most in need of care, while the families are reassured by seeing information crucial to the survival of their parents at least three times a day. Senile dementia patients also tend to burn their tongues because they can hardly recognize the temperature of the hot meals served. Such burns can be avoided with the meal temperature monitoring system, which displays an alarm to the patient when the meal temperature is above a reference value. These meal monitoring systems can be implemented with low-cost sensor chips and Arduino UNO boards, so elder-care hospitals and nursing homes can afford to deploy them at little additional cost. Hence, we believe that the proposed monitoring systems could provide great help and relief not only for professional nurses working in elder-care hospitals and nursing homes, but also for the families of dementia patients.
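The abstract does not include an implementation, but the decision logic it describes (report the meal amount as a tray-weight difference; raise an alarm when meal temperature exceeds a reference) is simple. Below is a minimal Python sketch of that logic only; the actual system runs on Arduino UNO hardware, and the sensor-read values and threshold here are hypothetical stand-ins.

```python
# Sketch of the meal-monitoring decision logic described above.
# Weight and temperature values would come from the load-cell and
# temperature-sensor drivers on the real device.

TEMP_REFERENCE_C = 45.0  # assumed alarm threshold for "too hot"

def meal_amount_eaten(weight_before_g: float, weight_after_g: float) -> float:
    """Amount of food eaten, as the drop in tray weight."""
    return max(0.0, weight_before_g - weight_after_g)

def temperature_alarm(meal_temp_c: float, reference_c: float = TEMP_REFERENCE_C) -> bool:
    """True when the meal is hotter than the reference and an alarm should show."""
    return meal_temp_c > reference_c

# Example: a tray weighed before and after a meal, and a hot soup.
print(meal_amount_eaten(850.0, 520.0))   # 330.0 g eaten -> sent to caregivers
print(temperature_alarm(62.5))           # True -> display alarm to the patient
```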
Inclusion of Different Molecular Weight Condensed Tannin on Ruminal Fermentation and Milk Fatty Acid Profile of Dairy Goats
Siwaporn Paengkoum, Anan Petlum, Pramote Paengkoum
Subject: Biology, Animal Sciences & Zoology Keywords: CTs molecular weight; ruminal fermentation; bio-hydrogenation; milk compositions; goat
The aim of this study was to investigate the effect of condensed tannins (CTs) of differing molecular weight on their capacity to modify the fatty acid profile in milk. Twenty multiparous crossbred lactating dairy goats were assigned in a randomized complete block design (RCBD) and received the following dietary treatments: T1, control (no CT supplementation); T2, supplemented with mangosteen peel in a concentrate as a source of low molecular weight CTs at a level of 3.0 %DM of CT equivalent; T3, the same diet as T2 but with added polyethylene glycol (PEG, a tannin inactivator) as the control for T2; T4, supplemented with quebracho CT extract (UNITAN ATO, Buenos Aires, Argentina; 75-77 % tannins) in a concentrate as a source of high molecular weight CTs at a level of 3.0 %DM of CT equivalent; and T5, the same diet as T4 but with added PEG as the control for T4. No significant change was detected in feed intake or nutrient digestibility, indicating that CTs at a level of 3.0 %DM of the diet did not have detrimental effects on feed intake and nutrient digestibility; ruminal fermentation parameters, milk yield and milk composition were likewise not affected by the different sources of CT inclusion.
Dynamic Spillover and Hedging among Carbon, Biofuel and Oil
Yeonjeong Lee, Seong-Min Yoon
Subject: Social Sciences, Economics Keywords: EUA; EU ETS; Spillover; Optimal weight; Hedging ratio; Sudden change
With the rapid spread of carbon trading in the global economy, the interactions of prices between carbon (or clean/renewable energy) and traditional fossil energies such as coal and oil have attracted growing attention, but little research has discussed their dynamic volatility spillover and time-varying correlation. The purpose of this study is to investigate these issues, using weekly data on EUA futures, biofuel and Brent oil prices from 25 October 2009 to 5 July 2020. We employ the VAR-GARCH model with the BEKK specification. Our results are summarized as follows. First, we identified sudden changes and volatility persistence in the three markets, and confirmed that the volatility of the markets has changed significantly over time. Second, we find a weak volatility spillover effect among the three markets but a strong spillover effect between the EUA and Brent oil markets. Lastly, in financial markets, the EUA can be used as a hedging asset for the biofuel and Brent oil markets. These results can help investors compose their portfolios and manage their investment risks, and help potential pollutant emission sources join the carbon market in a cost-effective way.
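For readers unfamiliar with how hedging quantities are obtained from such models: from the conditional variances and covariance $h_{11,t}$, $h_{22,t}$, $h_{12,t}$ estimated by a bivariate BEKK-GARCH, this literature usually computes the Kroner-Sultan hedge ratio and the Kroner-Ng optimal portfolio weight. The standard textbook expressions are shown below; the paper's exact definitions may differ in detail.

$$\beta_t = \frac{h_{12,t}}{h_{22,t}}, \qquad w_{12,t} = \frac{h_{22,t} - h_{12,t}}{h_{11,t} - 2h_{12,t} + h_{22,t}},$$

with $w_{12,t}$ truncated to $[0,1]$: a long position of one unit in asset 1 is hedged by shorting $\beta_t$ units of asset 2, and $w_{12,t}$ is the weight of asset 1 in the minimum-variance two-asset portfolio.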
Effect of Meal Frequency on Weight Loss, Glycemia, Lipid Profile, Plasma Ghrelin and Energy Expenditure of Women With Obesity: A Clinical Trial
Érika Duarte Grangeiro, Mariana Silva Trigueiro, Leysimar de Oliveira Siais, Hilana Moreira Paiva, Mauro Sola-Penna, Eliane Lopes Rosado
Subject: Medicine & Pharmacology, Nutrition Keywords: obesity; meal frequency; hypocaloric diet; energy expenditure; ghrelin; weight loss
A dietary approach is essential to obesity control, but the effectiveness of changes in meal frequency (MF) as a strategy for loss and maintenance of body mass remains unclear. This study aimed to evaluate the influence of MF within a hypocaloric diet on weight loss, active ghrelin levels and metabolic indicators of women with obesity. This is a randomized, parallel clinical trial including forty women, randomized into two groups according to MF, both following a hypocaloric diet (G1, six meals/day; G2, three meals/day). Dietary, laboratory, anthropometric and body composition indicators were assessed, as well as energy expenditure (EE), before and after the 90 days of intervention. After the intervention, both groups decreased body weight, body mass index (BMI), waist circumference (WC), fat mass (FM), insulin and HOMA-IR. G1 increased insulin sensitivity, and G2 reduced triglycerides and FM and increased fat-free mass (FFM). MF increased ghrelin levels. There were no differences in EE variables. The hypocaloric diet with different MF promoted a reduction in total weight, BMI, WC and FM and an improvement in glycidic metabolism. However, three meals/day increased FFM and active ghrelin and reduced triglycerides, while six meals/day was more beneficial in increasing insulin sensitivity.
The Results on Vertex Domination in Fuzzy Graphs
Mohammadesmail Nikfar
Subject: Mathematics & Computer Science, Other Keywords: fuzzy graph; $\alpha$-strong arcs; weight of nodes; vertex domination
We fuzzify the concept of domination in crisp graphs by using the membership values of nodes, $\alpha$-strong arcs, and arcs. In this paper, we introduce a new variation on the domination theme which we call vertex domination. We determine the vertex domination number $\gamma_v$ for several classes of fuzzy graphs, especially complete fuzzy graphs and complete bipartite fuzzy graphs. Bounds are obtained for the vertex domination number of fuzzy graphs. The relationship between $M$-strong arcs and $\alpha$-strong arcs is also obtained. In fuzzy graphs, the monotone decreasing property and the monotone increasing property are introduced. We prove that Vizing's conjecture is a monotone decreasing fuzzy graph property for vertex domination, and we prove the same for the Grarier-Khelladi conjecture. We obtain Nordhaus-Gaddum (NG) type results for these parameters. The relationship between several classes of operations on fuzzy graphs and their vertex domination numbers is studied.
Early Fetal Weight Estimation with Expectation Maximization Algorithm
Loc Nguyen, Thu-Hang T. Ho
Subject: Medicine & Pharmacology, Obstetrics & Gynaecology Keywords: fetal weight estimation; regression model; ultrasound measures; expectation maximization algorithm
Fetal weight estimation before delivery is important in obstetrics, as it assists doctors in diagnosing abnormal or diseased cases. Linear regression based on ultrasound measures such as bi-parietal diameter (bpd), head circumference (hc), abdominal circumference (ac), and fetal length (fl) is a common statistical method for weight estimation, but the regression model requires that the time points at which such measures are collected must not be too far from the last ultrasound scans. Therefore, this research proposes a method of early weight estimation based on the expectation maximization (EM) algorithm, so that ultrasound measures can be taken at any time point in the gestational period. In other words, the gestational sample can lack some or many fetal weights, which is convenient for practitioners because they need not be concerned with fetal weights when taking ultrasound examinations. The proposed method is called the dual regression expectation maximization (DREM) algorithm. Experimental results indicate that the accuracy of DREM decreases only insignificantly even when the completeness of the ultrasound sample decreases significantly, demonstrating that DREM withstands missing values in incomplete or sparse samples.
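The DREM algorithm itself couples two regression models and is not reproduced in the abstract; the sketch below shows only the generic EM-style impute-and-refit loop for a linear regression with missing responses, on which such methods build. The data and column meanings are invented for illustration.

```python
import numpy as np

def em_regression(X, y, n_iter=50, tol=1e-8):
    """EM-style impute-and-refit loop for a linear regression whose
    response vector y has missing entries (np.nan).

    X : (n, p) design matrix of ultrasound measures (e.g. bpd, hc, ac, fl)
    y : (n,)  fetal weights, with np.nan where weight was not recorded
    """
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    miss = np.isnan(y)
    y_work = y.copy()
    y_work[miss] = np.nanmean(y)                 # initialize missing responses
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        # M-step: least-squares fit on the completed data.
        beta_new, *_ = np.linalg.lstsq(X1, y_work, rcond=None)
        # E-step: replace missing responses by their expected values.
        y_work[miss] = X1[miss] @ beta_new
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta

# Example with invented data: 3 measures, weight missing for half the scans.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3000 + X @ np.array([200.0, 150.0, 100.0]) + rng.normal(0, 50, 200)
y[rng.random(200) < 0.5] = np.nan
print(em_regression(X, y).round(1))  # recovers roughly [3000, 200, 150, 100]
```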
Overview of Gulf of Mottama Wetland (GoMW) & Size Distribution and Economic Status of Sea Bass in Myanmar
Phyoe Marnn, Chunguang He, Haider Ali, Soe Moe Tun, Khin Swe Wynn, Nyein Nyein Moe, Tao Yang, Nizeyimana Jean Claude, Muhammad Hasnain, Thaw Tar Oo, Yousef A. Al-Masnay
Subject: Biology, Other Keywords: The Gulf of Mottama Wetland, Morphometric measurement, catch weight, size group
The present study examined the status of sea bass from Kokko and Kyuntone in the Gulf of Mottama Wetland (GoMW) area in Thanatpin Township, Bago Region, Myanmar, from September 2019 to August 2020. Fifty specimens were collected, measured and weighed monthly. Invoices for sea bass were collected monthly from the depot and fish sellers. In Kokko, the mean standard length and body weight were highest in March (32.70±1.58 and 660.7±112.23, respectively). In Kyuntone, the mean standard length peaked in January (31.39±7.16) but body weight peaked in March (963.24±280.86). The lowest mean standard length and body weight were found in June at both study areas. The invoice data revealed that the monthly catch weight of sea bass was highest in October: 829.92 kg in Kokko and 339.12 kg in Kyuntone. Based on price in relation to size group, the small size C < 300 g was the most abundant in Kokko (41%) and the second most abundant in Kyuntone (35%). Specimens were not landed in April and May. In June, young specimens were very rarely seen at both study sites. The important roles of wetland fishes, the economic valuation of the GoMW in Myanmar, and examples of fishing gear and the value chain of sea bass in Myanmar are also described in this study.
Heat Treatment in Two Tomato Cultivars: A Study of the Effect on Physiological and Growth Recovery
Sherzod Nigmatullayevich Rajametov, Eun Young Yang, Hyo Bong Jeong, Myeong Cheoul Cho, Soo-Young Chae, Niroj Paudel
Subject: Biology, Anatomy & Morphology Keywords: tomato; temperature; damage; seedling; plant; root; weight; photosynthesis; proline; electrical conductivity
High temperature (HT) significantly affects crop physiological traits and reduces productivity in plants. To increase yields as well as the survival of crops under HT, developing heat-tolerant plants is one of the main targets in crop breeding programs. The present study investigated the linkage of heat tolerance between the seedling and reproductive growth stages of the tomato cultivars 'Dafnis' and 'Minichal'. Heat tolerance was evaluated under two experimental designs: screening at the seedling stage and screening of reproductive traits in greenhouses. Survival rate and physiological responses of tomato seedlings with 4-5 true leaves were estimated under HT (40 °C, RH 70%, day/night) and under control and HT greenhouse conditions (daytime 28 °C and 40 °C, respectively). Heat stress significantly affected physiological-chemical parameters (photosynthesis, electrolyte conductivity, proline) and vegetative parameters (plant height, shoot fresh weight, root fresh weight) in all tomato seedlings. The findings revealed that, regardless of cultivar, photosynthesis, chlorophyll, total proline and electrical conductivity varied in seedlings during the heat stress period. The heat tolerance of tomatoes at the seedling stage may not always be associated with reproductive parameters. HT reduced the fruit parameters, such as fruit weight (31.9%), fruit length (14.1%), fruit diameter (19.1%) and fruit hardness (9.1%), compared to normal temperature in the heat-susceptible cultivar 'Dafnis', while in the heat-tolerant cultivar 'Minichal' fruit length (7.1%) and fruit diameter (12.1%) were decreased by HT but fruit weight (3.6%) and fruit hardness (8.3%) were, on the contrary, increased. In conclusion, screening and selection of tomatoes should be evaluated at the vegetative and reproductive stages with consideration of reproductive parameters.
Application of Molecular Weight Regulators for the Synthesis of Sodium Polyacrylate Thinners of Mineral Suspensions
Dmitry Belov
Subject: Materials Science, Biomaterials Keywords: mineral suspension; thinner; free radical polymerization; molecular weight regulator; sodium polyacrylate
Additives for thinning mineral suspensions based on sodium polyacrylate were synthesized. The effect of molecular weight regulators on the molecular weight characteristics of the polymer, and the effect of such polymers on the rheological properties of suspensions, were studied. Sodium acrylate polymers were synthesized by free radical polymerization in aqueous solution using molecular weight regulators. The molecular weight characteristics of the polymer samples were estimated by viscometry using the Mark-Houwink-Kuhn-Sakurada (MHKS) equation. The synthesized polymers were used as thinners for ceramic slurries prepared according to the recipes of enterprises producing ceramic products. The thinning ability of polymer samples with different molecular weights was estimated with an Engler viscometer from the flow time of the ceramic slurry. The influence of the type and amount of molecular weight regulator on the polyacrylates was revealed. The molecular weight of the synthesized samples was in the range of 21000-91000. Samples with a molecular weight of 28000-35000, synthesized using mercaptoethanol at a dosage of 0.5-1.5% by weight of the monomer, provided optimal fluidity to the ceramic slurry.
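For reference, the MHKS relation used for these viscometric estimates links the intrinsic viscosity $[\eta]$ to the viscosity-average molecular weight $M_v$; in its standard form (the constants $K$ and $a$ depend on the polymer-solvent-temperature system and are assumed to be taken from tables):

$$[\eta] = K M_v^{\,a} \quad\Longrightarrow\quad M_v = \left(\frac{[\eta]}{K}\right)^{1/a}.$$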
Analysis of Impact Characteristics and Detection of Internal Defects for Unidirectional Carbon Composites with Respect to Fiber Orientation
Sun-ho Go, Alexandre Tugirumubano, Hong-gun Kim
Subject: Materials Science, Biomaterials Keywords: drop-weight impact; unidirectional carbon composites; orientation angle; internal defect; impact
With the increasing use of carbon fiber reinforced plastics in various areas, carbon fiber composites based on prepregs have attracted attention in industry and academic research. However, prepreg manufacturing processes are costly, and the strength of structures varies depending on the fiber orientation and on defects (pores and delamination). For non-contact evaluation of internal defects, we propose lock-in infrared thermography to investigate orientation angles after a compression test. We also conducted drop-weight impact tests to study the behaviour of the composites after impact according to fiber orientation, for composites fabricated using unidirectional carbon fiber prepregs. Using compression-after-impact (CAI) tests, we determined the residual compressive strength and confirmed the damage modes using a thermal camera. The results of the drop-weight impact tests show that the specimen laminated at 0° suffered the largest damage because of the susceptibility of the resin to impact. In contrast, the specimens oriented in the 0°/90° and +45°/–45° directions transferred more than 90% of the impact energy back to the impactor because of the lamination of fibers in orthogonal directions. Furthermore, the specimens that underwent complete damage in the impact tests were subjected to the lock-in method and showed internal delamination and cut fibers. With finite element analysis, the damage of each ply could be observed. Moreover, the temperature differences in the residual compression tests were not significant.
Effects of Air Pollution on the Risk of Low Birth Weight in a Cold Climate
Hamudat Balogun, Aino Rantala, Harri Antikainen, Nazeeba Siddika, A.Kofi Amegah, Niilo Ryti, Jaakko Kukkonen, Mikhail Sofiev, Maritta S. Jaakkola, Jouni Jaakkola
Subject: Medicine & Pharmacology, Obstetrics & Gynaecology Keywords: Air pollution; low birth weight; prenatal exposure; joint effects; cold climate
There is accumulating evidence that prenatal exposure to air pollution disturbs fetal growth and development, but little is known about these effects in cold climates or about their season-specific or joint effects. Our objective was to assess independent and joint effects of prenatal exposure to specific air pollutants on the risk of low birth weight (LBW). We utilized the 2568 children of the Espoo Cohort Study, born between 1984 and 1990 and living in the City of Espoo. We conducted stratified analyses for births during the warm and cold seasons separately. We estimated the effects using multi-pollutant Poisson regression models with the risk ratio (RR) as the measure of effect. The risk of LBW was related to exposure to CO (adjusted RR 1.44, 95% CI: 1.04-2.00) and to exposure to O3 in the spring-summer season (1.82, 1.11-2.96). There was also evidence of synergistic effects between CO and O3 (relative excess risk due to interaction, RERI: all year 1.08, 95% CI: 0.27-4.94; spring-summer 3.97, 2.17-25.85) and between PM2.5 and O3 (all year 0.72, -0.07-3.60; spring-summer 2.80, 1.36-19.88). We present new evidence of both independent and joint effects of prenatal exposure in a cold climate on the risk of LBW at low levels of air pollution.
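For readers unfamiliar with the interaction measure: RERI quantifies additive interaction on the risk-ratio scale, with the standard definition (RERI > 0 indicating synergy):

$$\text{RERI} = RR_{11} - RR_{10} - RR_{01} + 1,$$

where $RR_{11}$ is the risk ratio for joint exposure to both pollutants and $RR_{10}$, $RR_{01}$ are the risk ratios for each exposure alone.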
Designing a Sandwich Aircraft Spoiler with Lattice Structure Cores
Haifeng Ou, Jie Liu, Junfeng He, Zufeng Pang, Yonghui Zhang, Wei Wei, Hongxin Wang, Gang Zhao, Guilin Wen
Subject: Engineering, Mechanical Engineering Keywords: aircraft spoiler; topology optimization; lattice structure; high stiffness-to-weight ratio
By combining a continuum topology optimization (TO) method and a lattice structure technique, a sandwich aircraft spoiler with a high stiffness-to-weight ratio is designed. The TO method serves to produce the shell of the aircraft spoiler, and the lattice structure, used as the core, supports the shell. The TO problem is formulated as maximizing the stiffness of the structure with a limited material volume. A density-based method is utilized to achieve a 0/1 solution. We then empirically replace the core of the aircraft spoiler with a 3D kagome lattice structure. Two different materials, aluminum alloy and titanium alloy, are applied together to further reduce the weight and simultaneously improve the strength of the aircraft spoiler. Numerical simulations show that the designed aircraft spoiler can meet the service environment with a weight reduction of approximately 80% compared with the initial design model. Finally, we fabricated the designed model in photosensitive resin using a 3D printing technique.
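The stiffness-maximization statement above corresponds, in the standard density-based (SIMP) formulation, to minimizing compliance under a volume constraint; the canonical problem is shown below as background and is assumed here rather than taken from the paper:

$$\min_{\boldsymbol{\rho}}\ c(\boldsymbol{\rho}) = \mathbf{U}^{T}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} \quad \text{s.t.}\quad \mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} = \mathbf{F},\quad \sum_{e}\rho_{e}v_{e} \le f\,V_{0},\quad 0 < \rho_{\min} \le \rho_{e} \le 1,$$

where the element stiffness is penalized as $E_e(\rho_e) = \rho_e^{\,p} E_0$ (typically $p = 3$), which drives the optimal densities toward the 0/1 solution mentioned above.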
Genome Based Meta-QTL Analysis of Grain Weight in Tetraploid Wheat Identifies Rare Alleles of GRF4 Associated with Larger Grains
Raz Avni, Leah Oren, Gai Shabtai, Siwar Assili, Curtis Pozniak, Iago Hale, Roi Ben-David, Zvi Peleg, Assaf Distelfeld
Subject: Biology, Plant Sciences Keywords: wheat; emmer; domestication; genome assembly; QTL; meta-QTL; grain weight; GRF4
The domestication and subsequent genetic improvement of wheat led to the development of large-seeded cultivated wheat species relative to their smaller-seeded wild progenitors. While increased grain weight (GW) continues to be an important goal of many wheat breeding programs, few genes underlying this trait have been identified despite an abundance of studies reporting quantitative trait loci (QTLs) for GW. Here we perform a QTL analysis for GW using a population of recombinant inbred lines (RILs) derived from the cross between wild emmer wheat accession 'Zavitan' and durum wheat variety 'Svevo'. Identified QTLs in this population were anchored to the recent Zavitan reference genome, along with previously published QTLs for GW in tetraploid wheat. This genome-based, meta-QTL analysis enabled the identification of a locus on chromosome 6A whose introgression from wild wheat positively affects GW. The locus was validated using an introgression line carrying the 6A GW QTL region from Zavitan in a Svevo background, resulting in >8% increase in GW compared to Svevo. Using the reference sequence for the 6A QTL region, we identified a wheat ortholog to OsGRF4, a rice gene previously associated with GW. The coding sequence of this gene (TtGRF4-A) contains four SNPs between Zavitan and Svevo, one of which reveals the Zavitan allele to be rare in a core collection of wild emmer and completely absent from the domesticated emmer genepool. Similarly, another wild emmer accession (G18-16) was found to carry a rare allele of TtGRF4-A that also positively affects GW and is characterized by a unique SNP absent from the entire core collection. These results exemplify the rich genetic diversity of wild wheat, posit TtGRF4-A as a candidate gene underlying the 6A GW QTL, and suggest that the natural Zavitan and G18-16 alleles of TtGRF4-A have potential to increase wheat yields in breeding programs.
Gravity without Newton's Gravitational Constant and No Knowledge of Mass Size
Espen Gaarder Haug
Subject: Physical Sciences, General & Theoretical Physics Keywords: Schwarzschild radius; weight, planck mass; planck length; measurement; gravitational constant; Heisenberg
In this paper we show that the Schwarzschild radius can be extracted easily from any gravitationally-linked phenomenon without knowledge of the Newton gravitational constant or the mass of the gravitational object. Further, the Schwarzschild radius can be used to predict any gravity phenomenon accurately, again without knowledge of the Newton gravitational constant and without knowledge of the size of the mass, although this may seem surprising at first. Hidden within the Schwarzschild radius are the mass of the gravitational object, the Planck mass (their relative mass), and the Planck length. We do not claim to have all the answers, but this seems to indicate that gravity is quantized, even at a cosmological scale, and that this quantization is directly linked to the Planck units. This also supports our view that the Newton gravitational constant is a universal composite constant of the form $G = \frac{l_p^2 c^3}{\hbar}$, rather than the Planck units being functions of G. This does not mean that Newton's gravitational constant is not a universal constant, but that it is instead a composite universal constant that depends on the Planck length, the speed of light, and the Planck constant. Further, $\frac{G \times 1\,\text{weight unit}}{c^2} = \frac{G}{c^2}$ is the Schwarzschild radius of one weight unit, so G is only needed when we want to use gravity to find the weight of an object, such as weighing the Earth. This is, to our knowledge, the first paper that shows how a long series of major gravity predictions and measurements can be completed without any knowledge of the mass of the object or Newton's gravitational constant. As a minimum, we think it provides an interesting new angle for evaluating existing gravity theories, and it may even give us a small hint on how to combine quantum gravity with Newton and Einstein gravity.
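A short worked version of the substitution the abstract describes, assuming only the standard definitions of the Planck length and Planck mass:

$$l_p = \sqrt{\frac{\hbar G}{c^3}} \;\Rightarrow\; G = \frac{l_p^2 c^3}{\hbar}, \qquad r_s = \frac{2GM}{c^2} = \frac{2\,l_p^2\,c\,M}{\hbar} = 2\,l_p\,\frac{M}{m_p}, \qquad m_p = \frac{\hbar}{l_p\,c},$$

so the Schwarzschild radius depends only on the Planck length and the mass expressed in Planck masses, consistent with the abstract's claim that neither G nor the mass in conventional units needs to be known separately.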
Functional Comparison of High and Low Molecular Weight Chitosan on Lipid Metabolism and Signals in High-Fat Diet-Fed Rats
Shing-Hwa Liu, Chen-Yuan Chiu, Ching-Ming Shi, Meng-Tsan Chiang
Subject: Medicine & Pharmacology, Pharmacology & Toxicology Keywords: high and low molecular weight chitosan; lipid metabolism; liver lipid accumulation
Online: 3 July 2018 (12:13:20 CEST)
The present study examined and compared the effects of high- and low-molecular-weight (MW) chitosan, a nutraceutical, on intestinal and liver lipid metabolism in rats fed a high-fat diet. Both high- and low-MW chitosan decreased liver weight, elongated the small intestine, improved the dysregulation of blood lipids and liver fat accumulation, and increased fecal lipid excretion in high-fat diet-fed rats. Supplementation with either high- or low-MW chitosan significantly counteracted the decreased phosphorylated AMP-activated protein kinase (AMPK)α and peroxisome proliferator-activated receptor (PPAR)α protein expression, the increased lipogenesis/cholesterogenesis-associated protein expression (sterol regulatory element binding protein (SREBP)1c, SREBP2, and PPARγ), and the decreased apolipoprotein (Apo)E and microsomal triglyceride transfer protein (MTTP) protein expression in the livers of high-fat diet-fed rats. Both high- and low-MW chitosan supplementation also suppressed the increased MTTP protein expression and the decreased angiopoietin-like protein (Angptl)4 protein expression in the intestines of high-fat diet-fed rats. Comparing the two, high-MW chitosan was more effective than low-MW chitosan at inhibiting intestinal lipid absorption and increasing hepatic fatty acid oxidation, thereby improving liver lipid biosynthesis and accumulation.
Effects of Topical Anaesthetic and Buccal Meloxicam Treatments on Concurrent Castration and Dehorning of Beef Calves
Dominique Van der Saag, Peter White, Lachlan Ingram, Jaime Manning, Peter Windsor, Peter Thomson, Sabrina Lomax
Subject: Life Sciences, Other Keywords: behaviour; castration; cattle; dehorning; buccal meloxicam; pain; topical anaesthetic; weight gain
The use of pain relief during castration and dehorning of calves on commercial beef operations can be limited by constraints associated with the delivery of analgesic agents. As topical anaesthetic (TA) and buccal meloxicam (MEL) are now available in Australia, offering practical analgesic treatments for concurrent castration and dehorning of beef calves, a study was conducted to determine their efficacy in providing pain relief when applied alone or in combination. Weaner calves were randomly allocated to: (1) no castration and dehorning / positive control (CONP); (2) castration and dehorning / negative control (CONN); (3) castration and dehorning with buccal meloxicam (BM); (4) castration and dehorning with topical anaesthetic (TA); and (5) castration and dehorning with buccal meloxicam and topical anaesthetic (BMTA). Weight gain, paddock utilisation, lying activity and behaviour following treatment were measured. CONP and BMTA calves had significantly greater weight gain than CONN calves (P < 0.001). CONN calves spent less time lying than BMTA calves on all days (P < 0.001). All dehorned and castrated calves spent more time walking (P = 0.024) and less time eating (P < 0.001) than CONP calves. There was a trend for CONP calves to spend the most time standing and CONN calves the least (P = 0.059). There were also trends for the frequency of head turns to be lowest in CONP and BMTA calves (P = 0.098) and tail flicks to be highest in CONN and BM calves (P = 0.061). The findings of this study suggest that TA and MEL can improve the welfare and production of calves following surgical castration and amputation dehorning.
Modulation of Gut Microbiota of Overweight Mice by Agavins and Their Association with Body Weight Loss
Alicia Huazano-García, Hakdong Shin, Mercedes G. López
Subject: Life Sciences, Microbiology Keywords: agavins; prebiotics; microbiota; overweight; body weight loss; short chain fatty acids
Agavin consumption has been shown to accelerate body weight loss in mice. We investigated the changes in cecal microbiota and short chain fatty acids (SCFA) associated with body weight loss in overweight mice. First, mice were fed a standard (ST5) or high-fat (HF5) diet for 5 weeks. Second, overweight mice were shifted to the standard diet alone (HF-ST10) or supplemented with agavins (HF-ST+A10) or oligofructose (HF-ST+O10) for five more weeks. Cecal contents were collected before and after supplementation to determine microbiota composition and SCFA concentrations. At the end of the first phase, HF5 mice showed a significant increase in body weight, which was associated with reduced cecal microbiota diversity (PD whole tree; non-parametric t-test, P < 0.05), an increased Firmicutes/Bacteroidetes ratio, and reduced SCFA concentrations (t-test, P < 0.05). After the diet shift, HF-ST10 normalized its microbiota and increased its diversity and SCFA levels, whereas agavins (HF-ST+A10) and oligofructose (HF-ST+O10) led to partial microbiota restoration, with normalization of the Firmicutes/Bacteroidetes ratio as well as higher SCFA levels (P < 0.1). Moreover, agavins noticeably enriched Klebsiella and Citrobacter (LDA > 3.0); this enrichment has not been reported previously under a prebiotic treatment. In conclusion, agavins and oligofructose modulated cecal microbiota composition, partially restored diversity, and increased SCFA. Furthermore, the identification of bacteria enriched by agavins opens opportunities to explore new probiotics.
A Case of Focal Segmental Glomerulosclerosis in a Young Girl with a Very Low Birth Weight
Yasuyo Kashiwagi, Kazushi Agata, Gaku Yamanaka, Hisashi Kawashima
Subject: Medicine & Pharmacology, Pediatrics Keywords: chronic kidney disease; low birth weight; focal segmental glomerulosclerosis; two-hit theory
In Japan, the prevalence of low birth weight (LBW) has been estimated at approximately 10%, the highest among developed countries. This high prevalence might affect the prevalence of LBW-associated diseases in the adult population of Japan. Recently, LBW has been recognized as a contributing factor to post-adaptive focal segmental glomerulosclerosis (FSGS) in adulthood; however, few reports to date have evaluated the clinical and pathological characteristics of post-adaptive FSGS. A 13-year-old girl was referred to our hospital owing to mild proteinuria detected at a school urinary screening. She was born at a gestational age of 23 weeks, with a very low birth weight of 630 g. Dipstick urinalysis revealed grade (2+) proteinuria. Her serum creatinine level was 1.02 mg/dL, and she was diagnosed as having stage 2 chronic kidney disease (CKD). Her serum uric acid level was 7 mg/dL. Her mother and 16-year-old brother also had hyperuricemia. A percutaneous renal biopsy led to a diagnosis of FSGS. After 3 years of treatment with an angiotensin receptor blocker, her proteinuria decreased. However, her serum creatinine level was 1.07 mg/dL, and she still had stage 2 CKD. We considered that in this patient the first hit was her LBW and the second hit was hyperuricemia; the second hit might be associated with the development of CKD. The birth history of patients is not usually confirmed by nephrologists. Our case demonstrates that obtaining information regarding preterm birth and LBW is important in the diagnosis of noncommunicable diseases, because school urinary screening is not routinely performed in countries other than Japan.
Modelling Representative Population Mobility for COVID-19 Spatial Transmission in South Africa
Arminn Potgieter, Inger Fabris-Rotelli, Zaid Kimmie, Nontembeko Dudeni-Tlhone, Jenny Holloway, Charl Janse Van Rensburg, Renate Thiede, Pravesh Debba, Raeesa Docrat, Nada Abdelatif, Sibusisiwe Makhanya
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: COVID-19; spatial; mobility; spatial weight matrices; principal component analysis; hierarchical clustering
The COVID-19 pandemic starting in the first half of 2020 has changed the lives of everyone across the world. Reducing mobility was essential, as it was the most effective measure available against the spread of the then little-understood SARS-CoV-2 virus. Understanding the spread therefore requires a comprehension of human mobility patterns. The use of mobility data in modelling is thus essential to capture the intrinsic spread through the population. It is necessary to determine to what extent different mobility data sources convey the same message about mobility within a region. This paper compares different mobility data sources by constructing spatial weight matrices and further compares the results through hierarchical clustering. This provides insight for the user into which data provide what type of information and in which situations a particular source is most useful.
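A minimal sketch of the comparison pipeline described above: construct a spatial weight matrix per data source, then cluster regions by their neighbourhood profiles. Everything here is illustrative; random coordinates and k-nearest-neighbour contiguity stand in for the paper's mobility-derived weights.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.cluster.hierarchy import linkage, fcluster

def knn_weight_matrix(coords, k=5):
    """Row-standardized k-nearest-neighbour spatial weight matrix
    from an (n, 2) array of region centroids."""
    d = cdist(coords, coords)
    np.fill_diagonal(d, np.inf)             # a region is not its own neighbour
    W = np.zeros_like(d)
    nearest = np.argsort(d, axis=1)[:, :k]  # indices of the k closest regions
    for i, nbrs in enumerate(nearest):
        W[i, nbrs] = 1.0 / k                # equal, row-standardized weights
    return W

# Compare two weight matrices by clustering their rows: regions whose
# neighbourhood profiles agree across sources end up in the same cluster.
rng = np.random.default_rng(0)
coords = rng.uniform(size=(20, 2))
W_a = knn_weight_matrix(coords, k=5)        # e.g. from one mobility source
W_b = knn_weight_matrix(coords, k=8)        # e.g. from another source
profiles = np.hstack([W_a, W_b])
labels = fcluster(linkage(profiles, method="ward"), t=4, criterion="maxclust")
print(labels)
```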
Body Mass Index and Birth Weight Improve Polygenic Risk Score for Type 2 Diabetes
Avigail Moldovan, Yedael Y. Waldman, Nadav Brandes, Michal Linial
Subject: Medicine & Pharmacology, Allergology Keywords: Body weight; Genetic variations; GWAS; Metabolic disease; Obesity; Sex difference; UK-Biobank
One of the major challenges in the post-genomic era is elucidating the genetic basis of human diseases. In recent years, studies have shown that polygenic risk scores (PRS), based on aggregated information from millions of variants across the human genome, can estimate individual risk for common diseases. Yet current medical practice still predominantly relies on physiological and clinical indicators to assess personal disease risk. For example, caregivers mark individuals with high body mass index (BMI) as having an increased risk of developing type 2 diabetes (T2D). An important question is whether combining PRS with clinical metrics can increase the power of disease prediction, in particular from early life. In this work we examined this question, focusing on T2D. We show that an integrated approach combining adult BMI and PRS achieves considerably better prediction than either measure alone on unrelated Caucasians in the UK Biobank (UKB, n=290,584). Likewise, integrating PRS with self-reports on birth weight (n=172,239) and comparative body size at age ten (n=287,203) also substantially enhances prediction compared to each of the components. While the integration of PRS with BMI achieved better results than the other measurements, the latter are early-life measurements that can be integrated already in childhood, allowing preemptive intervention for those at high risk of developing T2D. Our integrated approach can be easily generalized to other diseases with the relevant early-life measurements.
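The kind of integration the abstract describes can be illustrated with a logistic model whose inputs are a PRS and BMI. The sketch below uses simulated data (the sample size, coefficients, and effect sizes are invented for illustration, not taken from the paper) and compares single-feature and combined predictors by AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data: PRS and BMI for n individuals, binary T2D labels
# generated from a logistic model so both features carry signal.
rng = np.random.default_rng(1)
n = 5000
prs = rng.normal(size=n)
bmi = rng.normal(27, 4, size=n)
logit = -3 + 0.8 * prs + 0.15 * (bmi - 27)
t2d = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def auc_of(features):
    """Fit a logistic regression on the given features, report held-out AUC."""
    X = np.column_stack(features)
    X_tr, X_te, y_tr, y_te = train_test_split(X, t2d, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("PRS only :", auc_of([prs]))
print("BMI only :", auc_of([bmi]))
print("PRS + BMI:", auc_of([prs, bmi]))  # the combined model scores highest
```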
Impact of a Moderately Hypocaloric Mediterranean Diet on the Gut Microbiota Composition of Italian Obese Patients
Silvia Pisanu, Vanessa Palmas, Veronica Madau, Emanuela Casula, Andrea Deledda, Roberto Cusano, Paolo Uva, Sarah Vascellari, Francesco Boi, Andrea Loviselli, Aldo Manzin, Fernanda Velluzzi
Subject: Medicine & Pharmacology, Nutrition Keywords: gut microbiota; obesity; weight-loss; Mediterranean diet; 16S rRNA; High-throughput sequencing
Although it is known that the gut microbiota (GM) can be modulated by diet, the efficacy of specific dietary interventions in determining its composition and diversity in obese patients remains to be ascertained. The present work aims to evaluate the impact of a moderately hypocaloric Mediterranean diet on the GM of obese and overweight patients (OB). The GM of 23 OB patients (F/M = 20/3) was compared before (T0) and after 3 months (T3) of the nutritional intervention (NI). Fecal samples were analyzed by Illumina MiSeq sequencing of the 16S rRNA gene. At baseline, the GM characterization confirmed the typical obesity-associated dysbiosis. After 3 months of NI, patients presented a statistically significant reduction in body weight and fat mass, along with changes in the relative abundance of many microbial patterns. In fact, we observed an increased abundance of several Bacteroidetes taxa (i.e. Sphingobacteriaceae, Sphingobacterium, Bacteroides spp., Prevotella stercorea) and depletion of many Firmicutes taxa (i.e. Lachnospiraceae members, Ruminococcaceae and Ruminococcus, Veillonellaceae, Catenibacterium, Megamonas). In addition, the phylum Proteobacteria showed an increased abundance, while the genus Sutterella, within the same phylum, decreased after the intervention. Metabolic pathways predicted by bioinformatic analyses showed a decrease in membrane transport and cell motility after NI. The present study extends our knowledge of GM profiles in OB, highlighting the potential benefit of moderate caloric restriction in counteracting gut dysbiosis.
Glomerular Filtration Rate in Former Extreme Low Birth Weight Infants over the Full Pediatric Age Range: A Pooled Analysis
Elise Goetschalkx, Djalila Mekahli, Elena Levtchenko, Karel Allegaert
Subject: Medicine & Pharmacology, Pediatrics Keywords: glomerular filtration rate; Brenner hypothesis; extreme low birth weight infants; renal outcome
Different cohort studies have documented a lower glomerular filtration rate (GFR) in former extremely low birth weight (ELBW, <1000 g) neonates throughout childhood when compared to term controls. Our aim was to pool these studies to describe the GFR pattern over the pediatric age range. To do so, we conducted a systematic review of studies reporting GFR measurements in former ELBW cases, and co-collected the GFR data of healthy age-matched controls included in these studies. Based on 248 hits, 6 case-control and 3 cohort studies were identified, with 444 GFR measurements in 380 former ELBW cases (median age 5.3-20.7 years). The majority were small (17-78 cases) single-center studies, with heterogeneity in the GFR measurement tools (inulin, cystatin C, or creatinine-based estimated GFR formulae). Despite this, the median GFR (mL/min/1.73 m²) within the case-control studies was consistently lower in cases (-13%, range -8 to -25%), and a relevant minority (15-30%) had an eGFR < 90 mL/min/1.73 m². Consequently, this pooled analysis describes a consistent pattern of reduced eGFR in former ELBW cases throughout childhood. Research should focus on perinatal risk factors for impaired GFR and on long-term outcome, but is hampered by single-center cohorts, study size, and the heterogeneity of GFR assessment tools.
Whole-Food Plant-Based Lifestyle Program and Decreased Obesity: A 10-Year Follow-up
Boštjan Jakše, Barbara Jakše, Stanislav Pinter, Jernej Pajek, Nataša Fidler Mis
Subject: Medicine & Pharmacology, Nutrition Keywords: nutrition; plant-based diet; vegan diet; lifestyle; obesity; body composition; weight-loss
The failure rate of various weight-loss programs, and of long-term maintenance of favorable body composition, is high, since the majority of participants go back to old dietary patterns. Many studies have documented the efficacy of a plant-based diet (PBD) for body mass management, but there are opinions that maintaining a PBD is difficult. We aimed to evaluate the long-term success of a whole-food plant-based (WFPB) lifestyle program. We investigated the differences in obesity indices and lifestyle of 151 adults (39.6 ± SD 12.5 years) who had been on our program for a short (0.5–<2 years), medium (2–<5 years), or long term (5–10 years). Body-composition changes were favourable for all three groups, both genders and all participants. There were no differences in relative body-composition changes (BMI, body fat percentage and muscle mass index (MMI)) between the three groups. All participants improved their BMI (baseline mean pre-obesity BMI range (kg/m2): 26.4 ± 5.6 to normal 23.9 ± 3.8, p < 0.001), decreased body mass (–7.1 ± 8.3 kg, p < 0.001) and body fat percentage (–6.4 ± 5.6 % points, p < 0.001). Those with the highest BMI at baseline lost the most in terms of: a) BMI units (kg/m2) (–5.6 ± SD 2.9, –2.4 ± 1.8 and –0.9 ± 1.5), b) total body mass (kg) (–16.1 ± SD 8.8, –7.1 ± 5.4 and –2.5 ± 4.5) and c) body fat (% points) (–9.5 ± SD 5.7, –6.6 ± 4.6 and –4.7 ± 5.3), for participants with baseline BMI in the obese, overweight and normal ranges, respectively (pbaseline vs. current < 0.001 for all). 85.6% (101 out of 118) of parents of underage children (<18 years) introduced the WFPB lifestyle to their children. The WFPB lifestyle program provides long-term lifestyle change for the reversal of obesity and is effectively transferred to the next generation.
Evaluation and Optimization of In-Vehicle HUD Design by Applying an Entropy Weight-VIKOR Hybrid Method
Yunuo Cheng, Xia Zhong, Liwei Tian
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Entropy weight; VIKOR method; Head-up display; Interface design; Design evaluation; Scheme optimization
Background: With the trend toward intelligent displays, the interface design of in-vehicle HUDs is an expanding research field. Methods: To address the subjectivity and uncertainty in the optimization of HUD interface design schemes, this paper proposes a hybrid scheme evaluation and optimization method based on entropy weight and VIKOR. The entropy weight method is used to reduce the subjectivity of the decision-maker's weighting and to obtain the objective weight of each indicator; the VIKOR method is used to rank the alternative schemes, from which the optimal interface design scheme is selected. Results: The evaluation of in-vehicle HUD interface design schemes was taken as an example for verification and calculation. The results showed that this method accounts for the subjectivity and uncertainty of the decision-making process in design scheme optimization, can effectively improve the objectivity and accuracy of the evaluation results, and provides a reference for designers optimizing interface design schemes.
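Both building blocks are standard multi-criteria decision methods, so the pipeline can be sketched compactly. The decision matrix below is invented for illustration, and details (normalization choices, cost-type criteria, the compromise parameter v) may differ from the paper's.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the Shannon entropy of a
    decision matrix X (alternatives x criteria, benefit criteria)."""
    P = X / X.sum(axis=0)                          # normalize each criterion
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E                                    # degree of divergence
    return d / d.sum()

def vikor(X, w, v=0.5):
    """VIKOR Q scores (lower = better) for benefit criteria."""
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    norm = (f_best - X) / (f_best - f_worst + 1e-12)
    S = (w * norm).sum(axis=1)                     # group utility
    R = (w * norm).max(axis=1)                     # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min() + 1e-12) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12)
    return Q

# Hypothetical scores of four HUD design schemes on five criteria.
X = np.array([[7, 8, 6, 9, 7],
              [8, 6, 7, 7, 8],
              [6, 9, 8, 6, 6],
              [9, 7, 9, 8, 5]], dtype=float)
w = entropy_weights(X)
print("weights:", w.round(3))
print("best scheme:", int(np.argmin(vikor(X, w))))
```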
Alterations in Food Reward Regarding Bariatric Surgery Type and Weight Loss Outcomes: An Exploratory Study
Erika Guyot, Julie-Anne Nazare, Pauline Oustric, Maud Robert, Emmanuel Disse, Anestis Dougkas, Sylvain Iceta
Subject: Medicine & Pharmacology, Nutrition Keywords: Food reward; Liking; Wanting; Food preferences; Bariatric surgery; Eating behavior; Total Weight Loss
Changes in food preferences after bariatric surgery may alter its effectiveness as a treatment for obesity. We aimed to compare food reward across a comprehensive variety of food categories between patients who received a sleeve gastrectomy (SG) or a Roux-en-Y gastric bypass (RYGB), and to explore whether food reward differs according to weight loss. In this cross-sectional exploratory study, food reward was assessed using the Leeds Food Preference Questionnaire (LFPQ). We assessed liking and wanting of eleven food categories. Comparisons were made by type of surgery and by Total Weight Loss (TWL; based on tercile distribution). Fifty-six patients (30 SG and 26 RYGB) were included (women: 70%; age: 44.0 (11.1) y). Regarding the type of surgery, scores were not significantly different between SG and RYGB, except for explicit liking of 'non-dairy products – without color' (p = 0.04). Regarding TWL outcomes, explicit liking, explicit wanting and implicit wanting scores were significantly higher for good responders than for low responders for 'No meat – High fat' (post-hoc corrected p-values: 0.04, 0.03 and 0.04, respectively). Together, our results failed to identify major differences in liking and wanting by type of surgery, and tended to indicate that higher weight loss might be related to a higher reward for high-protein-content food. Rather than focusing only on palatable foods, future studies should consider a broader range of food items, including protein reward.
IoT Application for Vehicle Identification Using Optical Fiber Sensors and Wireless Sensor Networks
Hacen Khlaifi, Amira Zrelli, Tahar Ezzedine
Subject: Engineering, Other Keywords: Wireless Sensors Networks; Fiber Bragg Grating; Pressure; Speed; Wheelbase distance; Weight; Vehicle; Identification.
Due to renewed variation in government and political systems within and between countries, and with high tariffs at borders, borders have become an outlet for terrorism and smuggling. Each country therefore seeks to develop its own protection system, and the technologies used in these systems vary according to the severity and importance of the installations to be protected: some are expensive and unnecessary, while others offer good but variable levels of efficiency. Consequently, the idea of designing a surveillance system that can monitor and control access becomes indispensable. In this context, this work is of crucial strategic and geopolitical importance. It combines pre-existing alarm and monitoring methods with Internet of Things (IoT) technologies, of which Wireless Sensor Networks (WSN) and Optical Fiber Sensors (OFS) are part. This article presents a distribution of wireless radar nodes accompanied by fiber Bragg grating sensors to identify each vehicle entering the monitored zone from its speed, weight and wheelbase distance.
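The abstract does not detail the signal processing, but the kinematics it relies on are simple: two sensing points a known distance apart give the vehicle speed from the transit time of one axle, and the inter-axle time at a single point then gives the wheelbase. A minimal sketch with hypothetical timestamps and sensor spacing:

```python
# Two sensing points separated by SENSOR_SPACING_M along the lane.
SENSOR_SPACING_M = 2.0

def vehicle_speed(t_axle_at_s1: float, t_axle_at_s2: float) -> float:
    """Speed (m/s) from the transit time of the same axle between sensors."""
    return SENSOR_SPACING_M / (t_axle_at_s2 - t_axle_at_s1)

def wheelbase(speed_mps: float, t_front_axle: float, t_rear_axle: float) -> float:
    """Wheelbase (m) from the inter-axle time at a single sensor."""
    return speed_mps * (t_rear_axle - t_front_axle)

v = vehicle_speed(10.000, 10.080)        # 2.0 m in 0.08 s -> 25 m/s (90 km/h)
print(v, wheelbase(v, 10.000, 10.112))   # 0.112 s between axles -> 2.8 m
```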
Dynamic Connectedness and Portfolio Diversification during the Coronavirus Disease 2019 Pandemic: Evidence from the Cryptocurrency Market
Samia Nasreen, Aviral Kumar Tiwari, Seong-Min Yoon
Subject: Keywords: Cryptocurrency; Coronavirus Disease 2019; Time-Varying Parameter Vector Autoregression; Portfolio Weight; Hedging Effectiveness
Online: 10 June 2021 (12:07:58 CEST)
This paper examines interlinkages and hedging opportunities between nine major cryptocurrencies for the period between 30 September 2015 and 4 June 2020, which notably includes the coronavirus disease 2019 (COVID-19) outbreak lasting from early 2020 through the end of the sample period. The results of dynamic conditional correlation (DCC) analysis using a minimum connectedness approach show a high degree of correlation between cryptocurrencies throughout the sample period. However, the correlations reach their minimum values during the COVID-19 pandemic, which indicates that cryptocurrencies acted as a hedge or safe haven during the stressful period of the COVID-19 pandemic. The weight of cryptocurrencies was significantly reduced and their hedging effectiveness varied greatly during the pandemic, which indicates that investors' preferences changed during the COVID-19 period.
The PYY/Y2R-Deficient Mouse Responds Normally to High-Fat Diet and Gastric Bypass Surgery
Brandon Boland, Michael B. Mumphrey, Zheng Hao, Benji Gill, R. Leigh Townsend, Sangho Yu, Heike Munzberg, Christopher D. Morrison, James L. Trevaskis, Hans-Rudolf Berthoud
Subject: Medicine & Pharmacology, Gastroenterology Keywords: obesity; diabetes; body weight; body composition; glucose tolerance; insulin tolerance; incretin; energy expenditure
Background/Goals: The gut hormone PYY secreted from intestinal L-cells has been implicated in the mechanisms of satiation via Y2-receptor (Y2R) signaling in the brain and periphery and is a major candidate for mediating the beneficial effects of bariatric surgery on appetite and body weight. Methods: Here we assessed the role of Y2R signaling in the response to low- and high-fat diets and its role in the effects of Roux-en-Y gastric bypass (RYGB) surgery on body weight, body composition, food intake, energy expenditure and glucose handling, in global Y2R-deficient (Y2RKO) and wildtype mice made obese on high-fat diet. Results: Both male and female Y2RKO mice responded normally to low- and high-fat diet in terms of body weight, body composition, fasting levels of glucose and insulin, as well as glucose and insulin tolerance for up to 30 weeks of age. Contrary to expectations, obese Y2RKO mice also responded similarly to RYGB compared to WT mice for up to 20 weeks after surgery, with initial hypophagia, sustained body weight loss, and significant improvements in fasting insulin, glucose tolerance, HOMA-IR, and liver weight compared to sham-operated mice. Furthermore, non-surgical Y2RKO mice weight-matched to RYGB showed the same improvements in glycemic control as Y2RKO mice with RYGB that were similar to WT mice. Conclusions: PYY signaling through Y2R is not required for the normal appetite-suppressing and body weight-lowering effects of RYGB in this global knockout mouse model. Potential compensatory adaptations of PYY signaling through other receptor subtypes or other gut satiety hormones such as GLP-1 remain to be investigated.
A Robust Approach for Identification of Cancer Biomarkers and Candidate Drugs
Md. Shahjaman, Md. Rezanur Rahman, S. M. Shahinul Islam, Md. Nurul Haque Mollah
Subject: Life Sciences, Genetics Keywords: cancer biomarker; DEGs; FC; β-divergence method; β-weight function; paired SAM; robustness
Background: Identification of cancer biomarkers that are differentially expressed (DE) under two biological conditions is an important task in many microarray studies. Several methods exist in the literature for this purpose, but most are designed for unpaired samples and do not satisfy the requirements of paired samples, where the gene expressions are taken from the same patients before and after treatment. Furthermore, traditional biomarker identification methods are based on either p-values or fold change (FC) values. Sometimes, however, p-value-based results do not agree with FC-based results because of the small variance of some gene expressions. Some methods combine both p-values and FC values to solve this problem, but they perform poorly in the small-sample case in the presence of outlying expressions. To overcome this problem, this paper develops a hybrid robust SAM-FC approach, designed for paired samples, that combines the rank of FC values and the rank of p-values based on the SAM statistic using the minimum β-divergence method. The method introduces a weight function known as the β-weight function, which produces larger weights for usual/normal expressions and smaller weights for unusual/outlying expressions; this weight function plays a significant role in the performance of the proposed method. Results: The proposed method uses the β-weight function as a measure of outlier detection, setting β = 0.2. We unify classical and robust estimates using the β-weight function, such that maximum likelihood estimators (MLEs) are used in the absence of outliers and minimum β-divergence estimators in their presence, to obtain reasonable p-values and FC values. We examined the performance of the proposed method in comparison with several popular methods (t-test, SAM, LIMMA, Wilcoxon, WAD, RP and FCROS), using both simulated and real gene expression profiles for both small- and large-sample cases. From the simulations and a real spike-in data analysis, we observed that the proposed method outperforms the other methods in the small-sample case in the presence of outliers, and performs comparably to the other robust methods (Wilcoxon, RP and FCROS) otherwise. From a head-and-neck cancer (HNC) dataset, the proposed method identified 2 genes (CYP3A4, NOVA1) that are significantly enriched in linoleic acid metabolism, drug metabolism, steroid hormone biosynthesis and metabolic pathways. Survival analysis with Kaplan-Meier curves revealed that the combined effect of these 2 genes has prognostic capability, and they might be promising biomarkers of HNC. Moreover, we retrieved 12 candidate drugs based on gene interactions from the glad4u and DrugBank databases. Conclusion: The statistical significance of the identified drugs and the critical role of the associated proteins indicate that these proteins might be therapeutic targets in cancer; elucidating the associations between the drugs identified in the present study requires further investigation.
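The β-weight function itself is not written out in the abstract; in the minimum β-divergence literature it typically takes an exponential-of-quadratic form that shrinks toward zero for outlying observations. The sketch below assumes that form (with β = 0.2 as in the text) and is illustrative rather than the paper's exact definition.

```python
import numpy as np

def beta_weight(x, mu, sigma, beta=0.2):
    """One common form of the beta-weight function: close to 1 for
    typical observations, near 0 for outlying ones."""
    z = (x - mu) / sigma
    return np.exp(-0.5 * beta * z**2)

x = np.array([0.1, -0.3, 0.2, 8.0])   # last value is an outlier
print(beta_weight(x, mu=0.0, sigma=1.0).round(3))
# -> roughly [0.999, 0.991, 0.996, 0.002]: the outlier is downweighted,
#    so weighted estimates of mu and sigma are barely affected by it.
```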
Body Mass, Total Body Fat and Visceral Fat Percentage Predict Insulin Resistance Better Than Waist Circumference and Body Mass Index in Healthy Young Male Adult in Indonesia
Liong Boy Kurniawan, Uleng Bahrun, Mochammad Hatta, Mansyur Arif
Subject: Medicine & Pharmacology, General Medical Research Keywords: insulin resistance; body weight; body fat; visceral fat; waist circumference; body mass index
The incidence of obesity, which leads to insulin resistance (IR) and metabolic disorders, is increasing in developing countries, including Indonesia. Male adults have a higher risk of abdominal obesity than females, and abdominal obesity is associated with cardiometabolic disorders. Several anthropometric measurements have been proposed to predict IR. The aim of this study was to investigate whether body mass, body mass index (BMI), waist circumference (WC), body fat percentage (BF) or visceral fat percentage (VF) is a better predictor of IR in healthy young male adults. A total of 140 healthy young male adults aged 18-25 years were recruited in the study. Insulin resistance was measured by calculating the Homeostatic Model Assessment for Insulin Resistance (HOMA-IR). Subjects with a HOMA-IR value >75th percentile, with a cutoff of 3.75, were defined as IR. Anthropometric measurements including body weight, BMI and WC were performed, whereas BF and VF were measured by bioelectrical impedance analysis (BIA). IR had significant, strong correlations with body weight, BMI, WC, BF and VF. The areas under the curve of body mass, BF and VF were greater than those of WC and BMI. Anthropometric measurements correlated strongly with IR, but body weight, BF and VF had stronger correlations than WC and BMI in healthy young male adults.
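For reference, HOMA-IR is computed from fasting glucose and insulin. A minimal sketch using the conventional formula (glucose in mg/dL divided by 405, equivalently mmol/L divided by 22.5) together with the study's 3.75 cutoff; the example values are hypothetical.

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA-IR = (glucose [mg/dL] x insulin [uU/mL]) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Flag insulin resistance with the study's 75th-percentile cutoff of 3.75.
value = homa_ir(95, 18)
print(round(value, 2), value > 3.75)   # 4.22 True
```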
A Model Selection Algorithm for Complex Multi-Domain Cnn Systems Based on Feature-Weights Relation in Deep Learning
Eyad Alsaghir, Xiyu Shi, Varuna De Silva
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: CNN; AI; Causality; Understandability; Object Features; Excitation Weight; Multi-model Neural Network; Model Selection
Object recognition is an essential element of machine intelligence tasks. However, one model cannot practically be trained to identify all the possible objects it encounters; an ensemble of models may be needed to cater to a broader range of objects. Building a mathematical understanding of the relationship between various objects that share comparable outline features is envisaged as an effective way to improve the model ensemble through a pre-processing stage, in which these objects' features are grouped under a broader classification umbrella. This paper proposes a mechanism to train an ensemble of recognition models, coupled with a model selection scheme, to scale up object recognition in a multi-model system. An algorithmic relationship between the learnt parameters of a trained classification model and the features of input images is presented, from which the system learns the model selection scheme. The multiple models are built with a CNN structure, whereas the image features are extracted using a CNN/VGG16 architecture. Based on the models' excitation weights, a neural network model selection algorithm is developed, which links a new object with the models and decides how close the features of the object are to each trained model in order to select a particular model for object recognition; it is tested on a five-model neural network platform. The experimental results show that the proposed model selection scheme is highly effective and accurate in selecting an appropriate model from a network of multiple models.
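The selection step can be pictured as matching an input's feature vector against a per-model signature derived from excitation weights. The sketch below uses cosine similarity as a stand-in for the paper's feature-weight relation; the signatures and dimensionality are hypothetical.

```python
import numpy as np

def select_model(image_features, model_signatures):
    """Pick the recognition model whose learnt 'signature' (e.g. a vector
    derived from its excitation weights) is closest to the input features.
    Cosine similarity stands in for the paper's feature-weight relation."""
    f = image_features / np.linalg.norm(image_features)
    scores = {name: float(f @ (s / np.linalg.norm(s)))
              for name, s in model_signatures.items()}
    return max(scores, key=scores.get), scores

# Five hypothetical models in the ensemble, 512-d feature space.
rng = np.random.default_rng(0)
signatures = {f"model_{i}": rng.normal(size=512) for i in range(5)}
features = signatures["model_3"] + 0.1 * rng.normal(size=512)
best, _ = select_model(features, signatures)
print(best)   # model_3
```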
Liquid Smoke Treatment for Natural Fibers: The Effect on Tensile Properties, Surface Morphology, Crystalline Properties, and Functional Groups of Banana Stem Fibers
Mukhlis Muslimin, Mustamin Rahim, Ahmad Seng, Sandi Rais
Subject: Engineering, Mechanical Engineering Keywords: Banana stem fiber; tensile strength; morphology; crystalline properties; functional groups; light weight; environmentally friendly
This study investigates the effect of treating banana stem fibers (BSFs) with liquid smoke on the micro-mechanical properties of the BSFs, the tensile strength of single fibers, morphology, crystalline properties, and functional groups. Four specimen variations were used: fiber without treatment (TP) and fibers immersed in liquid smoke for 1, 2, and 3 hours. The treated BSFs were dried in an oven at 40 °C for 30 minutes. Several tests were conducted, including a single-fiber tensile test at 50 N capacity following the ASTM 3379-02 standard, SEM observation, XRD, and FTIR. The results showed that the highest fiber strength was obtained for P2J at 264.21 MPa, and the lowest for the untreated TP fiber at 148.54 MPa. Treating fibers with liquid smoke can form strong C-C bonds as a result of H2O degradation in the BSFs, which densifies the carbon (C) atoms; under excessive H2O degradation, however, the fiber becomes brittle. Thus liquid smoke can increase the tensile strength of the fiber. The fiber morphology changed: the untreated fiber was covered with lignin, while the treated fiber had an elongated rectangular line pattern, was porous, and had its lignin eroded. The crystalline properties in the X-ray diffractogram pattern differ between untreated and treated fibers: in terms of 2θ, the lowest diffraction peak is around 16° in untreated fiber and the highest around 23° in treated fiber. The functional groups of the fiber also changed, with a difference in the wave peaks between untreated and treated fiber. The longer the immersion time, the higher the carbon (C) content. In conclusion, treating BSFs with liquid smoke changes their physical, mechanical, and chemical properties, making them a future choice of lightweight and environmentally friendly composite reinforcement material.
Analysis of The Development Trend of Sports Research in China and Taiwan Using Natural Language Processing
Wei-Yuan Shih, Tu-Kuang Ho
Subject: Social Sciences, Other Keywords: word segmentation; word cloud analysis; TF-IDF weight analysis; co-word analysis; network analysis
A digital text abstract presents the essential information of an article; by analyzing abstracts rigorously and mining them for knowledge, we can identify the trends and value of the research. This study therefore focuses on the abstracts of indexed journals in China and Taiwan from July 2010 to June 2020 (a total of 3,283 abstracts). Using concepts from text mining and natural language processing (NLP), it constructs a pipeline of text retrieval, text segmentation, word cloud analysis, TF-IDF weight analysis, co-word analysis, network analysis, and trend analysis, and applies it to a large amount of text data. The results show that research in China covers the fields of social sports and sports science, while research in Taiwan covers both the natural and social sciences. The network diagram highlights the richness of sports-related research fields in the two regions, but research on sports philosophy is relatively rare. It is suggested that all disciplines/departments re-allocate resources more evenly, so as to foster a balanced development trend and help open a new chapter in the sports academic field.
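A minimal sketch of the TF-IDF weighting step using scikit-learn. The toy abstracts are hypothetical, and in practice Chinese text would first be word-segmented (e.g. with a tool such as jieba) before vectorization.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical pre-segmented abstracts standing in for the 3,283 real ones.
abstracts = [
    "sport physiology training load",
    "sport philosophy ethics",
    "training load monitoring athletes",
]
vec = TfidfVectorizer()
X = vec.fit_transform(abstracts)       # rows: abstracts, columns: terms
terms = vec.get_feature_names_out()

# Highest-weighted term per abstract, as a crude trend indicator.
for row in X.toarray():
    print(terms[row.argmax()])
```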
An Improved Cleaning Protocol for Foraminiferal Calcite: HyPerCal – A New Practice for Micropaleontological and Paleoclimatic Proxies
Stergios D. Zarkogiannis, George Kontakiotis, Georgia Gkaniatsa, Venkata S. C. Kuppili, Shashidhara Marathe, Kazimir Wanelik, Valia Lianou, Evanggelia Besiou, Panayiota Makri, and Assimina Antonarakou
Subject: Earth Sciences, Atmospheric Science Keywords: Cleaning protocol; shell weight; climate reconstruction; synchrotron X-ray microtomography (SμCT); foraminiferal-based proxies
Paleoclimatic and paleoceanographic studies routinely rely on the use of foraminiferal calcite through faunal, morphometric and physico-chemical proxies. The application of such proxies presupposes the extraction and cleaning of these biomineralized components from ocean sediments in the most efficient way, a process which is often labor-intensive and time-consuming. In this respect, we performed a systematic experiment on planktonic foraminiferal specimen cleaning using different chemical treatments and evaluated the resulting data for a Late Quaternary gravity core sample from the Aegean Sea. All cleaning procedures adopted here were chosen on the basis of their minimal potential bias upon foraminiferal proxies, such as faunal assemblages, degree of fragmentation, stable isotope composition (δ18O and δ13C) and/or Mg/Ca ratios, which are frequently used as proxies for surface-ocean climate parameters (e.g., sea surface temperature, sea surface salinity). Six different protocols were tested, involving washing, sieving, and chemical treatment of the samples with hydrogen peroxide and/or sodium hexametaphosphate (Calgon®). Single-species foraminifera shell weighing was combined with high-resolution Scanning Electron Microscopy (SEM) and synchrotron X-ray Microtomography (SμCT) of the material processed by each of the cleaning protocols, in order to assess the degree of decontamination of the specimens' ultrastructure and interior. A good compromise between time and cleaning efficiency proved to be the simultaneous treatment of samples with a mixed hydrogen peroxide and Calgon solution, while the most effective way to almost completely decontaminate the calcareous components from undesirable sedimentary material is a two-step treatment, initially with hydrogen peroxide and subsequently with Calgon solutions.
Effect of Sintering Time on the Densification, Microstructure, Weight Loss and Tensile Properties of a Powder Metallurgical Fe-Mn-Si Alloy
Zhigang Xu, Michael A. Hodgson, Peng Cao
Subject: Materials Science, Metallurgy Keywords: Fe-Mn-Si alloy; isothermal holding time; powder sintering; density; weight loss; tensile properties
This work investigated the dependence on isothermal holding time of the densification, microstructure, weight loss and tensile properties of Fe-Mn-Si powder compacts. Elemental Fe, Mn and Si powder mixtures with a nominal composition of Fe-28Mn-3Si (in weight percent) were ball-milled for 5 h and subsequently pressed under a uniaxial pressure of 400 MPa. The compacted Fe-Mn-Si powder mixtures were sintered at 1200 ℃ for 0, 1, 2 and 3 h, respectively. In general, the density, weight loss and tensile properties increased with increasing isothermal holding time. A significant increase in density, weight loss and tensile properties occurred in the compacts isothermally held for 1 h, compared with those with no isothermal holding. However, further extension of the isothermal holding time (2 and 3 h) played only a limited role in promoting density and tensile properties. The weight loss of the sintered compacts was mainly caused by the sublimation of Mn in the Mn depletion region on the surface layer of the sintered Fe-Mn-Si compacts. The length of the Mn depletion region increased as the isothermal holding time increased. A single α-Fe phase was detected on the surface of all the sintered compacts, and the regions beyond the Mn depletion region were composed of dominant γ-austenite and minor ε-martensite.
Association Between Breakfast Skipping and Body Weight – a Systematic Review and Meta-Analysis of Observational Longitudinal Studies
Julia Wicherski, Sabrina Schlesinger, Florian Fischer
Subject: Medicine & Pharmacology, Allergology Keywords: breakfast skipping; overweight; obesity; weight gain; BMI change; systematic review; meta-analysis; observational longitudinal studies
Globally, increasing rates of obesity are one of the most important health issues. The association between breakfast skipping and body weight is contradictory between cross-sectional and interventional studies. This systematic review and meta-analysis aims to summarize this association based on observational longitudinal studies. We included prospective studies on breakfast skipping and overweight/obesity or weight change in adults. The literature was searched until September 2020 in PubMed and Web of Science. Summary RRs with 95% CIs were estimated in pairwise meta-analyses by applying a random-effects model. In total, 9 studies were included in the systematic review and 6 of them in the meta-analyses. The meta-analysis indicated a 13% increased RR for overweight/obesity when breakfast was skipped on ≥ 3 days per week compared to ≤ 2 days per week (95% CI: 1.06, 1.21, n=3 studies). The meta-analysis on weight change showed a 21% increased RR for weight gain for breakfast skippers compared to breakfast eaters (95% CI: 1.05, 1.40, n=2 studies). The meta-analysis on BMI change showed no difference between breakfast skipping and eating (RR=1.02, 95% CI: 0.99, 1.05, n=2 studies). This study provides low meta-evidence for an increased risk of overweight/obesity and weight gain with breakfast skipping.
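The random-effects pooling step can be sketched with the DerSimonian-Laird estimator, recovering standard errors from the reported 95% CIs on the log scale. The study-level numbers below are hypothetical, not the review's data.

```python
import numpy as np

def pool_rr_random_effects(rr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of risk ratios."""
    y = np.log(rr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from 95% CI
    w = 1 / se**2                                         # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)    # heterogeneity Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))    # between-study variance
    w_re = 1 / (se**2 + tau2)                             # random-effects weights
    y_pool = np.sum(w_re * y) / np.sum(w_re)
    se_pool = np.sqrt(1 / np.sum(w_re))
    return (np.exp(y_pool),
            np.exp(y_pool - 1.96 * se_pool),
            np.exp(y_pool + 1.96 * se_pool))

# Hypothetical study-level RRs with 95% CIs.
print(pool_rr_random_effects(np.array([1.10, 1.18, 1.12]),
                             np.array([1.01, 1.05, 0.98]),
                             np.array([1.20, 1.33, 1.28])))
```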
Acute and Chronic Effects of Exercise on Appetite, Energy Intake and Appetite-Related Hormones: the Modulating Effect of Adiposity, Sex and Habitual Physical Activity
James Dorling, David Broom, Stephen Burns, David Clayton, Kevin Deighton, Lewis James, James King, Masashi Miyashita, Alice Thackray, Rachel Batterham, David Stensel
Subject: Medicine & Pharmacology, Nutrition Keywords: appetite; energy intake; appetite-related hormones; energy balance; exercise; physical activity; energy compensation; weight control
Exercise facilitates weight control, partly through effects on appetite regulation. Single bouts of exercise induce a short-term energy deficit without stimulating compensatory effects on appetite, whilst limited evidence suggests that exercise training may modify subjective and homeostatic mediators of appetite in directions associated with enhanced meal-induced satiety. However, large variability in responses exists between individuals. This article reviews the evidence relating to how adiposity, sex and habitual physical activity modulate exercise-induced appetite, energy intake and appetite-related hormone responses. The balance of evidence suggests that adiposity and sex do not modify appetite or energy intake responses to acute or chronic exercise interventions, but individuals with higher habitual physical activity levels may better adjust energy intake in response to energy balance perturbations. The effect of these individual characteristics and behaviours on appetite-related hormone responses to exercise remains equivocal. These findings support the continued promotion of exercise as a strategy for inducing short-term energy deficits irrespective of adiposity and sex, as well as the ability of exercise to positively influence energy balance over the longer term. Future well-controlled studies are required to further ascertain potential mediators of appetite responses to exercise.
A Bi-Gram Approach for an Exhaustive Arabic Triliteral Roots Lexicon
Ebtihal Mustafa, Karim Bouzoubaa
Subject: Arts & Humanities, Linguistics Keywords: Arabic language; Arabic roots; lexicons; phonetic system; bigram frequencies; roots weight; Artificial Intelligence; NLP; Arabic NLP
With the rapid development of science and technology, many new concepts and terms appear, especially in English. Other languages try to express these concepts with words from their own vocabulary. In the specific case of Arabic, there are many ways to find a counterpart for a particular new concept, such as using an existing word to denote the new concept, derivation, and blending. When these methods fail, the new concepts are simply phonetically transliterated. This has the disadvantage that most of the transliterated terms do not conform to the rules of the Arabic language and lead to a distortion of the language. Some modern linguists call for using the generation strategy to translate the new terms into Arabic by using the unused Arabic roots. Therefore, it is necessary to provide a resource that contains all Arabic roots with a categorization of what is used, what is available for use, and what is rejected according to the phonetic system. This work provides a comprehensive lexicon that contains all possible Arabic triliteral roots, determines the status of each root in terms of usage and acceptability, and provides a mechanism for giving preference to roots when there is more than one root that indicates the desired meaning.
Research on Mechanical Braking Model of Cows Knee Joint
Meng Liu, Kexin Meng, Shuli Mei, Ruyi Xing
Subject: Physical Sciences, Mathematical Physics Keywords: Dairy cow; Lyapunov exponent; bio-mechanical model; Nonlinear dynamics; gait; Weight scale; Three-link model; Fractals
The knee has the shape of a chain-like ellipsoid structure. This suggests that, as in the latest studies, the motion of the knee involves slippage in addition to rolling. This paper selects a mechanical braking model for cattle based on two mechanical structures acting in different directions at the joints. The experimental results show that the modified dynamical system has strong chaotic properties. This can serve as one basis for judging various health states.
Transcriptomic and Physiological Response of Durum Wheat Grain to Short-Term Heat Stress during Early Grain Filling
Anita Arenas-M, Francisca M. Castillo, Diego Godoy, Javier Canales, Daniel F. Calderini
Subject: Biology, Plant Sciences Keywords: Durum wheat; heat stress; grain weight; grain quality; RNA-seq; gene regulatory network; DOF transcription factor
In a changing climate, extreme weather events such as heat waves will be more frequent and could affect grain weight and the quality of crops such as wheat, one of the most significant crops in terms of global food security. In this work, we characterized the response of Triticum turgidum spp. durum wheat to a short-term heat-stress (HS) treatment at transcriptomic and physiological levels during early grain filling in glasshouse experiments. We found a significant reduction in grain weight and size from HS treatment. Grain quality was also affected, showing a decrease in starch content in addition to increments in grain protein levels. Moreover, an RNA-seq analysis of durum wheat grains allowed us to identify 1590 differentially expressed genes related to photosynthesis, response to heat, and carbohydrate metabolic process. A gene regulatory network analysis of HS-responsive genes uncovered novel transcription factors (TFs) controlling the expression of genes involved in abiotic stress response and grain quality, such as a member of the DOF family predicted to regulate glycogen and starch biosynthetic processes in response to HS in grains. In summary, our results provide new insights into the extensive transcriptome reprogramming that occurs during short-term HS in durum wheat grains.
ACTonFood. Acceptance and Commitment Therapy-Based Group Treatment Compared to Cognitive Behavioral Therapy-Based Group Treatment for Weight Maintenance: An Individually Randomized Group Treatment Trial
Roberto Cattivelli, Anna Guerrini Usubini, Gian Mauro Manzoni, Francesco Vailati Riboni, Giada Pietrabissa, Alessandro Musetti, Christian Franceschini, Giorgia Varallo, Chiara A.M. Spatola, Emanuele Giusti, Gianluca Castelnuovo, Enrico Molinari
Subject: Behavioral Sciences, Applied Psychology Keywords: obesity; obesity rehabilitation; weight maintenance; eating disorders; Acceptance and Commitment Therapy; Cognitive Behavioral Therapy; Clinical Psychology
The purpose of this Individually Randomized Group Treatment Trial was to compare an Acceptance and Commitment Therapy-based (ACT) group intervention and a Cognitive Behavioral Therapy-based (CBT) group intervention for weight loss maintenance in a sample of adult patients with obesity seeking treatment for weight loss. 155 overweight adults (BMI: 43.8 [6.8] kg/m2) attending a multidisciplinary rehabilitation program for weight loss were randomized into two conditions: ACT and CBT. Demographic, physical, and clinical data were assessed at the beginning of the program (t0), at discharge (t1), and at 6-month follow-up (t2). The following measures were administered: the Acceptance and Action Questionnaire-II (AAQ-II) and the Clinical Outcome in Routine Evaluation-Outcome Measure (CORE-OM). Generalized linear mixed models were performed to assess differences between groups. Moderation effects of gender and eating disorders (ED) were considered. From baseline to discharge, no significant differences between interventions were found, with the only exception of an improvement in the CORE-OM total score and in the CORE-OM subjective well-being subscale for those in the CBT condition. From discharge to follow-up, ACT group participants showed significant results in terms of weight loss maintenance, the CORE-OM total score, and the CORE-OM and AAQ-II wellbeing, symptoms, and psychological problems subscales. Gender moderated the effects of time and intervention on the CORE-OM subscale reporting the risk of self-harm or harm to others. The presence of an eating disorder moderated the effect of time and intervention on the CORE-OM total score, on the CORE-OM symptoms and psychological problems subscales, and on the AAQ-II. Patients who received the ACT intervention were more likely to achieve a ≥5% weight loss from baseline to follow-up and to maintain the weight loss after discharge. The ACT intervention was thus effective in maintaining weight loss over time.
Low-Molecular-Weight Fucoidan as Complementary Therapy of Fluoropyrimidine-Based Chemotherapy in Colorectal Cancer
Ching-Wen Huang, Yen-Cheng Chen, Tzu-Chieh Yin, Po-Jung Chen, Tsung-Kun Chang, Wei-Chih Su, Cheng-Jen Ma, Ching-Chun Li, Hsiang-Lin Tsai, Jaw-Yuan Wang
Subject: Medicine & Pharmacology, Allergology Keywords: low-molecular-weight fucoidan; colorectal cancer; HCT116 cell; Caco-2 cell; fluoropyrimidine-based chemotherapy; complementary therapy
This study investigated the role of low-molecular-weight fucoidan (LMWF) in enhancing the anti-cancer effects of fluoropyrimidine-based chemotherapy. HCT116 and Caco-2 cells were treated with LMWF and 5-FU. Cell viability, cell cycle, apoptosis, and migration were analyzed in both cell types, and potential mechanisms by which LMWF enhances the anti-cancer effects of fluoropyrimidine-based chemotherapy were explored. The viability of HCT116 and Caco-2 cells was significantly reduced after treatment with the LMWF-5-FU combination. In HCT116 cells, LMWF enhanced the suppressive effects of 5-FU on cell viability through (1) induction of cell cycle arrest in the S phase and (2) late apoptosis mediated by the Jun N-terminal kinase (JNK) signaling pathway. In Caco-2 cells, LMWF enhanced the suppressive effects of 5-FU on cell viability through both the c-mesenchymal-epithelial transition (c-MET)/Kirsten rat sarcoma virus (KRAS)/extracellular signal-regulated kinase (ERK) and the c-MET/phosphatidylinositol 3-kinase (PI3K)/protein kinase B (AKT) signaling pathways. Moreover, LMWF enhanced the suppressive effects of 5-FU on tumor cell migration through the c-MET/matrix metalloproteinase (MMP)-2 signaling pathway in both HCT116 and Caco-2 cells. Our results demonstrate that LMWF is a potential complementary therapy for enhancing the efficacy of fluoropyrimidine-based chemotherapy in colorectal cancers (CRCs) with the wild-type or mutated KRAS gene through different mechanisms. However, in vivo studies and clinical trials are required to validate the results of the present study.
Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, examined with FSGM
Richard Niall Mark Rudd-Orthner, Lyudmila Mihaylova
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Repeatable Determinism; Weight Initialization; Convolutional Layers; Adversarial Perturbation Attack; FSGM; Transferred Learning; Machine Learning; Smart Sensors
This paper presents a non-random weight initialization method for the convolutional layers of neural networks, examined with the Fast Gradient Sign Method (FGSM) attack. The paper's focus is convolutional layers, which are the layers responsible for better-than-human performance in image categorization. The proposed method induces earlier learning through the use of striped forms and, as such, requires less unlearning than the existing random-number speckled methods, consistent with the intuitions of Hubel and Wiesel. The proposed method achieves higher accuracy in a single epoch, with improvements of 3-5% in a well-known benchmark model; the first epoch is the most relevant, as it is the epoch immediately after initialization. The proposed method is also repeatable and deterministic, a desirable quality for safety-critical image classification applications within sensors, and it is robust to the Glorot/Xavier and He initialization limits as well. The proposed non-random initialization was examined under adversarial perturbation attack through the FGSM approach with transferred learning, as a technique to measure the effect on transferred learning with controlled distortions, and the proposed method was found to be less compromised relative to the original validation dataset, even with more heavily distorted datasets.
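For context, the FGSM perturbation itself is a one-line rule, x_adv = x + ε·sign(∇_x L). A minimal PyTorch sketch, assuming any differentiable classifier and inputs scaled to [0, 1]; the names `net`, `images`, and `labels` are placeholders.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: perturb the input in the direction of
    the sign of the loss gradient, x_adv = x + epsilon * sign(dL/dx)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in the valid range

# Usage with a classifier `net` and a batch (images, labels):
# adv = fgsm_attack(net, torch.nn.CrossEntropyLoss(), images, labels, 0.05)
```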
Purification of High Molecular-Weight Antibacterial Proteins of Insect Pathogenic Brevibacillus Laterosporus Isolates
Tauseef K. Babar, Travis R. Glare, John G. Hampton, Mark R. H. Hurst, Josefina O. Narciso, Amy Beattie
Subject: Life Sciences, Microbiology Keywords: antibacterial proteins; encapsulating protein; high molecular-weight bacteriocins; insect pathogenic bacterium; phage tail-like protein; purification methods
Brevibacillus laterosporus (Bl) is a Gram-positive, spore-forming bacterium belonging to the Brevibacillus brevis phylogenetic cluster. Globally, insect pathogenic strains of the bacterium have been isolated and characterised, and some activities patented. Two isolates, Bl 1821L and Bl 1951, exhibiting pathogenicity against the diamondback moth and mosquitoes, are under development as a biopesticide in New Zealand. However, due to the suspected activity of putative antibacterial proteins (ABPs), the endemic isolates often grow erratically. Various purification methods employed in this study, including size exclusion chromatography, sucrose density gradient centrifugation, polyethylene glycol precipitation, and ammonium sulphate precipitation, enabled the isolation of two putative antibacterial proteins of ~30 kD and ~48 kD from Bl 1821L and one putative antibacterial protein of ~30 kD from Bl 1951. Purification of the uninduced cultures of Bl 1821L and Bl 1951 also yielded protein bands of ~30 kD and ~48 kD on SDS-PAGE, which indicated their spontaneous induction. A disc diffusion assay was used to determine the antagonistic activities of the putative ABPs. Subsequent transmission electron microscope (TEM) examination of the purified putative antibacterial protein-containing solution showed the presence of encapsulin (~30 kD) and polysheath (~48 kD) like structures. Although only the ~30 kD protein was purified from Bl 1951, both structures were seen in this strain under TEM. Furthermore, while assessing the antibacterial activity of some fractions of Bl 1951 against Bl 1821L in the size exclusion chromatography method, a population of Bl 1821L persister cells was noted. Overall, this work adds a wealth of knowledge on the purification of the high-molecular-weight (HMW) proteins (bacteriocins) of Gram-positive bacteria, including Bl.
Estimation of Vitamin C Intake Requirements Based on Body Weight
Anitra C. Carr, Gladys Block, Jens Lykkesfeldt
Subject: Medicine & Pharmacology, Nutrition Keywords: vitamin C; ascorbate; obesity; body weight; vitamin C intake; plasma ascorbate concentrations; vitamin C requirements; dietary vitamin C
Higher body weight is known to negatively impact plasma vitamin C status. However, despite this well-documented inverse association, recommendations on daily vitamin C intakes by health authorities worldwide do not include particular reference values for people of higher body weight. This suggests that people of higher body weight and people with obesity may be insufficient in vitamin C in spite of ingesting the amounts recommended by their health authorities. The current preliminary investigation sought to estimate how much additional vitamin C people with higher body weights need to consume in order to attain a comparable vitamin C status to that of a lower weight person consuming an average Western vitamin C intake. Data from two published vitamin C dose-concentration studies were used to generate the relationship: a detailed pharmacokinetic study with seven healthy non-smoking men and a multiple depletion-repletion study with 68 healthy non-smoking men of varying body weights. Our estimates suggest that an additional intake of 10 mg vitamin C/day is required for every 10 kg increase in body weight to attain a comparable plasma concentration to a 60 kg individual with a vitamin C intake of ~110 mg/day, which is the daily intake recommended by the European Food Safety Authority (EFSA). Thus, individuals weighing e.g. 80 and 90 kg will need to consume ~130 and 140 mg vitamin C/day, respectively. People with obesity will likely need even higher vitamin C intakes. As poor vitamin C status is associated with increased risk of several chronic diseases including cardiovascular disease, these findings may have important public health implications. As such, dose-finding studies are required to determine optimal vitamin C intakes for overweight and obese people.
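The estimate above translates directly into arithmetic. A minimal sketch of the +10 mg/day per +10 kg adjustment around the 60 kg / ~110 mg/day EFSA reference point described in the abstract; the function name is illustrative.

```python
def vitamin_c_intake_mg_per_day(body_weight_kg: float) -> float:
    """Estimated intake to match the plasma status of a 60 kg person
    consuming ~110 mg/day (EFSA reference): +10 mg/day per +10 kg."""
    return 110.0 + 10.0 * (body_weight_kg - 60.0) / 10.0

for kg in (60, 80, 90):
    print(kg, vitamin_c_intake_mg_per_day(kg))   # 110, 130, 140 mg/day
```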
Malnutrition in Infants Aged under 6 Months Attending Community Health Centres: A Cross Sectional Survey
Carlos S. Grijalva-Eternod, Emma Beaumont, Ritu Rana, Nahom Abate, Hatty Barthorp, Marie McGrath, Ayenew Negesse, Mubarek Abera, Alemseged Abdissa, Tsinuel Girma, Elizabeth Allen, Marko Kerac, Melkamu Berhane
Subject: Medicine & Pharmacology, Allergology Keywords: Anthropometric deficit; infants under 6 months; malnutrition; weight-for-age; the Composite Index of Anthropometric Failure; MAMI; Ethiopia
Poor understanding of the malnutrition burden is a common reason for not prioritizing the care of small and nutritionally at-risk infants aged under six months (infants u6m). We aimed to estimate the prevalence of anthropometric deficits in infants u6m attending health centres, using the Composite Index of Anthropometric Failure (CIAF), and to assess the overlap of different individual indicators. We undertook a two-week survey of all infants u6m visiting each of 18 health centres in two zones of the Oromia region, Ethiopia. We measured weight, length, and MUAC (mid-upper arm circumference), and calculated weight-for-length (WLZ), length-for-age (LAZ), and weight-for-age (WAZ) z-scores. Overall, 21.7% (95% CI: 19.2; 24.3) of infants u6m presented CIAF, and of these, 10.7% (95% CI: 8.93; 12.7) had multiple anthropometric deficits. Low MUAC overlapped with 47.5% (95% CI: 38.0; 57.3), 43.8% (95% CI: 34.9; 53.1), and 42.6% (95% CI: 36.3; 49.2) of the stunted, wasted and CIAF prevalence, respectively. Underweight overlapped with 63.4% (95% CI: 53.6; 72.2), 52.7% (95% CI: 43.4; 61.7), and 59.6% (95% CI: 53.1; 65.9) of the stunted, wasted and CIAF prevalence, respectively. Anthropometric deficits, single and multiple, are prevalent in infants attending health centres. WAZ overlaps more with other forms of anthropometric deficits than MUAC.
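A minimal sketch of how a CIAF-style classification follows from the three z-scores, assuming the conventional −2 cutoff; the helper name and example values are illustrative.

```python
def anthropometric_failure(wlz: float, laz: float, waz: float, cutoff=-2.0):
    """Classify anthropometric deficits from z-scores, CIAF-style:
    an infant 'fails' if any of wasting (WLZ), stunting (LAZ) or
    underweight (WAZ) falls below the cutoff."""
    deficits = {"wasted": wlz < cutoff,
                "stunted": laz < cutoff,
                "underweight": waz < cutoff}
    deficits["CIAF"] = any(deficits.values())
    deficits["multiple"] = sum(v for k, v in deficits.items()
                               if k in ("wasted", "stunted", "underweight")) > 1
    return deficits

print(anthropometric_failure(wlz=-2.3, laz=-1.1, waz=-2.1))
```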
Fucoidan Inhibition of Osteosarcoma Cells is Species and Molecular Weight Dependent
Dhanak Gupta, Melissa Silva, Karolina Radziun, Diana Martinez, Christopher Hill, Julie Marshall, Vanessa Hearnden, Miguel Puertas-Mejia, Gwendolen Reilly
Subject: Life Sciences, Cell & Developmental Biology Keywords: apoptosis; necrosis; brown algae; mitochondria; MG63 cells; fucoidan; molecular weight fraction; crude extract; cell cycle; transmission electron microscopy
Fucoidan is a brown algae-derived polysaccharide with several biomedical applications. This study simultaneously compares the anticancer activities of crude fucoidans from Fucus vesiculosus and Sargassum filipendula, and the effects of low (LMW, 10-50 kDa), medium (MMW, 50-100 kDa) and high (HMW, >100 kDa) molecular weight fractions of S. filipendula fucoidan, against osteosarcoma cells. Glucose, fucose and acid levels were lower, and sulphation was higher, in F. vesiculosus crude fucoidan compared to S. filipendula crude fucoidan. MMW had the highest levels of sugars, acids and sulphation among the molecular weight fractions. There was a dose-dependent drop in focal adhesion formation and cell proliferation for all fucoidan types, but F. vesiculosus fucoidan and HMW had the strongest effects. G1-phase arrest was induced by F. vesiculosus fucoidan, MMW and HMW; however, F. vesiculosus fucoidan treatment also caused accumulation in the sub-G1 phase. Mitochondrial damage occurred for all fucoidan types; however, F. vesiculosus fucoidan led to mitochondrial fragmentation. Annexin V/PI, TUNEL and cytochrome c staining confirmed stress-induced apoptosis-like cell death for F. vesiculosus fucoidan but features of stress-induced necrosis-like cell death for S. filipendula fucoidans. There was also variation in the ability of the different fucoidans to penetrate the cell. These differences in the anti-cancer activity of fucoidans are applicable to osteosarcoma treatment.
Dynamic Weight Agnostic Neural Networks and Medical Microwave Radiometry (MWR) for Breast Cancer Diagnostics
Jolen Li, Christopher Galazis, Illarion Popov, Lev Ovchinnikov, Sergey Vesnin, Alexander Losev, Igor Goryanin
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: breast cancer; passive microwave radiometry (MWR); cascaded correlation neural network (CCNN); weight agnostic neural network (WANN); CMA-ES algorithm.
Background and Objective: Medical Microwave Radiometry (MWR) is used to capture the thermal properties of internal tissues and has uses in breast cancer detection. Our goal in this paper is to improve classification performance and to investigate automated neural architecture search methods. Methods: We investigate optimizing the weights of a weight agnostic neural network using the bi-population covariance matrix adaptation evolution strategy (BIPOP-CMA-ES) once the topology is found, and compare it against a weight agnostic and a cascade correlation neural network. Results: The experiments were conducted on a breast cancer dataset of 4912 patients. Our proposed weight agnostic BIPOP-CMA-ES model achieved the best performance, with an F1-score of 0.9225, accuracy of 0.9219, precision of 0.9228, recall of 0.9217, and a topology of 153 connections. Conclusions: The results indicate the potential of MWR combined with a neural network-based diagnostic tool for cancer detection. By separating the tasks of topology search and weight training, we are able to improve the overall performance.
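A minimal sketch of the weight-training stage, assuming the open-source `cma` package (which exposes BIPOP restarts via `cma.fmin`) and a stand-in fixed topology; the data, toy network, and budget are illustrative assumptions, not the paper's MWR pipeline.

```python
import numpy as np
import cma  # pip install cma

def topology_forward(weights, x):
    """Stand-in for a fixed WANN topology: a tiny two-layer net whose
    weights are the flat vector being optimized (illustrative only)."""
    w1, w2 = weights[:20].reshape(4, 5), weights[20:25]
    return 1 / (1 + np.exp(-(np.tanh(x @ w1) @ w2)))

def loss(weights, X, y):
    p = topology_forward(np.asarray(weights), X)
    return float(np.mean((p - y) ** 2))

# Hypothetical MWR-like data: 4 temperature features, binary label.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# BIPOP restarts of CMA-ES over the 25 weights of the fixed topology.
res = cma.fmin(lambda w: loss(w, X, y), 25 * [0.0], 0.5,
               options={"verbose": -9, "maxfevals": 5000},
               restarts=2, bipop=True)
print("best loss:", res[1])
```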
Zika Virus Infection in a Cohort of Pregnant Women with Exanthematic Disease in Manaus, Brazilian Amazon
Elijane de Fátima Redivo, Camila Helena Bôtto Menezes, Márcia da Costa Castilho, Marianna Facchinetti Brock, Evela da Silva Magno, Maria das Graças Gomes Saraiva, Salete Sara Alvarez Fernandes, Anny Beatriz Costa Antony de Andrade, Maria das Graças Costa Alecrim, Flor Ernestina Martinez-Espinosa
Subject: Medicine & Pharmacology, Allergology Keywords: Amazonian region; ZIKV in pregnancy; Exanthematic disease in pregnancy; TORCH syndrome; Abortion; Stillbirth; Microcephaly; Preterm delivery; Low birth weight
The epidemic transmission of Zika virus (ZIKV) in Brazil has been identified as a cause of microcephaly and other neurological malformations in babies of ZIKV-infected women. This study provides a descriptive analysis, from the onset of symptoms to delivery, of a cohort registered as having ZIKV infection in pregnancy from November 2015 to December 2016. Suspected cases were registered at a referral center for infectious and tropical diseases in Manaus, in the Brazilian Amazonian region. A total of 834 women with suspected ZIKV infection in pregnancy were included, of whom 91.4% had confirmed pregnancy. Reverse-transcriptase polymerase chain reaction (RT-PCR) confirmed ZIKV infection in 42.2% of the cohort; in 35.2%, ZIKV was the sole infection identified. Severe adverse pregnancy outcomes (abortion, stillbirth, or microcephaly) were observed in both RT-PCR ZIKV-positive (4.96%) and ZIKV-negative (2.15%) cases. Women with suspected ZIKV infection were much more likely to have adverse pregnancy outcomes if they were symptomatic during the first trimester of pregnancy (odds ratio 10.5; 95% confidence interval 4.0-27.0; p<0.001). Among pregnant women with suspected ZIKV infection, the occurrence of symptoms in the first trimester is associated with an especially high risk of severe adverse pregnancy outcomes.
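For reference, an odds ratio with a Woolf-type 95% CI is computed from a 2×2 table as sketched below; the counts are illustrative, not the cohort's actual data.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """OR for a 2x2 table [exposed: a adverse / b normal;
    unexposed: c adverse / d normal], with a Woolf 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for first-trimester vs. later symptom onset.
print(odds_ratio(12, 100, 10, 700))   # OR ~ 8.4; counts are illustrative
```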
Darknet Traffic Big-Data Analysis and Network Management to Real-Time Automating the Malicious Intent Detection Process by a Weight Agnostic Neural Networks Framework
Konstantinos Demertzis, Konstantinos Tsiknas, Dimitrios Takezis, Charalabos Skianis, Lazaros Iliadis
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Darknet; Traffic Analysis; Network Management; Malicious Intent Detection; Weight Agnostic Neural Networks; Real-Time Forensics; Shapley Value; Power Predicting Score
Attackers perpetually modify their tactics to avoid detection, and they frequently leverage legitimate credentials with trusted tools already deployed in a network environment, making it difficult for organizations to proactively identify critical security risks. Network traffic analysis products have emerged in response to attackers' relentless innovation, offering organizations a realistic path forward for combatting creative attackers. Additionally, with the widespread adoption of cloud computing, DevOps processes, and the Internet of Things (IoT), maintaining effective network visibility has become a highly complex and overwhelming process. What makes network traffic analysis technology particularly meaningful is its ability to combine its core capabilities to deliver malicious intent detection. In this paper, we propose a novel darknet traffic analysis and network management framework for real-time automation of the malicious intent detection process, using a weight agnostic neural networks architecture. It is an effective and accurate computational intelligence forensics tool for network traffic analysis, the demystification of malware traffic, and encrypted traffic identification in real time. Based on the Weight Agnostic Neural Networks (WANN) methodology, we propose an automated neural-architecture search strategy that can perform tasks such as identifying zero-day attacks. By automating the malicious intent detection process from the darknet, the proposed solution reduces the skills and effort barrier that prevents many organizations from effectively protecting their most critical assets.
Prenatal Maternal Docosahexaenoic Acid (DHA) Supplementation and Newborn Anthropometry in India: Findings from DHANI
Shweta Khandelwal, Dimple Kondal, Monica Chaudhry, Kamal Patil, MK Swamy, Gangubai Pujeri, Swati Babu Mane, Yashaswi Kudachi, Ruby Gupta, Usha Ramakrishnan, Aryeh D Stein, Dorairaj Prabhakaran, Nikhil Tandon
Subject: Life Sciences, Biochemistry Keywords: Docosahexaenoic acid (DHA); long chain omega-3 fatty acids; maternal supplementation; pregnancy outcomes; anthropometry; birth weight; birth length; head circumference
Long-chain omega-3 fatty acid status during pregnancy may influence newborn anthropometry and the duration of gestation. Evidence from high-quality trials in LMICs is limited. We conducted a double-blind, randomized, placebo-controlled trial among 957 pregnant women (singleton gestation, 14-20 weeks' gestation at enrollment) in India to test the effectiveness of 400 mg/d algal docosahexaenoic acid (DHA) compared to placebo, provided from enrollment through delivery. Among 3379 women who were screened, 1171 were found eligible; 957 were enrolled and randomized. The intervention was two microencapsulated algal DHA tablets (2 × 200 = 400 mg/d) or two microencapsulated soy and corn oil placebo tablets, consumed daily from enrollment (20 weeks) through delivery. The primary outcome was newborn anthropometry (birth weight, length, head circumference). Secondary outcomes were gestational age and 1- and 5-min Appearance, Pulse, Grimace, Activity, and Respiration (APGAR) scores. The groups (DHA: n=478; placebo: n=479) were well balanced at baseline. There were 902 live births. Compliance with the intervention was similar across groups (DHA: 88.5%; placebo: 87.1%). There were no significant differences between the DHA and placebo groups for birth weight (2750.6 ± 421.5 vs. 2768.2 ± 436.6 g, p=0.54), length (47.3 ± 2.0 vs. 47.5 ± 2.0 cm, p=0.13) or head circumference (33.7 ± 1.4 vs. 33.8 ± 1.4 cm, p=0.15). The mean gestational age at delivery was similar between groups (DHA: 38.8 ± 1.7; placebo: 38.8 ± 1.7 wk, p=0.54), as were APGAR scores at 1 and 5 min. Supplementing mothers through pregnancy with 400 mg/d DHA did not affect offspring birth weight, length, or head circumference.
Evidence of Stable Foraminifera Biomineralization During the Last Two Climate Cycles in the Tropical Atlantic Ocean
Stergios D. Zarkogiannis, Assimina Antonarakou, Vincent Fernandez, P. Graham Mortyn, George Kontakiotis, Hara Drinia, Mervyn Greaves
Subject: Earth Sciences, Palaeontology Keywords: planktonic foraminifera; shell weight; climate variability; sea surface density; carbonate production; X-ray microscopy (μCT); δ18O and Mg/Ca analyses
Planktonic foraminiferal biomineralization intensity, reflected in shell calcite mass, affects global carbonate deposition and is known to follow the climate cycles, increasing during glacial stages and decreasing during interglacial ones. Here we measure the dissolution state and the mass of shells of the planktonic foraminifera species Globigerina bulloides from a tropical Eastern North Atlantic site over the last two glacial-interglacial climatic transitions, and we report no major changes in plankton calcite production with the atmospheric pCO2 variations. We attribute this consistency in foraminifera calcification to the climatic and hydrological stability of the tropical regions. However, we recorded increased shell masses midway through the penultimate deglaciation (Termination II). In order to elucidate the cause of the increased shell weights, we performed δ18O, Mg/Ca and μCT measurements on the same shells from a number of samples surrounding this event. We find that shells of increased mass are internally contaminated by sediment infilling and that shell weights respond to local hydrographic changes.
Association of Maternal Observation and Motivation (MOM) Program with M-Health Support on Maternal and Newborn Health
Premalatha Paulsamy, Vigneshwaran Easwaran, Rizwan Ashraf, Krishnaraju Venkatesan, Mervat Moustafa, Absar Ahmed Qureshi, Kousalya Prabahar, Kalaiselvi Periannan, Rajalakshimi Vasudevan, Geetha Kandasamy, Kumarappan Chidambaram, Ester Mary Pappiya, Kumar Venkatesan, Vani Manoharan
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: pregnant mothers; physical activity; maternal wellbeing; antenatal mothers; newborn outcomes; m-health; low birth weight; small for gestation; gestation age; hemoglobin
Maternal and child nutrition is a critical component of health, sustainable development, and progress in low- and middle-income countries (LMIC). While a decrease in maternal mortality is an important indicator, simply surviving pregnancy and childbirth does not imply better maternal health. One of the fundamental obligations of nations under international human rights law is to enable mothers and teenage girls to endure pregnancy and delivery as an aspect of their enjoyment of reproductive and sexual health and rights, and to live a dignified life. The aim of this study was to examine the association of the Maternal Observation and Motivation (MOM) program with m-health support with maternal and newborn health. A comparative study was done among pregnant mothers (study group: 94; control group: 102) with not less than 20 weeks of gestation. Maternal outcomes such as Hb and weight gain, and newborn outcomes such as birth weight and crown-heel length, were obtained at baseline and at 28 and 36 weeks of gestation. Other secondary data collected were abortion, stillbirth, low birth weight, major congenital malformations, twin or triplet pregnancies, physical activity, and maternal wellbeing. The MOM intervention included initial face-to-face education, three in-person visits, and eight virtual health coaching sessions via WhatsApp. The baseline data on Hb show that 31 (32.98%) vs. 27 (28.72%) of the study and control group mothers had anaemia, which improved to 27.66% and 14.98% among study group mothers at 28 and 36 weeks of gestation (p<0.001). Weight gain (p<0.001), level of physical activity (p<0.001), and maternal wellbeing (p<0.01) also showed significant differences after the intervention. Even after controlling for potentially confounding variables, the maternal food practices regression model revealed that birth weight was directly correlated with consumption of milk (p<0.001), fruits (p<0.01), and green vegetables (p<0.05). As per the physical activity and maternal wellbeing regression model, birth weight and crown-heel length were strongly related to the physical activity and maternal wellbeing of mothers at 36 weeks of gestation (p<0.05). Combining the MOM intervention with standard antenatal care is a safe and effective way to improve maternal welfare while upholding pregnant mothers' human rights.
Intact Leptin Receptor Signalling is not Required for the Sustained Weight Loss and Appetite Suppression Induced by Roux-en-Y Gastric Bypass Surgery
Mohammed K. Hankir, Laura Rotzinger, Arno Nordbeck, Caroline Corteville, Annett Hoffmann, Christoph Otto, Florian Seyfried
Subject: Medicine & Pharmacology, Gastroenterology Keywords: Roux-en-Y gastric bypass surgery; Weight loss; Food intake; Oral glucose tolerance; Leptin; Leptin receptors; Zucker Fatty fa/fa rats
Leptin is the archetypal adipokine that promotes a negative whole-body energy balance, largely through its action on brain leptin receptors. As such, the sustained weight loss and food intake suppression induced by Roux-en-Y gastric bypass (RYGB) surgery have been attributed to enhancement of leptin receptor signalling. We formally revisited this idea in Zucker Fatty fa/fa rats, an established genetic model of leptin receptor deficiency, and carefully compared their body weight, food intake and oral glucose tolerance after RYGB with those of sham-operated fa/fa (obese) and sham-operated fa/+ (lean) rats. We found that RYGB rats sustainably lost body weight, which converged with that of lean rats and was 25.5% lower than that of obese rats by the end of the 4-week study period. Correspondingly, the daily food intake of RYGB rats was similar to that of lean rats from the second postoperative week, while it was always at least 33.9% lower than that of obese rats. Further, the oral glucose tolerance of RYGB rats was normalized by the fourth postoperative week. These findings assert that leptin is not an essential mediator of the sustained weight loss and food intake suppression, or of the improved glycemic control, induced by RYGB, and instead point to additional circulating and/or neural factors.
Using Surface Deflection Data for Performance Prediction in Flexible Pavement
Nader Karballaeezadeh, Farah Zaremotekhases, Narjes Nabipour, Shahaboddin Shamshirband, Amir Mosavi
Subject: Engineering, Civil Engineering Keywords: transportation engineering; flexible pavement; pavement condition index prediction; falling weight deflectometer; mlp neural network; rbf neural network; intelligent machine system committee
The conventional method for calculating the pavement condition index (PCI) has two major drawbacks: safety problems during pavement inspection, and human error. This paper proposes a method for removing these problems. The proposed method uses surface deflection data from falling weight deflectometer (FWD) tests to estimate the PCI. The data used in this study were derived from 236 pavement segments of the Tehran-Qom freeway in Iran. The data set was analyzed using multilayer perceptron (MLP) and radial basis function (RBF) neural networks. These neural networks were optimized by the Levenberg-Marquardt (MLP-LM), scaled conjugate gradient (MLP-SCG), imperialist competitive (RBF-ICA), and genetic (RBF-GA) algorithms. After initial modeling with the four neural networks mentioned, the committee machine intelligent systems (CMIS) method was adopted to combine the results and improve the accuracy of the modeling. The results of the analysis were verified by four criteria: average percent relative error (APRE), average absolute percent relative error (AAPRE), root mean square error (RMSE) and standard error (SD). The best reported results belonged to CMIS, with APRE=2.3303, AAPRE=11.6768, RMSE=12.0056, and SD=0.0210.
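A minimal sketch of the core regression idea, predicting PCI from FWD deflection readings with scikit-learn's MLP; the synthetic deflection data and network size are assumptions, not the paper's dataset or tuned models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical FWD data: 7 deflection-sensor readings per segment -> PCI.
rng = np.random.default_rng(42)
X = rng.uniform(50, 600, size=(236, 7))              # deflections (microns)
pci = np.clip(100 - 0.12 * X.mean(axis=1) + rng.normal(0, 3, 236), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, pci, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print("R^2 on held-out segments:", round(mlp.score(X_te, y_te), 3))
```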
SARS-CoV-2-Infection (COVID-19): Clinical Course and Cause(s) of Death
Giuliano Ramadori
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: SARS-CoV-2-infection; dehydration; hypoalbuminemia; pulmonary hypoxia; hyaline membrane; pulmonary engorgement; lung weight; acute respiratory distress syndrome; diffuse alveolar damage (DAD)
Two years after the first patients approached the emergency rooms of hospitals in Wuhan because of respiratory distress, thousands of SARS-CoV-2-infected persons continue to die every day worldwide. SARS-CoV-2-infected patients undergo a process of dehydration and malnutrition before they develop respiratory problems and approach the emergency room of a hospital. This is, in many cases, the consequence of high fever, which causes massive loss of fluids; in addition, loss of appetite is responsible for a deficit in protein intake. Most of the virus-infected patients admitted to the emergency room are therefore hypovolemic and hypoproteinemic, and suffer from respiratory distress accompanied by ground-glass opacities on CT scans of the lungs. Critically ill patients are treated following the guidelines for treatment of septic shock, but with "conservative" fluid replacement and administration of diuretics to ensure sufficient hourly urine production. The combination of conservative fluid administration with reduced protein content in the enterally administered diet, together with administration of diuretics, has severe hemodynamic consequences in these mostly aged, dehydrated, critically ill patients: many of them will develop acute kidney injury within the next 24 hours. In most cases, patients continue to lose weight by losing skeletal muscle mass. Ischemic damage in the lung capillaries is responsible for the acute respiratory distress syndrome (ARDS) and for the hallmark of autoptic findings, diffuse alveolar damage (DAD), characterized by hyaline membrane formation, fluid invasion of the alveoli, recruitment of some inflammatory cells, and progressive arrest of blood flow in the pulmonary vessels. The consequence is progressive congestion, increased lung weight, and progressive hypoxia (progressive severity of ARDS). Sequestration of blood in the lungs worsens hypovolemia and ischemia in different organs. This is most probably responsible for the recruitment of inflammatory cells and for the persistence of elevated serum levels of positive acute-phase markers and of hypoalbuminemia. Autoptic studies have been performed mostly in patients who died in the ICU after SARS-CoV-2 infection because of progressive ARDS. In those patients, tubular epithelium necrosis in the kidney is a frequent finding, as was the case in the first SARS-CoV-1 pandemic. In the death certification charts, often weeks after the first symptoms started, cardiac arrest is given as the cause of death after respiratory insufficiency. Replacement therapy with sufficient amounts of fluid and albumin should be part of the early, individualized, life-saving supportive measures, avoiding mechanical ventilation.
Intelligent Road Inspection with Advanced Machine Learning; Hybrid Prediction Models for Smart Mobility and Transportation Maintenance Systems
Nader Karballaeezadeh, Farah Zaremotekhases, Shahaboddin Shamshirband, Amir Mosavi, Narjes Nabipour, Peter Csiba, Annamária R. Várkonyi-Kóczy
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: transportation; mobility; prediction model; pavement management; pavement condition index; falling weight deflectometer; multilayer perceptron; radial basis function; artificial neural network; intelligent machine system committee
Prediction models in mobility and transportation maintenance systems have been dramatically improved through the use of machine learning methods. This paper proposes novel machine learning models for intelligent road inspection. Traditional road inspection systems based on the pavement condition index (PCI) are often associated with critical safety, energy and cost issues. Alternatively, the proposed models utilize surface deflection data from falling weight deflectometer (FWD) tests to predict the PCI. The machine learning methods are the single multilayer perceptron (MLP) and radial basis function (RBF) neural networks as well as their hybrids, i.e., Levenberg-Marquardt (MLP-LM), scaled conjugate gradient (MLP-SCG), imperialist competitive (RBF-ICA), and genetic algorithm (RBF-GA) variants. Furthermore, the committee machine intelligent systems (CMIS) method was adopted to combine the results and improve the accuracy of the modeling. The results of the analysis were verified using four criteria: average percent relative error (APRE), average absolute percent relative error (AAPRE), root mean square error (RMSE), and standard error (SD). The CMIS model outperforms the other models with promising results of APRE=2.3303, AAPRE=11.6768, RMSE=12.0056, and SD=0.0210.
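The committee step can be sketched as an error-weighted average of the member models' predictions; below is one simple realization under assumed validation errors, not necessarily the exact CMIS weighting the authors used.

```python
import numpy as np

def committee_predict(member_preds, member_errors):
    """Combine member-model predictions with weights inversely
    proportional to each member's validation error (one simple way
    to realize a committee machine)."""
    w = 1.0 / np.asarray(member_errors)
    w = w / w.sum()
    return np.asarray(member_preds).T @ w

# Four hypothetical members (MLP-LM, MLP-SCG, RBF-ICA, RBF-GA) on 3 segments.
preds = [[72.1, 55.3, 88.0],
         [70.4, 57.9, 85.2],
         [69.8, 54.1, 86.7],
         [73.0, 56.6, 87.4]]
errors = [12.0, 13.5, 15.1, 14.2]   # e.g. each member's validation RMSE
print(committee_predict(preds, errors).round(1))
```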
Neck Circumference in Combination with Biochemical Variables as a Surrogate Marker of NAFLD: The FLiO Study
Mariana Elorz, Alberto Benito, Berta Araceli Marin, Nuria Perez del Campo, Jose Ignacio Herrero, Ignacio Monreal, Josep Tur, J. Alfredo Martínez, María Ángeles Zulet, Itziar Abete
Subject: Medicine & Pharmacology, Nutrition Keywords: Anthropometric measurements; fatty liver disease; nutritional intervention; imaging techniques; long-term follow-up; neck-to-height ratio; non-invasive diagnostic methods; neck-to-weight ratio; FLIO study; steatosis markers.
Neck circumference (NC), the neck-to-height ratio (NHtR) and the neck-to-weight ratio (NWtR) appear to be good candidates for the non-invasive management of non-alcoholic fatty liver disease (NAFLD). This study aimed to evaluate the ability of these routine variables to assess and manage NAFLD in participants with obesity and NAFLD enrolled in a 2-year nutritional intervention program. Anthropometric measurements, biochemical variables and imaging techniques were assessed at the different study time-points (baseline, 6, 12 and 24 months). The nutritional intervention significantly improved all anthropometric measurements as well as the glucose profile and the hepatic enzymes. In a ROC analysis, NC and the neck ratios combined with ALT levels and HOMA-IR showed good ability to predict hepatic fat content and hepatic steatosis at all study time-points. The predictive ability of the combination panels improved when the weight loss variable was also considered. NC and the neck ratios are simple anthropometric measurements that, in combination with routine biochemical variables (ALT and HOMA-IR), showed good ability to predict NAFLD. Further studies are necessary to validate the utility of these simple variables as surrogate markers of NAFLD, since their application could improve the prevention and management of this prevalent disease.
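A minimal sketch of evaluating such a combination panel with logistic regression and ROC AUC in scikit-learn; the cohort data below are synthetic assumptions, not the FLiO study's measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical cohort: neck-to-height ratio, ALT (U/L), HOMA-IR -> steatosis.
rng = np.random.default_rng(7)
n = 120
X = np.column_stack([rng.normal(0.24, 0.03, n),    # NHtR
                     rng.normal(35, 12, n),        # ALT
                     rng.normal(3.0, 1.2, n)])     # HOMA-IR
y = (0.6 * (X[:, 0] > 0.25) + 0.4 * (X[:, 2] > 3.5)
     + rng.uniform(0, 0.5, n)) > 0.5               # synthetic steatosis label

clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print("in-sample AUC of the combined panel:", round(auc, 2))
```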
Profiling of Matched Adipose and Skeletal Muscle in Human Pancreatic Cancer Cachexia Reveals Distinct Gene Profiles with Convergent Pathways
Ashok Narasimhan, Xiaoling Zhong, Ernie Au, Eugene P. Ceppa, Atilla Nakeeb, Michael G. House, Nicholas J. Zyromski, C. Max Schmidt, Katharyn N.H. Schloss, Daniel E.I. Schloss, Yunlong Liu, Guanglong Jiang, Bradley A. Hancock, Milan Radovich, Joshua K. Kays, Safi Shahda, Marion E. Couch, Leonidas G. Koniaris, Teresa A. Zimmers
Subject: Medicine & Pharmacology, Allergology Keywords: Pancreatic cancer; RNAseq; Humans; Weight loss; Prognosis; Rectus Abdominis; Carcinoma, Pancreatic Ductal; Adipose Tissue; Pancreatic Carcinoma; Pancreatic Neoplasms; Subcutaneous Fat; High-Throughput Nucleotide Sequencing; Body Composition; Muscles; Cachexia; Muscular Atrophy; Gene Expression
The vast majority of patients with pancreatic ductal adenocarcinoma (PDAC) suffer cachexia. Although cachexia results from concurrent loss of adipose and muscle tissue, most studies focus on muscle alone. Emerging data demonstrate the prognostic value of fat loss in cachexia. Here we sought to identify the muscle and adipose gene profiles and pathways regulated in cachexia. Matched rectus abdominis muscle and subcutaneous adipose tissue were obtained at surgery from patients with benign conditions (n=11) and patients with PDAC (n=24). Self-reported weight loss and body composition measurements defined cachexia status. Gene profiling was done using Ion Proton sequencing. Results were queried against external datasets for validation. 961 differentially expressed (DE) genes were identified in muscle and 2000 in adipose tissue, demonstrating a greater response in adipose than in muscle. In addition to known cachexia genes such as FOXO1, novel genes from muscle, including PPP1R8 and AEN, correlated with cancer weight loss. All the adipose correlated genes, including SCGN and EDR17, are novel for PDAC cachexia. Pathway analysis demonstrated shared pathways but largely non-overlapping genes in both tissues. Age-related muscle loss predominantly showed a gene profile distinct from that of cachexia. This analysis of matched, externally validated gene expression points to novel targets in cachexia.
Improvement of Position Repeatability of a Linear Stage with Yaw Minimization
Doo Hyun Cho, Hyo Chan Kwon, Kwon Hee Kim
Subject: Engineering, Mechanical Engineering Keywords: Precision stage; Balanced platform; Balancing weight; Drive force offset; Yaw motion; Abbe error; Error prediction; Low-cost stage; Open frame stage; Linear motion guide (LM Guide); Python; GEKKO; ANSYS bushing joint
Recently, due to the miniaturization of electronic products, printed circuit boards (PCBs) have also become smaller. This trend has led to the need for high-precision electrical test equipment to check PCBs for disconnections and short circuits. The purpose of this study is to improve the position repeatability of the platform unit to within ±2.5 μm in linear-stage-type test equipment. For this purpose, the causes of position errors of the platform unit are analyzed. The platform unit holding the PCB is driven by a single-axis linear ball screw drive system offset from its geometric center due to design constraints. The yaw rotation of the platform is found to have a dominant effect on position repeatability. To address this problem, the methods of adding balancing weights to the platform unit and adjusting the stiffness of the LM Guides are proposed. This reduces the yaw rotation by moving the centers of mass and stiffness closer to the linear ball screw actuator. In the verification tests, the position repeatability was reduced to less than ±1.0 μm.
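The dominant effect of yaw on position repeatability can be pictured through the Abbe error relation, in which a small rotation acting over the offset between the drive axis and the measurement point produces a lateral error. A minimal sketch with purely illustrative numbers, not values from the paper:

```python
import math

def abbe_error(yaw_rad: float, offset_m: float) -> float:
    """Lateral position error produced by a yaw rotation acting over an
    Abbe offset (distance between measurement point and drive axis)."""
    return offset_m * math.sin(yaw_rad)

# A 10 microradian yaw acting over a 100 mm offset already consumes a
# large part of a +/-2.5 um repeatability budget:
print(abbe_error(10e-6, 0.100) * 1e6, "um")  # ~1.0 um
```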
Weight Estimation of Marine Propulsion and Power Generation Machinery
Arun Kr Dev, Makaraksha Saha
Subject: Engineering, Marine Engineering Keywords: engine weight; engine power; engine/generator power; engine RPM; cylinder number; power-RPM ratio; power-RPM ratio per cylinder; low-speed; medium-speed; high-speed; standard deviation; correlation coefficient; coefficient of multiple determination; F-statistic
Online: 7 May 2021 (14:17:36 CEST)
During the conceptual and preliminary design stage of a ship, designers need to ensure that the selected principal dimensions and parameters are good enough to deliver a statically and dynamically stable ship, besides the required deadweight and speed. To support this, the initial intact stability of the proposed ship must be calculated, and for that, the lightship weight and its detailed breakdown need to be known. After hull steel weight, machinery weight, mainly that of the marine propulsion and power generation machinery, plays a vital role in the lightship weight estimate of a ship owing to its substantial share of the total. Correct estimation of these weights improves the accuracy of the initial stability calculation for the ship to be designed and built, and would thus help the designer convince the ship owner. A total of 3006 marine propulsion (main marine diesel) engines and 348 power generation (auxiliary marine diesel) engines/generators of various power output (generator output for auxiliary engines), engine RPM and cylinder number from different engine makers were collected. These are analyzed and presented in both tabular and graphical forms to demonstrate the possible relationship between marine propulsion engine weight and power generation engine weight, and their respective power output, RPM, cylinder number, power-RPM ratio and power-RPM ratio per cylinder. In this article, the authors investigate the behavior of marine propulsion engine weight and power generation engine/generator weight with respect to engine power output, generator power output, engine RPM and cylinder number (independent variables). Further attempts are made to identify those independent variables that influence the weight of the marine propulsion engine and power generation engine/generator (dependent variables), and their interrelationships. A mathematical model has thus been developed and proposed, as a guiding tool, for the designer to estimate the weight of main and auxiliary engines more accurately during the conceptual and preliminary design stage.
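As a rough sketch of the kind of regression model described, the following fits a power-law weight model by least squares in log space. The functional form, the variable set and all data values are illustrative assumptions, not the authors' published model:

```python
import numpy as np

# Illustrative records (not from the collected dataset):
# columns: power output (kW), RPM, number of cylinders; target: engine weight (t)
X_raw = np.array([[8000.0, 110.0, 6],
                  [12000.0, 95.0, 7],
                  [3500.0, 500.0, 8],
                  [1500.0, 900.0, 6],
                  [900.0, 1800.0, 12]])
w = np.array([250.0, 420.0, 45.0, 18.0, 6.5])

# A power-law model, weight = a * P^b * RPM^c * Ncyl^d, is linear in logs.
A = np.column_stack([np.ones(len(w)), np.log(X_raw)])
coef, *_ = np.linalg.lstsq(A, np.log(w), rcond=None)
log_a, b, c, d = coef
print("weight ~ %.3g * P^%.2f * RPM^%.2f * Ncyl^%.2f"
      % (np.exp(log_a), b, c, d))
```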
Optimizing the preparation conditions and characterization of a stable and recyclable cross-linked enzyme aggregate (CLEA)-protease
Safa Senan Mahmod,
Faridah Yusof,
Mohamed Saedi Jami &
Soofia Khanahmadi
Bioresources and Bioprocessing volume 3, Article number: 3 (2016)
Cross-linked enzyme aggregation (CLEA) is an effective technique for producing immobilized biocatalysts, with industrially attractive advantages including simplicity, stability, low cost, time saving and reusability.
In this study, an active, stable and recyclable CLEA-protease was prepared from the viscera of channel catfish Ictalurus punctatus. The preparation parameters were optimized using Response Surface Methodology, which allowed the interactions among the most influential factors, namely the cross-linker, precipitant and additive concentrations, to be studied. The optimized CLEA-protease, with a specific activity of 4.512 U/mg protein, showed high stability against denaturing forces such as temperature and pH compared to the free protease. The highest activity was achieved at pH 6.8 and 45 °C. After six cycles, CLEA-protease maintained 28 % of its original activity. Additionally, Michaelis–Menten models were used to determine the kinetic parameters K_m and V_max, which differed significantly after immobilization compared to the free protease.
This work shows that the novel CLEA-protease can be used as a very active biocatalyst in industrial applications.
Recently, an increased demand for hydrolytic enzymes has been witnessed, especially for proteases, owing to their numerous applications in different industrial areas such as the food, leather and detergent industries (Li et al. 2009). Proteases have been reported to occupy about 60 % of the hydrolase market and to account for almost a quarter of overall global enzyme production (Gupta et al. 2009). Proteases from biological sources appear to be extremely useful for the commercial hydrolysis of proteins (Capiralla et al. 2002). Meanwhile, most of the available studies on protease production concern enzymes of microbial origin (Soares et al. 2005; Tang et al. 2004). In an earlier study, Mahmod et al. (2014) found that visceral protease extracted from channel catfish (Ictalurus punctatus) is highly active. Using this visceral protease has several advantages: it turns a wasteful by-product into a potential biocatalyst with high activity, and the viscera are a free, readily available and rich source of enzymes (Mahmod et al. 2014).
To protect enzymes from harsh conditions and conserve their functional stability, immobilization techniques are used in industrial applications. Immobilization promotes several beneficial features of the enzyme, such as greater stability and activity, resistance to inhibition, selectivity or specificity, higher catalyst performance, and reusability (Torabizadeh et al. 2014). Enzyme immobilization has proven to be one of the key steps in making enzymatic processes economically practicable (Cao and Schmid 2005; Zhou 2009). Parmar et al. (2000) also found that an immobilized enzyme catalyst with improved activity and stability can reduce production costs. The cross-linked enzyme aggregate (CLEA) is one of the latest enzyme immobilization technologies (Barbosa et al. 2014; Brady and Jordaan 2009; Sheldon et al. 2005).
CLEA preparation is a very simple procedure that involves precipitation of the enzyme, which does not need to be purified beforehand. Physical precipitation of the enzyme molecules into supra-molecular structures is achieved by adding organic solvents, non-ionic polymers or salts to an aqueous solution of the protein sample (Sheldon 2007). CLEAs are then produced by cross-linking these physical protein aggregates. Because a CLEA is a restructured superstructure of the aggregates, its activity is maintained (Kartal et al. 2011). The cross-linking step is necessary to stabilize the aggregates so that the enzymes cannot re-dissolve when the precipitant is removed. In previous studies, protease was successfully immobilized using CLEA technology (Skovgaard et al. 2010; Sangeetha and Abraham 2008).
In this study, protease extracted from the viscera of channel catfish (Ictalurus punctatus) was immobilized using the cross-linked enzyme aggregate technique in acetone as an organic medium. The most suitable type of additive was studied, and the optimum conditions for CLEA-protease preparation were investigated using Response Surface Methodology (RSM). The resulting CLEA-protease was characterized in terms of optimum pH and temperature as well as thermal and pH stability, and was also tested for its reusability. Finally, the kinetic parameters (V_max and K_m) were determined using Michaelis–Menten models.
All chemicals used in this study were of analytical grade from various brands (Merck, Sigma-Aldrich, System and Bio-Rad) and were obtained from the local suppliers Essen Haus Sdn. Bhd and Merck Sdn. Bhd. Absorbance was measured with a Tecan microplate reader (Switzerland), and a Sartorius shaker (Germany) was used for agitation during the preparation of CLEA-protease.
Protease sample preparation
Channel catfish viscera were obtained from a local market in Selangor, Malaysia, then washed and weighed (749 g) before mixing with 1 M phosphate buffer (pH 7.3) in a blender at a 2:1 buffer-to-viscera ratio; the sample was then filtered through a muslin cloth.
The filtrate was centrifuged for 1 h at 4 °C and 12,000 rpm; the collected supernatant was used to prepare the crude enzyme by precipitation for 24 h at 4 °C with 4 M ammonium sulphate under continuous stirring. Next, the sample was dissolved in PBS and centrifuged at 3000 rpm for 15 min at 4 °C. This was followed by dialysis against minimal phosphate-buffered saline (pH 7) through a 10,000 MWCO membrane for 4 h with continuous mixing at 4 °C. Afterwards, the sample was stored at −20 °C for further tests.
Protein concentration
The protein content of the prepared sample was measured by the Bradford assay using BSA as the standard (Bradford 1976).
Protease activity assay
The enzymatic activity of protease was determined using the modified universal procedure described by Sigma-Aldrich with casein as the substrate (Sigma-Aldrich 2013). The substrate solution was prepared by dissolving 1 g of casein in 100 mL of 50 mM Tris–HCl buffer, pH 8. Then, 1 mL of enzyme solution was added to the substrate solution and incubated for 20 min at 35 °C. As a reaction terminator, 4 mL of 10 % trichloroacetic acid (TCA) was used. Absorbance was measured at 660 nm, and a tyrosine standard curve was prepared. The following equation was used to measure the enzyme activity (Sigma-Aldrich 2013), with modification:
$$\text{Enzyme activity (U/mL)} = \frac{\mu\text{mol of tyrosine released} \times \text{Total volume of assay (mL)}}{\text{Volume of enzyme used (mL)} \times \text{Time of assay (min)} \times \text{Volume in cuvette (mL)}}$$
A unit of protease is defined as the quantity of enzyme required to hydrolyse casein so as to produce colour equivalent to 1.0 µmol of tyrosine per minute. The specific activity of protease (U/mg protein) was estimated by dividing the enzyme activity (U/mL) by the protein content obtained from the Bradford assay (mg/mL).
The recovered activity of CLEA-protease was calculated using the following equation (Kim et al. 2013):
$$\text{Recovered activity (\%)} = \frac{\text{Activity of CLEA-protease}}{\text{Activity of free enzyme used for CLEA-protease preparation}} \times 100$$
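A minimal sketch of the two calculations above; the input values are purely illustrative and not measurements from the study:

```python
def enzyme_activity(umol_tyrosine, total_assay_ml, enzyme_ml, time_min, cuvette_ml):
    """Protease activity (U/mL) following the modified formula above."""
    return (umol_tyrosine * total_assay_ml) / (enzyme_ml * time_min * cuvette_ml)

def recovered_activity(clea_activity, free_activity):
    """Recovered activity (%) of the CLEA relative to the free enzyme used."""
    return clea_activity / free_activity * 100.0

# Illustrative values only:
act = enzyme_activity(umol_tyrosine=0.8, total_assay_ml=6.0, enzyme_ml=1.0,
                      time_min=20.0, cuvette_ml=3.0)
print(round(act, 3), "U/mL")
print(round(recovered_activity(clea_activity=4.5, free_activity=13.57), 1), "%")
```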
Selection of additives
In this study, three additives were examined for their effects on the activity of CLEA-protease: 50 mg of bovine serum albumin (BSA), 10 mg of sodium dodecyl sulphate (SDS) and 50 % (v/v) heptane, respectively. The precipitant and cross-linker values were fixed at acetone = 60 % (v/v) and glutaraldehyde = 60 mM, based on a previous study (Yusof et al. 2013).
The immobilization mixture was prepared in triplicate. It consisted of 1 mL of enzyme solution, precipitant, glutaraldehyde and an additive. The samples were shaken at 200 rpm for 17 h at room temperature, then collected and centrifuged at 6000 rpm and 4 °C for 30 min, and washed three times with 3 mL of acetone. Protease activity was then determined by the standard assay (see "Protease activity assay").
Optimization of CLEA-protease preparation conditions
The optimization of CLEA preparation parameters is reported to be an extremely complicated process (Cruz et al. 2012). Any parameter that changes the aggregation or the cross-linking of the aggregates can affect the recovered activity and particle size. Likewise, the interaction among the three main factors (cross-linker, precipitant and additive) can in many cases affect the final form of the CLEAs (Yu et al. 2006).
In this experiment, the optimization of the preparation parameters was carried out using Response Surface Methodology. RSM can locate any minimum or maximum response that exists inside the factor region, and three different values of each factor are needed to fit a quadratic function. Among the most commonly used response surface designs in statistical analysis is the central composite design, which combines a two-level fractional factorial with centre points and axial points (JMP 2005).
The crude sample (1 mL) was aggregated by adding acetone at various concentrations [30, 45, 60 % (v/v)], cross-linked by the addition of glutaraldehyde (50, 65, 80 mM), and finally supplemented with BSA (0.038, 0.113, 0.188 mM) as an additive, bringing the total volume of the solution to 4 mL. After 17 h of incubation, the CLEA-protease was separated from the supernatant by centrifugation at 6000 rpm and 4 °C for 20 min. To remove the excess glutaraldehyde, the sample was washed three times with 3 mL of acetone.
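For concreteness, a face-centred central composite design for the three factors can be generated as follows. This is a generic sketch: the number of centre runs and the decoding to actual units are assumptions based on the factor ranges quoted above, not the exact design in Table 1:

```python
import itertools
import numpy as np

def face_centred_ccd(n_center=1):
    """Coded design matrix (-1, 0, +1) of a face-centred CCD, three factors."""
    factorial = list(itertools.product([-1, 1], repeat=3))        # 8 corner runs
    axial = [tuple(s if j == i else 0 for j in range(3))
             for i in range(3) for s in (-1, 1)]                  # 6 face-centre runs
    center = [(0, 0, 0)] * n_center                               # centre runs
    return np.array(factorial + axial + center, dtype=float)

def decode(coded, low, high):
    """Map a coded level in [-1, 1] to the actual factor range [low, high]."""
    return low + (coded + 1.0) / 2.0 * (high - low)

design = face_centred_ccd(n_center=3)
# Factor ranges from the text: glutaraldehyde 50-80 mM, acetone 30-60 % (v/v),
# BSA 0.038-0.188 mM; mid-levels decode to 65 mM, 45 % and 0.113 mM.
glut = decode(design[:, 0], 50.0, 80.0)
acet = decode(design[:, 1], 30.0, 60.0)
bsa = decode(design[:, 2], 0.038, 0.188)
print(np.column_stack([glut, acet, bsa]))
```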
Characterization of CLEA-protease
pH stability of CLEA-protease
The effect of pH on the stability of the immobilized protease was evaluated by incubating CLEA-protease at different pH values ranging from 4.5 to 9.5 (acetate, phosphate, Tris–HCl and glycine-NaOH buffers) for 30 min. Supernatants were removed by centrifugation and the activity of the immobilized/free enzyme was determined using the standard protease activity assay. The residual activities of the CLEAs and the free enzyme were calculated as percentages relative to the highest measured activity (=100 %).
Effect of pH on the activity of CLEA-protease
The optimum pH values for the immobilized and free protease were determined using substrate prepared in buffers of pH values varying from 4.5 to 9.5. The relative activity was calculated by taking the highest activity as 100 %.
Thermal stability of CLEA-protease
Free and immobilized enzymes were incubated at different temperatures (25–65 °C) for 30 min. Samples were periodically withdrawn to run the activity assay, and the residual activities were determined as above.
Effect of temperature
To determine the optimum temperature of the free protease and CLEA-protease, activities were measured over the temperature range 25–65 °C. Both forms of the enzyme (free and immobilized) were incubated in a water bath for 20 min at different temperatures; after cooling, protease activity was assayed under the standard conditions and the absorbance was recorded to measure both activities.
Organic solvent stability of CLEA-protease
After preparation, the CLEAs were stored in acetone at 4 °C for 4 days and the protease activity was measured daily to determine the stability of CLEA-protease in acetone. The activity on the first day was taken as 100 % and the subsequent measurements were expressed relative to it.
Reusability of CLEA-protease
The reusability of the produced CLEA-protease was determined by its repeated use in substrate hydrolysis for six cycles (2 days per cycle). Between cycles, the CLEAs were recovered by centrifugation (4000 rpm, 20 min, 4 °C) and washed thrice with 3 mL of acetone to ensure separation of the CLEAs from the reaction mixture. The CLEA-protease was then resuspended in a fresh batch of reaction medium for 2 days. The relative activity of CLEA-protease obtained in the first cycle was assigned as 100 %.
Kinetic study
The kinetics of the hydrolytic activity towards the substrate were investigated using Michaelis–Menten kinetic models. Hyperbolic Regression Analysis Software (Hyper32), which is based on non-linear regression analysis and is preferable to the common linear methods, was used. The kinetic study was carried out by conducting the protease assay for the free protease and the CLEA-protease (noting that the same amount of free enzyme, 1 mL, was used to prepare the CLEA) at different casein concentrations [0.2, 0.6, 1.0, 1.4, 1.8, 2.2, 2.6 and 3 % (w/v)] dissolved in 50 mM phosphate buffer solution (pH 7.0); the incubation time was set to 10 min at 35 °C. The Michaelis–Menten evaluation was based on the Hanes–Woolf, Lineweaver–Burk, Eadie–Hofstee and hyperbolic regression models.
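The text above references Hyper32; as a generic illustration of the same fitting task, the sketch below estimates V_max and K_m both by non-linear regression and by the Hanes–Woolf linearisation. The rate data are invented for illustration and are not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

# Illustrative rate data (casein % w/v vs initial rate):
S = np.array([0.2, 0.6, 1.0, 1.4, 1.8, 2.2, 2.6, 3.0])
v = np.array([1.1, 2.4, 3.1, 3.6, 3.9, 4.0, 4.1, 4.15])

# Non-linear (hyperbolic) regression:
(vmax, km), _ = curve_fit(michaelis_menten, S, v, p0=[4.0, 0.5])

# Hanes-Woolf linearisation: S/v = S/Vmax + Km/Vmax, fitted by least squares.
slope, intercept = np.polyfit(S, S / v, 1)
vmax_hw, km_hw = 1.0 / slope, intercept / slope
print("nonlinear:", vmax, km, "| Hanes-Woolf:", vmax_hw, km_hw)
```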
In an earlier study, the specific activity of protease extracted from channel catfish viscera was found to be 13.57 U/mg protein (Mahmod et al. 2014). In the current study, CLEA-protease was produced by precipitating the protease with acetone as an organic solvent, followed by cross-linking with the bi-functional reagent glutaraldehyde.
Three additives were tested for their effect on the activity of CLEA in an organic solvent medium; they were selected for the different effects on CLEAs expected from each. BSA behaves as a feeder (Dong et al. 2010), heptane provides interfacial activation (Torres et al. 2013), and SDS is a surfactant that facilitates precipitation and increases the interfacial surface of the enzyme (Gupta et al. 2002). The activity of CLEA without any additive is taken as 100 % relative activity. Figure 1 illustrates the relative activity of CLEA-protease with each additive.
Effect of different additives on the activity of CLEA-protease; the CLEA without additive (none) is taken as 100 % relative activity
Bovine serum albumin gave the highest activity when the BSA-to-protease ratio was 50:1. This is because BSA acts as a proteic feeder, facilitating the formation of CLEAs when the protein concentration is too low, or when the enzyme activity would otherwise be weakened by the high glutaraldehyde concentrations required to obtain the aggregates (Shah et al. 2006; Aytar and Bakir 2008). Glutaraldehyde reacts with the free amino groups of the amino acids present on the enzyme surface, i.e. lysine. The improvement in activity after the addition of BSA can be explained by the lysine residues provided by this molecule, which bind glutaraldehyde and so prevent denaturation of the target protein. A similar study reported a linear relationship between the recovered activity of CLEAs and the amount of BSA added (Torres et al. 2013).
On the other hand, the addition of the surfactant SDS gave lower activity than the native (additive-free) CLEA. The interaction between a surfactant and an enzyme is known to be hydrophobic (Maldonado-Valderrama and Patino 2010) and may increase the interfacial surface of the enzyme; in this experiment, however, the precipitant used (acetone) was sufficient to form the aggregates. SDS generally binds non-specifically to the protein surface, which usually leads to protein unfolding (Mogensen et al. 2005). Pan et al. (2014) nevertheless reported that adding a surfactant to the mixture can facilitate dispersal of a water-insoluble substrate through the formation of a micellar system; it can also improve the substrate's mass transfer, increasing the substrate concentration, easing downstream separation and reducing the cost of the product.
Finally, heptane was used to increase the surface activity of the enzyme during CLEA preparation (Torres et al. 2013). Heptane is a non-polar organic solvent and, like the hydrocarbon backbones of many organic molecules, is hydrophobic; it forces the surrounding water molecules into an ordered arrangement. In enzyme–substrate reactions, these ordered water molecules are displaced from the region between the two groups, allowing more interfacial interaction and making formation of the enzyme–substrate complex easier; this in turn initiates the chain of chemical reactions directed by the enzyme (Nelson and Cox 2004). In the CLEA preparation, the relative activity obtained with heptane was slightly lower than that of the CLEA without additives. Although more interfacial surface was provided, each enzyme's surface interacted not only with the substrate but also with other enzymes and the cross-linking agent.
Additionally, one-way ANOVA was carried out to analyse the additive selection statistically. For all groups the one-way ANOVA gave p < 0.01, with F greater than F critical, meaning that the groups are significantly different. Detailed pairwise t-tests were then carried out to examine the differences between each pair of groups. BSA was significantly different from all the other groups. SDS differed significantly from BSA and from the no-additive group, but not from heptane. Likewise, heptane was significantly different from BSA, but not from the SDS and no-additive groups.
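A minimal sketch of the statistical workflow described (one-way ANOVA followed by pairwise t-tests), with invented triplicate values; the use of Welch's t-test is an assumption, since the paper does not state which variant was applied:

```python
from scipy import stats

# Illustrative triplicate specific activities (U/mg) per additive group;
# not the measured values from the study.
groups = {
    "none":    [2.1, 2.4, 2.7],
    "BSA":     [3.2, 3.5, 3.7],
    "SDS":     [1.8, 2.0, 2.1],
    "heptane": [2.0, 2.2, 2.4],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise Welch t-tests between each pair of groups:
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]],
                               equal_var=False)
        print(names[i], "vs", names[j], f"p = {p:.4f}")
```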
The specific activity of CLEA-protease without any additive was 2.42 ± 0.401 U/mg, much less than that of the free protease (13.75 ± 0.380 U/mg), because cross-linking leaves fewer active sites accessible to the substrate. After adding BSA to the CLEA (specific activity of 3.46 ± 1.215 U/mg), the activity increased by 42.97 %, possibly because more amino groups were added to the surface of the enzyme. Thus, BSA was selected as the additive for preparing CLEA-protease. Previous studies have shown similar findings: 28.7 % recovered activity of CLEA was observed after adding 0.1 mg of BSA (Cabana et al. 2007); the addition of 10 mg BSA per 100 mg of enzyme increased the recovered activity from 24 to 82 % (Dong et al. 2010); and Shah et al. (2006) observed that the prepared CLEA recovered 100 % of its activity (compared to the free enzyme) upon the addition of BSA, versus 0.4 % without BSA. Moreover, Torres et al. (2013) reported that the addition of BSA as a feeder improved the cross-linking step and allowed better stabilization of the CLEAs produced, with a recovered activity of 31.3 % when 75 mg of BSA was added.
Optimization of CLEA-protease preparation
Design Expert 6.0.8 was used to design the optimization model for this study. Table 1 shows the face-centred central composite design of the Response Surface Methodology used to optimize the preparation parameters of CLEA-protease by varying the concentrations of glutaraldehyde, acetone and BSA. The highest recovered CLEA-protease activity, 33.25 %, was obtained in Run 5 (Table 1), at 65 mM glutaraldehyde, 45 % (v/v) acetone and 0.113 mM BSA.
Table 1 Experimental design for the optimization of CLEA-protease preparation parameters using a face-centred central composite design (FCCCD) and response surface methodology (RSM); all experiments were done in triplicate
According to the analysis of variance (ANOVA), the model was significant: there is only a 0.81 % chance that a Model F-value of 7.227 could occur due to noise, and the mean square of the model was 1.594. The correlation coefficient was 0.91. In general, the experimental data showed that this model can be used to navigate the design space.
In the statistical model of Eq. (3), positive coefficients have a positive effect on the response and negative coefficients a negative effect, where A is the glutaraldehyde, B the acetone and C the BSA concentration.
$$\text{Recovered activity of CLEA-protease (\%)} = 27.83 - 2.08\,A + 2.92\,B - 0.059\,C - 3.23\,A^{2} - 4.89\,B^{2} - 6.29\,C^{2} + 0.62\,AB - 2.19\,AC - 0.33\,BC$$
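Eq. (3) can be evaluated and maximized numerically; a minimal sketch follows, under the assumption (not stated explicitly above) that A, B and C are coded factor levels in [−1, 1]:

```python
import numpy as np
from scipy.optimize import minimize

def predicted_recovery(x):
    """Eq. (3), with A, B, C assumed to be coded factor levels in [-1, 1]."""
    A, B, C = x
    return (27.83 - 2.08*A + 2.92*B - 0.059*C
            - 3.23*A**2 - 4.89*B**2 - 6.29*C**2
            + 0.62*A*B - 2.19*A*C - 0.33*B*C)

# Maximize by minimizing the negative response over the coded cube:
res = minimize(lambda x: -predicted_recovery(x), x0=np.zeros(3),
               bounds=[(-1, 1)] * 3)
print("optimum (coded levels):", res.x)
print("predicted recovery: %.2f %%" % predicted_recovery(res.x))
```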
Figure 2 shows the effect of each parameter involved in the preparation of CLEA-protease. The concentration of acetone as precipitant governs the formation of the aggregates, which is the initial step before cross-linking. Too little precipitant is not enough to form the aggregates, while a high acetone concentration reduces the protein residues in the supernatant, indicating that more free protease precipitates into the insoluble enzyme aggregates. However, a high acetone concentration may also lead to protein denaturation, which is responsible for activity loss (Wang et al. 2011). Wang et al. (2011) reported that the drop in CLEA activity is related to the polarity of the organic solvent used for precipitation, where log P is the partition coefficient of the compound between the octanol and water phases (log P for acetone is −0.24). Water-miscible solvents such as acetone can strip the protein-bound water that is essential to maintain the structure and function of the enzyme (Serdakowski and Dordick 2008).
Response surface plots of CLEA-protease specific activity showing the interaction of a glutaraldehyde and acetone at a constant BSA concentration of 0.12 mM; b glutaraldehyde and BSA at a constant acetone concentration of 45 % (v/v); c acetone and BSA at a constant glutaraldehyde concentration of 65 mM
The suitable concentration of glutaraldehyde for cross-linking was found to be 65 mM. At higher glutaraldehyde concentrations the observed activity was low; this can be explained by excessive cross-linking, which causes the enzyme to lose some of the flexibility required for its activity (Yu et al. 2013), or produces a CLEA so strongly cross-linked that it has a strong diffusion resistance (Dong et al. 2010). The activity of CLEAs prepared with glutaraldehyde generally depends strongly on the type of enzyme and on the glutaraldehyde concentration, owing to the high reactivity and small size of glutaraldehyde. The activity of CLEAs drops relative to the free enzyme when the catalytic residues of the enzyme react with glutaraldehyde (Kim et al. 2013). Hence, the use of another protein such as BSA as a "feeder" has been proposed to enhance CLEA activity by increasing the amount of amino residues (Garcia-Galan et al. 2011).
As shown in Fig. 2b and c, as the BSA concentration increases the enzyme molecules attach tightly to it, which effectively lowers the effect of the precipitant. In the presence of an excess of BSA, however, the enzyme does not attach to all of the BSA molecules, and enzyme aggregates are still formed. Nonetheless, the addition of BSA as a co-feeder allowed the cross-linking step to act effectively and led to better stabilization of the CLEA-protease produced (Torres et al. 2013).
Many studies indicate that RSM and FCCCD are well suited to optimizing the preparation of CLEAs. For example, according to Khanahmadi et al. (2015), CLEA-lipase activity was increased 2.5-fold after optimization using response surface methodology and a faced central composite design (FCCD), compared with the activity obtained by the one-factor-at-a-time method. In another study, the preparation of hybrid magnetic cross-linked enzyme aggregates was successfully optimized using a central composite design (CCD), which helped explain the interactions among the main preparation factors and gave a recovered activity of 43.27 % (Cui et al. 2014). Similarly, the preparation of CLEA-naringinase was optimized using Response Surface Methodology with a central composite rotatable design (CCRD) (Ribeiro and Rabaça 2011), and Response Surface Methodology has also been applied to optimize the preparation of a CLEA of hydroxynitrile lyase purified from seeds of Prunus dulcis (Yildirim et al. 2014).
Effect of pH and pH stability of CLEA-protease
pH is one of the most influential parameters altering enzyme activity in an organic medium. The catalytic activity of the immobilized protease in the hydrolysis of the substrate was investigated at different pH values (4.5–9.5). As shown in Fig. 3a, the optimum pH of the immobilized and free protease was 6.8 and 8.0, respectively. This agrees with previous studies: Hormigo et al. (2012) found that optimal CLEA activity was obtained at pH 7.0, shifted relative to the pH optimum of 8.0 for the soluble free enzyme, and Xu et al. (2013) reported that both enzyme forms (free and immobilized) were similarly stable at pH ≤ 6.0, with both having their maximum at pH 6.0. In other words, immobilization of the protease shifted the optimum pH curve towards neutral pH. According to Hormigo et al. (2012), when the CLEA structure is enclosed by BSA protein molecules, the pH in the microenvironment of the enzyme's active site is shifted towards more alkaline values.
a Effect of reaction pH, b pH stability, c effect of reaction temperature and d thermal stability of free protease (open circle) and CLEA-protease (filled circle). The highest measured activity is taken as 100 % and the other values are expressed relative to it
The free protease characterized in this experiment was functionally active in the pH range 6.8–8.0, while the immobilized protease had a wider pH range, as depicted in Fig. 3b. This indicates its potential applicability in several applications, since CLEAs can resist harsh changes in the environment. It should be taken into consideration that the highest activity of CLEA-protease is achieved at around neutral pH.
Effect of temperature and thermal stability of CLEA-protease
The resistance of an immobilized enzyme to temperature changes is a significant and major advantage in the practical applications of enzymes. The temperature dependence of the relative activity of the immobilized and free enzymes was examined over the interval 25–65 °C; the results are shown in Fig. 3c. The optimal temperature identified for both the free and the immobilized protease was around 45 °C. A drop in the activity of the free enzyme was observed at temperatures above 45 °C, while the CLEA-protease resisted the temperature increase and retained higher activity. Free enzymes can easily undergo denaturation at higher temperature, whereas immobilized enzymes are protected, probably by their rigid conformation, and are thus capable of retaining their catalytic activity (Tutar et al. 2009).
One of the primary reasons for enzyme immobilization is the anticipated increase in stability towards various deactivating conditions, which arises from the limited conformational mobility of the molecules after immobilization. The resistance of immobilized enzymes to temperature over time is a major advantage for their potential applications. The thermal stability of the immobilized protease was indeed enhanced significantly: after 2 h of temperature exposure, the residual activity was about 20 % for the free enzyme and 60 % for the immobilized protease, as shown in Fig. 3d. Xu et al. (2013) stated that CLEAs are more thermally stable than the free enzymes as a result of the rigid structure created by the cross-linking step. Moreover, Kim et al. (2013) reported that CLEAs showed four times higher thermal stability than the free enzyme; this increase is attributed to the inter- and intra-molecular covalent cross-links, which prevent conformational change in the CLEAs (Zhu et al. 2001).
Scanning electron microscopy (SEM)
Although the particle shape and size of CLEAs are still largely unexplored (Singh et al. 2013), Schoevaart et al. (2004) reported two types of CLEA structure that can appear under SEM: a spherical appearance (Type 1), in which the aggregates form typical balls, and a less-structured form (Type 2), in which the aggregates cluster into less-defined structures. In this study, the CLEA-protease from channel catfish viscera is considered Type 2 (data not shown).
CLEA-protease was stored in acetone in order to maintain its environment and avoid the drastic structural changes that might occur in different environments. Figure 4 shows that the relative activities of the CLEA stored in acetone for 4 days were 100, 86, 85 and 70 %.
The solvent stability of CLEAs is important to determine, since several studies have reported limitations of using organic solvents with CLEAs (Yu et al. 2006; Kartal et al. 2011). Polar solvents such as acetone can interact with the enzyme and its associated water molecules, causing a drastic reduction in catalytic activity (Singh et al. 2013). However, Singh et al. (2013) also mentioned several potential advantages of non-aqueous solvents, such as enhanced solubility of substrates (for instance lipids and phospholipids), novel chemistry in synthetic applications, altered substrate specificity, easy product recovery and reduced microbial contamination. To further improve the organic-solvent stability of the enzyme, Kartal et al. (2011) suggested mixing the acetone with another buffer.
Reusability test
One of the most important aims of enzyme immobilization is reusability, which allows multiple uses of the same biocatalyst (Torabizadeh et al. 2014). CLEAs are very promising in this respect, since they can simply be separated from the reaction mixture, washed and reused. Moreover, CLEAs are insoluble in the reaction medium, which avoids product contamination, simplifies separation and facilitates downstream processing (Pan et al. 2014). This is an essential requirement for their application in industrial processes.
The results of the recycling experiments are shown in Fig. 5; the three runs with the highest CLEA-protease activities in Table 1 were selected for the reusability test. After each cycle, the CLEAs were washed with acetone and stored at 4 °C for 2 days. The activity decreased steadily over six cycles.
Recyclability of CLEA-protease
The residual activity of the first cycle was standardized as 100 %. In the second cycle, runs 5 and 8 maintained 82 % of the activity, while run 10 lost 54 % of its activity. The reason behind this loss could be the mechanical forces acting on the CLEA-protease during the washing and centrifugation procedures (Matijošyte et al. 2010). Another explanation is that CLEAs suffer several drawbacks, such as poor dispersion in solution and a tendency to clump (Xu and Yang 2013). However, run 10 lost only around 10 % of its activity over the next three cycles, unlike the other two runs, which showed a steady decrease in activity.
The results suggest that the activity loss observed in the recycling experiments results from leaching of enzyme into the organic solvent. Another reason for the loss of activity could be enzyme denaturation each time the assay was run. The average residual activity of the three runs was 28 % after six cycles; this result might be improved if the washing and centrifugation processes were carried out more carefully.
The supernatant of the storage solution showed no protease activity; in other words, no leakage of enzymes from the CLEA was detected with this procedure. This is probably because the preparation mixture was centrifuged and washed after each use to ensure separation of the CLEA from the unwanted solution.
Determination of kinetic parameters
Casein was used as the substrate at increasing concentrations up to enzyme saturation: substrate hydrolysis increased up to 1.8 % casein, after which the enzyme was saturated. Table 2 shows that the kinetics of the free protease were well fitted by all four models. The highest R² value, 0.997, was obtained with the Hanes–Woolf plot, a good indication that the Michaelis–Menten model described the substrate hydrolysis by the free protease very well.
Table 2 Summary of Michaelis–Menten expression for hydrolysis of protease (free and CLEA)
On the other hand, the kinetic parameters obtained for CLEA-protease were markedly different from those obtained for the free protease. For all four models, V_max increased for the CLEAs compared to the free protease values. The coefficient of determination (R²) was again highest for the Hanes–Woolf model, as indicated in Table 2.
The kinetic parameters of CLEA-protease and free protease were thus determined by measuring initial reaction rates at varying amounts of casein. The K_m value of CLEA-protease was slightly higher than before immobilization, and the V_max values increased after immobilization compared to the free protease. The results indicate that, owing to substrate diffusion resistance, cross-linking immobilization lowered the accessibility of the substrate to the active site of the immobilized enzyme, which decreased the affinity of the protease. Note also that, as mentioned earlier, the optimum pH values for CLEA-protease and free protease were 6.8 and 8, respectively, which affects the relative performance of the two enzyme forms. These findings are relevant to the behaviour of the two enzymes (CLEA and free) in any application at neutral pH.
Similar findings of increased K_m after immobilization have been reported (Rehman et al. 2013; Lei and Bi 2007). The CLEA structure also has less interfacial interaction with the substrate, due to cross-linking, than the free enzyme in liquid form. V_max measures how fast the enzyme can hydrolyze a fully saturating substrate; an increase in V_max after immobilization therefore means that more substrate is converted to product per unit time. The catalytic efficiency, denoted here by V_max/K_m, of the immobilized protease was higher than that of the free enzyme (1.992 vs. 0.946 min−1), in agreement with Yu et al. (2013).
For the first time, protease extracted from the viscera of channel catfish (Ictalurus punctatus) was successfully immobilized by the CLEA technique. The effect of each factor involved in CLEA-protease production was discussed, and the impact of different additives on the CLEA-protease activity was evaluated. Bovine serum albumin had the greatest effect on CLEA-protease compared to sodium dodecyl sulphate and heptane; its addition as a feeder supported the CLEA mechanically and increased its activity. Experimental design and statistical analysis were used to verify the optimization model. The optimal recovered activity of CLEA-protease was 33.24 % at 65 mM glutaraldehyde, 45 % (v/v) acetone and 0.113 mM BSA. Moreover, the produced CLEA-protease maintained more than 28 % of its activity after six cycles. Finally, the Hanes–Woolf model was the best-fitting model for determining the kinetic parameters of the hydrolysis reaction. Overall, the results of this study suggest that CLEA-protease can be a promising tool for increasing the stability and reusability of protease in biotechnological applications.
Aytar BS, Bakir U (2008) Preparation of cross-linked tyrosinase aggregates. Process Biochem 43:125–131
Barbosa O, Ortiz C, Berenguer-Murcia A, Torres R, Rodrigues RC, Fernandez-Lafuente R (2014) Glutaraldehyde in bio-catalysts design: a useful crosslinker and a versatile tool in enzyme immobilization. R Soc Chem Adv. 4:1583
Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein–dye binding. Anal Biochem 72:248–254
Brady D, Jordaan J (2009) Advances in enzyme immobilization. Biotechnol Lett 31:1639–1650
Cabana H, Jones JP, Agathos SN (2007) Preparation and characterization of cross-linked laccase aggregates and their application to the elimination of endocrine disrupting chemicals. J Biotechnol 132:23–31
Cao L, Schmid RD (2005) Carrier-bound immobilized enzymes: principles, application and design. Wiley-VCH Verlag GmbH and Co, Weinheim
Capiralla H, Hiroi T, Hirokawa T, Maeda S (2002) Purification and characterization of a hydrophobic amino acid-specific endopeptidase from Halobacterium halobium S9 with potential application in debittering of protein hydrolysates. Process Biochem 38:571–579
Cruz J, Barbosa O, Rodrigues RC, Fernandez-Lafuente R, Torres R, Ortiz C (2012) Optimized preparation of CALB-CLEAs by response surface methodology: the necessity to employ a feeder to have an effective crosslinking. J Mol Catal B Enzym 80:7–14
Cui JD, Cui LL, Zhang SP, Zhang YF, Su ZG, Ma GH (2014) Hybrid magnetic cross-linked enzyme aggregates of phenylalanine ammonia lyase from Rhodotorula glutinis. PLoS One 9(5):e97221
Dong T, Zhao L, Huang Y, Tan X (2010) Preparation of cross-linked aggregates of aminoacylase from Aspergillus melleus by using bovine serum albumin as an inert additive. Biores Technol 101:6569–6571
Garcia-Galan C, Berenguer-Murcia A, Fernandez-Lafuente R, Rodrigues RC (2011) Potential of different enzyme immobilization strategies to improve enzyme performance. Adv Synth Catal 353:2885–2904
Gupta R, Beg Q, Lorenz P (2002) Bacterial alkaline proteases: molecular approaches and industrial applications. Appl Microbiol Biotechnol 59(1):15–32
Gupta P, Dutt K, Misra S, Raghuwanshi S, Saxena RK (2009) Characterization of cross-linked immobilized lipase from thermophilic mould Thermomyces lanuginosa using glutaraldehyde. Biores Technol 100:4074–4076
Hormigo D, García-Hidalgo J, Acebal C, Mata I, Arroyo M (2012) Preparation and characterization of cross-linked enzyme aggregates (CLEAs) of recombinant poly-3-hydroxybutyrate depolymerase from Streptomyces exfoliates. Biores Technol 115:177–182
JMP (2005) Design of experiments, release 6. SAS Institute Inc., Cary
Kartal F, Janssen MHA, Hollmann F, Sheldon RA, Kılınc A (2011) Improved esterification activity of Candida rugosa lipase in organic solvent by immobilization as cross-linked enzyme aggregates (CLEAs). J Mol Catal B Enzym 71:85–89
Khanahmadi S, Yusof F, Amid A, Mahmod SS, Mahat MK (2015) Optimized preparation and characterization of CLEA-lipase from cocoa pod husk. J Biotechnol 202:153–161
Kim MH, Park S, Kim YH, Won K, Lee SH (2013) Immobilization of formate dehydrogenase from Candida boidinii through cross-linked enzyme aggregates. J Mol Catal B Enzym 97:209–214
Lei Z, Bi S (2007) The silica-coated chitosan particle from a layer-by-layer approach for pectinase immobilization. Enzym Micro Technol 40:1442–1447
Li S, He B, Bai Z, Ouyang P (2009) A novel organic solvent-stable alkaline protease from organic solvent-tolerant Bacillus licheniformis YP1A. J Mol Catal B Enzym 56(2–3):85–88
Mahmod SS, Yusof F, Jami MS (2014) Extraction and screening of various hydrolases from Malaysian channel catfish (Ictalurus punctatus) viscera. Int J Chem Environ Eng 2(5):79–82
Maldonado-Valderrama J, Patino MR (2010) Interfacial rheology of protein–surfactant mixtures. Curr Opin Colloid Interface Sci 15:271–282
Matijošyte I, Arends IWCE, de Vries S, Sheldon RA (2010) Preparation and use of cross-linked enzyme aggregates (CLEAs) of laccases. J Mol Catal B Enzym 62(2):142–148
Mogensen JE, Kleinschmid JH, Schmidt MA, Otzen DE (2005) Misfolding of a bacterial autotransporter. Protein Sci 14(11):2814–2827
Nelson DL, Cox MM (2004) Lehninger principles of biochemistry. W. H. Freeman, New York, pp 47–75
Pan J, Dang N-D, Zheng G-W, Cheng B, Ye Q, Xu J-H (2014) Efficient production of l-menthol in a two-phase system with SDS using an immobilized Bacillus subtilis esterase. Biores Bioprocess 1:12
Parmar A, Kumar H, Marwaha S, Kennedy JF (2000) Advances in enzymatic transformation of penicillins to 6-aminopenicillanic acid (6-APA). Biotechnol Adv 18(4):289–301
Rehman HU, Aman A, Silipo A, Qader SAU, Molinaro A, Ansar A (2013) Degradation of complex carbohydrate: immobilization of pectinase from Bacillus licheniformis KIBGE-IB21 using calcium alginate as a support. Food Chem 139:1081–1086
Ribeiro MHL, Rabaça M (2011) Cross-linked enzyme aggregates of naringinase: novel biocatalysts for naringin hydrolysis. Enzyme Res 2011:8 (Article ID 851272)
Sangeetha K, Abraham TE (2008) Preparation and characterization of cross-linked enzyme aggregates (CLEA) of subtilisin for controlled release applications. Int J Biol Macromol 43:314–319
Schoevaart R, Wolbers MW, Golubovic M, Ottens M, Kieboom APG, van Rantwijk F, van der Wielen LAM, Sheldon RA (2004) Preparation, optimization, and structures of cross-linked enzyme aggregates (CLEAs). Biotechnol Bioeng 87(6):754–762
Serdakowski AL, Dordick JS (2008) Enzyme activation for organic solvents made easy. Trends Biotechnol 26:48–54
Shah S, Sharma A, Gupta MN (2006) Preparation of cross-linked enzyme aggregates by using bovine serum albumin as a proteic feeder. Anal Biochem 351:207–213
Sheldon R (2007) Enzyme immobilization: the quest for optimum performance. Adv Synth Catal 349:1289–1307
Sheldon RA, Schoevaart R, Langen LM (2005) Cross-linked enzyme aggregates (CLEAs): a novel and versatile method for enzyme immobilization. Biocatal Biotransform 23:141–147
Sigma-Aldrich (2013) Universal protease activity assay: casein as a substrate. http://www.sigmaaldrich.com/life-science/learning-center/life-science-video/universal-protease.html
Singh RK, Tiwari MK, Singh R, Lee J-K (2013) From protein engineering to immobilization: promising strategies for the upgrade of industrial enzymes. Int J Mol Sci 14:1232–1277
Skovgaard J, Bak CA, Snabe T, Sutherland DS, Laursen BS, Kragh KM, Besenbacher F, Poulsen CH, Shipovskov S (2010) Implementation of cross-linked enzyme aggregates of proteases for marine paint applications. J Mater Chem 2010(20):7626–7629
Soares VF, Castilho LR, Bon EP, Freire DM (2005) High-yield Bacillus subtilis protease production by solid-state fermentation. Appl Biochem Biotechnol 121:311–319
Tang XM, Lakay FM, Shen W, Shao W, Fang H, Prior BA, Wang Z, Zhuge J (2004) Purification and characterization of an alkaline protease used in tannery industry from Bacillus licheniformis. Biotechnol Lett 26:1421–1424
Torabizadeh H, Tavakoli M, Safari M (2014) Immobilization of thermostable α-amylase from Bacillus licheniformis by cross-linked enzyme aggregates method using calcium and sodium ions as additives. J Mol Catal B Enzym 108:13–20
Torres MPG, Foresti ML, Ferreira MI (2013) Effect of different parameters on the hydrolytic activity of cross-linked enzyme aggregates (CLEAs) of lipase from Thermomyces lanuginosus. Biochem Eng J 72:18–23
Tutar H, Yilmaza E, Pehlivan E, Yilmaz M (2009) Immobilization of Candida rugosa lipase on sporopollenin from Lycopodium clavatum. Int J Biol Macromol 45:315–320
Wang M, Qi W, Jia C, Ren Y, Su R, He Z (2011) Enhancement of activity of cross-linked enzyme aggregates by a sugar-assisted precipitation strategy: technical development and molecular mechanism. J Biotechnol 156:30–38
Xu DY, Yang Z (2013) Cross-linked tyrosinase aggregates for elimination of phenolic compounds from wastewater. Chemosphere 92:391–398
Yildirim D, Tükel SS, Alagöz D (2014) Crosslinked enzyme aggregates of hydroxynitrile lyase partially purified from Prunus dulcis seeds and its application for the synthesis of enantiopure cyanohydrins. Biotechnol Prog 30(4):818–827
Yu HW, Chen H, Wang X, Yang YY, Ching CB (2006) Cross-linked enzyme aggregates (CLEAs) with controlled particles: application to Candida Rugosa lipase. J Mol Catal B Enzym 43:124–127
Yu CY, Li XF, Lou WY, Zong MH (2013) Cross-linked enzyme aggregates of Mung bean epoxide hydrolases: a highly active, stable and recyclable biocatalyst for asymmetric hydrolysis of epoxides. J Biotechnol 166:12–19
Yusof F, Firdaus MR, Jimat DN (2013) Preparation of cross-linked enzyme aggregate (CLEA)-lipase from skim latex serum of Hevea brasiliensis. ICBioE2013, Kuala Lumpur, pp 2–4
Zhou J (2009) Immobilization of alliinase and its application: flow-injection enzymatic analysis for alliin. Afr J Biotechnol 8(7):1337–1342
Zhu K, Jutila A, Tuominen EKJ, Patkar SA, Svendsen A, Kinnunen PK (2001) Impact of the tryptophan residues of Humicola lanuginosa lipase on its thermal stability. Biochim Biophys Acta 1547:329–338
SSM drafted the manuscript and made substantial contributions to acquisition, experimental work, analysis and interpretation of data; FY and MSJ designed the study and were responsible for the revision of the manuscript; SK provided experimental guidance. All authors read and approved the final manuscript.
Safa Senan Mahmod finished her Masters degree in March 2015 in Biotechnology Engineering at the International Islamic University Malaysia under supervision of Prof. Faridah Yusof. Her research was on the co-immobilization of multi-purpose biocatalyst using cross-linked enzyme aggregates multi-CLEA. Currently, Safa is a PhD candidate in Chemical Engineering at the National University of Malaysia.
Professor Dr. Faridah Yusof started her career as a lecturer at the Department of Chemistry, UTM, Kuala Lumpur in 1983. In 1988, she joined the Rubber Research Institute of Malaysia (RRIM) as a Research Officer working as a Biochemist. Prof Faridah's Ph.D. research was on the proteinaceous factors that have purported roles in the biosynthesis of rubber molecules in latex. In 2004, she joined the Department of Biotechnology Engineering IIUM, as an Associate Professor. Her main area of research includes biotransformation, bioprocess and bioseparation engineering, purification technology, protein/peptides research, enzyme technology, latex biochemistry and natural rubber/carbon nanotubes nanocomposites. She has authored and co-authored papers and written book chapters in her areas of interests. In 2006, Prof Faridah was appointed the Deputy Dean of the Research Management Centre, IIUM, handling the Research and Innovation Unit.
Associate Professor Dr. Mohammed Saedi Jami, an Ethiopian national, is a Chemical Engineer and lecturer in the Department of Biotechnology Engineering, Faculty of Engineering, International Islamic University Malaysia (IIUM). He earned a BSc in Chemical Engineering from Addis Ababa University, Ethiopia, and an MSc and PhD in Chemical Engineering from Nagoya University, Japan, and was a postdoctoral researcher (JSPS) at Suzuka National College, Japan. He is a member of The Filtration Society Japan, the Malaysian Water Association and the Malaysian Society for Engineering and Technology. His areas of expertise are water and wastewater treatment (municipal and industrial), environmental engineering, artificial neural network modeling of environmental systems, membrane processes, bioseparation processes, immobilization and transport phenomena. He has authored and co-authored over 170 publications in book chapters, journals, conferences, etc.
Soofia Khanahmadi was born in Iran, in 1987. She received her degree in Cellular and Molecular Biology, Microbiology in 2010 in Iran. In 2015, she obtained a master's degree in Biochemical and Biotechnology Engineering at the International Islamic University Malaysia, under supervision of Prof. Dr. Faridah Yusof. Her master research was on Immobilization of lipase from cocoa pod husk with cross-linked enzyme aggregate technology to be utilized in biodiesel production reaction.
The authors are grateful to the Department of Biotechnology Engineering of the International Islamic University Malaysia (IIUM) for providing the laboratory facilities.
Department of Biotechnology Engineering, International Islamic University Malaysia, P.O. Box 10, Gombak, 50728, Kuala Lumpur, Malaysia
Safa Senan Mahmod, Faridah Yusof, Mohamed Saedi Jami & Soofia Khanahmadi
Safa Senan Mahmod
Faridah Yusof
Mohamed Saedi Jami
Soofia Khanahmadi
Correspondence to Safa Senan Mahmod.
Mahmod, S.S., Yusof, F., Jami, M.S. et al. Optimizing the preparation conditions and characterization of a stable and recyclable cross-linked enzyme aggregate (CLEA)-protease. Bioresour. Bioprocess. 3, 3 (2016). https://doi.org/10.1186/s40643-015-0081-5
Cross-linked enzyme aggregate
Enzyme specific activity
Fish viscera
Michaelis–Menten kinetics | CommonCrawl |
Sample records for code cyrano3 running
Description of modelling to be implemented in the fuel rod thermomechanics code Cyrano3
Baron, D; Bouffioux, P
CYRANO3 is the new EDF thermomechanical code developed to evaluate the overall fuel rod behavior under irradiation. In that context, this paper presents the phenomena to be simulated and the correlations adopted for modelling purposes. The empirical models presented are taken from the CYRANO2 code and a compilation of the relevant literature. The present revision corrects and supplements version B on the basis of its use during the software coding phase from January 1991 to May 1993. (authors). figs., tabs., 120 refs.
A new coupling of the 3D thermal-hydraulic code THYC and the thermo-mechanical code CYRANO3 for PWR calculations
Marguet, S.D. [Electricite de France (EDF), 92 - Clamart (France)
Among all parameters, the fuel temperature has a significant influence on the reactivity of the core, because of the Doppler effect on cross-sections. Most neutronic codes use a straightforward method to calculate an average fuel temperature used in their specific feedback models. For instance, EDF's neutronic code COCCINELLE uses Rowland's formula, based on the temperatures at the centre and the surface of the pellet. COCCINELLE is coupled to the 3D thermal-hydraulic code THYC, which calculates the Doppler temperature with its standard thermal model. In order to improve the accuracy of such calculations, we have developed the coupling of our two latest codes in thermal-hydraulics (THYC) and thermo-mechanics (CYRANO3). THYC calculates two-phase flows in pipes or rod bundles and is used for transient calculations such as steam-line break and boron dilution accidents, DNB predictions, and steam generator and condenser studies. CYRANO3 calculates most of the phenomena that take place in the fuel, such as: 1) heat transfer induced by nuclear power; 2) thermal expansion of the fuel and the cladding; 3) release of gaseous fission products; 4) mechanical interaction between the pellet and the cladding. These two codes are now qualified in their own fields, and the coupling, using Parallel Virtual Machine (PVM) libraries customized in a home-made, easy-to-use package called CALCIUM, has been validated on 'low' configurations (no thermal expansion, constant thermal characteristics) and used on accidental transients such as rod ejection and loss-of-coolant accident. (K.A.) 7 refs.
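The Rowland's-type average fuel temperature mentioned in the record above is simple enough to sketch. The snippet below is a minimal illustration, assuming the commonly quoted Rowlands weighting T_eff = (4/9)·T_centre + (5/9)·T_surface; the function name and sample values are hypothetical, and this is not a statement of COCCINELLE's exact model.

```python
def rowlands_doppler_temperature(t_centre_k: float, t_surface_k: float) -> float:
    """Effective fuel (Doppler) temperature from pellet centre and surface
    temperatures, using the commonly quoted Rowlands weighting
    T_eff = (4/9)*T_centre + (5/9)*T_surface (assumed here for illustration)."""
    return (4.0 / 9.0) * t_centre_k + (5.0 / 9.0) * t_surface_k

# Hypothetical pellet temperatures (K): hot centre, cooler surface.
print(rowlands_doppler_temperature(1500.0, 700.0))  # ~1055.6 K
```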
Running codes through the web
Clark, R.E.H.
Dr. Clark presented a report and demonstration of running atomic physics codes through the WWW. The atomic physics data are generated from Los Alamos National Laboratory (LANL) codes that calculate electron-impact excitation, ionization, photoionization, and autoionization, and the inverse processes through detailed balance. Samples of Web interfaces, input and output are given in the report
Running code as part of an open standards policy
Shah, Rajiv; Kesan, Jay
Governments around the world are considering implementing or even mandating open standards policies. They believe these policies will provide economic, socio-political, and technical benefits. In this article, we analyze the failure of the Massachusetts's open standards policy as applied to document formats. We argue it failed due to the lack of running code. Running code refers to multiple independent, interoperable implementations of an open standard. With running code, users have choice ...
Modelling 3-D mechanical phenomena in a 1-D industrial finite element code: results and perspectives
Guicheret-Retel, V.; Trivaudey, F.; Boubakar, M.L.; Masson, R.; Thevenin, Ph.
Assessing fuel rod integrity in PWR reactors must reconcile two opposing goals: a one-dimensional finite element code (axial revolution symmetry) is needed to provide industrial results at the scale of the reactor core, while the main risk of cladding failure [e.g. pellet-cladding interaction (PCI)] stems from fully three-dimensional phenomena. First, parametric three-dimensional elastic calculations were performed to identify the parameters relevant to PCI (fragment number, pellet-cladding contact conditions, etc.). The axial fragment number as well as the friction coefficient are shown to play a major role in PCI, as opposed to the other parameters. Next, the main limitations of the one-dimensional hypothesis of the finite element code CYRANO3 are identified. To overcome these limitations, both two- and three-dimensional emulations of CYRANO3 were developed. These developments are shown to significantly improve the results provided by CYRANO3. (authors)
RunJumpCode: An Educational Game for Educating Programming
Hinds, Matthew; Baghaei, Nilufar; Ragon, Pedrito; Lambert, Jonathon; Rajakaruna, Tharindu; Houghton, Travers; Dacey, Simon
Programming promotes critical thinking, problem solving and analytic skills through creating solutions that can solve everyday problems. However, learning programming can be a daunting experience for a lot of students. "RunJumpCode" is an educational 2D platformer video game, designed and developed in Unity, to teach players the…
Running the source term code package in Elebra MX-850
Guimaraes, A.C.F.; Goes, A.G.A.
The source term code package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials released from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it is written in FORTRAN 77 it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-8500 version of STCP contains five codes: MARCH 3, TRAPMELT, TCCA, VANESSA and NAVA. The example presented in this report considers a small LOCA in a PWR-type reactor. (M.I.)
Strong normalization by type-directed partial evaluation and run-time code generation
Balat, Vincent; Danvy, Olivier
We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.
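The core of type-directed partial evaluation is a pair of type-indexed functions, reify and reflect, that map between semantic values and residual terms. The sketch below is a minimal illustration in Python rather than Caml, under the usual simply-typed presentation; the term representation and names are assumptions for illustration, not the paper's implementation.

```python
import itertools

_fresh = itertools.count()

def gensym() -> str:
    return f"x{next(_fresh)}"

# Types are 'base' or ('arrow', domain, codomain).
# Residual terms are ('var', x), ('lam', x, body), ('app', f, a).

def reify(ty, val):
    """Map a semantic value of type ty to its long beta-eta-normal form."""
    if ty == 'base':
        return val  # at base type, the value already is a residual term
    _, dom, cod = ty
    x = gensym()
    # Eta-expand: build a lambda whose body normalizes val applied to a
    # reflected fresh variable.
    return ('lam', x, reify(cod, val(reflect(dom, ('var', x)))))

def reflect(ty, expr):
    """Map a residual expression of type ty back to a semantic value."""
    if ty == 'base':
        return expr
    _, dom, cod = ty
    return lambda v: reflect(cod, ('app', expr, reify(dom, v)))

# Normalizing the identity at type base -> base yields ('lam', 'x0', ('var', 'x0')).
print(reify(('arrow', 'base', 'base'), lambda v: v))
```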
Project Everware - running other people's code doesn't have to be painful
CERN. Geneva
Everware is a project that allows you to edit and run someone else's code with one click, even if that code has complicated setup instructions. The main aim of the project is to encourage reuse of software between researchers by making it easy and risk free to try out someone else's code.
A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
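One simple way to keep two simulations in lock step on one machine, in the spirit of the periodic data exchanges described above, is to run the legacy code as a child process and exchange one record of data per time step over its standard streams. The sketch below is a generic illustration, not NASA's actual mechanism; the executable name and record format are hypothetical.

```python
import subprocess

# Hypothetical legacy executable that reads one input record per line
# and writes one output record per line (the lock-step protocol is assumed).
proc = subprocess.Popen(
    ["./legacy_inlet"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

boundary_condition = 101325.0  # hypothetical initial boundary value (Pa)
for step in range(100):
    proc.stdin.write(f"{step} {boundary_condition}\n")  # send this step's inputs
    proc.stdin.flush()
    reply = proc.stdout.readline().split()              # block until the legacy
    boundary_condition = float(reply[1])                # code returns its outputs

proc.stdin.close()
proc.wait()
```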
Folklore in bureaucracy code: Running a music event
Krstanović-Lukić Miroslava
Full Text Available A piece of musical folk creation is a construction that functions as a paradigm both within the bureaucratic system and in the public arena. Such a work is a mechanical concept, which defines heritage as a construction of authenticity saturated with elements of folk, national culture. It is also subject to certain conventions in the system of regulations; namely, it is part of the administrative code. The use of the folk-created work as paradigm and legislation is realized through an organizational apparatus; that is, it becomes entertainment, a spectacle. This paper analyzes the functioning of the organizational machinery of a folk spectacle, starting with the government authorities, local self-management and the spectacle's administrative committees. To illustrate this phenomenon, the paper presents the development of a trumpet-playing festival in Dragačevo. This particular festival establishes a cultural, economic and political order with a clear and defined division of power. The analysis shows that the folk event in question, through its programs and activities, represents a scene and arena of individual and group interests. Organizational interactions are recognized in binary oppositions: sovereignty/dependency, official/unofficial, dominance/subordination, innovative/inherited, common/different, needed/useful, original/copy, one's own/belonging to someone else.
Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer
Caon, M.; Bibbo, G.; Pattison, J.
The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system; in addition, a virtual memory manager such as QEMM386 was required. It has also been run successfully on a SUN Sparcstation2. In 1995, faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
Susanne Kunkel
Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
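The dry-run idea, as described, amounts to replacing the communication layer with a stub that reports the geometry of a large parallel job while only one process actually runs. The sketch below is a generic illustration of that pattern, not NEST's code; all names are hypothetical.

```python
class DryRunComm:
    """Stub communicator: pretends to be rank 0 of a large parallel job,
    so per-process data structures are sized as in a real run, but no
    communication actually happens."""

    def __init__(self, fake_num_processes: int):
        self.size = fake_num_processes
        self.rank = 0

    def exchange(self, outgoing):
        # In a real run this would be an MPI all-to-all; in a dry run the
        # communication step is skipped entirely.
        return []

comm = DryRunComm(fake_num_processes=1024)
neurons_total = 1_000_000
# Per-process memory and runtime scale with the local share, as in a real run:
local_neurons = [n for n in range(neurons_total) if n % comm.size == comm.rank]
print(len(local_neurons))  # ~977 neurons owned by this (single) process
```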
Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation
Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua
When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with one single RELAP5 in a large-scale simulation. To improve the speed and ensure the precision of the simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. A compromise on the synchronization frequency was carefully considered to improve the precision of the simulation while guaranteeing real-time execution. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting–coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting–coupling models of RELAPSim and other simulation codes. The coupling models improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between the RELAPSim code and other types of simulation codes. However, the coupling methods are also transferable to other simulators, for example a simulator employing ATHLET instead of RELAP5, or other logic codes instead of SIMULINK. It is believed the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
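An explicit boundary coupling of the kind described exchanges boundary variables between the split models at a fixed synchronization interval, with each model integrating independently in between. The sketch below illustrates that scheme generically in Python; the two "models" are toy stand-ins, not RELAP5.

```python
# Toy explicit coupling of two split models across a shared boundary.
# Each model advances alone for one synchronization window, then the
# boundary values are exchanged.

def advance(state: float, boundary_in: float, dt: float) -> float:
    # Hypothetical model physics: relax toward the neighbour's boundary value.
    return state + dt * (boundary_in - state)

state_a, state_b = 1.0, 0.0
dt, sync_dt = 0.01, 0.1          # internal step and synchronization interval
n_windows = 10                   # total simulated time = 1.0

for _ in range(n_windows):
    # Freeze the exchanged boundary values over one synchronization window
    # (this lag is the price of an explicit coupling).
    boundary_a_to_b, boundary_b_to_a = state_a, state_b
    for _ in range(round(sync_dt / dt)):   # both models integrate independently
        state_a = advance(state_a, boundary_b_to_a, dt)
        state_b = advance(state_b, boundary_a_to_b, dt)

print(state_a, state_b)  # the two states relax toward a common value
```

Shortening `sync_dt` reduces the coupling lag (better precision) at the cost of more frequent exchanges, which is exactly the compromise the abstract describes.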
Experience gained in running the EPRI MMS code with an in-house simulation language
Weber, D.S.
The EPRI Modular Modeling System (MMS) code represents a collection of component models and a steam/water properties package. This code has undergone extensive verification and validation testing. Currently, the code requires a commercially available simulation language to run. The Philadelphia Electric Company (PECO) has been modeling power plant systems for over sixteen years. As a result, an extensive number of models have been developed, and considerable experience has been gained with an in-house simulation language. The objective of this study was to explore the possibility of developing an MMS pre-processor which would allow the MMS package to be used with other simulation languages, such as the PECO in-house simulation language
MAX: an expert system for running the modular transport code APOLLO II
Loussouarn, O.; Ferraris, C.; Boivineau, A.
MAX is an expert system built to help users of the APOLLO II code to prepare the input data deck to run a job. APOLLO II is a modular transport-theory code for calculating the neutron flux in various geometries. The associated GIBIANE command language allows the user to specify the physical structure and the computational method to be used in the calculation. The purpose of MAX is to bring into play expertise in both neutronic and computing aspects of the code, as well as various computational schemes, in order to generate automatically a batch data set corresponding to the APOLLO II calculation desired by the user. MAX is implemented on the SUN 3/60 workstation with the S1 tool and graphic interface external functions
Automated JPSS VIIRS GEO code change testing by using Chain Run Scripts
Chen, W.; Wang, W.; Zhao, Q.; Das, B.; Mikles, V. J.; Sprietzer, K.; Tsidulko, M.; Zhao, Y.; Dharmawardane, V.; Wolf, W.
The Joint Polar Satellite System (JPSS) is the next-generation polar-orbiting operational environmental satellite system. The first satellite in the JPSS series of satellites, J-1, is scheduled to launch in early 2017. J1 will carry similar versions of the instruments that are on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite, which was launched on October 28, 2011. The Center for Satellite Applications and Research Algorithm Integration Team (STAR AIT) uses the Algorithm Development Library (ADL) to run S-NPP and pre-J1 algorithms in a development and test mode. The ADL is an offline test system developed by Raytheon to mimic the operational system while enabling a development environment for plug-and-play algorithms. The Perl Chain Run Scripts have been developed by STAR AIT to automate the staging and processing of multiple JPSS Sensor Data Record (SDR) and Environmental Data Record (EDR) products. Based on prelaunch testing, the JPSS J1 VIIRS Day Night Band (DNB) has an anomalous non-linear response at high scan angles. The flight project has proposed multiple mitigation options through onboard aggregation, and Option 21 has been suggested by the VIIRS SDR team as the baseline aggregation mode. VIIRS GEOlocation (GEO) code analysis results show that the J1 DNB GEO product cannot be generated correctly without the software update. The modified code will support both Op21 and Op21/26 and is backward compatible with S-NPP. The J1 GEO code change version 0 delivery package is under development for the current change request. In this presentation, we will discuss how to use the Chain Run Scripts to verify the code change and Lookup Table (LUT) updates in ADL Block2.
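The chain-run pattern described, staging inputs and running several processing stages so that each stage's output feeds the next, can be sketched generically. The example below uses Python rather than Perl, and the stage names, executables and file layout are hypothetical.

```python
import shutil
import subprocess
from pathlib import Path

WORK = Path("work")

# Hypothetical processing chain: each stage reads the previous stage's output.
STAGES = [
    ("stage_sdr", ["./run_sdr", "--in", "raw.dat", "--out", "sdr.dat"]),
    ("stage_geo", ["./run_geo", "--in", "sdr.dat", "--out", "geo.dat"]),
    ("stage_edr", ["./run_edr", "--in", "geo.dat", "--out", "edr.dat"]),
]

def stage_inputs(src: Path) -> None:
    """Copy the raw granule into the working directory before the chain starts."""
    WORK.mkdir(exist_ok=True)
    shutil.copy(src, WORK / "raw.dat")

def run_chain() -> None:
    for name, cmd in STAGES:
        print(f"running {name} ...")
        # check=True aborts the chain as soon as one stage fails.
        subprocess.run(cmd, cwd=WORK, check=True)

stage_inputs(Path("/data/granules/raw.dat"))  # hypothetical input location
run_chain()
```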
Validation analysis of pool fire experiment (Run-F7) using SPHINCS code
Yamaguchi, Akira; Tajima, Yuji
SPHINCS (Sodium Fire Phenomenology IN multi-Cell System) is a code developed for the safety analysis of sodium fire accidents in a fast breeder reactor. The main features of the SPHINCS code with respect to sodium pool fire phenomena are multi-dimensional modeling of the thermal behavior in the sodium pool and steel liner, modeling of the extension of the sodium pool area based on sodium mass conservation, and an equilibrium model for the chemical reaction of pool fire on the flame sheet at the surface of the sodium pool. The SPHINCS code is therefore capable of detailed temperature evaluation of the steel liner during small and/or medium scale sodium leakage accidents. In this study, the Run-F7 experiment, in which the sodium leakage rate was 11.8 kg/hour, has been analyzed. In the experiment the diameter of the sodium pool was approximately 60 cm and the maximum steel liner temperature was 616 °C. The analytical results show that the agreement between the SPHINCS analysis and the experiment is excellent with respect to the time history and spatial distribution of the liner temperature, the sodium pool extension behavior, and the atmosphere gas temperature. It is concluded that the pool fire modeling of the SPHINCS code has been validated for this experiment, and that SPHINCS is currently applicable to sodium pool fire phenomena and steel liner temperature evaluation. The experiment series is being continued to check additional parameters, i.e., the sodium leakage rate and the height of sodium leakage. The author will therefore analyze the subsequent experiments to check the influence of these parameters and apply SPHINCS to the sodium fire consequence analysis of fast reactors. (author)
Modeling of a confinement bypass accident with CONSEN, a fast-running code for safety analyses in fusion reactors
Caruso, Gianfranco, E-mail: gianfranco.caruso@uniroma1.it [Sapienza University of Rome – DIAEE, Corso Vittorio Emanuele II, 244, 00186 Roma (Italy); Giannetti, Fabio [Sapienza University of Rome – DIAEE, Corso Vittorio Emanuele II, 244, 00186 Roma (Italy); Porfiri, Maria Teresa [ENEA FUS C.R. Frascati, Via Enrico Fermi, 45, 00044 Frascati, Roma (Italy)
Highlights: • The CONSEN code for thermal-hydraulic transients in fusion plants is introduced. • A magnet-induced confinement bypass accident in ITER has been simulated. • A comparison with previous MELCOR results for the accident is presented. -- Abstract: The CONSEN (CONServation of ENergy) code is a fast-running code to simulate thermal-hydraulic transients, specifically developed for fusion reactors. In order to demonstrate CONSEN's capabilities, the paper deals with the accident analysis of the magnet-induced confinement bypass for the ITER 1996 design. During a plasma pulse, a poloidal field magnet experiences an over-voltage condition or an electrical insulation fault that results in two intense electrical arcs. It is assumed that this event produces two one-square-metre ruptures, resulting in a pathway that connects the interior of the vacuum vessel to the cryostat air space. The rupture also results in a break of a single cooling channel within the wall of the vacuum vessel and a breach of the magnet cooling line, causing the blowdown of a steam/water mixture into the vacuum vessel and the cryostat and the release of 4 K helium into the cryostat. In the meantime, all the magnet coils are discharged through actuation of the magnet protection system. This postulated event causes the simultaneous failure of two radioactive confinement barriers and envelopes all types of smaller LOCAs into the cryostat. Ice formation on the cryogenic walls is also involved. The accident has been simulated with the CONSEN code up to 32 h. The accident evolution and the phenomena involved are discussed in the paper, and the results are compared with available results obtained using the MELCOR code.
Calculation of Sodium Fire Test-I (Run-E6) using sodium combustion analysis code ASSCOPS version 2.0
Nakagiri, Toshio; Ohno, Shuji; Miyake, Osamu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
The calculation of Sodium Fire Test-I (Run-E6) was performed using the ASSCOPS (Analysis of Simultaneous Sodium Combustions in Pool and Spray) code version 2.0, in order to determine the parameters used in the code for calculations of the sodium combustion behavior of small or medium scale sodium leaks, and to validate the applicability of the code. The parameters used in the code were determined, and the validity of the code was confirmed, since the calculated temperatures, oxygen concentration and other calculated values agreed well with the test results. (author)
Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number
Kohei Arai; Yuji Yamada
An attempt is made to improve secret image invisibility in circulation images with dyadic wavelet based data hiding, using run-length coded secret images whose code locations are determined by random number. Through experiments, it is confirmed that the secret images are almost invisible in the circulation images. The robustness of the proposed data hiding method against data compression of the circulation images is also discussed. Data hiding performance in terms of invisibility of secret images...
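The two ingredients named in the title, run-length coding the secret image and placing the codes at pseudo-random positions, can be sketched independently of the wavelet machinery. Below is a minimal illustration; the seeded-PRNG position selection and the flat coefficient array are stand-ins for the paper's dyadic wavelet coefficients, not its actual scheme.

```python
import random

def run_length_encode(bits):
    """Run-length code a binary sequence as (bit, run_length) pairs."""
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

# Hypothetical binary secret-image row and "coefficient" array.
secret_row = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
coefficients = [0.0] * 64               # stand-in for wavelet coefficients

runs = run_length_encode(secret_row)    # [(1, 3), (0, 2), (1, 1), (0, 4)]

# Choose embedding locations from a seeded PRNG; the receiver regenerates
# the same positions from the shared seed.
rng = random.Random(42)
positions = rng.sample(range(len(coefficients)), len(runs))
for pos, (bit, length) in zip(positions, runs):
    coefficients[pos] += 0.01 * (1 if bit else -1) * length  # tiny perturbation

print(runs, positions)
```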
Increasing the efficiency of the TOUGH code for running large-scale problems in nuclear waste isolation
Nitao, J.J.
The TOUGH code developed at Lawrence Berkeley Laboratory (LBL) is being extensively used to numerically simulate the thermal and hydrologic environment around nuclear waste packages in the unsaturated zone for the Yucca Mountain Project. At the Lawrence Livermore National Laboratory (LLNL) we have rewritten approximately 80 percent of the TOUGH code to increase its speed and incorporate new options. The geometry of many problems requires large numbers of computational elements in order to realistically model detailed physical phenomena, and, as a result, large amounts of computer time are needed. In order to increase the speed of the code we have incorporated fast linear equation solvers, vectorized substantial portions of the code, improved the automatic time stepping, and implemented table look-up for the steam table properties. These enhancements have increased the speed of the code for typical problems by a factor of 20 on the Cray 2 computer. In addition to the increase in computational efficiency, we have added several options: vapor pressure lowering; equivalent continuum treatment of fractures; energy and material volumetric, mass and flux accounting; and Stefan-Boltzmann radiative heat transfer. 5 refs
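Replacing repeated evaluation of expensive property functions with interpolation in a precomputed table, as mentioned for the steam properties, is easy to illustrate. The sketch below uses NumPy and a toy Magnus-type saturation-pressure function; the real steam-table correlations are far more involved, so treat this purely as a demonstration of the look-up pattern.

```python
import numpy as np

def saturation_pressure(temp_k):
    """Toy stand-in for an expensive steam-property correlation
    (a Magnus-type fit; NOT the actual TOUGH steam table)."""
    return 611.0 * np.exp(17.27 * (temp_k - 273.15) / (temp_k - 35.85))

# Precompute the table once over the range of interest...
table_t = np.linspace(280.0, 640.0, 2048)
table_p = saturation_pressure(table_t)

def saturation_pressure_lookup(temp_k):
    """...then answer queries by fast linear interpolation in the table."""
    return np.interp(temp_k, table_t, table_p)

queries = np.random.uniform(300.0, 600.0, 1_000_000)
p_fast = saturation_pressure_lookup(queries)   # table look-up
p_ref = saturation_pressure(queries)           # direct evaluation
print(np.max(np.abs(p_fast - p_ref) / p_ref))  # small interpolation error
```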
Probabilistic evaluation of fuel element performance by the combined use of a fast running simplistic and a detailed deterministic fuel performance code
Misfeldt, I.
A comprehensive evaluation of fuel element performance requires a probabilistic fuel code supported by a well benchmarked deterministic code. This paper presents an analysis of an SGHWR ramp experiment, in which the probabilistic fuel code FRP is utilized in combination with the deterministic fuel models FFRS and SLEUTH/SEER. The statistical methods employed in FRP are Monte Carlo simulation or a low-order Taylor approximation. The fast-running simplistic fuel code FFRS is used for the deterministic simulations, whereas simulations with SLEUTH/SEER are used to verify the predictions of FFRS. The ramp test was performed with an SGHWR fuel element, in which 9 of the 36 fuel pins failed. There seemed to be good agreement between the deterministic simulations and the experiment, but the statistical evaluation shows that the uncertainty in the important performance parameters is too large for this ''nice'' result. The analysis does therefore indicate a discrepancy between the experiment and the deterministic code predictions. Possible explanations for this disagreement are discussed. (author)
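The two statistical methods named for FRP, Monte Carlo sampling and a low-order Taylor approximation, can be contrasted on a toy response function. The sketch below is generic uncertainty propagation, not the FRP code; the response function and parameter values are hypothetical.

```python
import numpy as np

def response(gap, power):
    """Hypothetical fuel-performance response (a temperature-like quantity)
    as a function of two uncertain inputs."""
    return 900.0 + 4000.0 * gap + 0.8 * power + 50.0 * gap * power

mu = np.array([0.1, 40.0])      # mean gap (mm) and power (kW/m), hypothetical
sigma = np.array([0.02, 3.0])   # 1-sigma input uncertainties

# --- Monte Carlo propagation ---
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(100_000, 2))
mc = response(samples[:, 0], samples[:, 1])
print("MC:    ", mc.mean(), mc.std())

# --- First-order Taylor (delta method): var = sum_i (df/dx_i * sigma_i)^2 ---
eps = 1e-6
grad = np.array([
    (response(mu[0] + eps, mu[1]) - response(mu[0] - eps, mu[1])) / (2 * eps),
    (response(mu[0], mu[1] + eps) - response(mu[0], mu[1] - eps)) / (2 * eps),
])
print("Taylor:", response(*mu), np.sqrt(np.sum((grad * sigma) ** 2)))
```

For a nearly linear response the two estimates agree closely; the Monte Carlo route costs many more evaluations but remains valid when the response is strongly non-linear.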
Method and codes for solving the optimization problem of initial material distribution and controlling of reactor during the run
Isakova, L.Ya.; Rachkova, D.A.; Vtorova, O.Yu.; Matekin, M.P.; Sobol, I.M.
The optimization problem of the initial distribution of the fuel composition and of controlling the reactor during the run is solved. The optimization problem is formulated as a multicriterial one with different types of constraints. The distinguishing feature of the proposed method is the systematic scanning of multidimensional regions, where the trial points in the space of parameters are points of uniformly distributed LPτ sequences. The reactor computation is carried out by the four-group diffusion method in two-dimensional cylindrical geometry. The burnup absorbers are taken into account as additional absorption cross-sections, represented by approximants. The tables of trials make possible the estimation of the values of the global extrema. The coordinates of the points where the extremal values are attained can be estimated too
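LPτ sequences are the low-discrepancy sequences introduced by Sobol, so the scanning step can be illustrated with SciPy's Sobol generator. The sketch below scans a toy two-criteria problem; the objective functions and parameter bounds are hypothetical, and the real computation would call the diffusion solver at each trial point instead.

```python
import numpy as np
from scipy.stats import qmc

# Generate LP-tau (Sobol) trial points in the unit cube, then rescale
# to hypothetical parameter bounds (e.g., two enrichment-like variables).
sampler = qmc.Sobol(d=2, scramble=False)
unit_points = sampler.random_base2(m=10)             # 2**10 = 1024 trials
points = qmc.scale(unit_points, [1.0, 0.0], [5.0, 1.0])

def criteria(x):
    """Toy stand-ins for the two objectives evaluated per trial point."""
    burnup = -(x[:, 0] - 3.0) ** 2 - (x[:, 1] - 0.4) ** 2   # maximize
    peaking = (x[:, 0] * x[:, 1] - 1.0) ** 2                # minimize
    return burnup, peaking

burnup, peaking = criteria(points)

# The "table of trials" view: estimate the global extremum over the scan,
# subject to a constraint on the second criterion.
feasible = peaking < 0.1
best = np.argmax(np.where(feasible, burnup, -np.inf))
print(points[best], burnup[best], peaking[best])
```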
Impact of e-publication changes in the International Code of Nomenclature for algae, fungi and plants (Melbourne Code, 2012) - did we need to "run for our lives"?
Nicolson, Nicky; Challis, Katherine; Tucker, Allan; Knapp, Sandra
At the Nomenclature Section of the XVIII International Botanical Congress in Melbourne, Australia (IBC), the botanical community voted to allow electronic publication of nomenclatural acts for algae, fungi and plants, and to abolish the rule requiring Latin descriptions or diagnoses for new taxa. Since the 1st January 2012, botanists have been able to publish new names in electronic journals and may use Latin or English as the language of description or diagnosis. Using data on vascular plants from the International Plant Names Index (IPNI) spanning the time period in which these changes occurred, we analysed trajectories in publication trends and assessed the impact of these new rules for descriptions of new species and nomenclatural acts. The data show that the ability to publish electronically has not "opened the floodgates" to an avalanche of sloppy nomenclature, but concomitantly neither has there been a massive expansion in the number of names published, nor of new authors and titles participating in publication of botanical nomenclature. The e-publication changes introduced in the Melbourne Code have gained acceptance, and botanists are using these new techniques to describe and publish their work. They have not, however, accelerated the rate of plant species description or participation in biodiversity discovery as was hoped.
SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi
Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S; Schuemann, J; Paganetti, H; Jia, X; Jiang, S
Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called "Fano cavity test". We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3 and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties of the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response
Analysis, by RELAP5 code, of boron dilution phenomena in a mid-loop operation transient, performed in PKL III F2.1 RUN 1 test
Mascari, F.; Vella, G.; Del Nevo, A.; D'Auria, F.
The present paper deals with the post-test analysis and accuracy quantification of test PKL III F2.1 RUN 1 with the RELAP5/Mod3.3 code, performed in the framework of the international OECD/SETH PKL III Project. PKL III is a full-height integral test facility (ITF) that models the entire primary system and most of the secondary system (except for the turbine and condenser) of a pressurized water reactor of KWU design of the 1300-MW (electric) class at a scale of 1:145. The detailed design was based to the largest possible extent on the specific data of Philippsburg nuclear power plant, unit 2. As for test facilities of this size, the scaling concept aims to simulate the overall thermal-hydraulic behavior of the full-scale power plant [1]. The main purpose of the project is to investigate PWR safety issues related to boron dilution; in particular, this experiment investigates (a) the boron dilution issue during mid-loop operation and shutdown conditions, and (b) the assessment of primary circuit accident management operations to prevent boron dilution as a consequence of loss of heat removal [2]. In this work the authors apply a systematic procedure (developed at the University of Pisa) for code assessment and uncertainty quantification to the RELAP5 system code. It is used to evaluate the capability of RELAP5 to reproduce the thermal hydraulics of an inadvertent boron dilution event in a PWR. The quantitative analysis has been performed adopting the Fast Fourier Transform Based Method (FFTBM), which has the capability to quantify the errors in code predictions as compared to the measured experimental signal. (author)
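The FFTBM mentioned above quantifies code accuracy by comparing the spectra of the prediction error and of the experimental signal; its usual figure of merit is the average amplitude AA = Σ|F(error)| / Σ|F(experiment)|. The sketch below computes that quantity for a toy signal; it follows the commonly published definition and is not the project's actual tool.

```python
import numpy as np

def fftbm_average_amplitude(experiment, calculation):
    """Average amplitude AA of the FFT-based method:
    AA = sum|FFT(calc - exp)| / sum|FFT(exp)|.
    Smaller AA means better agreement (AA <= ~0.4 is often quoted as
    acceptable for primary-side quantities; treat that threshold as an
    assumption here)."""
    error = np.asarray(calculation) - np.asarray(experiment)
    return np.sum(np.abs(np.fft.rfft(error))) / np.sum(np.abs(np.fft.rfft(experiment)))

# Toy "measured" transient and a slightly biased, noisy "prediction".
t = np.linspace(0.0, 100.0, 1000)
exp_signal = 150.0 - 0.5 * t + 5.0 * np.exp(-t / 20.0)
calc_signal = exp_signal * 1.02 + np.random.default_rng(1).normal(0, 0.5, t.size)

print(fftbm_average_amplitude(exp_signal, calc_signal))
```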
Running the running
Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph
We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_\mathrm{s} / \mathrm{d}\log k$ and the running of the running $\beta_\mathrm{s} = \mathrm{d}\alpha_\mathrm{s} / \mathrm{d}\log k$ of the spectral index $n_\mathrm{s}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s}=0.011\pm0.010$ and $\beta_\mathrm{s}=0.027$...
Speaking Code
Cox, Geoff
Speaking Code begins by invoking the "Hello World" convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...
Liquidity Runs
Matta, R.; Perotti, E.
Can the risk of losses upon premature liquidation produce bank runs? We show how a unique run equilibrium driven by asset liquidity risk arises even under minimal fundamental risk. To study the role of illiquidity we introduce realistic norms on bank default, such that mandatory stay is triggered
Running Linux
Dalheimer, Matthias Kalle
The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.
RUN COORDINATION
Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...
The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the 'Cross Interentreprises', a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners, however the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Mens category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...
Christophe Delaere
The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN has been the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...
M. Chamizo
On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...
G. Rakness.
After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall, the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb⁻¹ of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb⁻¹ at 7 TeV, and 5.5 pb⁻¹ at 2.76 TeV. They also delivered 170 μb⁻¹ lead-lead collisions at 2.76 TeV/nucleon and 32 nb⁻¹ proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that...
The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a "best family� challenge (judged on best parent + best child). Trophies are awarded in the usual men's, women's and veterans' categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...
On Wednesday 14 March, the machine group successfully injected beams into LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β*=0.6m, settings that are used routinely since then. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. That sample has been since then topped up to two million, allowing further refinements of the Tracker Alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to reach 1380 bunches and peak instantaneous luminosities around 6.5E33 at the beginning of fills. The average bunch charges reached ~1.5E11 protons per bunch which results in an initial pile-up of ~30 interactions per crossing. During the ...
With the analysis of the first 5 fb⁻¹ culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c², the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb⁻¹. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6×10³³ cm⁻²s⁻¹, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...
EnergyPlus Run Time Analysis
Hong, Tianzhen; Buhl, Fred; Haves, Philip
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations, integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
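Subroutine-level profiling of the kind described, finding where a simulation spends its time, can be illustrated with Python's built-in profiler. EnergyPlus itself is not Python, so this is an analogy for the profiling step, not its actual tooling; the functions and workload are hypothetical.

```python
import cProfile
import pstats

def interpolate_schedule(hour: float) -> float:
    # Hypothetical hot spot: called once per zone per time step.
    return 0.5 + 0.5 * (hour % 24) / 24

def simulate(zones: int = 50, steps: int = 8760) -> float:
    total = 0.0
    for step in range(steps):            # hourly time steps over a year
        for _ in range(zones):
            total += interpolate_schedule(float(step))
    return total

profiler = cProfile.Profile()
profiler.runcall(simulate)
# Rank functions by cumulative time to find the dominant subroutines.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```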
DLLExternalCode
DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software, with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
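The write-inputs / run / read-outputs cycle described is a common wrapper pattern, sketched below in Python purely for illustration (the real interface is a compiled DLL called by GoldSim; the file formats and executable name here are hypothetical).

```python
import subprocess
from pathlib import Path

def run_external_code(inputs: list[float]) -> list[float]:
    """Wrapper cycle: write an input file, run the external application,
    read its output file back. Mirrors the pattern described above."""
    # 1. Create the input file expected by the (hypothetical) external code.
    Path("model.in").write_text("\n".join(str(v) for v in inputs))

    # 2. Run the external application and wait for it to finish.
    subprocess.run(["./external_model", "model.in", "model.out"], check=True)

    # 3. Read the outputs the application wrote, and hand them back.
    return [float(line) for line in Path("model.out").read_text().splitlines()]

# Hypothetical call, as the top-level driver would make it:
# outputs = run_external_code([1.0, 2.5, 0.3])
```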
Code portability and data management considerations in the SAS3D LMFBR accident-analysis code
Dunn, F.E.
The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and in the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities, were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source-code-processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long-running code that only runs well if sufficient main memory is available
Dr. Sheehan on Running.
Sheehan, George A.
This book is both a personal and technical account of the experience of running by a heart specialist who began a running program at the age of 45. In its seventeen chapters, there is information presented on the spiritual, psychological, and physiological results of running; treatment of athletic injuries resulting from running; effects of diet…
Running and osteoarthritis.
Willick, Stuart E; Hansen, Pamela A
The overall health benefits of cardiovascular exercise, such as running, are well established. However, it is also well established that in certain circumstances running can lead to overload injuries of muscle, tendon, and bone. In contrast, it has not been established that running leads to degeneration of articular cartilage, which is the hallmark of osteoarthritis. This article reviews the available literature on the association between running and osteoarthritis, with a focus on clinical epidemiologic studies. The preponderance of clinical reports refutes an association between running and osteoarthritis. Copyright 2010 Elsevier Inc. All rights reserved.
Code Cactus
Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)
This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for conditions of pressure drop or flowrate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)
Electron run-away
Levinson, I.B.
The run-away effect of electrons for Coulomb scattering has been studied by Dreicer, but the question for other scattering mechanisms has not yet been studied. Meanwhile, if the scattering is quasielastic, a general criterion for the run-away may be formulated; in this case the influence of run-away on the distribution function may also be studied in a somewhat general and qualitative manner. (Auth.)
Triathlon: running injuries.
Spiker, Andrea M; Dixit, Sameer; Cosgarea, Andrew J
The running portion of the triathlon represents the final leg of the competition and, by some reports, the most important part in determining a triathlete's overall success. Although most triathletes spend most of their training time on cycling, running injuries are the most common injuries encountered. Common causes of running injuries include overuse, lack of rest, and activities that aggravate biomechanical predisposers of specific injuries. We discuss the running-associated injuries in the hip, knee, lower leg, ankle, and foot of the triathlete, and the causes, presentation, evaluation, and treatment of each.
XSOR codes users manual
Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.
This report describes the source term estimation codes, the XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms
Overcoming the "Run" Response
Swanson, Patricia E.
Recent research suggests that it is not simply experiencing anxiety that affects mathematics performance but also how one responds to and regulates that anxiety (Lyons and Beilock 2011). Most people have faced mathematics problems that have triggered their "run response." The issue is not whether one wants to run, but rather…
Overuse injuries in running
Larsen, Lars Henrik; Rasmussen, Sten; Jørgensen, Jens Erik
What is an overuse injury in running? This question is a cornerstone of clinical documentation and research-based evidence.
PRECIS Runs at IITM
Evaluation experiment using LBCs derived from ERA-15 (1979-93). Runs (3 ensembles in each experiment) already completed with LBCs having a length of 30 years each, for: Baseline (1961-90); A2 scenario (2071-2100); B2 scenario ...
The LHCb Run Control
Alessio, F; Callot, O; Duval, P-Y; Franek, B; Frank, M; Galli, D; Gaspar, C; v Herwijnen, E; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P
LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented
Symmetry in running.
Raibert, M H
Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.
RUNNING INJURY DEVELOPMENT
Johansen, Karen Krogh; Hulme, Adam; Damsted, Camma
BACKGROUND: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrant formal investigation. PURPOSE: To investigate the attitudes of middle- and long-distance runners able to compete in national championships, and of their coaches, about factors associated with running injury development. METHODS: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which factors do you believe influence the risk of running injuries?". In response to this question, the athletes and coaches had to click "Yes" or "No" to 19 predefined factors. In addition, they had the possibility to submit a free-text response. RESULTS: A total of 68 athletes and 19 coaches were included...
runDM: Running couplings of Dark Matter to the Standard Model
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
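As a toy illustration of what "running of couplings" means here, the sketch below integrates a one-loop renormalization-group equation for a single coupling from a high mediator scale down to low energy. The beta-function coefficient and all numbers are illustrative; runDM itself evolves a full vector of dimension-6 operator coefficients, including their mixing, and its actual API should be taken from its documentation.

```python
import math

def run_coupling(g_high, mu_high, mu_low, b=-7.0, steps=10_000):
    """Toy one-loop renormalization-group running,
        dg/dln(mu) = b * g**3 / (16 * pi**2),
    integrated downward from mu_high to mu_low with Euler steps in ln(mu).
    b = -7 mimics an asymptotically free theory; all values are illustrative."""
    t_high, t_low = math.log(mu_high), math.log(mu_low)
    dt = (t_low - t_high) / steps
    g = g_high
    for _ in range(steps):
        g += dt * b * g**3 / (16 * math.pi**2)
    return g

# Coupling specified at a high mediator scale, evaluated at low energy.
print(run_coupling(g_high=1.0, mu_high=1000.0, mu_low=2.0))
```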
Running Boot Camp
Toporek, Chuck
When Steve Jobs jumped on stage at Macworld San Francisco 2006 and announced the new Intel-based Macs, the question wasn't if, but when someone would figure out a hack to get Windows XP running on these new "Mactels." Enter Boot Camp, a new system utility that helps you partition and install Windows XP on your Intel Mac. Boot Camp does all the heavy lifting for you. You won't need to open the Terminal and hack on system files or wave a chicken bone over your iMac to get XP running. This free program makes it easy for anyone to turn their Mac into a dual-boot Windows/OS X machine.
Fermilab DART run control
Oleynik, G.; Engelfried, J.; Mengel, L.
DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. The authors discuss the unique and interesting concepts of the run control and some of the experiences in developing it. They also give a brief update and status of the whole DART system
SASSYS LMFBR systems code
Dunn, F.E.; Prohammer, F.G.; Weber, D.P.
The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time
'Outrunning' the running ear
In even the most experienced hands, an adequate physical examination of the ears can be difficult to perform because of common problems such as cerumen blockage of the auditory canal, an uncooperative toddler or an exasperated parent. The most common cause for a running ear in a child is acute purulent otitis.
Towards advanced code simulators
Scriven, A.H.
The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5
Computer codes used in particle accelerator design: First edition
This paper contains a listing of more than 150 programs that have been used in the design and analysis of accelerators. Given for each citation are the person to contact, the classification of the computer code, publications describing the code, the computer and language it runs on, and a short description of the code. Codes are indexed by subject, person to contact, and code acronym.
A Mobile Application Prototype using Network Coding
Pedersen, Morten Videbæk; Heide, Janus; Fitzek, Frank
This paper looks into implementation details of network coding for a mobile application running on commercial mobile phones. We describe the necessary coding operations and the algorithms that implement them. The coding algorithms form the basis for an implementation in C++ and Symbian C++. We report...
ALICE HLT Run 2 performance overview.
Krzewicki, Mikolaj; Lindenstruth, Volker; ALICE Collaboration
For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline and the HLT framework was extended to support that. The performance of this schema is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously, the production cluster contributes resources opportunistically during periods of LHC inactivity.
Coding Partitions
Fabio Burderi
Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows us to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.
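Unique decipherability itself can be tested with the classical Sardinas-Patterson procedure, which repeatedly forms dangling suffixes and checks whether one of them is a codeword. The sketch below implements that standard test; it is not the paper's coding-partition algorithm.

```python
def dangling_suffixes(a_set, b_set):
    """Nonempty suffixes w with u + w = v for some u in a_set, v in b_set."""
    return {v[len(u):] for u in a_set for v in b_set
            if v.startswith(u) and len(v) > len(u)}

def is_uniquely_decipherable(code):
    """Classical Sardinas-Patterson test: the code is UD iff no set of
    dangling suffixes ever contains a codeword."""
    code = set(code)
    s = dangling_suffixes(code, code)   # S_1, from pairs of codewords
    seen = set()
    while s:
        if s & code:                    # a dangling suffix is a codeword: ambiguous
            return False
        seen |= s
        s = (dangling_suffixes(code, s) | dangling_suffixes(s, code)) - seen
    return True

print(is_uniquely_decipherable({"0", "01", "11"}))  # True
print(is_uniquely_decipherable({"0", "01", "10"}))  # False ("010" parses two ways)
```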
Running economy and energy cost of running with backpacks.
Scheer, Volker; Cramer, Leoni; Heitkamp, Hans-Christian
Running is a popular recreational activity and additional weight is often carried in backpacks on longer runs. Our aim was to examine running economy and other physiological parameters while running with a 1 kg and 3 kg backpack at different submaximal running velocities. 10 male recreational runners (age 25 ± 4.2 years, VO2peak 60.5 ± 3.1 ml·kg-1·min-1) performed runs on a motorized treadmill of 5 minutes duration at three different submaximal speeds of 70, 80 and 90% of anaerobic lactate threshold (LT) without additional weight, and carrying a 1 kg and 3 kg backpack. Oxygen consumption, heart rate, lactate and RPE were measured and analysed. Oxygen consumption, energy cost of running and heart rate increased significantly while running with a backpack weighing 3 kg compared to running without additional weight at 80% of speed at lactate threshold (sLT) (p=0.026, p=0.009 and p=0.003) and at 90% sLT (p<0.001, p=0.001 and p=0.001). Running with a 1 kg backpack showed a significant increase in heart rate at 80% sLT (p=0.008) and a significant increase in oxygen consumption and heart rate at 90% sLT (p=0.045 and p=0.007) compared to running without additional weight. While running at 70% sLT, running economy and cardiovascular effort increased with weighted backpack running compared to running without additional weight, although these increases did not reach statistical significance. Running economy deteriorates and cardiovascular effort increases while running with additional backpack weight, especially at higher submaximal running speeds. Backpack weight should therefore be kept to a minimum.
Ubuntu Up and Running
Nixon, Robin
Ubuntu for everyone! This popular Linux-based operating system is perfect for people with little technical background. It's simple to install, and easy to use -- with a strong focus on security. Ubuntu: Up and Running shows you the ins and outs of this system with a complete hands-on tour. You'll learn how Ubuntu works, how to quickly configure and maintain Ubuntu 10.04, and how to use this unique operating system for networking, business, and home entertainment. This book includes a DVD with the complete Ubuntu system and several specialized editions -- including the Mythbuntu multimedia re
ATLAS people can run!
Claudia Marcelloni de Oliveira; Pauline Gagnon
It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...
Underwater running device
Kogure, Sumio; Matsuo, Takashiro; Yoshida, Yoji
An underwater running device for an underwater inspection device, used for inspecting the inner surfaces of a reactor or a water vessel, has an outer frame and an inner frame; the two are connected slidably by an air cylinder and rotatably by a shaft. The outer frame has four outer frame legs, each equipped with a sucker at the top end. The inner frame has four inner frame legs, each equipped with a sucker at the top end. The outer frame legs and the inner frame legs are each connected with the outer frame and the inner frame by the air cylinder, and can be raised or lowered (extended or contracted) by it. Each sucker is connected with a jet pump-type negative pressure generator. The device can run and move by alternately attaching and releasing the outer frame legs and the inner frame legs while stably maintaining the posture of the inspection device. (I.N.)
TASS code topical report. V.1 TASS code technical manual
Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.
TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis for the operating PWRs as well as the PWRs under construction in Korea. TASS code will replace various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through the keyboard operation. A semimodular configuration used in developing the TASS code enables the user to easily implement new models. TASS code has been programmed using FORTRAN77 which makes it easy to install and port for different computer environments. The TASS code can be utilized for the steady state simulation as well as the non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components, operator actions and the transients caused by the malfunctions can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models including reactor thermal hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual which includes TASS code user's manual and TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for a licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs
The design of the run Clever randomized trial: running volume, -intensity and running-related injuries.
Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten
Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries show conflicting results. This may be related to previously applied study designs, methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate if a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years and with an average of 1-3 running sessions per week the past 6 months are included. Participants are randomized into two intervention groups: Running schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume, by increasing the weekly overall volume. Data on the running performed is collected by GPS. Participants who sustain running-related injuries are diagnosed by a diagnostic team of physiotherapists using standardized diagnostic criteria. The members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing
Barefoot running: biomechanics and implications for running injuries.
Altman, Allison R; Davis, Irene S
Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.
Darlington up and running
Show, Don
We've built some of the largest and most successful generating stations in the world. Nonetheless, we cannot take our knowledge and understanding of the technology for granted. That said, I do believe that we are getting better, building safer, more efficient plants, and introducing significant improvements to our existing stations. Ontario Hydro is a large and technically rich organization. Even so, we realize that partnerships with others in the industry are absolutely vital. I am thinking particularly of Atomic Energy of Canada Limited. We enjoy a very close relationship with AECL, and their support was never more important than during the N/A Investigations. In recent years, we've strengthened our relationship with AECL considerably. For example, we recently signed an agreement with AECL, making available all of the Darlington 900 MWe design. Much of the cooperation between Ontario Hydro and AECL occurs through the CANDU Engineering Authority and the CANDU Owners Group (COG). These organizations are helping both of us to greatly improve cooperation and efficiency, and they are helping ensure we get the biggest return on our CANDU investments. COG also provides an important information network which links CANDU operators in Canada, here in Korea, Argentina, India, Pakistan and Romania. In many respects, it is helping to develop the strong partnerships to support CANDU technology worldwide. We all benefit in the long run from sharing information and resources.
Backward running or absence of running from Creutz ratios
Giedt, Joel; Weinberg, Evan
We extract the running coupling based on Creutz ratios in SU(2) lattice gauge theory with two Dirac fermions in the adjoint representation. Depending on how the extrapolation to zero fermion mass is performed, either backward running or an absence of running is observed at strong bare coupling. This behavior is consistent with other findings which indicate that this theory has an infrared fixed point.
Physiological demands of running during long distance runs and triathlons.
Hausswirth, C; Lehénaff, D
The aim of this review article is to identify the main metabolic factors which have an influence on the energy cost of running (Cr) during prolonged exercise runs and triathlons. This article proposes a physiological comparison of these 2 exercises and the relationship between running economy and performance. Many terms are used as the equivalent of 'running economy', such as 'oxygen cost', 'metabolic cost', 'energy cost of running', and 'oxygen consumption'. It has been suggested that these expressions may be defined by the rate of oxygen uptake (VO2) at a steady state (i.e. between 60 and 90% of maximal VO2) at a submaximal running speed. Endurance events such as triathlon or marathon running are known to modify biological constants of athletes and should have an influence on their running efficiency. The Cr appears to contribute to the variation found in distance running performance among runners of homogeneous level. This has been shown to be important in sports performance, especially in events like long distance running. In addition, many factors are known or hypothesised to influence Cr, such as environmental conditions, participant specificity, and metabolic modifications (e.g. training status, fatigue). The decrease in running economy during a triathlon and/or a marathon could be largely linked to physiological factors such as the enhancement of core temperature and a lack of fluid balance. Moreover, the increase in circulating free fatty acids and glycerol at the end of these long exercise durations bears witness to the decrease in Cr values. The combination of these factors alters the Cr during exercise and hence could modify the athlete's performance in triathlons or a prolonged run.
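With the convention that running economy is steady-state oxygen uptake normalized by speed, the energy cost of running can be computed directly. The function below uses the simplest such convention (no subtraction of resting VO2, which varies across studies); the numbers are illustrative.

```python
def energy_cost_of_running(vo2_ml_kg_min, speed_km_h):
    """Energy cost of running Cr in ml O2 per kg per km: steady-state
    oxygen uptake divided by running speed (simplest convention, without
    subtracting resting VO2)."""
    return vo2_ml_kg_min * 60.0 / speed_km_h

# e.g. 45 ml/kg/min at 13 km/h -> about 208 ml O2/kg/km
print(round(energy_cost_of_running(45.0, 13.0), 1))
```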
Tristan code and its application
Nishikawa, K.-I.
Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http://www.physics.rutger.edu/~kenichi. For beginners, the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation model (GGCM) with predictive capability (for the Space Weather Program) is discussed.
Voluntary Wheel Running in Mice.
Goh, Jorming; Ladiges, Warren
Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running.
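Since the protocol reduces to timestamped rotation counts, turning the raw log into distance and rate takes only a few lines. A minimal sketch, with an assumed wheel circumference (not a value from the protocol):

```python
def summarize_wheel_running(rotation_timestamps_s, circumference_m=0.36):
    """Convert electronically logged wheel rotations into distance and rate.
    Each timestamp marks one full rotation; 0.36 m is an assumed
    circumference for a mouse saucer wheel, not a value from the protocol."""
    n = len(rotation_timestamps_s)
    distance_m = n * circumference_m
    if n < 2:
        return {"rotations": n, "distance_m": distance_m, "mean_rate_m_per_min": 0.0}
    duration_min = (rotation_timestamps_s[-1] - rotation_timestamps_s[0]) / 60.0
    return {"rotations": n,
            "distance_m": distance_m,
            "mean_rate_m_per_min": distance_m / duration_min}

print(summarize_wheel_running([0.0, 0.9, 1.7, 2.4, 3.0]))
```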
Effective action and brane running
Brevik, Iver; Ghoroku, Kazuo; Yahiro, Masanobu
We address the renormalized effective action for a Randall-Sundrum brane running in 5D bulk space. The running behavior of the brane action is obtained by shifting the brane position without changing the background and fluctuations. After an appropriate renormalization, we obtain an effective, low energy brane world action, in which the effective 4D Planck mass is independent of the running position. We discuss some implications of this effective action.
Asymmetric information and bank runs
Gu, Chao
It is known that sunspots can trigger panic-based bank runs and that the optimal banking contract can tolerate panic-based runs. The existing literature assumes that these sunspots are based on a publicly observed extrinsic randomizing device. In this paper, I extend the analysis of panic-based runs to include an asymmetric-information, extrinsic randomizing device. Depositors observe different, but correlated, signals on the stability of the bank. I find that if the signals that depositors o...
How to run 100 meters ?
Aftalion, Amandine
To appear in SIAP. The aim of this paper is to bring a mathematical justification to the optimal way of organizing one's effort when running. It is well known from physiologists that all running exercises of duration less than 3 min are run with a strong initial acceleration and a decelerating end; on the contrary, long races are run with a final sprint. This can be explained using a mathematical model describing the evolution of the velocity, the anaerobic energy, and the propulsive force: ...
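A minimal Keller-style version of such a model can be integrated in a few lines: velocity driven by a bounded propulsive force against a friction-like term, and a finite anaerobic energy store drained by the work rate. All parameter values below are illustrative, not the paper's fitted ones, and the force strategy is the simple "push as hard as possible" one.

```python
def sprint_100m(F=8.0, tau=1.0, sigma=20.0, e0=1600.0, dt=1e-3):
    """Integrate a Keller-style runner model (illustrative parameters):
        dv/dt = f - v/tau      (propulsion vs. friction-like resistance)
        de/dt = sigma - f*v    (anaerobic energy drain)
    with f = F while energy remains, tracking the time to cover 100 m."""
    t = x = v = 0.0
    e = e0
    while x < 100.0:
        # once the anaerobic store is empty, only a sustainable force remains
        f = F if e > 0.0 else min(F, sigma / max(v, 1e-9))
        v += dt * (f - v / tau)
        x += dt * v
        e = max(0.0, e + dt * (sigma - f * v))
        t += dt
    return t

print(f"time for 100 m: {sprint_100m():.2f} s")
```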
A Running Start: Resource Guide for Youth Running Programs
Jenny, Seth; Becker, Andrew; Armstrong, Tess
The lack of physical activity is an epidemic problem among American youth today. In order to combat this, many schools are incorporating youth running programs as a part of their comprehensive school physical activity programs. These youth running programs are being implemented before or after school, at school during recess at the elementary…
Changes in Running Mechanics During a 6-Hour Running Race.
Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano
To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (t_c), and aerial time (t_a) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (F_max), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (k_vert), and leg stiffness (k_leg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%, P ...), while t_c during running increased, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, k_vert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); t_a and F_max decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively, P ...), suggesting a possible time threshold that could affect performance regardless of absolute running speed.
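Under the standard spring-mass convention, the two stiffnesses named in the abstract are simple ratios of the estimated quantities: k_vert = F_max/Δz and k_leg = F_max/ΔL. A small sketch with illustrative numbers (not values from the study):

```python
def vertical_stiffness(f_max_n, delta_z_m):
    """k_vert = F_max / Δz (standard spring-mass definition), in N/m."""
    return f_max_n / delta_z_m

def leg_stiffness(f_max_n, delta_l_m):
    """k_leg = F_max / ΔL, with ΔL the peak leg-length change, in N/m."""
    return f_max_n / delta_l_m

# Illustrative numbers: 1600 N peak force, 0.05 m CoM drop, 0.12 m leg compression.
print(vertical_stiffness(1600.0, 0.05))  # 32000.0 N/m
print(leg_stiffness(1600.0, 0.12))       # ~13333 N/m
```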
Coding Labour
Anthony McCosker
As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and 'prodromal' modes, is regulated and governed.
CDF run II run control and online monitor
Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.
The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs
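The "flexible state machines" at the heart of such a run-control application are easy to caricature: a table of allowed transitions and a handler that rejects anything else. The states and commands below are hypothetical stand-ins, not CDF's actual ones, and the sketch is in Python rather than the Java the system uses.

```python
# Allowed transitions of a toy run-control state machine (hypothetical states).
TRANSITIONS = {
    "idle":       {"configure": "configured"},
    "configured": {"start": "running", "reset": "idle"},
    "running":    {"pause": "paused", "stop": "configured"},
    "paused":     {"resume": "running", "stop": "configured"},
}

class RunControl:
    def __init__(self):
        self.state = "idle"

    def handle(self, command):
        """Apply a command; reject anything not allowed in the current state."""
        try:
            self.state = TRANSITIONS[self.state][command]
        except KeyError:
            raise ValueError(f"command {command!r} not allowed in state {self.state!r}")
        return self.state

rc = RunControl()
for cmd in ["configure", "start", "pause", "resume", "stop"]:
    print(cmd, "->", rc.handle(cmd))
```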
TRAC code development status and plans
Spore, J.W.; Liles, D.R.; Nelson, R.A.
This report summarizes the characteristics and current status of the TRAC-PF1/MOD1 computer code. Recent error corrections and user-convenience features are described, and several user enhancements are identified. Current plans for the release of the TRAC-PF1/MOD2 computer code and some preliminary MOD2 results are presented. This new version of the TRAC code implements stability-enhancing two-step numerics in the 3-D vessel, using partial vectorization to obtain a code that runs 400% faster than the MOD1 code.
VOA: a 2-d plasma physics code
Eltgroth, P.G.
A 2-dimensional relativistic plasma physics code was written and tested. The non-thermal components of the particle distribution functions are represented by expansion into moments in momentum space. These moments are computed directly from numerical equations. Currently three species are included - electrons, ions and "beam electrons". The computer code runs on either the 7600 or STAR machines at LLL. Both the physics and the operation of the code are discussed.
Coding in pigeons: Multiple-coding versus single-code/default strategies.
Pinto, Carlos; Machado, Armando
To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control.
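The two hypotheses are, in effect, two different response-rule tables, and writing them out makes the diverging predictions concrete. In the sketch below the key labels are hypothetical stand-ins; the point is that an untrained probe duration falls through to the default rule under single-code/default but matches no learned rule under multiple-coding.

```python
# Hypothetical key labels for the two comparisons.
KEY_2S, KEY_DEFAULT = "key A (2-s)", "key B (6-s/18-s)"

def multiple_coding(sample_s):
    """Three learned response rules, one per trained sample duration;
    an untrained duration matches no rule (returns None)."""
    rules = {2.0: KEY_2S, 6.0: KEY_DEFAULT, 18.0: KEY_DEFAULT}
    return rules.get(sample_s)

def single_code_default(sample_s):
    """Two learned rules: code the 2-s sample, respond by default otherwise;
    a forgotten or absent sample also falls through to the default."""
    return KEY_2S if sample_s == 2.0 else KEY_DEFAULT

# The strategies agree on trained durations and part ways on probes such as
# the 3.5-s generalization test (and, in the paper, the retention-interval
# and no-sample tests).
for probe in [2.0, 6.0, 18.0, 3.5]:
    print(probe, "->", multiple_coding(probe), "|", single_code_default(probe))
```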
GAMERA - The New Magnetospheric Code
Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.
The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure long-term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure - multi-threaded OpenMP with an overarching MPI layer for large scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations ranging from the inner and outer heliosphere to the magnetospheres of Venus, the Earth, Jupiter and Saturn. We present example results for the Earth's magnetosphere including a coupled ring current (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.
Speech coding
Ravishankar, C., Hughes Network Systems, Germantown, MD
Speech is the predominant means of communication between human beings and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the...
Optimal codes as Tanner codes with cyclic component codes
Høholdt, Tom; Pinero, Fernando; Zeng, Peng
In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
Aztheca Code
Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.
This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)
Vocable Code
Soon, Winnie; Cox, Geoff
a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other is a mix of a computer programming syntax and human language. In this sense queer code can be understood as both an object and subject of study that intervenes in the world's 'becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...
NSURE code
Rattan, D.S.
NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual describing simulation procedures, input data preparation, output and example test cases.
LFSC - Linac Feedback Simulation Code
Ivanov, Valentin; /Fermilab
The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulating beam-based feedback in high performance linacs. The code LFSC is based on the earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulation of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback, on timescales corresponding to 5-100 Hz, and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format), and plain text files with numerical parameters, wake fields, ground motion data etc. The Matlab environment provides a flexible system for graphical output.
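The pulse-to-pulse case reduces to a discrete feedback loop: each pulse, measure the residual beam offset and update the correction by a gain times the measurement. The sketch below is a toy version of that loop with invented drift and noise figures, not LFSC's model of LIAR-tracked dynamics.

```python
import random

def simulate_feedback(n_pulses=2000, gain=0.3, drift=0.02, noise=0.05, seed=1):
    """Toy pulse-to-pulse feedback loop (illustrative, not LFSC's model):
    the beam offset accumulates a slow drift plus pulse noise, and each
    pulse the correction is updated from the measured residual offset."""
    rng = random.Random(seed)
    offset, correction = 0.0, 0.0
    history = []
    for _ in range(n_pulses):
        offset += drift + rng.gauss(0.0, noise)   # uncorrected dynamics
        measured = offset - correction            # what the monitor sees
        correction += gain * measured             # proportional feedback update
        history.append(measured)
    return history

h = simulate_feedback()
print(f"rms residual over last 100 pulses: "
      f"{(sum(x * x for x in h[-100:]) / 100) ** 0.5:.3f}")
```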
MED101: a laser-plasma simulation code. User guide
Rodgers, P.A.; Rose, S.J.; Rogoyski, A.M.
Complete details for running the 1-D laser-plasma simulation code MED101 are given including: an explanation of the input parameters, instructions for running on the Rutherford Appleton Laboratory IBM, Atlas Centre Cray X-MP and DEC VAX, and information on three new graphics packages. The code, based on the existing MEDUSA code, is capable of simulating a wide range of laser-produced plasma experiments including the calculation of X-ray laser gain. (author)
PLASMOR: A laser-plasma simulation code. Pt. 2
Salzman, D.; Krumbein, A.D.; Szichman, H.
This report supplements a previous one which describes the PLASMOR hydrodynamics code. The present report documents the recent changes and additions made in the code. In particular, described are two new subroutines for radiative preheat, a system of preprocessors which prepare the code before a run, a list of postprocessors which simulate experimental setups, and the basic data sets required to run PLASMOR. In the Appendix a new computer-based manual which lists the main features of PLASMOR is reproduced.
The Aster code; Code Aster
Delbecq, J.M
The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
Coding Class
Ejsing-Duun, Stine; Hansbøl, Mikala
This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was initiated in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS) at the Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy at Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg Universitet, Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project in the period November 2016 to May 2017...
Uplink Coding
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.
ANIMAL code
Lindemuth, I.R.
This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.
Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp 604-621. https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621
MCNP code
Cramer, S.N.
The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each state of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.
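To make "Monte Carlo simulation of radiation transport" concrete, here is a deliberately tiny analog example, far removed from MCNP's actual physics: photons cross a homogeneous slab with exponentially distributed free paths, every collision is treated as absorption (no scattering), and the transmitted fraction should approach exp(-μt). All numbers are illustrative.

```python
import math
import random

def transmit_fraction(thickness_cm, mu_per_cm, n_photons=100_000, seed=7):
    """Toy analog Monte Carlo: photons enter a homogeneous slab normally;
    path lengths are sampled from an exponential with total attenuation
    coefficient mu, and any collision counts as absorption."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        path = -math.log(1.0 - rng.random()) / mu_per_cm  # exponential free path
        if path > thickness_cm:
            transmitted += 1
    return transmitted / n_photons

mu, t = 0.2, 5.0
print(transmit_fraction(t, mu), "vs analytic", math.exp(-mu * t))
```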
Expander Codes
Expander Codes - The Sipser–Spielman Construction. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1.
Running continuous academic adoption programmes
Nielsen, Tobias Alsted
Running successful academic adoption programmes requires executive support, clear strategies, tactical resources and organisational agility. These two presentations will discuss the implementation of strategic academic adoption programs down to very concrete tool customisations to meet specific...
Turkey Run Landfill Emissions Dataset
Data.gov (United States)
U.S. Environmental Protection Agency — Landfill emissions measurements for the Turkey Run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....
Phthalate SHEDS-HT runs
U.S. Environmental Protection Agency — Inputs and outputs for SHEDS-HT runs of DiNP, DEHP, DBP. This dataset is associated with the following publication: Moreau, M., J. Leonard, K. Phillips, J. Campbell,...
Panda code
Altomare, S.; Minton, G.
PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
CANAL code
Gara, P.; Martin, E.
The CANAL code presented here optimizes a realistic iron-free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts, and move in a restricted transversal area; terminal connectors may be added, and images of the bars in pole pieces may be included.
The ZPIC educational code suite
Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.
Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, among many other areas. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
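Reading the ZDF output back for analysis is intended to be a few lines of Python. The snippet below assumes the zdf module shipped with the ZPIC repository exposes a read() helper returning the data array and its metadata, and the file name is a hypothetical example; check the repository for the exact names before relying on this.

```python
import zdf  # reader module supplied with the ZPIC repository (assumed API)

# Hypothetical output file name from a ZPIC diagnostic dump.
data, info = zdf.read("charge-000050.zdf")
print(type(data), type(info))  # expected: data array plus metadata object
```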
Some optimizations of the animal code
Fletcher, W.T.
Optimization techniques were applied to a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimizing techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. The statistical analysis of total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) in the code MALADOP during one time step.
GOC: General Orbit Code
Maddox, L.B.; McNeilly, G.S.
GOC (General Orbit Code) is a versatile program which will perform a variety of calculations relevant to isochronous cyclotron design studies. In addition to the usual calculations of interest (e.g., equilibrium and accelerated orbits, focusing frequencies, field isochronization, etc.), GOC has a number of options to calculate injections with a charge change. GOC provides both printed and plotted output, and will follow groups of particles to allow determination of finite-beam properties. An interactive PDP-10 program called GIP, which prepares input data for GOC, is available. GIP is a very easy and convenient way to prepare complicated input data for GOC. Enclosed with this report are several microfiche containing source listings of GOC and other related routines and the printed output from a multiple-option GOC run
Optimization of the particle pusher in a diode simulation code
Theimer, M.M.; Quintenz, J.P.
The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the required run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. Steps taken to effect this reduced run time are described. Various test cases are detailed.
Multitasking the code ARC3D. [for computational fluid dynamics
Barton, John T.; Hsiung, Christopher C.
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
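As a rough consistency check (added here; Amdahl's law is not cited in the abstract), a wall-clock speedup above three on four processors requires the parallelizable fraction of the run to exceed 8/9:

```python
# Amdahl's law: speedup S on n processors when a fraction f of the
# work is parallelizable. Solving S > 3 with n = 4 gives f >= 8/9.
def amdahl_speedup(f: float, n: int) -> float:
    return 1.0 / ((1.0 - f) + f / n)

for f in (0.80, 0.89, 0.95, 0.99):
    print(f"f = {f:.2f} -> speedup on 4 CPUs = {amdahl_speedup(f, 4):.2f}")
```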
From concatenated codes to graph codes
Justesen, Jørn; Høholdt, Tom
We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...
Children's Fitness. Managing a Running Program.
Hinkle, J. Scott; Tuckman, Bruce W.
A running program to increase the cardiovascular fitness levels of fourth-, fifth-, and sixth-grade children is described. Discussed are the running environment, implementation of a running program, feedback, and reinforcement. (MT)
Running Improves Pattern Separation during Novel Object Recognition.
Bolz, Leoni; Heigele, Stefanie; Bischofberger, Josef
Running increases adult neurogenesis and improves pattern separation in various memory tasks including context fear conditioning or touch-screen based spatial learning. However, it is unknown whether pattern separation is improved in spontaneous behavior not emotionally biased by positive or negative reinforcement. Here we investigated the effect of voluntary running on pattern separation during novel object recognition in mice using relatively similar or substantially different objects. We show that running increases hippocampal neurogenesis but does not affect object recognition memory with a 1.5 h delay after the sample phase. By contrast, at a 24 h delay, running significantly improves recognition memory for similar objects, whereas highly different objects can be distinguished by both running and sedentary mice. These data show that physical exercise improves pattern separation, independent of negative or positive reinforcement. In sedentary mice there is a pronounced temporal gradient for remembering object details. In running mice, however, increased neurogenesis improves hippocampal coding and temporally preserves distinction of novel objects from familiar ones.
Syndrome-source-coding and its universal generalization. [error correcting codes for data compression
Ancheta, T. C., Jr.
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
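To make the mechanism concrete, here is a small illustration (added here; the abstract does not single out a specific code) using the (7,4) Hamming code: the compressed data is just the syndrome of the sparse source block.

```python
# Sketch of syndrome-source-coding with the (7,4) Hamming code:
# a sparse binary source block x is treated as an error pattern and
# compressed to its 3-bit syndrome s = H x (mod 2).
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],    # parity-check matrix of the
              [0, 1, 1, 0, 0, 1, 1],    # (7,4) Hamming code; column j
              [0, 0, 0, 1, 1, 1, 1]])   # spells j in binary (row 0 = LSB)

x = np.array([0, 0, 0, 0, 1, 0, 0])     # sparse source block (weight 1)
s = H @ x % 2                            # compressed data: 3 bits for 7
print(s)                                 # -> [1 0 1]

# Decompression is syndrome decoding: for this code, the syndrome read
# as a binary number points at the position of the single 1 in x.
pos = int("".join(map(str, s[::-1])), 2)
x_hat = np.zeros(7, dtype=int); x_hat[pos - 1] = 1
assert (x_hat == x).all()
```

Any source block the syndrome decoder would map back to itself (here: weight at most 1) is recovered exactly, which is why the scheme works best on sparse, low-entropy sources.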
Barefoot running survey: Evidence from the field
David Hryvniak; Jay Dicharry; Robert Wilder
Background: Running is becoming an increasingly popular activity among Americans with over 50 million participants. Running shoe research and technology has continued to advance with no decrease in overall running injury rates. A growing group of runners are making the choice to try the minimal or barefoot running styles of the pre-modern running shoe era. There is some evidence of decreased forces and torques on the lower extremities with barefoot running, but no clear data regarding how thi...
Cloud Computing for Complex Performance Codes.
Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Two-Level Semantics and Code Generation
Nielson, Flemming; Nielson, Hanne Riis
A two-level denotational metalanguage that is suitable for defining the semantics of Pascal-like languages is presented. The two levels allow for an explicit distinction between computations taking place at compile-time and computations taking place at run-time. While this distinction is perhaps not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack-machine, the authors show how to generate code for the run-time computations and still perform the compile...
PORPST: A statistical postprocessor for the PORMC computer code
Eslinger, P.W.; Didier, B.T.
This report describes the theory underlying the PORPST code and gives details for using the code. The PORPST code is designed to do statistical postprocessing on files written by the PORMC computer code. The data written by PORMC are summarized in terms of means, variances, standard deviations, or statistical distributions. In addition, the PORPST code provides for plotting of the results, either internal to the code or through use of the CONTOUR3 postprocessor. Section 2.0 discusses the mathematical basis of the code, and Section 3.0 discusses the code structure. Section 4.0 describes the free-format point command language. Section 5.0 describes in detail the commands to run the program. Section 6.0 provides an example program run, and Section 7.0 provides the references. 11 refs., 1 fig., 17 tabs
Repetition code of 15 qubits
Wootton, James R.; Loss, Daniel
The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of up to 15 qubits on the 16-qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
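A toy version of the decoding step, with a plain majority vote standing in for the paper's experimentally calibrated lookup table, already shows the expected decay of the logical error rate with distance:

```python
# Toy decoder for a distance-d bit-flip repetition code: the logical
# value is recovered by majority vote over the d bits. The paper's
# lookup-table decoder plays the same role but is built from measured
# syndrome statistics rather than a vote.
from collections import Counter
import random

def encode(bit: int, d: int) -> list[int]:
    return [bit] * d

def noisy(codeword: list[int], p: float) -> list[int]:
    return [b ^ (random.random() < p) for b in codeword]   # flip each bit w.p. p

def decode(received: list[int]) -> int:
    return Counter(received).most_common(1)[0][0]

random.seed(0)
for d in (3, 7, 15):
    errors = sum(decode(noisy(encode(0, d), 0.1)) for _ in range(10000))
    print(f"d = {d:2d}: logical error rate ~ {errors / 10000:.4f}")
```

For a physical flip probability well below 1/2, the printed rate falls rapidly as d grows, the classical analogue of the exponential decay reported in the experiment.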
The RETRAN-03 computer code
Paulsen, M.P.; McFadden, J.H.; Peterson, C.E.; McClure, J.A.; Gose, G.C.; Jensen, P.J.
The RETRAN-03 code development effort is designed to overcome the major theoretical and practical limitations associated with the RETRAN-02 computer code. The major objectives of the development program are to extend the range of analyses that can be performed with RETRAN, to make the code more dependable and faster running, and to have a more transportable code. The first two objectives are accomplished by developing new models and adding other models to the RETRAN-02 base code. The major model additions for RETRAN-03 are as follows: implicit solution methods for the steady-state and transient forms of the field equations; additional options for the velocity difference equation; a new steady-state initialization option for computing low-power steam generator initial conditions; models for nonequilibrium thermodynamic conditions; and several special-purpose models. The source code and the environmental library for RETRAN-03 are written in standard FORTRAN 77, which allows the last objective to be fulfilled. Some models in RETRAN-02 have been deleted in RETRAN-03. In this paper the changes between RETRAN-02 and RETRAN-03 are reviewed
Red light running camera assessment.
In the 2004-2007 period, the Mission Street SE and 25th Street SE intersection in Salem, Oregon showed relatively few crashes attributable to red light running (RLR) but, since a high number of RLR violations were observed, the intersection was ident...
Teaching Bank Runs through Films
Flynn, David T.
The author advocates the use of films to supplement textbook treatments of bank runs and panics in money and banking or general banking classes. Modern students, particularly those in developed countries, tend to be unfamiliar with potential fragilities of financial systems such as a lack of deposit insurance or other safety net mechanisms. Films…
Running and Breathing in Mammals
Bramble, Dennis M.; Carrier, David R.
Mechanical constraints appear to require that locomotion and breathing be synchronized in running mammals. Phase locking of limb and respiratory frequency has now been recorded during treadmill running in jackrabbits and during locomotion on solid ground in dogs, horses, and humans. Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running. Flying birds have independently achieved phase-locked locomotor and respiratory cycles. This hints that strict locomotor-respiratory coupling may be a vital factor in the sustained aerobic exercise of endothermic vertebrates, especially those in which the stresses of locomotion tend to deform the thoracic complex.
Does Addiction Run in Families?
... Makes Someone More Likely to Get Addicted to Drugs? Does Addiction Run in Families? Why Is It So Hard ... news is that many children whose parents had drug problems don't become addicted when they grow up. The chances of addiction are higher, but it doesn't have to ...
Prediction of ROSA-III experiment Run 702
Koizumi, Yasuo; Soda, Kunihisa; Kikuchi, Osamu.
The purpose of the ROSA-III experiment with a scaled BWR test facility is to examine primary coolant thermal-hydraulic behavior and performance during a postulated loss-of-coolant accident of a BWR. The results provide information for verification and improvement of reactor safety analysis codes. Run 702 assumes a recirculation line double-ended break at the pump suction with average core power and no ECCS. A prediction of the Run 702 experiment was made with the computer code RELAP-4J. The coolant behavior is determined by the mixture level in the downcomer and by the flow rates and flow directions at the jet pump drive flow nozzle, suction, and discharge; these quantities therefore need to be measured so that predicted results can be compared with experimental ones. The liquid level formation model also needs improvement. (author)
Multiple running speed signals in medial entorhinal cortex
Hinman, James R.; Brandon, Mark P.; Climer, Jason R.; Chapman, G. William; Hasselmo, Michael E.
Grid cells in medial entorhinal cortex (MEC) can be modeled using oscillatory interference or attractor dynamic mechanisms that perform path integration, a computation requiring information about running direction and speed. The two classes of computational models often use either an oscillatory frequency or a firing rate that increases as a function of running speed. Yet it is currently not known whether these are two manifestations of the same speed signal or dissociable signals with potentially different anatomical substrates. We examined coding of running speed in MEC and identified these two speed signals to be independent of each other within individual neurons. The medial septum (MS) is strongly linked to locomotor behavior and removal of MS input resulted in strengthening of the firing rate speed signal, while decreasing the strength of the oscillatory speed signal. Thus two speed signals are present in MEC that are differentially affected by disrupted MS input. PMID:27427460
1995 and 1996 Upper Three Runs Dye Study Data Analyses
Chen, K.F.
This report presents an analysis of dye tracer studies conducted on Upper Three Runs. The revised STREAM code was used to analyze these studies and derive a stream velocity and a dispersion coefficient for use in aqueous transport models. These models will be used to facilitate the establishment of aqueous effluent limits and provide contaminant transport information to emergency management in the event of a release
Criticality codes migration to workstations at the Hanford site
Miller, E.M.
Westinghouse Hanford Company, Hanford Site Operations contractor, Richland, Washington, currently runs criticality codes on the Cray X-MP EA/232 computer but has recommended that US Department of Energy DOE-Richland replace the Cray with more economical workstations
Preventing Running Injuries through Barefoot Activity
Hart, Priscilla M.; Smith, Darla R.
Running has become a very popular lifetime physical activity even though there are numerous reports of running injuries. Although common theories have pointed to impact forces and overpronation as the main contributors to chronic running injuries, the increased use of cushioning and orthotics has done little to decrease running injuries. A new…
Running: Improving Form to Reduce Injuries.
Running is often perceived as a good option for "getting into shape," with little thought given to the form, or mechanics, of running. However, as many as 79% of all runners will sustain a running-related injury during any given year. If you are a runner-casual or serious-you should be aware that poor running mechanics may contribute to these injuries. A study published in the August 2015 issue of JOSPT reviewed the existing research to determine whether running mechanics could be improved, which could be important in treating running-related injuries and helping injured runners return to pain-free running.
Some neutronics and thermal-hydraulics codes for reactor analysis using personal computers
Woodruff, W.L.
Some neutronics and thermal-hydraulics codes formerly available only for main frame computers may now be run on personal computers. Brief descriptions of the codes are provided. Running times for some of the codes are compared for an assortment of personal and main frame computers. With some limitations in detail, personal computer versions of the codes can be used to solve many problems of interest in reactor analyses at very modest costs. 11 refs., 4 tabs
Run-off from roofs
Roed, J.
In order to find the run-off from roof material, a roof has been constructed with two different slopes (30° and 45°). Beryllium-7 and caesium-137 have been used as tracers. For new roof material, the pollution removed by runoff processes has been shown to be very different for various roof materials. The pollution is much more easily removed from silicon-treated material than from porous red-tile roof material. Caesium is removed more easily than beryllium. The content of caesium in old roof materials is greater in red-tile than in other, less porous materials. However, the measured removal from new material does not correspond to the amount accumulated in the old. This could be explained by weathering and by saturation effects, the latter probably being the more important. The measurements on old material indicate a removal of 44-86% of the caesium pollution by run-off, whereas the measurements on new material showed a removal of only 31-50%. It has been demonstrated that the pollution concentration in the run-off water can be very different from that in rainwater. The work was part of the EEC Radiation Protection Programme and done under a subcontract with Association Euratom-C.E.A. No. SC-014-BIO-F-423-DK(SD) under contract No. BIO-F-423-81-F. (author)
Better in the long run
CERN Bulletin
Last week, the Chamonix workshop once again proved its worth as a place where all the stakeholders in the LHC can come together, take difficult decisions and reach a consensus on important issues for the future of particle physics. The most important decision we reached last week is to run the LHC for 18 to 24 months at a collision energy of 7 TeV (3.5 TeV per beam). After that, we'll go into a long shutdown in which we'll do all the necessary work to allow us to reach the LHC's design collision energy of 14 TeV for the next run. This means that when beams go back into the LHC later this month, we'll be entering the longest phase of accelerator operation in CERN's history, scheduled to take us into summer or autumn 2011. What led us to this conclusion? Firstly, the LHC is unlike any previous CERN machine. Because it is a cryogenic facility, each run is accompanied by lengthy cool-down and warm-up phases. For that reason, CERN's traditional...
LHC Report: Positive ion run!
Mike Lamont for the LHC Team
The current LHC ion run has been progressing very well. The first fill with 358 bunches per beam - the maximum number for the year - was on Tuesday, 15 November and was followed by an extended period of steady running. The quality of the beam delivered by the heavy-ion injector chain has been excellent, and this is reflected in both the peak and the integrated luminosity. The peak luminosity in ATLAS reached 5×10²⁶ cm⁻²s⁻¹, which is a factor of ~16 more than last year's peak of 3×10²⁵ cm⁻²s⁻¹. The integrated luminosity in each of ALICE, ATLAS and CMS is now around 100 inverse microbarn, already comfortably over the nominal target for the run. The polarity of the ALICE spectrometer and solenoid magnets was reversed on Monday, 28 November with the aim of delivering another sizeable amount of luminosity in this configuration. On the whole, the LHC has been behaving very well recently, ensuring good machine availability. On Monday evening, however, a faulty level sensor in the cooling towe...
GASIFICATION TEST RUN TC06
Southern Company Services, Inc.
This report discusses test campaign TC06 of the Kellogg Brown & Root, Inc. (KBR) Transport Reactor train with a Siemens Westinghouse Power Corporation (Siemens Westinghouse) particle filter system at the Power Systems Development Facility (PSDF) located in Wilsonville, Alabama. The Transport Reactor is an advanced circulating fluidized-bed reactor designed to operate as either a combustor or a gasifier using a particulate control device (PCD). The Transport Reactor was operated as a pressurized gasifier during TC06. Test run TC06 was started on July 4, 2001, and completed on September 24, 2001, with an interruption in service between July 25, 2001, and August 19, 2001, due to a filter element failure in the PCD caused by abnormal operating conditions while tuning the main air compressor. The reactor temperature was varied between 1,725 and 1,825°F at pressures from 190 to 230 psig. In TC06, 1,214 hours of solid circulation and 1,025 hours of coal feed were attained, with 797 hours of coal feed after the filter element failure. Both reactor and PCD operations were stable during the test run with a stable baseline pressure drop. Due to its length and stability, the TC06 test run provided valuable data necessary to analyze long-term reactor operations and to identify necessary modifications to improve equipment and process performance, as well as advancing the goal of many thousands of hours of filter element exposure.
Running jobs in the vacuum
McNab, A; Stagni, F; Garcia, M Ubeda
We present a model for the operation of computing nodes at a site using Virtual Machines (VMs), in which VMs are created and contextualized for experiments by the site itself. For the experiment, these VMs appear to be produced spontaneously 'in the vacuum' rather than having to ask the site to create each one. This model takes advantage of the existing pilot job frameworks adopted by many experiments. In the Vacuum model, the contextualization process starts a job agent within the VM and real jobs are fetched from the central task queue as normal. An implementation of the Vacuum scheme, Vac, is presented in which a VM factory runs on each physical worker node to create and contextualize its set of VMs. With this system, each node's VM factory can decide which experiments' VMs to run, based on site-wide target shares and on a peer-to-peer protocol in which the site's VM factories query each other to discover which VM types they are running. A property of this system is that there is no gatekeeper service, head node, or batch system accepting and then directing jobs to particular worker nodes, avoiding several central points of failure. Finally, we describe tests of the Vac system using jobs from the central LHCb task queue, using the same contextualization procedure for VMs developed by LHCb for Clouds.
Automatic coding method of the ACR Code
Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi
The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into other data processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology
Error-correction coding
Hinds, Erold W. (Principal Investigator)
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Code Development and Analysis Program: developmental checkout of the BEACON/MOD2A code
Ramsthaler, J.A.; Lime, J.F.; Sahota, M.S.
A best-estimate transient containment code, BEACON, is being developed by EG and G Idaho, Inc. for the Nuclear Regulatory Commission's reactor safety research program. This is an advanced, two-dimensional fluid flow code designed to predict temperatures and pressures in a dry PWR containment during a hypothetical loss-of-coolant accident. The most recent version of the code, MOD2A, is presently in the final stages of production prior to being released to the National Energy Software Center. As part of the final code checkout, seven sample problems were selected to be run with BEACON/MOD2A
Run Clever - No difference in risk of injury when comparing progression in running volume and running intensity in recreational runners
Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik
Background/aim: The Run Clever trial investigated whether there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed... participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning a total of 80 runners sustained an RRI (Sch-I n=36/Sch-V n=44). The cumulative incidence proportion (CIP) in Sch-V (reference...
Dynamic Shannon Coding
Gagie, Travis
We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
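For reference, plain static Shannon coding, the starting point that the paper makes dynamic, assigns each symbol a prefix-free codeword of length ⌈log₂(1/p)⌉ carved from the cumulative distribution. A minimal sketch (the dynamic algorithm itself is not reproduced here):

```python
# Static Shannon coding sketch: symbol i with probability p_i gets a
# codeword of length ceil(log2(1/p_i)), taken from the binary expansion
# of the cumulative probability F_i (symbols sorted by decreasing p).
from math import ceil, log2

def shannon_code(probs: dict[str, float]) -> dict[str, str]:
    code, F = {}, 0.0
    for sym, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        length = ceil(log2(1 / p))
        # first `length` binary digits of the cumulative probability F
        bits = "".join(str(int(F * 2 ** (i + 1)) % 2) for i in range(length))
        code[sym] = bits
        F += p
    return code

print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

The dynamic version has to maintain such a code as the symbol counts evolve, which is exactly where the paper's analysis and its comparison with dynamic Huffman coding come in.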
Fundamentals of convolutional coding
Johannesson, Rolf
Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
Codes Over Hyperfields
Atamewoue Surdive
In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.
Comparison of sodium aerosol codes
Dunbar, I.H.; Fermandjian, J.; Bunz, H.; L'homme, A.; Lhiaubet, G.; Himeno, Y.; Kirby, C.R.; Mitsutsuka, N.
Although hypothetical fast reactor accidents leading to severe core damage are very low probability events, their consequences are to be assessed. During such accidents, one can envisage the ejection of sodium, mixed with fuel and fission products, from the primary circuit into the secondary containment. Aerosols can be formed either by mechanical dispersion of the molten material or as a result of combustion of the sodium in the mixture. Therefore considerable effort has been devoted to study the different sodium aerosol phenomena. To ensure that the problems of describing the physical behaviour of sodium aerosols were adequately understood, a comparison of the codes being developed to describe their behaviour was undertaken. The comparison consists of two parts. The first is a comparative study of the computer codes used to predict aerosol behaviour during a hypothetical accident. It is a critical review of documentation available. The second part is an exercise in which code users have run their own codes with a pre-arranged input. For the critical comparative review of the computer models, documentation has been made available on the following codes: AEROSIM (UK), MAEROS (USA), HAARM-3 (USA), AEROSOLS/A2 (France), AEROSOLS/B1 (France), and PARDISEKO-IIIb (FRG)
Improved Algorithms Speed It Up for Codes
Hazi, A
Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. "Sure, you get great speed-ups by improving hardware," says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. "But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times." Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics
Symbol synchronization in convolutionally coded systems
Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.
Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.
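A small illustration of the setup (the paper's bound itself is analytic and not reproduced here): inverting every other symbol turns the worst-case constant stream into one rich in transitions.

```python
# Alternate symbol inversion: flip every other channel symbol so that
# long constant runs from the encoder still produce transitions on the
# line, which the receiver's symbol synchronizer needs.
def invert_alternate(symbols: list[int]) -> list[int]:
    return [s ^ (i % 2) for i, s in enumerate(symbols)]

def longest_transition_free_run(symbols: list[int]) -> int:
    best = run = 1
    for a, b in zip(symbols, symbols[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

raw = [0] * 12                        # worst case: an all-zero stream
line = invert_alternate(raw)          # -> 0,1,0,1,... on the channel
print(longest_transition_free_run(raw), longest_transition_free_run(line))
# -> 12 1
```

The interesting case, which the paper characterizes, is when the convolutional encoder output itself alternates and the inversion cancels it; the bound limits how long such transition-free runs can be.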
LHCb silicon detectors: the Run 1 to Run 2 transition and first experience of Run 2
Rinnert, Kurt
LHCb is a dedicated experiment to study New Physics in the decays of heavy hadrons at the Large Hadron Collider (LHC) at CERN. The detector includes a high precision tracking system consisting of a silicon-strip vertex detector (VELO) surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet (TT), and three stations of silicon-strip detectors (IT) and straw drift tubes placed downstream (OT). The operational transition of the silicon detectors VELO, TT and IT from LHC Run 1 to Run 2 and first Run 2 experiences will be presented. During the long shutdown of the LHC the silicon detectors have been maintained in a safe state and operated regularly to validate changes in the control infrastructure, new operational procedures, updates to the alarm systems and monitoring software. In addition, there have been some infrastructure related challenges due to maintenance performed in the vicinity of the silicon detectors that will be discussed. The LHCb silicon dete...
Barefoot running: does it prevent injuries?
Murphy, Kelly; Curry, Emily J; Matzkin, Elizabeth G
Endurance running has evolved over the course of millions of years and it is now one of the most popular sports today. However, the risk of stress injury in distance runners is high because of the repetitive ground impact forces exerted. These injuries are not only detrimental to the runner, but also place a burden on the medical community. Preventative measures are essential to decrease the risk of injury within the sport. Common running injuries include patellofemoral pain syndrome, tibial stress fractures, plantar fasciitis, and Achilles tendonitis. Barefoot running, as opposed to shod running (with shoes), has recently received significant attention in both the media and the market place for the potential to promote the healing process, increase performance, and decrease injury rates. However, there is controversy over the use of barefoot running to decrease the overall risk of injury secondary to individual differences in lower extremity alignment, gait patterns, and running biomechanics. While barefoot running may benefit certain types of individuals, differences in running stance and individual biomechanics may actually increase injury risk when transitioning to barefoot running. The purpose of this article is to review the currently available clinical evidence on barefoot running and its effectiveness for preventing injury in the runner. Based on a review of current literature, barefoot running is not a substantiated preventative running measure to reduce injury rates in runners. However, barefoot running utility should be assessed on an athlete-specific basis to determine whether barefoot running will be beneficial.
HTML 5 up and running
Pilgrim, Mark
If you don't know about the new features available in HTML5, now's the time to find out. This book provides practical information about how and why the latest version of this markup language will significantly change the way you develop for the Web. HTML5 is still evolving, yet browsers such as Safari, Mozilla, Opera, and Chrome already support many of its features -- and mobile browsers are even farther ahead. HTML5: Up & Running carefully guides you through the important changes in this version with lots of hands-on examples, including markup, graphics, and screenshots. You'll learn how to
Inequality in the long run.
Piketty, Thomas; Saez, Emmanuel
This Review presents basic facts regarding the long-run evolution of income and wealth inequality in Europe and the United States. Income and wealth inequality was very high a century ago, particularly in Europe, but dropped dramatically in the first half of the 20th century. Income inequality has surged back in the United States since the 1970s so that the United States is much more unequal than Europe today. We discuss possible interpretations and lessons for the future. Copyright © 2014, American Association for the Advancement of Science.
Electroweak processes at Run 2
Spalla, Margherita; Sestini, Lorenzo
We present a summary of the studies of the electroweak sector of the Standard Model at the LHC after the first year of data taking of Run 2, focusing on possible results to be achieved with the analysis of the full 2015 and 2016 data. We discuss the measurements of W and Z boson production, with particular attention to the precision determination of basic Standard Model parameters, and the study of multi-boson interactions through the analysis of boson-boson final states. This work is the result of the collaboration between scientists from the ATLAS, CMS and LHCb experiments.
Running gratings in photoconductive materials
Kukhtarev, N. V.; Kukhtareva, T.; Lyuksyutov, S. F.
Starting from the three-dimensional version of a standard photorefractive model (STPM), we obtain a reduced, compact set of equations for the electric field based on the assumption of quasi-steady-state fast recombination. The equations are suitable for evaluating the current induced by running gratings in the small-contrast approximation and are also applicable to the description of space-charge wave domains. We discuss spatial domain and subharmonic beam formation in bismuth silicon oxide (BSO) crystals in the framework of the small-contrast approximation of the STPM. The experimental results...
Google Wave Up and Running
Ferrate, Andres
Catch Google Wave, the revolutionary Internet protocol and web service that lets you communicate and collaborate in realtime. With this book, you'll understand how Google Wave integrates email, instant messaging (IM), wiki, and social networking functionality into a powerful and extensible platform. You'll also learn how to use its features, customize its functions, and build sophisticated extensions with Google Wave's open APIs and network protocol. Written for everyone -- from non-techies to ninja coders -- Google Wave: Up and Running provides a complete tour of this complex platform. You'
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
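A toy illustration of the vector idea, with GF(2) arithmetic and L = 4 (illustrative only, not the authors' algorithm):

```python
# Vector network coding sketch over GF(2): an intermediate node combines
# two incoming length-L packet vectors by multiplying each with an LxL
# coding matrix and adding the results (all arithmetic mod 2).
import numpy as np

rng = np.random.default_rng(1)
L = 4
x1 = rng.integers(0, 2, L)                       # incoming packet 1
x2 = rng.integers(0, 2, L)                       # incoming packet 2
A = rng.integers(0, 2, (L, L))                   # coding matrix for x1
B = rng.integers(0, 2, (L, L))                   # coding matrix for x2

y = (A @ x1 + B @ x2) % 2                        # outgoing packet
print(y)

# A sink that collects enough such packets inverts the stacked transfer
# matrix (when it is invertible over GF(2)) to recover x1 and x2;
# scalar network coding is the special case L = 1.
```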
R-matrix analysis code (RAC)
Chen Zhenpeng; Qi Huiquan
A comprehensive R-matrix analysis code has been developed. It is based on the multichannel and multilevel R-matrix theory and runs on a VAX computer with FORTRAN-77. With this code many kinds of experimental data for one nuclear system can be fitted simultaneously. Comparisons between the code RAC and the code EDA of LANL are made. The data show that both codes produce the same calculation results when one set of R-matrix parameters is used. The differential cross section of $^{10}$B(n,$\alpha$)$^{7}$Li for $E_n = 0.4$ MeV and the polarization of $^{16}$O(n,n)$^{16}$O for $E_n = 2.56$ MeV are presented
The PS locomotive runs again
Over forty years ago, the PS train entered service to steer the magnets of the accelerator into place... ... a service that was resumed last Tuesday. Left to right: Raymond Brown (CERN), Claude Tholomier (D.B.S.), Marcel Genolin (CERN), Gérard Saumade (D.B.S.), Ingo Ruehl (CERN), Olivier Carlier (D.B.S.), Patrick Poisot (D.B.S.), Christian Recour (D.B.S.). It is more than ten years since people at CERN heard the rumbling of the old PS train's steel wheels. Last Tuesday, the locomotive came back into service to be tested. It is nothing like the monstrous steel engines still running on conventional railways - just a small electric battery-driven vehicle employed on installing the magnets for the PS accelerator more than 40 years ago. To do so, it used the tracks that run round the accelerator. In fact, it is the grandfather of the LEP monorail. After the PS was commissioned in 1959, the little train was used more and more rarely. This is because magnets never break down, or hardly ever! In fact, the loc...
(Nearly) portable PIC code for parallel computers
Decyk, V.K.
As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one-dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line-by-line translation into CM Fortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential version, and the data management modules can be used as "black boxes."
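The particle hand-off at the heart of such a spatial domain decomposition can be sketched compactly. The version below simulates the domains inside one process (purely illustrative, not the GCPIC code itself; in the real code each domain is a separate processor and the hand-off is a message):

```python
# Sketch of the particle-exchange step in a GCPIC-style spatial domain
# decomposition: each "domain" owns a slab [lo, hi) of a periodic 1D box
# and hands off particles that drift out of its slab after the push.
import numpy as np

NDOM, LBOX = 4, 1.0
edges = np.linspace(0.0, LBOX, NDOM + 1)        # slab boundaries
rng = np.random.default_rng(0)

# each domain holds the positions of the particles it currently owns
domains = [rng.uniform(edges[d], edges[d + 1], 100).tolist() for d in range(NDOM)]

def push_and_exchange(domains, dt=0.01, v=0.5):
    moved = [[] for _ in range(NDOM)]
    for parts in domains:
        for x in parts:
            x = (x + v * dt) % LBOX             # periodic particle push
            new_d = int(np.searchsorted(edges, x, side="right")) - 1
            moved[new_d].append(x)              # keep, or hand to new owner
    return moved

domains = push_and_exchange(domains)
print([len(p) for p in domains])                # ownership after exchange
```

Keeping this exchange logic isolated from the physics is exactly the "black box" separation the abstract describes.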
Homological stabilizer codes
Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com
In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
Effect of Minimalist Footwear on Running Efficiency
Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.
Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304
pTSC: Data file editing for the Tokamak Simulation Code
Meiss, J.D.
The code pTSC is an editor for the data files needed to run the Princeton Tokamak Simulation Code (TSC). pTSC utilizes the Macintosh interface to create a graphical environment for entering the data. As most of the data to run TSC consists of conductor positions, the graphical interface is especially appropriate
Running Parallel Discrete Event Simulators on Sierra
Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.
Diagnostic Coding for Epilepsy.
Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Coding of Neuroinfectious Diseases.
Barkley, Gregory L
ATLAS inner detector: the Run 1 to Run 2 transition, and first experience from Run 2
Dobos, Daniel; The ATLAS collaboration
The ATLAS experiment is equipped with a tracking system, the Inner Detector, built using different technologies, silicon planar sensors (pixel and micro-strip) and gaseous drift tubes, all embedded in a 2 T solenoidal magnetic field. For the LHC Run II, the system has been upgraded; taking advantage of the long shutdown, the Pixel Detector was extracted from the experiment and brought to the surface, to equip it with new service quarter panels, to repair modules and to ease installation of the Insertable B-Layer (IBL), a fourth layer of pixel detectors, installed in May 2014 between the existing Pixel Detector and a new smaller-radius beam-pipe at a radius of 3.3 cm from the beam axis. To cope with the high radiation and pixel occupancy due to the proximity to the interaction point and the increase of luminosity that the LHC will face in Run 2, a new read-out chip in CMOS 130 nm and two different silicon sensor pixel technologies (planar and 3D) have been developed. SCT and TRT systems consolidation was also carri...
Adding run history to CLIPS
Tuttle, Sharon M.; Eick, Christoph F.
To debug a C Language Integrated Production System (CLIPS) program, certain 'historical' information about a run is needed. It would be convenient for system builders to have the capability to request such information. We will discuss how historical Rete networks can be used for answering questions that help a system builder detect the cause of an error in a CLIPS program. Moreover, the cost of maintaining a historical Rete network is compared with that for a classical Rete network. We will demonstrate that the cost for assertions is only slightly higher for a historical Rete network. The cost for handling retraction could be significantly higher; however, we will show that by using special data structures that rely on hashing, it is also possible to implement retractions efficiently.
Injecting Artificial Memory Errors Into a Running Computer Program
Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.
Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program s source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
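The core operation, flipping one randomly chosen bit of a value at a given fault probability, is easy to sketch. This toy stand-in only mimics the bit-level effect; it is not BITFLIPS' Valgrind-based injection into a live process:

```python
# Toy SEU injector: with probability p_fault, flip one random bit of a
# 64-bit float's IEEE-754 representation.
import random
import struct

def inject_seu(value: float, p_fault: float) -> float:
    if random.random() >= p_fault:
        return value                            # no upset this time
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    bits ^= 1 << random.randrange(64)           # the single-event upset
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits))
    return flipped

random.seed(7)
x = 3.141592653589793
for _ in range(5):
    print(inject_seu(x, p_fault=0.5))           # sometimes wildly different
```

Depending on which bit is hit (sign, exponent, or mantissa), the corrupted value ranges from almost unchanged to NaN, which is precisely the spread of outcomes a fault-tolerance evaluation needs to exercise.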
Robotic Bipedal Running : Increasing disturbance rejection
Karssen, J.G.D.
The goal of the research presented in this thesis is to increase the understanding of the human running gait. The understanding of the human running gait is essential for the development of devices, such as prostheses and orthoses, that enable disabled people to run or that enable able people to
David Hryvniak
Conclusion: Prior studies have found that barefoot running often changes biomechanics compared to shod running with a hypothesized relationship of decreased injuries. This paper reports the result of a survey of 509 runners. The results suggest that a large percentage of this sample of runners experienced benefits or no serious harm from transitioning to barefoot or minimal shoe running.
Age and sex influences on running mechanics and coordination variability.
Boyer, Katherine A; Freedman Silvernail, Julia; Hamill, Joseph
The purpose of this study was to examine the impact of age on running mechanics separately for male and female runners and to quantify sex differences in running mechanics and coordination variability for older runners. Kinematics and kinetics were captured for 20 younger (10 male) and 20 older (10 male) adults running overground at 3.5 m·s⁻¹. A modified vector coding technique was used to calculate segment coordination variability. Lower extremity joint angles, moments and segment coordination variability were compared between age and sex groups. Significant sex-age interaction effects were found for heel-strike hip flexion and ankle in/eversion angles and peak ankle dorsiflexion angle. In older adults, mid-stance knee flexion angle, ankle inversion and abduction moments and hip abduction and external rotation moments differed by sex. Older compared with younger females had reduced coordination variability in the thigh-shank transverse plane couple but greater coordination variability for the shank rotation-foot eversion couple in early stance. These results suggest there may be a non-equivalent aging process in the movement mechanics for males and females. The age and sex differences in running mechanics and coordination variability highlight the need for sex-based analyses for future studies examining injury risk with age.
Contribution to numerical and mechanical modelling of pellet-cladding interaction in nuclear reactor fuel rod
Retel, V.
Pressurised water reactor (PWR) fuel rods are the site of nuclear fission, which produces unstable, radioactive elements. The mechanical loading on the cladding is increasingly severe, partly because of fuel pellet movement, so the mechanical behaviour of the cladding needs to be simulated with models that give realistic stress and strain fields for all running conditions. The mechanical treatment of the fuel pellet also needs to be improved. This study is part of a broader effort to improve the treatment of pellet-cladding interaction (PCI) in EDF's 1D finite element code CYRANO3: non-axisymmetrical, multidirectional effects have to be accounted for in a context of unidirectional, axisymmetrical finite elements. The aim of this work is twofold. First, a model simulating the effect of stress concentration on the cladding, due to the opening of radial cracks in the fuel, has been added to the code. Second, the fragmented state of the fuel material has been taken into account in the thermomechanical calculation, through a model that simulates the strain and stress relaxation in the pellet caused by fragmentation. This model has been implemented in the code for two types of fuel behaviour: elastic and viscoplastic. (author)
Particle In Cell Codes on Highly Parallel Architectures
Tableman, Adam
We describe strategies and examples of Particle-In-Cell Codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeletons codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.
Entropy Coding in HEVC
Sze, Vivienne; Marpe, Detlev
Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...
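A toy flavor of the context-adaptive part (a per-context probability estimate updated after each bin; CABAC's actual quantized state machine and binary arithmetic coding engine are more involved and are not reproduced here):

```python
# Toy context model in the spirit of CABAC: each context keeps a running
# estimate of P(bin = 1), nudged after every coded bin. Costs are the
# ideal arithmetic-coded lengths -log2(p), standing in for the engine.
from math import log2

class ContextModel:
    def __init__(self, p1: float = 0.5, rate: float = 0.05):
        self.p1, self.rate = p1, rate

    def cost_bits(self, b: int) -> float:
        # ideal code length of bin b under the current model
        return -log2(self.p1 if b else 1.0 - self.p1)

    def update(self, b: int) -> None:
        self.p1 += self.rate * (b - self.p1)    # exponential adaptation

ctx = ContextModel()
bins = [1, 1, 1, 0, 1, 1, 1, 1]                 # a skewed bin stream
total = 0.0
for b in bins:
    total += ctx.cost_bits(b)
    ctx.update(b)
print(f"{total:.2f} ideal bits for {len(bins)} bins")  # below 8 as the model adapts
```

The serial dependency is visible even in this sketch: each bin's cost depends on the state left by the previous bin, which is the root of the throughput problem the chapter discusses.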
Generalized concatenated quantum codes
Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei
We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.
Rateless feedback codes
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
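For background, the LT encoding step that the paper redesigns draws a degree d from a distribution and XORs d randomly chosen source symbols. A minimal sketch with the classical ideal soliton distribution (the feedback-adapted distributions of the paper are not reproduced here):

```python
# Minimal LT-code encoder sketch: each output symbol XORs d randomly
# chosen source symbols, with d drawn from the ideal soliton
# distribution. Rateless feedback coding replaces this fixed degree
# distribution with ones adapted at each feedback opportunity.
import random

def ideal_soliton(k: int) -> list[float]:
    # rho[d] = P(degree = d): rho[1] = 1/k, rho[d] = 1/(d(d-1)) for d >= 2
    return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(source: list[int], rng: random.Random):
    k = len(source)
    d = rng.choices(range(k + 1), weights=ideal_soliton(k))[0]
    neighbours = rng.sample(range(k), d)
    value = 0
    for i in neighbours:
        value ^= source[i]
    return sorted(neighbours), value            # (neighbour set, XOR value)

rng = random.Random(42)
source = [rng.randrange(256) for _ in range(10)]
for _ in range(3):
    print(lt_encode_symbol(source, rng))
```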
Advanced video coding systems
Gao, Wen
This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV
Coding for dummies
Abraham, Nikhil
Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill
Mathematical analysis of running performance and world running records.
Péronnet, F; Thibault, G
The objective of this study was to develop an empirical model relating human running performance to some characteristics of metabolic energy-yielding processes, using A, the capacity of anaerobic metabolism (J/kg); MAP, the maximal aerobic power (W/kg); and E, the reduction in peak aerobic power with the natural logarithm of race duration $T$ when $T > T_{\mathrm{MAP}} = 420$ s. Accordingly, the model developed describes the average power output $P_T$ (W/kg) sustained over any $T$ as $$P_T = \frac{S}{T}\left(1 - e^{-T/k_2}\right) + \frac{1}{T}\int_0^T \left[\mathrm{BMR} + B\left(1 - e^{-t/k_1}\right)\right]dt$$ where $S = A$ and $B = \mathrm{MAP} - \mathrm{BMR}$ (basal metabolic rate) when $T < T_{\mathrm{MAP}}$; and $S = A + Af\ln(T/T_{\mathrm{MAP}})$ and $B = (\mathrm{MAP} - \mathrm{BMR}) + E\ln(T/T_{\mathrm{MAP}})$ when $T > T_{\mathrm{MAP}}$; $k_1 = 30$ s and $k_2 = 20$ s are time constants describing the kinetics of aerobic and anaerobic metabolism, respectively, at the beginning of exercise; $f$ is a constant describing the reduction in the amount of energy provided from anaerobic metabolism with increasing $T$; and $t$ is the time from the onset of the race. This model accurately estimates actual power outputs sustained over a wide range of events; e.g., the average absolute error between actual and estimated $T$ for men's 1987 world records from 60 m to the marathon is 0.73%. In addition, satisfactory estimates of the metabolic characteristics of world-class male runners were obtained: A = 1,658 J/kg; MAP = 83.5 ml O$_2$·kg$^{-1}$·min$^{-1}$; 83.5% of MAP sustained over the marathon distance. Application of the model to the analysis of the evolution of A, MAP, and E, and of the progression of men's and women's world records over the years, is presented.
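A direct transcription of the model is straightforward since the integral has a closed form; note that BMR must be supplied by the caller (the abstract does not state its value), and the example arguments below are placeholders, not the paper's fitted parameters.

```python
# Average power P_T (W/kg) from the model as stated in the abstract.
# The integral term evaluates in closed form to
#   BMR + B * (1 - (k1/T) * (1 - exp(-T/k1))).
from math import exp, log

K1, K2, T_MAP = 30.0, 20.0, 420.0                 # seconds, from the abstract

def p_t(T: float, A: float, MAP: float, E: float, f: float, BMR: float) -> float:
    if T <= T_MAP:
        S, B = A, MAP - BMR
    else:
        S = A + A * f * log(T / T_MAP)
        B = (MAP - BMR) + E * log(T / T_MAP)
    anaerobic = (S / T) * (1.0 - exp(-T / K2))
    aerobic = BMR + B * (1.0 - (K1 / T) * (1.0 - exp(-T / K1)))
    return anaerobic + aerobic

# Illustrative call only -- MAP (in W/kg), E, f and BMR below are made-up
# placeholders, not the fitted values reported in the paper.
print(f"{p_t(T=600.0, A=1658.0, MAP=24.5, E=-1.0, f=-0.2, BMR=1.2):.1f} W/kg")
```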
Full core reactor analysis: Running Denovo on Jaguar
Jarrell, J. J.; Godfrey, A. T.; Evans, T. M.; Davidson, G. G. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831 (United States)
Fully-consistent, full-core, 3D, deterministic neutron transport simulations using the orthogonal mesh code Denovo were run on the massively parallel computing architecture Jaguar XT5. Using energy and spatial parallelization schemes, Denovo was able to efficiently scale to more than 160 k processors. Cell-homogenized cross sections were used with step-characteristics, linear-discontinuous finite element, and trilinear-discontinuous finite element spatial methods. It was determined that using the finite element methods gave considerably more accurate eigenvalue solutions for large-aspect ratio meshes than using step-characteristics. (authors)
First LQCD Physics Runs with MILC and P4RHMC
Soltz, R; Gupta, R
An initial series of physics LQCD runs was submitted to the BG/L science bank with the MILC and P4RHMC codes. Both runs were for lattice dimensions of $32^3 \times 8$. The p4 calculation was performed with v2.0 QMP_MPI.X (semi-optimized p4 code using QMP over MPI) and MILC v7.2, also using RHMC, but not specifically optimized for BlueGene. Calculations were performed along lines of constant physics, with the light quark masses 2-3 times their physical values and the strange quark mass set by $m_{ud} = 0.1 m_s$. Job submission was performed using the standard MILC and p4 scripts provided on the ubgl cluster. Initial thermalized lattices for each code were also provided in this way. The only modifications for running on BG/L were to the directory names and the mT parameter which determines job durations (24 hrs on BG/L vs. 4 hrs on ubgl). The MILC scripts were set to resubmit themselves 10 times, and the p4 scripts were submitted serially using the ''psub -d'' job dependency option. The runp4rhmc.tcsh script could not be used to resubmit due to the 30-minute time limit imposed on interactive jobs. Most jobs were submitted to the smallest, 512-node partitions, but both codes could also run on the 1024-node partitions with a gain of only 30-50%. The majority of jobs ran without error. Stalled jobs were often indicative of a communication gap within a partition that LC was able to fix quickly. On some occasions a zero-length lattice file was deleted to allow jobs to restart successfully. Approximately 1000 trajectories were calculated for each beta value (see table). The analysis was performed with the standard analysis scripts for each code, make_summary.pl for MILC and analysis.tcsh for p4rhmc. All lattices, log files, and job submission scripts have been archived to permanent storage for subsequent analysis.
Progression in Running Intensity or Running Volume and the Development of Specific Injuries in Recreational Runners
…-training. Participants were randomized to one of two running schedules: Schedule Intensity (Sch-I) or Schedule Volume (Sch-V). Sch-I progressed the amount of high-intensity running (≥88% VO2max) each week. Sch-V progressed total weekly running volume. A Global Positioning System watch or smartphone collected data on running…
Running Club - Nocturne des Evaux
CERN's runners once again took the top steps of the podium at the inter-company race. This night-time team race, run in teams of 3 to 4 runners, is unique in the region for its original format: a group start every 30 seconds, and each team's first 3 runners must cross the finish line together. A double victory for the Running Club at the Nocturne!!!! 1st place for the women's team and 22nd overall; 1st place for the mixed team and 4th overall, beating the mixed-team record for the event by about 1 minute in the process; 10th place for the men's team. Find all the results at http://www.chp-geneve.ch/web-cms/index.php/nocturne-des-evaux
LHCf completes its first run
LHCf, one of the three smaller experiments at the LHC, has completed its first run. The detectors were removed last week and the analysis of data is continuing. The first results will be ready by the end of the year. One of the two LHCf detectors during the removal operations inside the LHC tunnel. LHCf is made up of two independent detectors located in the tunnel 140 m on either side of the ATLAS collision point. The experiment studies the secondary particles created during the head-on collisions in the LHC because they are similar to those created in a cosmic ray shower produced when a cosmic particle hits the Earth's atmosphere. The focus of the experiment is to compare the various shower models used to estimate the primary energy of ultra-high-energy cosmic rays. The energy of proton-proton collisions at the LHC will be equivalent to a cosmic ray of $10^{17}$ eV hitting the atmosphere, very close to the highest energies observed in the sky. "We have now completed the fir...
Daytime Running Lights. Public Consultation
The Road Safety Authority is considering the policy options available to promote the use of Daytime Running Lights (DRL), including the possibility of mandating the use of DRL on all vehicles. An EC Directive would make DRL mandatory for new vehicles from 2011 onwards and by 2024 it is predicted that due to the natural replacement of the national fleet, almost all vehicles would be equipped with DRL. The RSA is inviting views on introducing DRL measures earlier, whereby all road vehicles would be required to use either dipped head lights during hours of daylight or dedicated DRL from next year onwards. The use of DRL has been found to enhance the visibility of vehicles, thereby increasing road safety by reducing the number and severity of collisions. This paper explores the benefits of DRL and the implications for all road users including pedestrians, cyclists and motorcyclists. In order to ensure a comprehensive consideration of all the issues, the Road Safety Authority is seeking the views and advice of interested parties.
Discussion on LDPC Codes and Uplink Coding
This slide presentation reviews the progress made by the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
ETF system code: composition and applications
Reid, R.L.; Wu, K.F.
A computer code has been developed for application to ETF tokamak system and conceptual design studies. The code determines cost, performance, configuration, and technology requirements as a function of tokamak parameters. The ETF code is structured in a modular fashion in order to allow independent modeling of each major tokamak component. The primary benefit of modularization is that it allows updating of a component module, such as the TF coil module, without disturbing the remainder of the system code as long as the input/output to the modules remains unchanged. The modules may be run independently to perform specific design studies, such as determining the effect of allowable strain on TF coil structural requirements, or the modules may be executed together as a system to determine global effects, such as defining the impact of aspect ratio on the entire tokamak system
RCS modeling with the TSAR FDTD code
Pennock, S.T.; Ray, S.L.
The TSAR electromagnetic modeling system consists of a family of related codes that have been designed to work together to provide users with a practical way to set up, run, and interpret the results from complex 3-D finite-difference time-domain (FDTD) electromagnetic simulations. The software has been in development at the Lawrence Livermore National Laboratory (LLNL) and at other sites since 1987. Active internal use of the codes began in 1988, with limited external distribution and use beginning in 1991. TSAR was originally developed to analyze high-power microwave and EMP coupling problems. However, the general-purpose nature of the tools has enabled us to use the codes to solve a broader class of electromagnetic applications and has motivated the addition of new features. In particular, a family of near-to-far-field transformation routines has been added to the codes, enabling TSAR to be used for radar cross-section and antenna analysis problems.
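For readers curious what such time-domain stepping looks like in practice, here is a one-dimensional toy version of the leap-frog FDTD update on staggered grids. This is a sketch in normalized units (dt = dx, c = 1), not TSAR's actual 3-D implementation.

```python
import numpy as np

nx, nt = 400, 800
ez = np.zeros(nx)        # electric field at integer grid points
hy = np.zeros(nx - 1)    # magnetic field, staggered half a cell

for n in range(nt):
    hy += np.diff(ez)                               # update H from curl E
    ez[1:-1] += np.diff(hy)                         # update E from curl H
    ez[nx // 2] += np.exp(-((n - 40) / 12.0) ** 2)  # soft Gaussian source
```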
Impact Accelerations of Barefoot and Shod Running.
Thompson, M; Seegmiller, J; McGowan, C P
During the ground contact phase of running, the body's mass is rapidly decelerated resulting in forces that propagate through the musculoskeletal system. The repetitive attenuation of these impact forces is thought to contribute to overuse injuries. Modern running shoes are designed to reduce impact forces, with the goal to minimize running related overuse injuries. Additionally, the fore/mid foot strike pattern that is adopted by most individuals when running barefoot may reduce impact force transmission. The aim of the present study was to compare the effects of the barefoot running form (fore/mid foot strike & decreased stride length) and running shoes on running kinetics and impact accelerations. 10 healthy, physically active, heel strike runners ran in 3 conditions: shod, barefoot and barefoot while heel striking, during which 3-dimensional motion analysis, ground reaction force and accelerometer data were collected. Shod running was associated with increased ground reaction force and impact peak magnitudes, but decreased impact accelerations, suggesting that the midsole of running shoes helps to attenuate impact forces. Barefoot running exhibited a similar decrease in impact accelerations, as well as decreased impact peak magnitude, which appears to be due to a decrease in stride length and/or a more plantarflexed position at ground contact. © Georg Thieme Verlag KG Stuttgart · New York.
Locally orderless registration code
This is the code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64-bit Mac, Linux and Windows.
Decoding Codes on Graphs
… Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low-density parity-check (LDPC) codes. The term "low density" arises from the property of the parity-check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
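A minimal sketch of the objects just described, with a toy parity-check matrix and the simplest "bit-flipping" decoder. This is illustrative only; real LDPC codes are far longer and are usually decoded with belief propagation.

```python
import numpy as np

# Toy parity-check matrix H for a length-6 code: each row is one
# parity constraint, and the matrix is sparse ("low density").
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(H, word, max_iters=20):
    """Gallager-style bit flipping: repeatedly flip the bit that sits
    in the largest number of unsatisfied parity checks."""
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H.dot(word) % 2
        if not syndrome.any():       # all parity checks satisfied
            return word
        votes = syndrome.dot(H)      # failing-check count per bit
        word[np.argmax(votes)] ^= 1
    return word

codeword = np.array([0, 0, 0, 0, 0, 0])       # all-zero word is a codeword
received = codeword.copy(); received[2] ^= 1  # one bit flipped in the channel
print(bit_flip_decode(H, received))           # recovers the all-zero codeword
```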
Manually operated coded switch
Barnette, J.H.
The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.
Coding in Muscle Disease.
Jones, Lyell K; Ney, John P
Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.
An Auto-sequence Code to Integrate a Neutron Unfolding Code with the PC-MCA Accuspec
Darsono
In neutron spectrometry using the proton recoil method, a neutron unfolding code is needed to unfold the measured proton spectrum into the neutron spectrum. In the existing neutron spectrometry system, which was successfully installed last year, the unfolding was done separately. This manuscript reports that an auto-sequence code to integrate the neutron unfolding code UNFSPEC.EXE with the software facility of the PC-MCA Accuspec has been written and run successfully, so that the new neutron spectrometry system becomes compact. The auto-sequence code was written based on the rules of the application program facility of the PC-MCA Accuspec and was then compiled using AC-EXE. Tests of the auto-sequence code showed that binning widths of 20, 30, and 40 give slightly different spectrum shapes. A binning width of around 30 gives a better spectrum, in the sense of a smaller error compared to the others. (author)
QR Codes 101
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
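As a minimal illustration of how little it takes to generate one, here is a sketch using the third-party Python qrcode package (an assumption: it is not part of the standard library and must be installed separately; the payload URL is an arbitrary example).

```python
# Requires the third-party "qrcode" package (pip install qrcode[pil]).
import qrcode

# A traditional 1-D bar code tops out around 20 digits; a QR code
# comfortably carries a full URL plus error-correction overhead.
img = qrcode.make("https://example.org/catalog/item?id=1234567890")
img.save("item_qr.png")
```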
[Physiological differences between cycling and running].
Millet, Grégoire
This review compares the differences in systemic responses (VO2max, anaerobic threshold, heart rate and economy) and in underlying mechanisms of adaptation (ventilatory and hemodynamic and neuromuscular responses) between cycling and running. VO2max is specific to the exercise modality. Overall, there is more physiological training transfer from running to cycling than vice-versa. Several other physiological differences between cycling and running are discussed: HR is different between the two activities both for maximal and sub-maximal intensities. The delta efficiency is higher in running. Ventilation is more impaired in cycling than running due to mechanical constraints. Central fatigue and decrease in maximal strength are more important after prolonged exercise in running than in cycling.
Design of ProjectRun21
Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik
BACKGROUND: Participation in half-marathons has been steeply increasing during the past decade. In line with this, a vast number of half-marathon running schedules has surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which leads to injury if the cumulative training load over one or more training sessions exceeds the runners' load capacity for adaptive tissue repair. Owing to an increase of load capacity along with adaptive running… the association between running experience or running pace and the risk of running-related injury. METHODS: Healthy runners between 18 and 65 years using a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one…
Should the Air Force Teach Running Technique
…barefoot running, and gait training techniques. Current research indicates efficiencies in running with a forefoot- or midfoot-strike gait, and a… recent retrospective study showed a lower injury rate in forefoot-strike runners as compared with heel-strike runners. However, there are no… "barefoot-like" fashion and allows a forefoot- or midfoot-strike gait, as opposed to the heel-strike gait style often seen with traditional running
Running-in as an Engineering Optimization
Jamari, Jamari
Running-in is a process which can be found in daily life. This phenomenon occurs after the start of contact between fresh solid surfaces, resulting in changes in the surface topography, friction and wear. Before the contacting engineering surfaces reach a steady-state operating situation, running-in enhances the contact performance. Running-in is very complex and is a vast problem area. Many variables play a role in the running-in process, whether physical, mechanical or chemical. T...
Run 2 ATLAS Trigger and Detector Performance
Solovyanov, Oleg; The ATLAS collaboration
The 2nd LHC run started in June 2015 with a proton-proton centre-of-mass collision energy of 13 TeV. During 2016 and 2017, the LHC delivered an unprecedented amount of luminosity under ever more challenging conditions in terms of peak luminosity, pile-up and trigger rates. In this talk, the LHC running conditions and the improvements made to the ATLAS experiment in the course of Run 2 will be discussed, and the latest ATLAS detector and trigger performance results from Run 2 will be presented.
How to run ions in the future?
Küchler, D; Manglunki, D; Scrivens, R
In the light of different running scenarios, potential source improvements will be discussed (e.g. one month every year versus two months every other year), together with the impact of the different running options (e.g. an extended ion run) on the source. As the oven refills cause most of the downtime, the oven design and refilling strategies will be presented. A test stand for off-line developments will be taken into account. The implications for the manpower necessary for extended runs will also be discussed.
ATLAS detector performance in Run1: Calorimeters
Burghgrave, B; The ATLAS collaboration
ATLAS operated with an excellent efficiency during the Run 1 data-taking period, recording an integrated luminosity of 5.3 fb⁻¹ at √s = 7 TeV in 2011 and 21.6 fb⁻¹ at √s = 8 TeV in 2012. The Liquid Argon and Tile Calorimeters contributed to this effort by operating with a good data-quality efficiency, improving over the whole of Run 1. This poster presents the overall Run 1 status and performance, the LS1 works, and the preparations for Run 2.
Electromagnetic field and mechanical stress analysis code
TEXMAGST is a two-stage linear finite element code for the analysis of static magnetic fields in three-dimensional structures and of the associated mechanical stresses produced by the $\vec{J} \times \vec{B}$ forces within these structures. The electromagnetic problem is solved in terms of the magnetic vector potential $\vec{A}$ for a given current density $\vec{J}$ as $\nabla \times \left(\frac{1}{\mu} \nabla \times \vec{A}\right) = \vec{J}$, considering the magnetic permeability as constant. The Coulomb gauge ($\nabla \cdot \vec{A} = 0$) was chosen and was implemented through the use of Lagrange multipliers. The second stage of the problem, the calculation of mechanical stresses in the same three-dimensional structure, is solved by using the same code with few modifications, through a restart card. Body forces $\vec{J} \times \vec{B}$ within each element are calculated from the solution of the first-stage run and represent the input to the second-stage run, which gives the solution for the stress problem.
Codes and curves
Walker, Judy L
When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
Computer codes in particle transport physics
Pesic, M.
Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and an appropriate solving tool: a computer code and data library. A brief overview of computer codes based on Monte Carlo techniques for simulation of the transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry is given. First, attention is paid to underlining the approach to the solution of the problem (a process in nature) by selection of an appropriate 3D model and the corresponding tools: computer codes and cross-section data libraries. The process of data collection and evaluation from experimental measurements, and the theoretical approach to establishing reliable libraries of evaluated cross-section data, is a long, difficult and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available, state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH), together with codes for data processing (e.g., NJOY, PREPRO and GRUCON), are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous and various computer codes developed for particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview of the basic applications of these codes, and of the physical models implemented with their limitations, energy ranges of particles and types of interactions, is given. General information about the codes also covers the programming language, operating system, calculation speed and code availability. An example of increasing the computation speed of the MCNPX code by running on an MPI cluster, compared to the code's sequential option, is shown.
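For a flavour of the Monte Carlo technique these codes share, here is a minimal free-flight sampling sketch: a toy one-dimensional slab problem, in no way representative of MCNPX, FLUKA or SHIELD themselves.

```python
import math
import random

def uncollided_transmission(sigma_t, thickness_cm, n=100_000):
    """Estimate the fraction of particles crossing a slab before their
    first collision by sampling free paths from the exponential
    distribution; sigma_t is the total macroscopic cross section (1/cm).
    """
    crossed = 0
    for _ in range(n):
        free_path = -math.log(1.0 - random.random()) / sigma_t
        if free_path > thickness_cm:
            crossed += 1
    return crossed / n

# Compare against the analytic attenuation law exp(-sigma_t * x)
print(uncollided_transmission(0.5, 2.0), math.exp(-0.5 * 2.0))
```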
Computer Security: is your code sane?
Stefan Lueders, Computer Security Team
How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities*? In other words: are our codes sane? Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pitfalls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. "Static Code Analysers" are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies including: employing undeclared variables; expressions resu...
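As a toy illustration of what such a tool does under the hood (not one of the team's recommended analysers), a few lines of Python that walk a syntax tree looking for one weakness class, undeclared variables:

```python
import ast
import builtins

# Flag names that are read but never assigned in the module.
SOURCE = """
total = 1
print(total + offset)   # 'offset' is never defined
"""

tree = ast.parse(SOURCE)
stored = {n.id for n in ast.walk(tree)
          if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
loaded = {n.id for n in ast.walk(tree)
          if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
known = stored | set(dir(builtins))
for name in sorted(loaded - known):
    print(f"possible undeclared variable: {name}")   # -> offset
```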
CBP Phase I Code Integration
Smith, F.; Brown, K.; Flach, G.; Sarkar, S.
A dynamic-link library (DLL) was developed to link GoldSim with external codes (Smith III et al. 2010). The DLL uses a list of code inputs provided by GoldSim to create an input file for the external application, runs the external code, and returns a list of outputs (read from files created by the external application) back to GoldSim. In this way GoldSim provides: (1) a unified user interface to the applications, (2) the capability of coupling selected codes in a synergistic manner, and (3) the capability of performing probabilistic uncertainty analysis with the codes. GoldSim is made available by the GoldSim Technology Group as a free 'Player' version that allows running but not editing GoldSim models. The Player version makes the software readily available to the wider community of users who wish to use the CBP application but do not have a license for GoldSim.
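The coupling pattern itself is simple to sketch. Below is an illustrative Python stand-in for what the DLL does; the executable and file names are placeholders, and the real exchange format is GoldSim-specific.

```python
import subprocess

def run_external_code(inputs, exe="./external_code",
                      infile="run.inp", outfile="run.out"):
    """Write an input file from a dict of inputs, run the external
    application, and read its outputs back. Illustrative only."""
    with open(infile, "w") as f:
        for name, value in inputs.items():
            f.write(f"{name} = {value}\n")
    subprocess.run([exe], check=True)     # block until the code finishes
    outputs = {}
    with open(outfile) as f:
        for line in f:
            name, value = line.split("=", 1)
            outputs[name.strip()] = float(value)
    return outputs
```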
Web interface for plasma analysis codes
Emoto, M. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)], E-mail: emo@nifs.ac.jp; Murakami, S. [Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501 (Japan); Yoshida, M.; Funaba, H.; Nagayama, Y. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)
There are many analysis codes that analyze various aspects of plasma physics. However, most of them are FORTRAN programs that are written to be run on supercomputers. On the other hand, many scientists use GUI (graphical user interface)-based operating systems. For those who are not familiar with supercomputers, it is a difficult task to run analysis codes on them, and they often hesitate to use these programs to substantiate their ideas. Furthermore, these analysis codes are written for personal use, and the programmers do not expect them to be run by other users. In order to make these programs widely usable, the authors developed user-friendly Web interfaces. Since the Web browser is one of the most common applications, this is useful for both users and developers. To realize an interactive Web interface, the AJAX technique is widely used, and the authors adopted it as well. In building such an AJAX-based Web system, Ruby on Rails plays an important role. Since this application framework, which is written in Ruby, abstracts the Web interfaces necessary to implement AJAX and database functions, it enables programmers to develop Web-based applications efficiently. In this paper, the authors introduce the system and demonstrate the usefulness of this approach.
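A minimal sketch of this architecture in Python (the authors' system is built on Ruby on Rails; the third-party Flask package is used here only to keep the illustration short, and the endpoint, parameter and command are placeholders):

```python
from flask import Flask, jsonify, request
import subprocess

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_analysis():
    shot = request.form["shot"]   # e.g. a plasma discharge number
    # A real deployment would submit a batch job to the supercomputer;
    # here we just launch a stand-in command asynchronously.
    subprocess.Popen(["echo", f"analysing shot {shot}"])
    return jsonify(status="submitted", shot=shot)

if __name__ == "__main__":
    app.run()
```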
Los Alamos radiation transport code system on desktop computing platforms
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.; West, J.T.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss the hardware systems on which the codes run and present code performance comparisons for various machines.
Running Records and First Grade English Learners: An Analysis of Language Related Errors
Briceño, Allison; Klein, Adria F.
The purpose of this study was to determine if first-grade English Learners made patterns of language related errors when reading, and if so, to identify those patterns and how teachers coded language related errors when analyzing English Learners' running records. Using research from the fields of both literacy and Second Language Acquisition, we…
Comparing internal and external run-time coupling of CFD and building energy simulation software
Djunaedy, E.; Hensen, J.L.M.; Loomans, M.G.L.C.
This paper describes a comparison between internal and external run-time coupling of CFD and building energy simulation software. Internal coupling can be seen as the "traditional" way of developing software, i.e. the capabilities of existing software are expanded by merging codes. With external
FLP: a field line plotting code for bundle divertor design
Ruchti, C.
A computer code was developed to aid in the design of bundle divertors. The code can handle discrete toroidal field coils and various divertor coil configurations. All coils must be composed of straight line segments. The code runs on the PDP-10 and displays plots of the configuration, field lines, and field ripple. It automatically chooses the coil currents to connect the separatrix produced by the divertor to the outer edge of the plasma and calculates the required coil cross sections. Several divertor designs are illustrated to show how the code works
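The core operation of any field-line plotting code is numerical integration of dx/ds = B/|B| along the field. A minimal sketch (illustrative only, not FLP's algorithm; the example field is a made-up straight-wire field):

```python
import numpy as np

def trace_field_line(b_field, x0, step=0.01, n_steps=5000):
    """Trace a field line with 4th-order Runge-Kutta steps;
    b_field maps a position vector to the local B vector."""
    def direction(x):
        b = b_field(x)
        return b / np.linalg.norm(b)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = xs[-1]
        k1 = direction(x)
        k2 = direction(x + 0.5 * step * k1)
        k3 = direction(x + 0.5 * step * k2)
        k4 = direction(x + step * k3)
        xs.append(x + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(xs)

# Example: field of a straight wire along z, B ~ (-y, x, 0) / r^2
line = trace_field_line(
    lambda p: np.array([-p[1], p[0], 0.0]) / (p[0] ** 2 + p[1] ** 2),
    [1.0, 0.0, 0.0])
```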
Responding for sucrose and wheel-running reinforcement: effect of pre-running.
Belke, Terry W
Six male albino Wistar rats were placed in running wheels and exposed to a fixed interval 30-s schedule that produced either a drop of 15% sucrose solution or the opportunity to run for 15s as reinforcing consequences for lever pressing. Each reinforcer type was signaled by a different stimulus. To assess the effect of pre-running, animals were allowed to run for 1h prior to a session of responding for sucrose and running. Results showed that, after pre-running, response rates in the later segments of the 30-s schedule decreased in the presence of a wheel-running stimulus and increased in the presence of a sucrose stimulus. Wheel-running rates were not affected. Analysis of mean post-reinforcement pauses (PRP) broken down by transitions between successive reinforcers revealed that pre-running lengthened pausing in the presence of the stimulus signaling wheel running and shortened pauses in the presence of the stimulus signaling sucrose. No effect was observed on local response rates. Changes in pausing in the presence of stimuli signaling the two reinforcers were consistent with a decrease in the reinforcing efficacy of wheel running and an increase in the reinforcing efficacy of sucrose. Pre-running decreased motivation to respond for running, but increased motivation to work for food.
The Effect of Training in Minimalist Running Shoes on Running Economy.
Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain
The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3-minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. Key points: Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear
Middle cerebral artery blood velocity during running
Lyngeraa, T. S.; Pedersen, L. M.; Mantoni, T.; Belhage, B.; Rasmussen, L. S.; van Lieshout, J. J.; Pott, F. C.
Running induces characteristic fluctuations in blood pressure (BP) of unknown consequence for organ blood flow. We hypothesized that running-induced BP oscillations are transferred to the cerebral vasculature. In 15 healthy volunteers, transcranial Doppler-determined middle cerebral artery (MCA)
Running with technology: Where are we heading?
Jensen, Mads Møller; Mueller, Florian 'Floyd'
…technique-related information in run-training interfaces. From that finding, this paper presents three questions to be addressed by designers of future run-training interfaces. We believe that addressing these questions will support the creation of expedient interfaces that improve runners' technique…
The Second Student-Run Homeless Shelter
Seider, Scott C.
From 1983-2011, the Harvard Square Homeless Shelter (HSHS) in Cambridge, Massachusetts, was the only student-run homeless shelter in the United States. However, college students at Villanova, Temple, Drexel, the University of Pennsylvania, and Swarthmore drew upon the HSHS model to open their own student-run homeless shelter in Philadelphia,…
Performance evaluation and financial market runs
Wagner, W.B.
This paper develops a model in which performance evaluation causes runs by fund managers and results in asset fire sales. Performance evaluation nonetheless is efficient as it disciplines managers. Optimal performance evaluation combines absolute and relative components in order to make runs less
Impact of Running Away on Girls' Pregnancy
Thrane, Lisa E.; Chen, Xiaojin
This study assessed the impact of running away on pregnancy in the subsequent year among U.S. adolescents. We also investigated interactions between running away and sexual assault, romance, and school disengagement. Pregnancy among females between 11 and 17 years (n = 6100) was examined utilizing the Longitudinal Study of Adolescent Health (Add…
Teaching Bank Runs with Classroom Experiments
Balkenborg, Dieter; Kaplan, Todd; Miller, Timothy
Once relegated to cinema or history lectures, bank runs have become a modern phenomenon that captures the interest of students. In this article, the authors explain a simple classroom experiment based on the Diamond-Dybvig model (1983) to demonstrate how a bank run--a seemingly irrational event--can occur rationally. They then present possible…
Training errors and running related injuries
Nielsen, Rasmus Østergaard; Buist, Ida; Sørensen, Henrik
The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running-related injuries.
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
Long Run Relationship Between Agricultural Production And ...
The study sought to estimate the impact of agricultural production on the long run economic growth in Nigeria using the Vector Error Correction Methodology. The result shows that long run relationship exists between agricultural production and economic growth in Nigeria. Among the variables in the model, crop production ...
Orthopaedic Perspective on Barefoot and Minimalist Running.
Roth, Jonathan; Neumann, Julie; Tao, Matthew
In recent years, there has been a movement toward barefoot and minimalist running. Advocates assert that a lack of cushion and support promotes a forefoot or midfoot strike rather than a rearfoot strike, decreasing the impact transient and stress on the hip and knee. Although the change in gait is theorized to decrease injury risk, this concept has not yet been fully elucidated. However, research has shown diminished symptoms of chronic exertional compartment syndrome and anterior knee pain after a transition to minimalist running. Skeptics are concerned that, because of the effects of the natural environment and the lack of a standardized transition program, barefoot running could lead to additional, unforeseen injuries. Studies have shown that, with the transition to minimalist running, there is increased stress on the foot and ankle and risk of repetitive stress injuries. Nonetheless, despite the large gap of evidence-based knowledge on minimalist running, the potential benefits warrant further research and consideration.
Running injuries - changing trends and demographics.
Fields, Karl B
Running injuries are common. Recently the demographic has changed, in that most runners in road races are older and injuries now include those more common in master runners. In particular, Achilles/calf injuries, iliotibial band injury, meniscus injury, and muscle injuries to the hamstrings and quadriceps represent higher percentages of the overall injury mix in recent epidemiologic studies compared with earlier ones. Evidence suggests that running mileage and previous injury are important predictors of running injury. Evidence-based research now helps guide the treatment of iliotibial band, patellofemoral syndrome, and Achilles tendinopathy. The use of topical nitroglycerin in tendinopathy and orthotics for the treatment of patellofemoral syndrome has moderate to strong evidence. Thus, more current knowledge about the changing demographics of runners and the application of research to guide treatment and, eventually, prevent running injury offers hope that clinicians can help reduce the high morbidity associated with long-distance running.
ATLAS strip detector: Operational Experience and Run1 → Run2 transition
NAGAI, K; The ATLAS collaboration
The ATLAS SCT operational experience and the detector performance during the Run 1 period of the LHC will be reported. Additionally, the preparations for Run 2 made during Long Shutdown 1 will be mentioned.
Excessive Progression in Weekly Running Distance and Risk of Running-related Injuries
Nielsen, R.O.; Parner, Erik Thorlund; Nohr, Ellen Aagaard
Study Design: An explorative, 1-year prospective cohort study. Objective: To examine whether an association between a sudden change in weekly running distance and running-related injury varies according to injury type. Background: It is widely accepted that a sudden increase in running distance… is strongly related to injury in runners. But the scientific knowledge supporting this assumption is limited. Methods: A volunteer sample of 874 healthy novice runners who started a self-structured running regimen were provided a global-positioning-system watch. After each running session during the study… period, participants were categorized into 1 of the following exposure groups, based on the progression of their weekly running distance: less than 10% or regression, 10% to 30%, or more than 30%. The primary outcome was running-related injury. Results: A total of 202 runners sustained a running…
The materiality of Code
Soon, Winnie
This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important for understanding the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials related to the artwork's production, I explore the materiality of code that goes beyond the technical…
Coding for optical channels
Djordjevic, Ivan; Vasic, Bane
This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.
SEVERO code - user's manual
Sacramento, A.M. do.
This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation and flooding hazard risk analysis. (A.C.A.S.)
... An example of flawed code
Computer Security Team
Do you recall our small exercise in the last issue of the Bulletin? We were wondering how well written the following code was:

```c
/* Safely Exec program: drop privileges to user uid and group
 * gid, and use chroot to restrict file system access to jail
 * directory. Also, don't allow program to run as a
 * privileged user or group */
void ExecUid(int uid, int gid, char *jailDir, char *prog, char *const argv[])
{
    if (uid == 0 || gid == 0) {
        FailExit("ExecUid: root uid or gid not allowed");
    }

    chroot(jailDir); /* restrict access to this dir */

    setuid(uid); /* drop privs */
    setgid(gid);

    fprintf(LOGFILE, "Execvp of %s as uid=%d gid=%d\
```
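Several of the listing's weaknesses involve ordering and unchecked return values. For contrast, here is a hedged sketch of the conventional privilege-drop sequence, written in Python for brevity; it is not the Bulletin's official answer.

```python
import os

def exec_as_user(uid, gid, jail_dir, prog, argv):
    """Illustrative only: drop privileges in the conventional order."""
    if uid == 0 or gid == 0:
        raise ValueError("root uid or gid not allowed")
    os.chroot(jail_dir)   # restrict the filesystem view...
    os.chdir("/")         # ...and leave no handle to the old directory
    os.setgroups([])      # drop supplementary groups
    os.setgid(gid)        # drop the group ID while still privileged
    os.setuid(uid)        # drop the user ID last: there is no way back
    os.execvp(prog, argv)
```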
Synthesizing Certified Code
Whalen, Michael; Schumann, Johann; Fischer, Bernd
Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...
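To make "annotations" concrete, here is a small illustrative example (not from the paper) of a loop invariant, written as an assertion a certifier would have to prove:

```python
def sum_below(n: int) -> int:
    """Return 0 + 1 + ... + (n - 1), with its annotations made explicit."""
    total, i = 0, 0
    while i < n:
        assert total == i * (i - 1) // 2            # loop invariant
        total += i
        i += 1
    assert n <= 0 or total == n * (n - 1) // 2      # postcondition
    return total
```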
FERRET data analysis code
Schmittroth, F.
A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
Stylize Aesthetic QR Code
Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing
With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visual-pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...
Enhancing QR Code Security
Zhang, Linfan; Zheng, Shuang
Quick Response codes open the possibility of conveying data in a unique way, yet insufficient prevention and protection might lead to QR codes being exploited on behalf of attackers. This thesis starts by presenting a general introduction to the background and stating two problems regarding QR code security, followed by comprehensive research on both QR codes themselves and related issues. From the research, a solution taking advantage of the cloud and cryptography, together with an implementation, come af...
Leadership Class Configuration Interaction Code - Status and Opportunities
Vary, James
With support from SciDAC-UNEDF (www.unedf.org), nuclear theorists have developed and are continuously improving a Leadership Class Configuration Interaction Code (LCCI) for forefront nuclear structure calculations. The aim of this project is to make state-of-the-art nuclear structure tools available to the entire community of researchers, including graduate students. The project includes codes such as NuShellX, MFDn and BIGSTICK that run on a range of computers from laptops to leadership-class supercomputers. Codes, scripts, test cases and documentation have been assembled, are under continuous development, and are scheduled for release to the entire research community in November 2011. A covering script that accesses the appropriate code and supporting files is under development. In addition, a Data Base Management System (DBMS) that records key information from large production runs and archives the results of those runs has been developed (http://nuclear.physics.iastate.edu/info/) and will be released. Following an outline of the project, the code structure, capabilities, the DBMS and current efforts, I will suggest a path forward that would benefit greatly from a significant partnership between researchers who use the codes, code developers and the National Nuclear Data efforts. This research is supported in part by DOE under grant DE-FG02-87ER40371 and grant DE-FC02-09ER41582 (SciDAC-UNEDF).
Opening up codings?
Steensig, Jakob; Heinemann, Trine
…doing formal coding and when doing more "traditional" conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded…
Gauge color codes
Bombin Palomo, Hector
Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...
Refactoring test code
A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok
Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from
Rocker shoe, minimalist shoe, and standard running shoe : A comparison of running economy
Sobhani, Sobhan; Bredeweg, Steven; Dekker, Rienk; Kluitenberg, Bas; van den Heuvel, Edwin; Hijmans, Juha; Postema, Klaas
Objectives: Running with rocker shoes is believed to prevent lower limb injuries. However, it is not clear how running in these shoes affects the energy expenditure. The purpose of this study was, therefore, to assess the effects of rocker shoes on running economy in comparison with standard and
Benchmarking NNWSI flow and transport codes: COVE 1 results
Hayden, N.K.
The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs
Development of the integrated system reliability analysis code MODULE
Han, S.H.; Yoo, K.J.; Kim, T.W.
The major components in a system reliability analysis are the determination of cut sets, importance measures, and uncertainty analysis. Various computer codes have been used for these purposes. For example, SETS and FTAP are used to determine cut sets; Importance for importance calculations; and Sample, CONINT, and MOCUP for uncertainty analysis. There have been problems when the codes are run one after another and their inputs and outputs are not linked, which can result in errors when preparing the input for each code. The code MODULE was developed to carry out the above calculations simultaneously, without the need to pass inputs and outputs between other codes. MODULE can also prepare input for SETS for the case of a large fault tree that cannot be handled by MODULE itself. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples were selected and the results and computation times compared with those of SETS, FTAP, CONINT, and MOCUP on both a Cyber 170-875 and an IBM PC/AT. The two examples are fault trees of the auxiliary feedwater system (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, and 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate the cut sets, importances, and uncertainties in a single run with little increase in computing time over other codes, and that it can be used on personal computers.
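As a toy illustration of the first of those calculations, here is a minimal cut-set computation for a made-up fault tree; MODULE itself handles far larger trees and also computes importances and uncertainties.

```python
from itertools import chain

# Gates are ("AND"/"OR", children...); leaves are basic-event names.
tree = ("OR",
        ("AND", "pump_A_fails", "pump_B_fails"),
        "offsite_power_lost")

def cut_sets(node):
    if isinstance(node, str):
        return [{node}]
    op, *children = node
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":                     # union of the children's cut sets
        return list(chain.from_iterable(child_sets))
    combined = [set()]                 # AND: cross-combine the children
    for sets in child_sets:
        combined = [a | b for a in combined for b in sets]
    return combined

def minimal(sets):
    """Keep only cut sets with no strict subset among the others."""
    return [s for s in sets if not any(t < s for t in sets)]

print(minimal(cut_sets(tree)))
# e.g. [{'pump_A_fails', 'pump_B_fails'}, {'offsite_power_lost'}]
```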
Software Certification - Coding, Code, and Coders
Havelund, Klaus; Holzmann, Gerard J.
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
Running Economy from a Muscle Energetics Perspective
Jared R. Fletcher
The economy of running has traditionally been quantified from the mass-specific oxygen uptake; however, because fuel substrate usage varies with exercise intensity, it is more accurate to express running economy in units of metabolic energy. Fundamentally, the understanding of the major factors that influence the energy cost of running (Erun) can be obtained with this approach. Erun is determined by the energy needed for skeletal muscle contraction. Here, we approach the study of Erun from that perspective. The amount of energy needed for skeletal muscle contraction is dependent on the force, duration, shortening, shortening velocity, and length of the muscle. These factors therefore dictate the energy cost of running. It is understood that some determinants of the energy cost of running are not trainable: environmental factors, surface characteristics, and certain anthropometric features. Other factors affecting Erun are altered by training: other anthropometric features, muscle and tendon properties, and running mechanics. Here, the key features that dictate the energy cost during distance running are reviewed in the context of skeletal muscle energetics.
Post-processing of the TRAC code's results
Baron, J.H.; Neuman, D.
The TRAC code serves for the analysis of accidents in nuclear installations from the thermohydraulic point of view. A program has been developed with the aim of rapidly processing the information generated by the code, with on-screen graphing capability, in either high or low resolution, or on paper through a printer or plotter. Although the programs are intended to be used after the TRAC runs, they may also be used while the program is running so as to observe the calculation process. The advantages of employing this type of tool, its actual capabilities and its possibilities for expansion according to the user's needs are described herein. (Author)
The effect of footwear on running performance and running economy in distance runners.
Fuller, Joel T; Bellenger, Clint R; Thewlis, Dominic; Tsiros, Margarita D; Buckley, Jonathan D
The effect of footwear on running economy has been investigated in numerous studies. However, no systematic review and meta-analysis has synthesised the available literature, and the effect of footwear on running performance is not known. The aim of this systematic review and meta-analysis was to investigate the effect of footwear on running performance and running economy in distance runners, by reviewing controlled trials that compare different footwear conditions or compare footwear with barefoot. The Web of Science, Scopus, MEDLINE, CENTRAL (Cochrane Central Register of Controlled Trials), EMBASE, AMED (Allied and Complementary Medicine), CINAHL and SPORTDiscus databases were searched from inception up until April 2014. Included articles reported on controlled trials that examined the effects of footwear or footwear characteristics (including shoe mass, cushioning, motion control, longitudinal bending stiffness, midsole viscoelasticity, drop height and comfort) on running performance or running economy and were published in a peer-reviewed journal. Of the 1,044 records retrieved, 19 studies were included in the systematic review and 14 studies were included in the meta-analysis. No studies were identified that reported effects on running performance. Individual studies reported significant, but trivial, beneficial effects on running economy for comfortable and stiff-soled shoes [standardised mean difference (SMD) …], a beneficial effect on running economy for cushioned shoes (SMD = 0.37; P …), a beneficial effect on running economy for training in minimalist shoes (SMD = 0.79; P …), and beneficial effects on running economy for light shoes and barefoot compared with heavy shoes (SMD …); … running was identified (P …) … running economy. Certain models of footwear and footwear characteristics can improve running economy. Future research in footwear performance should include measures of running performance.
The network code
The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)
Coding for Electronic Mail
Rice, R. F.; Lee, J. J.
Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.
Lyngeraa, Tobias; Pedersen, Lars Møller; Mantoni, T
Data for eight subjects were excluded from analysis because of insufficient signal quality. Running increased mean arterial pressure and mean MCA velocity and induced rhythmic oscillations in BP and in MCA velocity corresponding to the difference between step rate and heart rate (HR) frequencies. During running, rhythmic oscillations in arterial BP induced by interference between HR and step frequency impact on cerebral blood velocity. For the exercise as a whole, average MCA velocity becomes elevated. These results suggest that running not only induces an increase in regional cerebral blood flow...
CMB constraints on running non-Gaussianity
Oppizzi, Filippo; Liguori, Michele; Renzi, Alessandro; Arroja, Frederico; Bartolo, Nicola
We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of Inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the $f_{\rm NL}$ running spectral index, $n_{\rm NG}$, using WMAP 9-year data. Our final bounds (68% C.L.) read $-0.3 < n_{\rm NG}$ ...
Running Injuries During Adolescence and Childhood.
Krabak, Brian J; Snitily, Brian; Milani, Carlo J E
The popularity of running among young athletes has significantly increased over the past few decades. As the number of children who participate in running increases, so do the potential number of injuries to this group. Proper care of these athletes includes a thorough understanding of the unique physiology of the skeletally immature athlete and common injuries in this age group. Treatment should focus on athlete education, modification of training schedule, and correction of biomechanical deficits contributing to injury. Early identification and correction of these factors will allow a safe return to running sports.
What to do with a Dead Research Code
Nemiroff, Robert J.
The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and likely deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.
ATLAS Strip Detector: Operational Experience and Run1 -> Run2 Transition
Nagai, Koichi; The ATLAS collaboration
The Large Hadron Collider was operated very successfully during Run1 and provided many opportunities for physics studies. Consolidation work is currently under way toward operation at $\sqrt{s}=14 \mathrm{TeV}$ in Run2. The ATLAS experiment achieved excellent performance in Run1 operation, delivering remarkable physics results. The SemiConductor Tracker contributed to the precise measurement of the momentum of charged particles. This paper describes the operational experience of the SemiConductor Tracker in Run1 and the preparation for Run2 operation during the LS1.
Electricity prices and fuel costs. Long-run relations and short-run dynamics
Mohammadi, Hassan
The paper examines the long-run relation and short-run dynamics between electricity prices and three fossil fuel prices - coal, natural gas and crude oil - using annual data for the U.S. for 1960-2007. The results suggest (1) a stable long-run relation between real prices for electricity and coal; (2) bi-directional long-run causality between coal and electricity prices; (3) insignificant long-run relations between electricity and crude oil and/or natural gas prices; and (4) no evidence of asymmetries in the adjustment of electricity prices to deviations from equilibrium. A number of implications are addressed. (author)
User's manual for EXALPHA (a code for calculating electronic properties of molecules). [Muscatel code, multiply scattered electron approximation]
Jones, H.D.
The EXALPHA procedures provide a simplified method for running the MUSCATEL computer code, which in turn is used for calculating electronic properties of simple molecules and atomic clusters, based on the multiply scattered electron approximation for the wave equations. The use of the EXALPHA procedures to set up a run of MUSCATEL is described.
NAGRADATA. Code key. Geology
Mueller, W.H.; Schneider, B.; Staeuble, J.
This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data is retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically), 2. alphabetically by code, and 3. alphabetically by definition. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage memory requirements; and standardisation of terminology. The nature of this thesaurus-like 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)
Reactor lattice codes
Kulikowska, T.
The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated on the example of the WIMSD code, which belongs to the most popular tools for reactor calculations. Most of the approaches discussed here can easily be adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)
Four-D propagation code for high-energy laser beams: a user's manual
Morris, J.R.
This manual describes the use and structure of the June 30, 1976 version of the Four-D propagation code for high energy laser beams. It provides selected sample output from a typical run and from several debug runs. The Four-D code now includes the important noncoplanar scenario feature. Many problems that required excessive computer time can now be meaningfully simulated as steady-state noncoplanar problems with short run times.
Common running musculoskeletal injuries among recreational half ...
probing the prevalence and nature of running musculoskeletal injuries in the 12 months preceding ... or agony, and which prevented them from physical activity for ... injuries to professional football players: Developing the UEFA model.
TEK twisted gradient flow running coupling
Pérez, Margarita García; Keegan, Liam; Okawa, Masanori
We measure the running of the twisted gradient flow coupling in the Twisted Eguchi-Kawai (TEK) model, the SU(N) gauge theory on a single site lattice with twisted boundary conditions in the large N limit.
Run-2 Supersymmetry searches in ATLAS
Soffer, Abner; The ATLAS collaboration
Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. With the large increase in collision energy in LHC Run-2 (from 8 TeV to 13 TeV), the sensitivity to heavy strongly produced SUSY particles (squarks and gluinos) increases tremendously. This talk presents recent ATLAS Run-2 searches for such particles in final states including jets, missing transverse momentum, and possibly light leptons.
Running heavy-quark masses in DIS
Alekhin, S.; Moch, S.
We report on determinations of the running mass for charm quarks from deep-inelastic scattering reactions. The method provides complementary information on this fundamental parameter from hadronic processes with space-like kinematics. The obtained values are consistent with but systematically lower than the world average as published by the PDG. We also address the consequences of the running mass scheme for heavy-quark parton distributions in global fits to deep-inelastic scattering data. (orig.)
The meaning of running away for girls.
Peled, Einat; Cohavi, Ayelet
The aim of this qualitative research was to understand how runaway girls perceive the processes involved in leaving home and the meaning they attribute to it. Findings are based on in-depth interviews with 10 Israeli girls aged 13-17 with a history of running away from home. The meaning of running away as it emerged from the girls' descriptions of their lives prior to leaving home was that of survival - both psychological and physical. The girls' stories centered on their evolving experiences of alienation, loneliness and detachment, and the failure of significant relationships at home and outside of home to provide them with the support they needed. These experiences laid the ground for the "final moments" before leaving, when a feeling of "no alternative," a hope for a better future, and various particular triggers led the girls to the decision to leave home. Participants' insights about the dynamics leading to running-away center on the meaning of family relationships, particularly those with the mother, as constituting the girl's psychological home. The girls seemed to perceive running away as an inevitability, rather than a choice, and even portrayed the running away as "living suicide." Yet, their stories clearly demonstrate their ability to cope and the possession of strengths and skills that enabled them to survive in extremely difficult home situations. The findings of this research highlight the importance of improving services for reaching out and supporting girls who are on the verge of running away from home. Such services should be tailored to the needs of girls who experience extreme but often silenced distress at home, and should facilitate alternative solutions to the girls' plight other than running away. An understanding of the dynamics leading to running away from the girls' perspective has the potential to improve the efficacy of services provided by contributing to the creation of a caring, empowering, understanding and trustful professional
A finite element code for electric motor design
Campbell, C. Warren
FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses nonlinear, Newton first order elements. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.
[Osteoarthritis from long-distance running?].
Hohmann, E; Wörtler, K; Imhoff, A
Long distance running has become a fashionable recreational activity. This study investigated the effects of external impact loading on bone and cartilage introduced by performing a marathon race. Seven beginners were compared to six experienced recreational long distance runners and two professional athletes. All participants underwent magnetic resonance imaging of the hip and knee before and after a marathon run. Coronal T1-weighted and STIR sequences were used. The pre-run MRI served as a baseline investigation and monitored the training effect. All athletes demonstrated normal findings in the pre-run scan. All but one athlete in the beginner group demonstrated joint effusions after the race. The experienced and professional runners failed to demonstrate pathology in the post-run scans. Recreational and professional long distance runners tolerate high impact forces well. Beginners demonstrate significant changes on the post-run scans. Whether those findings are a result of inadequate training (miles and duration) warrants further study. We conclude that adequate endurance training results in adaptation mechanisms that allow the athlete to compensate for the stresses introduced by long distance running and do not predispose to the onset of osteoarthritis. Significant malalignment of the lower extremity may cause increased focal loading of joint and cartilage.
Running With an Elastic Lower Limb Exoskeleton.
Cherry, Michael S; Kota, Sridhar; Young, Aaron; Ferris, Daniel P
Although there have been many lower limb robotic exoskeletons that have been tested for human walking, few devices have been tested for assisting running. It is possible that a pseudo-passive elastic exoskeleton could benefit human running without the addition of electrical motors due to the spring-like behavior of the human leg. We developed an elastic lower limb exoskeleton that added stiffness in parallel with the entire lower limb. Six healthy, young subjects ran on a treadmill at 2.3 m/s with and without the exoskeleton. Although the exoskeleton was designed to provide ~50% of normal leg stiffness during running, it only provided 24% of leg stiffness during testing. The difference in added leg stiffness was primarily due to soft tissue compression and harness compliance decreasing exoskeleton displacement during stance. As a result, the exoskeleton only supported about 7% of the peak vertical ground reaction force. There was a significant increase in metabolic cost when running with the exoskeleton compared with running without the exoskeleton (ANOVA, P ...). Among the challenges for exoskeletons for human running are human-machine interface compliance and the extra lower limb inertia from the exoskeleton.
Metadata aided run selection at ATLAS
Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E
Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.
You know the Science. Do you know your Code?
This talk is about automated code analysis and transformation tools to support scientific computing. Code bases are difficult to manage because of size, age, or safety requirements. Tools can help scientists and IT engineers understand their code, locate problems, and improve quality. Tools can also help transform the code, by implementing complex refactorings, replatforming, or migration to a modern language. Such tools are themselves difficult to build. This talk describes DMS, a meta-tool for building software analysis tools. DMS is a kind of generalized compiler, and can be configured to process arbitrary programming languages, to carry out arbitrary analyses, and to convert specifications into running code. It has been used for a variety of purposes, including converting embedded mission software in the US B-2 Stealth Bomber, providing the US Social Security Administration with a deep view of how their 200 million lines of COBOL are connected, and reverse-engineering legacy factory process control code i...
Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding
Hansen, Johan Peder
We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.
A method for scientific code coupling in a distributed environment
Caremoli, C.; Beaucourt, D.; Chen, O.; Nicolas, G.; Peniguel, C.; Rascle, P.; Richard, N.; Thai Van, D.; Yessayan, A.
This guide book deals with the coupling of large scientific codes. First, the context is introduced: big scientific codes devoted to a specific discipline are coming to maturity, and there are more and more needs in terms of multi-discipline studies. We then describe different kinds of code coupling and an example: the 3D thermal-hydraulic code THYC and the 3D neutronics code COCCINELLE. With this example we identify the problems to be solved to realize a coupling. We present the different numerical methods usable for the resolution of coupling terms. This leads to defining two kinds of coupling: with weak coupling, explicit methods can be used, while strong coupling requires implicit methods. In both cases, we analyze the link with the way the codes are parallelized. For the translation of data from one code to another, we define the notion of a Standard Coupling Interface based on a general structure for data. This general structure constitutes an intermediary between the codes, thus allowing a relative independence of the codes from a specific coupling. The proposed method for the implementation of a coupling leads to a simultaneous run of the different codes, while they exchange data; a minimal sketch of this scheme follows below. Two kinds of data communication with message exchange are proposed: direct communication between codes using the PVM product (Parallel Virtual Machine), and indirect communication through a coupling tool. This second way, with a general code coupling tool, is based on a coupling method, and we strongly recommend its use. This method is based on the two following principles: re-usability, which means few modifications to existing codes, and the definition of a code usable for coupling, which leads to separating the design of a code usable for coupling from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs
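The weak (explicit) coupling scheme described above is easy to sketch in miniature: two solvers advance in lockstep and exchange interface data once per step. The toy "solvers" and the interface dictionary below are invented stand-ins for illustration, not THYC or COCCINELLE, and the physics is deliberately trivial.

# Minimal sketch of explicit (weak) code coupling: two toy solvers advance
# in lockstep and exchange interface data once per time step.
def neutronics_step(t_wall):
    # toy neutronics: power feedback decreases linearly with wall temperature
    return max(0.0, 100.0 - 0.5 * t_wall)

def thermal_step(t_wall, q, dt):
    # toy thermal-hydraulics: wall temperature relaxes toward the heat source
    return t_wall + dt * (q - 0.1 * t_wall)

def run_coupled(n_steps=50, dt=0.1):
    interface = {"t_wall": 20.0, "q": 0.0}   # stand-in 'standard coupling interface'
    for _ in range(n_steps):
        # each code advances using the other's last published value,
        # then publishes its own result back to the shared interface
        interface["q"] = neutronics_step(interface["t_wall"])
        interface["t_wall"] = thermal_step(interface["t_wall"], interface["q"], dt)
    return interface

print(run_coupled())

In a real coupling the dictionary would be replaced by message exchange (e.g. via PVM, as the abstract proposes), but the control flow is the same.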
An Optimal Linear Coding for Index Coding Problem
Pezeshkpour, Pouya
An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...
Development of EASYQAD version β: A Visualization Code System for QAD-CGGP-A Gamma and Neutron Shielding Calculation Code
Kim, Jae Cheon; Lee, Hwan Soo; Ha, Pham Nhu Viet; Kim, Soon Young; Shin, Chang Ho; Kim, Jong Kyung
EASYQAD had previously been developed using the MATLAB GUI (Graphical User Interface) in order to perform gamma and neutron shielding calculations conveniently at Hanyang University. It had been completed as version α of the radiation shielding analysis code. In this study, EASYQAD was upgraded to version β with many additional functions and more user-friendly graphical interfaces. So that general users can run it on Windows XP without any MATLAB installation, this version was developed into a standalone code system.
A PC version of the Monte Carlo criticality code OMEGA
Seifert, E.
A description of the PC version of the Monte Carlo criticality code OMEGA is given. The report contains a general description of the code together with a detailed input description. Furthermore, some examples are given illustrating the generation of an input file. The main field of application is the calculation of the criticality of arrangements of fissionable material. Geometrically complicated arrangements that often appear inside and outside a reactor, e.g. in a fuel storage or transport container, can be considered essentially without geometrical approximations. For example, the real geometry of assemblies containing hexagonal or square lattice structures can be described in full detail. Moreover, the code can be used for special investigations in the field of reactor physics and neutron transport. Many years of practical experience and comparison with reference cases have shown that the code together with the built-in data libraries gives reliable results. OMEGA is completely independent of other widely used criticality codes (KENO, MCNP, etc.), concerning programming and the data base. It is good practice to run difficult criticality safety problems with different independent codes in order to mutually verify the results. In this way, OMEGA can be used as a redundant code within the family of criticality codes. An advantage of OMEGA is the short calculation time: a typical criticality safety application takes only a few minutes on a Pentium PC. Therefore, the influence of parameter variations can simply be investigated by running many variants of a problem. (orig.)
The Aesthetics of Coding
Andersen, Christian Ulrik
Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination with code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation from the computer's materiality. Cramer is thus the voice of a new 'code avant-garde'. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: "art-oriented programming needs to acknowledge the conditions of its own making – its poesis." By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...
Majorana fermion codes
Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard
We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.
Theory of epigenetic coding.
Elder, D
The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code: the combinatorial to coding the identity of units, the non-combinatorial to coding the production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.
DISP1 code
Vokac, P.
DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)
AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.
Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld
There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code, under GPL license, is available at https://github.com/algorun.
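As a rough sketch of how an AlgoRun-packaged algorithm might be driven over its RESTful API: the container image name, port and endpoint path below are assumptions made for illustration only, not taken from the abstract; the actual interface is documented at http://algorun.org.

# Hypothetical sketch of calling an AlgoRun-packaged algorithm via REST.
# First start the container (image name is a made-up example):
#   $ docker run -d -p 8765:8765 algorun/example-algorithm
import requests

resp = requests.post(
    "http://localhost:8765/v1/run",   # assumed endpoint path, for illustration
    data={"input": "ACGTACGT"},       # algorithm-specific payload
)
resp.raise_for_status()
print(resp.text)                      # algorithm output returned by the container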
Rhexifolia versus Rhexiifolia: Plant Nomenclature Run Amok?
R. Kasten Dumroese; Mark W. Skinner
The International Botanical Congress governs plant nomenclature worldwide through the International Code of Botanical Nomenclature. In the current code are very specific procedures for naming plants with novel compound epithets, and correcting compound epithets, like rhexifolia, that were incorrectly combined. We discuss why rhexiifolia...
Phonological coding during reading.
Leinenger, Mallorie
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed.
The aeroelastic code FLEXLAST
Visser, B. [Stork Product Eng., Amsterdam (Netherlands)
To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)
Optimization of the muon reconstruction algorithms for LHCb Run 2
Aaij, Roel; Dettori, Francesco; Dungs, Kevin; Lopes, Helder; Martinez Santos, Diego; Prisciandaro, Jessica; Sciascia, Barbara; Syropoulos, Vasileios; Stahl, Sascha; Vazquez Gomez, Ricardo
The muon identification algorithm in the LHCb HLT software trigger and offline reconstruction has been revisited in view of the LHC Run 2. This software has undergone a significant refactorisation, resulting in a modularized common code base between the HLT and offline event processing. Because of the latter, the muon identification is now identical in HLT and offline. The HLT1 algorithm sequence has been updated given the new rate and timing constraints. Also, information from the TT subdetector is used in order to reduce ghost tracks and optimize for low $p_T$ muons. The current software is presented here together with performance studies showing improved efficiencies and reduced timing.
MORSE Monte Carlo code
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described
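A toy sketch can convey the flavor of the Monte Carlo radiation transport that systems like MORSE implement at production scale: particles random-walk through a one-dimensional slab and are tallied as reflected, transmitted or absorbed. The cross sections and geometry below are invented for illustration, and nothing here reflects MORSE's actual multigroup treatment.

import math, random

# Toy Monte Carlo transport sketch (not MORSE): mono-energetic particles
# stream through a 1-D slab with made-up macroscopic cross sections.
SIGMA_T, SIGMA_A, THICKNESS = 1.0, 0.3, 5.0   # total, absorption, slab width

def history(rng):
    x, mu = 0.0, 1.0                                   # start at left face, forward
    while True:
        x += mu * -math.log(rng.random()) / SIGMA_T    # sample free-flight distance
        if x < 0.0:
            return "reflected"
        if x > THICKNESS:
            return "transmitted"
        if rng.random() < SIGMA_A / SIGMA_T:           # absorbed at collision?
            return "absorbed"
        mu = 2.0 * rng.random() - 1.0                  # isotropic scatter

rng = random.Random(1)
tallies = {"reflected": 0, "transmitted": 0, "absorbed": 0}
for _ in range(100_000):
    tallies[history(rng)] += 1
print(tallies)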
QR codes for dummies
Waters, Joe
Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown in.
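Generating a QR code programmatically is a one-liner in many languages. A minimal sketch, assuming the third-party Python qrcode package (pip install qrcode[pil]) is installed; the campaign URL is illustrative:

import qrcode  # third-party package: pip install qrcode[pil]

# Encode a URL into a QR code image; error correction and sizing
# defaults are supplied by the library.
img = qrcode.make("https://example.com/campaign")
img.save("campaign_qr.png")   # scannable PNG; track results via the URL itself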
Tokamak Systems Code
Reid, R.L.; Barrett, R.J.; Brown, T.G.
The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.
Derivation of the physical equations solved in the inertial confinement stability code DOC. Informal report
Scannapieco, A.J.; Cranfill, C.W.
There now exists an inertial confinement stability code called DOC, which runs as a postprocessor. DOC (a code that has evolved from a previous code, PANSY) is a spherical harmonic linear stability code that integrates, in time, a set of Lagrangian perturbation equations. Effects due to real equations of state, asymmetric energy deposition, thermal conduction, shock propagation, and a time-dependent zeroth-order state are handled in the code. We present here a detailed derivation of the physical equations that are solved in the code
The ATLAS Tau Trigger Performance during LHC Run 1 and Prospects for Run 2
Mitani, T; The ATLAS collaboration
The ATLAS tau trigger is designed to select hadronic decays of tau leptons. The tau lepton plays an important role in Standard Model (SM) physics, such as in Higgs boson decays. It is also important in beyond-the-SM (BSM) scenarios, such as supersymmetry and exotic particles, as taus are often produced preferentially in these models. During the 2010-2012 LHC run (Run1), the tau trigger operated successfully, leading to several rewarding results such as evidence for $H\rightarrow \tau\tau$. From the 2015 LHC run (Run2), the LHC will be upgraded and overlapping interactions per bunch crossing (pile-up) are expected to increase by a factor of two. It will be challenging to control trigger rates while keeping interesting physics events. This paper summarizes the tau trigger performance in Run1 and its prospects for Run2.
Efficient Coding of Information: Huffman Coding -RE ...
to a stream of equally-likely symbols so as to recover the original stream in the event of errors. ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x, written as ...
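The underlying idea of Huffman coding, an optimal prefix code built by repeatedly merging the two least probable symbols, fits in a few lines; the symbol probabilities below are illustrative:

import heapq

# Build a Huffman code: repeatedly merge the two least probable subtrees,
# prefixing '0' to one side's codewords and '1' to the other's.
def huffman(probs):
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    ticket = len(heap)                       # unique tie-breaker for equal weights
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, ticket, merged))
        ticket += 1
    return heap[0][2]

print(huffman({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}, average length = entropy here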
NR-code: Nonlinear reconstruction code
Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming
NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.
Not Just Running: Coping with and Managing Everyday Life through Road-Running
Cook, Simon
From the external form, running looks like running. Yet this alikeness masks a hugely divergent practice consisting of different movements, meanings and experiences. In this paper I wish to shed light upon some of these different 'ways of running' and in turn identify a range of the sometimes surprising, sometimes significant and sometimes banal benefits that road-running can gift its practitioners beyond simply exercise and physical fitness. Drawing on an innovative mapping and ethnographic ...
GridRun: A lightweight packaging and execution environment forcompact, multi-architecture binaries
Shalf, John; Goodale, Tom
GridRun offers a very simple set of tools for creating and executing multi-platform binary executables. These "fat binaries" archive native machine code into compact packages that are typically a fraction of the size of the original binary images they store, enabling efficient staging of executables for heterogeneous parallel jobs. GridRun interoperates with existing distributed job launchers/managers like Condor and the Globus GRAM to greatly simplify the logic required for launching native binary applications in distributed heterogeneous environments.
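The fat-binary idea lends itself to a small sketch: a package carries one native executable per platform and a launcher selects the right one at run time. This is not GridRun itself; the package layout and names below are hypothetical.

import os, platform, subprocess, sys

# Sketch of a fat-binary launcher: pick the executable matching the host.
PACKAGE_DIR = "myapp.fat"                      # hypothetical layout, e.g.
ARCH = f"{sys.platform}-{platform.machine()}"  #   myapp.fat/linux-x86_64/myapp
exe = os.path.join(PACKAGE_DIR, ARCH, "myapp")
if not os.path.exists(exe):
    sys.exit(f"no binary for {ARCH} in package")
subprocess.run([exe] + sys.argv[1:], check=True)   # forward arguments unchanged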
Students' Gender Stereotypes about Running in Schools
Xiang, Ping; McBride, Ron E.; Lin, Shuqiong; Gao, Zan; Francis, Xueying
Two hundred forty-six students (132 boys, 114 girls) were tracked from fifth to eighth grades, and changes in gender stereotypes about running as a male sport, running performance, interest in running, and intention for future running participation were assessed. Results revealed that neither sex held gender stereotypes about running as a male…
The Run-2 ATLAS Trigger System
Martínez, A Ruiz
The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in up to five times higher rates of processes of interest. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event processing farm. A few examples will be shown, such as the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Finally, the status of the commissioning of the trigger system and its performance during the 2015 run will be presented. (paper)
Exercise economy in skiing and running
Thomas eLosnegard
Substantial inter-individual variations in exercise economy exist even in highly trained endurance athletes. The variation is believed to be determined partly by intrinsic factors. Therefore, in the present study, we compared exercise economy in V2-skating, double poling and uphill running. Ten highly trained male cross-country skiers (23 ± 3 years, 180 ± 6 cm, 75 ± 8 kg, VO2peak running: 76.3 ± 5.6 mL•kg-1•min-1) participated in the study. Exercise economy and VO2peak during treadmill running, ski skating (V2 technique) and double poling were compared based on correlation analysis, with subsequent criteria for interpreting the magnitude of correlation (r). There was a very large correlation in exercise economy between V2-skating and double poling (r = 0.81), and a large correlation between V2-skating and running (r = 0.53) and between double poling and running (r = 0.58). There were trivial to moderate correlations between exercise economy and VO2peak (r = 0.00-0.23), cycle rate (r = 0.03-0.46), body mass (r = -0.09-0.46) and body height (r = 0.11-0.36). In conclusion, the inter-individual variation in exercise economy could only moderately be explained by differences in VO2peak, body mass and body height, and therefore we suggest that other intrinsic factors contribute to the variation in exercise economy between highly trained subjects.
The CMS trigger in Run 2
Tosi, Mia
During its second period of operation (Run 2), which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately $2\times 10^{34}$ cm$^{-2}$s$^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new ap...
Chaotic inflation with curvaton induced running
Sloth, Martin Snoager
While dust contamination now appears as a likely explanation of the apparent tension between the recent BICEP2 data and the Planck data, we will here explore the consequences of a large running in the spectral index, as suggested by the BICEP2 collaboration, as an alternative explanation of the apparent tension, one which would be in conflict with the prediction of the simplest model of chaotic inflation. The large field chaotic model is sensitive to UV physics, and the nontrivial running of the spectral index suggested by the BICEP2 collaboration could therefore, if true, be telling us something about UV physics. We consider the possibility that the running could be due to some other less UV sensitive degree of freedom. As an example, we ask if it is possible that the curvature perturbation spectrum has a contribution from a curvaton, which makes up for the large running in the spectrum. We find that this effect could mask...
Habitual Minimalist Shod Running Biomechanics and the Acute Response to Running Barefoot.
Tam, Nicholas; Darragh, Ian A J; Divekar, Nikhil V; Lamberts, Robert P
The aim of the study was to determine whether habitual minimalist shoe runners present with purported favorable running biomechanics that reduce running injury risk, such as initial loading rate. Eighteen minimalist and 16 traditionally cushioned shod runners were assessed when running both in their preferred training shoe and barefoot. Ankle and knee joint kinetics and kinematics, initial rate of loading, and footstrike angle were measured. Sagittal ankle and knee joint stiffness were also calculated. Results of a two-factor ANOVA presented no group difference in initial rate of loading when participants were running either shod or barefoot; however, initial loading rate increased for both groups when running barefoot (p=0.008). Differences in footstrike angle were observed between groups when running shod, but not when barefoot (minimalist: 8.71±8.99 vs. traditional: 17.32±11.48 degrees, p=0.002). Lower ankle joint stiffness was found in both groups when running barefoot (p=0.025). These findings illustrate that risk factors for injury potentially differ between the two groups. Shoe construction differences do change mechanical demands; however, once habituated to the demands of a given shoe condition, certain acute favorable or unfavorable responses may be moderated. The purported benefit of minimalist running shoes in mimicking habitual barefoot running is questioned, and risk of injury may not be attenuated.
Neural network-based run-to-run controller using exposure and resist thickness adjustment
Geary, Shane; Barry, Ronan
This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm, the exponentially weighted moving average (EWMA), and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
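The EWMA baseline mentioned above is simple to state: the prediction for the next lot is a convex combination of the latest measurement and the previous prediction. A minimal sketch, with an illustrative smoothing weight and made-up critical-dimension data:

# EWMA run-to-run predictor: y_hat[k+1] = lam * y[k] + (1 - lam) * y_hat[k]
def ewma_predictions(measurements, lam=0.3, y0=None):
    y_hat = measurements[0] if y0 is None else y0   # initialise the filter
    preds = []
    for y in measurements:
        preds.append(y_hat)          # prediction made before lot y is processed
        y_hat = lam * y + (1 - lam) * y_hat
    return preds

cds = [101.0, 99.5, 100.2, 98.8, 99.1]   # illustrative critical dimensions (nm)
print(ewma_predictions(cds))

A neural-network controller like the one in the paper would replace this one-parameter update with a learned mapping from several previous lots to the next prediction.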
The running pattern and its importance in running long-distance gears
Jarosław Hoffman
The running pattern is individual for each runner, regardless of distance. We can characterize it as the sum of the runner's data (age, height, training history, etc.) and the parameters of his run. Building proper technique should focus first and foremost on work on movement coordination and the runner's power. In training correct running steps we can use tools similar to those used when working on deep (proprioceptive) sensation. The aim of this paper was to define what we can call a running pattern, what its influence is in long-distance running, and the relationship between training technique and the running pattern. The importance of a running pattern in long-distance racing is immense: the more it deviates from the norm, the greater the harm its repetition over a long run will cause to the body. Including training exercises that shape technique is therefore very important and affects the running pattern significantly.
Transport of mass goods on the top run and bottom run of belt conveyors
Zimmermann, D
For combined coal winning from the collieries 'General Blumenthal' and 'Ewald Fortsetzung' a large belt conveyor plant was taken into operation which is able to transport 1360 tons/h in the top run and 300 tons/h of dirt in the bottom run. The different types of coal are transported separately in intermittent operation with the aid of bunker systems connected to the front and rear of the belt conveyor. Persons can be transported in the top run as well as in the bottom run.
The nuclear reaction model code MEDICUS
Ibishia, A.I.
The new computer code MEDICUS has been used to calculate cross sections of nuclear reactions. The code, implemented in the MATLAB 6.5, Mathematica 5, and Fortran 95 programming languages, can be run in graphical and command-line mode. A Graphical User Interface (GUI) has been built that allows the user to perform calculations and to plot results just by mouse clicking. The MS Windows XP and Red Hat Linux platforms are supported. MEDICUS is a modern nuclear reaction code that can compute charged-particle-, photon-, and neutron-induced reactions in the energy range from thresholds to about 200 MeV. The calculation of the cross sections of nuclear reactions is done in the framework of the Exact Many-Body Nuclear Cluster Model (EMBNCM), Direct Nuclear Reactions, Pre-equilibrium Reactions, Optical Model, DWBA, and Exciton Model with Cluster Emission. The code can also be used for the calculation of the nuclear cluster structure of nuclei. We have calculated nuclear cluster models for some nuclei such as 177 Lu, 90 Y, and 27 Al. It has been found that the nucleus 27 Al can be represented through two different nuclear cluster models: 25 Mg + d and 24 Na + 3 He. Cross sections as a function of energy for the reaction 27 Al( 3 He,x) 22 Na, established as a production method of 22 Na, are calculated with the code MEDICUS. Theoretical calculations of cross sections are in good agreement with experimental results. Reaction mechanisms are taken into account. (author)
SALE: Safeguards Analytical Laboratory Evaluation computer code
Carroll, D.J.; Bush, W.J.; Dolan, C.A.
The Safeguards Analytical Laboratory Evaluation (SALE) program implements an industry-wide quality control and evaluation system aimed at identifying and reducing analytical chemical measurement errors. Samples of well-characterized materials are distributed to laboratory participants at periodic intervals for determination of uranium or plutonium concentration and isotopic distributions. The results of these determinations are statistically evaluated, and each participant is informed of the accuracy and precision of his results in a timely manner. The SALE computer code which produces the report is designed to facilitate rapid transmission of this information in order that meaningful quality control will be provided. Various statistical techniques comprise the output of the SALE computer code. Assuming an unbalanced nested design, an analysis of variance is performed in subroutine NEST, resulting in a test of significance for time and analyst effects. A trend test is performed in subroutine TREND. Microfilm plots are obtained from subroutine CUMPLT. Within-laboratory standard deviations are calculated in the main program or subroutine VAREST, and between-laboratory standard deviations are calculated in SBLV. Other statistical tests are also performed. Up to 1,500 pieces of data for each nuclear material sampled by 75 (or fewer) laboratories may be analyzed with this code. The input deck necessary to run the program is shown, and input parameters are discussed in detail. Printed output and microfilm plot output are described. Output from a typical SALE run is included as a sample problem.
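The within- and between-laboratory spreads that SALE reports can be illustrated with a much simpler computation than the code's nested analysis of variance; the grouped results below are fabricated and the estimator is a deliberately naive stand-in:

import numpy as np

# Naive within/between-laboratory spread estimates (not SALE's nested ANOVA).
results = {                         # made-up uranium concentration results (wt%)
    "lab_A": [10.02, 10.05, 9.98],
    "lab_B": [10.11, 10.09, 10.14],
    "lab_C": [9.91, 9.95, 9.93],
}
within = np.mean([np.std(v, ddof=1) for v in results.values()])   # avg spread inside a lab
between = np.std([np.mean(v) for v in results.values()], ddof=1)  # spread of lab means
print(f"within-lab s ~ {within:.3f}, between-lab s ~ {between:.3f}")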
Code compression for VLIW embedded processors
Piccinelli, Emiliano; Sannino, Roberto
The implementation of processors for embedded systems raises various issues: the main constraints are cost, power dissipation and die area. On the other hand, new terminals perform functions that require more computational flexibility and effort. Long code streams must be loaded into memories, which are expensive and power consuming, to run on DSPs or CPUs. To overcome this issue, the "SlimCode" proprietary algorithm presented in this paper (patent pending technology) can reduce the size of the program memory. It can run offline and work directly on the binary code the compiler generates, by compressing it and creating a new binary file, about 40% smaller than the original one, to be loaded into the program memory of the processor. The decompression unit will be a small ASIC, placed between the Memory Controller and the System bus of the processor, keeping the internal CPU architecture unchanged: this implies that the methodology is completely transparent to the core. We present comparisons versus the state-of-the-art IBM CodePack algorithm, along with its architectural implementation into the ST200 VLIW family core.
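SlimCode itself is proprietary and its algorithm is not described in the abstract; as a stand-in, the sketch below only shows how the quoted "percent smaller" figure could be measured, by running a general-purpose compressor over a (hypothetical) binary image:

import zlib

# Measure a program image's compressibility with a generic compressor.
# This is NOT SlimCode -- just a way to quantify 'percent smaller'.
with open("firmware.bin", "rb") as f:      # hypothetical binary image path
    code = f.read()
packed = zlib.compress(code, level=9)
print(f"original {len(code)} B, compressed {len(packed)} B, "
      f"saving {100 * (1 - len(packed) / len(code)):.1f}%")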
Computer codes for evaluation of control room habitability (HABIT)
Stage, S.A.
This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs
TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code
Cullen, D.E
TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.
The Second Workshop on Lineshape Code Comparison: Isolated Lines
Spiros Alexiou
In this work, we briefly summarize the theoretical aspects of isolated line broadening. We present and discuss test run comparisons from different participating lineshape codes for the 2s-2p transition for Li I, B III and N V.
Ensuring that User Defined Code does not See Uninitialized Fields
Nielsen, Anders Bach
Initialization of objects is commonly handled by user code, often in special routines known as constructors. This applies even in a virtual machine with multiple concurrent execution engines that all share the same heap. But for a language where run-time values play a role in the type system...
Bounds on the capacity of constrained two-dimensional codes
Forchhammer, Søren; Justesen, Jørn
Bounds on the capacity of constrained two-dimensional (2-D) codes are presented. The bounds of Calkin and Wilf apply to first-order symmetric constraints. The bounds are generalized in a weaker form to higher order and nonsymmetric constraints. Results are given for constraints specified by run-l...
Code-Switching Functions in Modern Hebrew Teaching and Learning
Gilead, Yona
The teaching and learning of Modern Hebrew outside of Israel is essential to Jewish education and identity. One of the most contested issues in Modern Hebrew pedagogy is the use of code-switching between Modern Hebrew and learners' first language. Moreover, this is one of the longest-running disputes in the broader field of second language…
The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE
Vandenbroucke, B.; Wood, K.
We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first-order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed-grid code and as a moving-mesh code.
Statistical screening of input variables in a complex computer code
Krieger, T.J.
A method is presented for ''statistical screening'' of input variables in a complex computer code. The object is to determine the ''effective'' or important input variables by estimating the relative magnitudes of their associated sensitivity coefficients. This is accomplished by performing a numerical experiment consisting of a relatively small number of computer runs with the code followed by a statistical analysis of the results. A formula for estimating the sensitivity coefficients is derived. Reference is made to an earlier work in which the method was applied to a complex reactor code with good results
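As a rough illustration of the screening idea above, the sketch below estimates sensitivity coefficients from a small batch of randomly perturbed runs and ranks the inputs by magnitude; the black_box function is a hypothetical stand-in for the complex code, not the paper's own formula.

```python
# A minimal screening sketch: estimate sensitivity coefficients from a small
# number of randomly perturbed runs, then rank the inputs by magnitude.
# black_box() is a hypothetical stand-in for the complex computer code.
import numpy as np

rng = np.random.default_rng(3)

def black_box(x):
    # Placeholder code: only inputs 0 and 2 matter appreciably.
    return 4.0 * x[0] + 0.01 * x[1] - 2.5 * x[2] + 0.02 * x[3] * x[4]

n_inputs, n_runs = 5, 20
X = rng.normal(0.0, 1.0, size=(n_runs, n_inputs))
y = np.array([black_box(x) for x in X])

# Least-squares estimate of the sensitivity coefficients dy/dx_i
design = np.column_stack([np.ones(n_runs), X])
coef = np.linalg.lstsq(design, y, rcond=None)[0][1:]

for rank, idx in enumerate(np.argsort(-np.abs(coef)), start=1):
    print(f"rank {rank}: input {idx} (sensitivity {coef[idx]:+.3f})")
```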
Establishment of computer code system for nuclear reactor design - analysis
Subki, I.R.; Santoso, B.; Syaukat, A.; Lee, S.M.
The establishment of a computer code system for nuclear reactor design analysis is described in this paper. This establishment is an effort to provide the capability to run various codes, from nuclear data to reactor design, and to promote the capability for nuclear reactor design analysis, particularly from the neutronics and safety points of view. It is also an effort to enhance the coordination of nuclear code application and development existing in various research centres in Indonesia. Very promising results have been obtained with the help of IAEA technical assistance. (author). 6 refs, 1 fig., 1 tab
Three Dimensional Numerical Code for the Expanding Flat Universe
Kyoung W. Min
The current distribution of galaxies may contain clues to the condition of the universe when the galaxies condensed and to the nature of the subsequent expansion of the universe. The development of this large-scale structure can be studied by employing N-body computer simulations. The present paper describes the code developed for this purpose. The computer code calculates the motion of collisionless matter acting under the force of gravity in an expanding flat universe. The test run of the code shows an error of less than 0.5% over 100 iterations.
Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Division for Early Childhood, Council for Exceptional Children, 2009
The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…
Interleaved Product LDPC Codes
Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco
Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error-floor region.
Insurance billing and coding.
Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H
The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.
Error Correcting Codes
Science and Automation at ... the Reed-Solomon code contained 223 bytes of data (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.
Scrum Code Camps
Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente
is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...
RFQ simulation code
Lysenko, W.P.
We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs
Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...
78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards
... Energy Conservation Code. International Existing Building Code. International Fire Code. International ... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...
Is running associated with degenerative joint disease?
Panush, R.S.; Schmidt, C.; Caldwell, J.R.
Little information is available regarding the long-term effects, if any, of running on the musculoskeletal system. The authors compared the prevalence of degenerative joint disease among 17 male runners with that among 18 male nonrunners. Running subjects (53% marathoners) ran a mean of 44.8 km (28 miles)/wk for 12 years. Pain and swelling of the hips, knees, ankles and feet and other musculoskeletal complaints among runners were comparable with those among nonrunners. Radiologic examinations (for osteophytes, cartilage thickness, and grade of degeneration) also showed no notable differences between groups. The authors did not find an increased prevalence of osteoarthritis among the runners. These observations suggest that long-duration, high-mileage running need not be associated with premature degenerative joint disease in the lower extremities
Jefferson Lab Data Acquisition Run Control System
Vardan Gyurjyan; Carl Timmer; David Abbott; William Heyes; Edward Jastrzembski; David Lawrence; Elliott Wolin
A general overview of the Jefferson Lab data acquisition run control system is presented. This run control system is designed to operate the configuration, control, and monitoring of all Jefferson Lab experiments. It controls data-taking activities by coordinating the operation of DAQ sub-systems, online software components and third-party software such as external slow control systems. The main feature which sets this system apart from conventional systems is its incorporation of intelligent agent concepts. Intelligent agents are autonomous programs which interact with each other through certain protocols on a peer-to-peer level. In this case, the protocols and standards used come from the domain-independent Foundation for Intelligent Physical Agents (FIPA), and the implementation used is the Java Agent Development Framework (JADE). A lightweight, XML/RDF-based language was developed to standardize the description of the run control system for configuration purposes
Instrumental Variables in the Long Run
Casey, Gregory; Klemp, Marc Patrick Brag
In the study of long-run economic growth, it is common to use historical or geographical variables as instruments for contemporary endogenous regressors. We study the interpretation of these conventional instrumental variable (IV) regressions in a general, yet simple, framework. Our aim is to estimate the long-run causal effect of changes in the endogenous explanatory variable. We find that conventional IV regressions generally cannot recover this parameter of interest. To estimate this parameter, therefore, we develop an augmented IV estimator that combines the conventional regression ... We also use our framework to examine related empirical techniques. We find that two prominent regression methodologies - using gravity-based instruments for trade and including ancestry-adjusted variables in linear regression models - have ... quantitative implications for the field of long-run economic growth.
Estimation of POL-iteration methods in fast running DNBR code
Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H. [KAERI, Daejeon (Korea, Republic of)
In this study, various root-finding methods are applied to the POL-iteration module in SCOMS and their efficiency is compared with that of the reference method. On the basis of these results, the optimum POL-iteration algorithm is selected. The POL requires iterating until the present local power reaches the limit power; the process of searching for the limiting power is equivalent to finding the root of a nonlinear equation. The POL iteration process in the online monitoring system used a variant of the bisection method, which is the most robust algorithm for finding the root of a nonlinear equation. The method, which includes an interval-accelerating factor and a routine for escaping ill-posed conditions, assured the robustness of the SCOMS system. The POL-iteration module in SCOMS must also satisfy a minimum-calculation-time requirement. To meet it, a non-iterative algorithm, a few-channel model, and a simple steam table are implemented in SCOMS to improve the calculation time. MDNBR evaluation at a given operating condition requires a DNBR calculation at all axial locations, so increasing the POL-iteration number significantly increases the calculation load of SCOMS; the calculation efficiency of SCOMS is therefore strongly dependent on the POL-iteration number. In the case study, the iterations of the methods show superlinear convergence in finding the limiting power, and the Brent method shows quadratic convergence speed. These methods are effective and better than the reference bisection algorithm.
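For concreteness, here is a minimal sketch contrasting plain bisection with Brent's method on a limiting-power search; the margin function is a made-up monotone criterion, not the SCOMS DNBR model.

```python
# Contrast plain bisection with Brent's method on a limiting-power search.
# margin() is a hypothetical monotone criterion, not the SCOMS DNBR model:
# positive below the limit power, negative above it.
from scipy.optimize import brentq

def margin(power):
    return 1.0 - (power / 1.8) ** 1.7

def bisect(f, lo, hi, tol=1e-8):
    """Plain bisection: robust but only linearly convergent."""
    n = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        n += 1
    return 0.5 * (lo + hi), n

root_b, n_b = bisect(margin, 0.5, 3.0)
root_q, info = brentq(margin, 0.5, 3.0, xtol=1e-8, full_output=True)
print(f"bisection: {root_b:.6f} after {n_b} iterations")
print(f"brent    : {root_q:.6f} after {info.iterations} iterations")
```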
Running mobile agent code over simulated inter-networks : an extra gear towards distributed system evaluation
Liotta, A.; Ragusa, C.; Pavlou, G.
Mobile Agent (MA) systems are complex software entities whose behavior, performance and effectiveness cannot always be anticipated by the designer. Their evaluation often presents various aspects that require a careful, methodological approach as well as the adoption of suitable tools, needed to
Validation of thermalhydraulic codes
Wilkie, D.
Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
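A hedged sketch of the quantitative comparison the author advocates: score each code by the root-mean-square relative deviation of its predictions from the experimental data, then rank the codes. All numbers below are illustrative placeholders, not real code output.

```python
# Rank competing codes by a quantitative figure of merit: the RMS relative
# deviation of each code's predictions from the experimental data.
import numpy as np

experiment = np.array([1.02, 0.97, 1.10, 1.25, 0.88])
predictions = {
    "codeA": np.array([1.00, 0.95, 1.15, 1.30, 0.90]),
    "codeB": np.array([1.10, 1.05, 1.00, 1.20, 0.80]),
}

def rms_relative_error(pred, meas):
    return np.sqrt(np.mean(((pred - meas) / meas) ** 2))

for name in sorted(predictions,
                   key=lambda k: rms_relative_error(predictions[k], experiment)):
    print(name, f"{rms_relative_error(predictions[name], experiment):.2%}")
```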
Fracture flow code
Dershowitz, W; Herbert, A.; Long, J.
The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex, and cannot be fully verified by comparison with analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)
The NLstart2run study: running related injuries in novice runners
Kluitenberg, Bas
Running is a popular sport worldwide, often practiced for its positive health effects. There is, however, a downside: runners are frequently plagued by injuries, a problem that mostly affects novices. This thesis describes the NLstart2run study, an investigation
Abort Gap Cleaning for LHC Run 2
Uythoven, Jan [CERN; Boccardi, Andrea [CERN; Bravin, Enrico [CERN; Goddard, Brennan [CERN; Hemelsoet, Georges-Henry [CERN; Höfle, Wolfgang [CERN; Jacquet, Delphine [CERN; Kain, Verena [CERN; Mazzoni, Stefano [CERN; Meddahi, Malika [CERN; Valuch, Daniel [CERN; Gianfelice-Wendt, Eliana [Fermilab
To minimize the beam losses at the moment of an LHC beam dump the 3 μs long abort gap should contain as few particles as possible. Its population can be minimised by abort gap cleaning using the LHC transverse damper system. The LHC Run 1 experience is briefly recalled; changes foreseen for the LHC Run 2 are presented. They include improvements in the observation of the abort gap population and the mechanism to decide if cleaning is required, changes to the hardware of the transverse dampers to reduce the detrimental effect on the luminosity lifetime and proposed changes to the applied cleaning algorithms.
Luminosity Measurements at LHCb for Run II
Coombs, George
A precise measurement of the luminosity is a necessary component of many physics analyses, especially cross-section measurements. At LHCb two different direct measurement methods are used to determine the luminosity: the "van der Meer scan" (VDM) and the "Beam Gas Imaging" (BGI) methods. A combined result from these two methods gave a precision of less than 2% for Run I and efforts are ongoing to provide a similar result for Run II. Fixed-target luminosity is determined with an indirect method based on the single electron scattering cross-section.
Running-mass inflation model and WMAP
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.
We consider the observational constraints on the running-mass inflationary model and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain any significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa), we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable
Causal Analysis of Railway Running Delays
Cerreto, Fabrizio; Nielsen, Otto Anker; Harrod, Steven
Operating delays and network propagation are inherent characteristics of railway operations. These are traditionally reduced by provision of time supplements or "slack" in railway timetables and operating plans. Supplement allocation policies must trade off reliability in the service commitments ... Denmark (the Danish infrastructure manager). The statistical analysis of the data identifies the minimum running times and the scheduled running time supplements and investigates the evolution of train delays along given train paths. An improved allocation of time supplements would result in smaller...
The Run 2 ATLAS Analysis Event Data Model
SNYDER, S; The ATLAS collaboration; NOWAK, M; EIFERT, T; BUCKLEY, A; ELSING, M; GILLBERG, D; MOYSE, E; KOENEKE, K; KRASZNAHORKAY, A
During the LHC's first Long Shutdown (LS1) ATLAS set out to establish a new analysis model, based on the experience gained during Run 1. A key component of this is a new Event Data Model (EDM), called the xAOD. This format, which is now in production, provides the following features: A separation of the EDM into interface classes that the user code directly interacts with, and data storage classes that hold the payload data. The user sees an Array of Structs (AoS) interface, while the data is stored in a Struct of Arrays (SoA) format in memory, thus making it possible to efficiently auto-vectorise reconstruction code. A simple way of augmenting and reducing the information saved for different data objects. This makes it possible to easily decorate objects with new properties during data analysis, and to remove properties that the analysis does not need. A persistent file format that can be explored directly with ROOT, either with or without loading any additional libraries. This allows fast interactive naviga...
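A minimal sketch (in Python, not the actual C++ xAOD classes) of the two ideas described above: an array-of-structs interface layered over struct-of-arrays storage, and run-time "decoration" of objects with new properties. All class and property names here are invented for illustration.

```python
# Object-like (AoS) access layered over struct-of-arrays (SoA) storage,
# plus run-time decoration with new per-object properties.
import numpy as np

class ParticleContainer:
    """Stores each property as a contiguous array (SoA)."""
    def __init__(self, pt, eta):
        self.pt = np.asarray(pt)      # payload arrays, easy to vectorise
        self.eta = np.asarray(eta)

    def __getitem__(self, i):
        return ParticleView(self, i)  # AoS-style interface for user code

    def decorate(self, name, values):
        # Augment all objects with a new property at analysis time.
        setattr(self, name, np.asarray(values))

class ParticleView:
    """Lightweight proxy for one object; reads straight from the arrays."""
    def __init__(self, container, index):
        self._c, self._i = container, index
    def __getattr__(self, name):
        return getattr(self._c, name)[self._i]

particles = ParticleContainer(pt=[45.0, 32.1], eta=[0.3, -1.2])
particles.decorate("isSignal", [True, False])
print(particles[0].pt, particles[0].isSignal)  # object-like access
print(particles.pt * 1.01)                     # vectorised access
```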
Dedicated OO expertise applied to Run II software projects
Amidei, D.
The change in software language and methodology by CDF and D0 to object-oriented from procedural Fortran is significant. Both experiments requested dedicated expertise that could be applied to software design, coding, advice and review. The Fermilab Run II offline computing outside review panel agreed strongly with the request and recommended that the Fermilab Computing Division hire dedicated OO expertise for the CDF/D0/Computing Division joint project effort. This was done and the two experts have been an invaluable addition to the CDF and D0 upgrade software projects and to the Computing Division in general. These experts have encouraged common approaches and increased the overall quality of the upgrade software. Advice on OO techniques and specific advice on C++ coding has been used. Recently a set of software reviews has been accomplished. This has been a very successful instance of a targeted application of computing expertise, and constitutes a very interesting study of how to move toward modern computing methodologies in HEP
The design of the run Clever randomized trial
Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik
BACKGROUND: Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate if a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. METHODS/DESIGN: The Run Clever trial is a randomized trial with a 24-week...
Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP
Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.
We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE
Simulating three dimensional wave run-up over breakwaters covered by antifer units
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results showed that the placement pattern of the antifer units had a great impact on wave run-up values: changing the placement pattern from regular to double pyramid can reduce the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer and reduced wave run-up due to inflow into the armour and stone layers.
Methods and computer codes for probabilistic sensitivity and uncertainty analysis
Vaurio, J.K.
This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens, hundreds of) input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variables of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select the cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables
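A hedged sketch of the response-surface idea used by PROSA-2-style tools: fit a cheap surrogate to a small set of code runs, then propagate input uncertainty through the surrogate by sampling. The expensive_code function is a placeholder, not a real safety analysis code.

```python
# Fit a quadratic response surface to a small set of "code" runs, then
# propagate input uncertainty through the cheap surrogate by sampling.
import numpy as np

rng = np.random.default_rng(0)

def expensive_code(x1, x2):
    return 3.0 * x1 + 0.5 * x2**2 + 0.1 * x1 * x2  # placeholder physics

def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Small design of computer runs (these could double as the screening runs)
X = rng.uniform(-1.0, 1.0, size=(30, 2))
y = expensive_code(X[:, 0], X[:, 1])
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Propagate the input distributions through the response surface
samples = rng.normal(0.0, 0.3, size=(100_000, 2))
out = basis(samples) @ coef
print(f"output mean {out.mean():.3f}, std {out.std():.3f}")
```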
Vectorization of three-dimensional neutron diffusion code CITATION
Harada, Hiroo; Ishiguro, Misako
The three-dimensional multi-group neutron diffusion code CITATION has been widely used for reactor criticality calculations. The code can be expected to run at high speed on recent vector supercomputers when it is appropriately vectorized. In this paper, vectorization methods and their effects are described for the CITATION code. In particular, calculation algorithms suited to vectorization of the inner-outer iterative calculations, which consume most of the computing time, are discussed. The SLOR method, which is used in the original CITATION code, and the SOR method, which is adopted in the revised code, are vectorized by odd-even mesh ordering. The vectorized CITATION code is executed on the FACOM VP-100 and VP-200 computers, and is found to run over six times faster than the original code for a practical-scale problem. The initial value of the relaxation factor and the number of inner iterations given as input data are also investigated, since the computing time depends on these values. (author)
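The sketch below illustrates why odd-even (red-black) mesh ordering makes SOR vectorizable: every point of one colour depends only on points of the other colour, so each half-sweep becomes one array operation. It solves a toy 2-D Poisson problem and is not the CITATION diffusion solver.

```python
# Red-black (odd-even) SOR: all points of one colour are updated at once,
# then all points of the other colour. Toy 2-D Poisson problem.
import numpy as np

n, omega = 64, 1.8
phi = np.zeros((n, n))
source = np.ones((n, n))
h2 = (1.0 / (n - 1)) ** 2

ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
interior = (ii > 0) & (ii < n - 1) & (jj > 0) & (jj < n - 1)
red = interior & ((ii + jj) % 2 == 0)
black = interior & ((ii + jj) % 2 == 1)

for _ in range(500):
    for mask in (red, black):
        neigh = np.zeros_like(phi)
        neigh[1:-1, 1:-1] = (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                             phi[1:-1, :-2] + phi[1:-1, 2:])
        gauss_seidel = 0.25 * (neigh + h2 * source)   # Gauss-Seidel value
        phi[mask] += omega * (gauss_seidel[mask] - phi[mask])  # SOR step

print(f"centre value: {phi[n // 2, n // 2]:.5f}")
```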
Huffman coding in advanced audio coding standard
Brzuchalski, Grzegorz
This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible in this standard. The Huffman encoder, with the whole audio-video system, has been implemented in FPGA devices.
Short-run and long-run elasticities of import demand for crude oil in Turkey
Altinay, Galip
The aim of this study is to attempt to estimate the short-run and the long-run elasticities of demand for crude oil in Turkey by the recent autoregressive distributed lag (ARDL) bounds testing approach to cointegration. As a developing country, Turkey meets its growing demand for oil principally by foreign suppliers. Thus, the study focuses on modelling the demand for imported crude oil using annual data covering the period 1980-2005. The bounds test results reveal that a long-run cointegration relationship exists between the crude oil import and the explanatory variables: nominal price and income, but not in the model that includes real price in domestic currency. The long-run parameters are estimated through a long-run static solution of the estimated ARDL model, and then the short-run dynamics are estimated by the error correction model. The estimated models pass the diagnostic tests successfully. The findings reveal that the income and price elasticities of import demand for crude oil are inelastic both in the short run and in the long run
Short-Run and Long-Run Elasticities of Diesel Demand in Korea
Seung-Hoon Yoo
This paper investigates the demand function for diesel in Korea covering the period 1986–2011. The short-run and long-run elasticities of diesel demand with respect to price and income are empirically examined using a co-integration and error-correction model. The short-run and long-run price elasticities are estimated to be −0.357 and −0.547, respectively. The short-run and long-run income elasticities are computed to be 1.589 and 1.478, respectively. Thus, diesel demand is relatively inelastic to price change and elastic to income change in both the short-run and long-run. Therefore, a demand-side management through raising the price of diesel will be ineffective and tightening the regulation of using diesel more efficiently appears to be more effective in Korea. The demand for diesel is expected to continuously increase as the economy grows.
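A minimal sketch of the two-step error-correction estimation described above, run on synthetic log series (the Korean diesel data are not reproduced here, so all coefficient values are illustrative only): the levels regression yields long-run elasticities, and the differenced model with the lagged residual yields short-run elasticities.

```python
# Two-step error-correction estimation on synthetic (log) series.
import numpy as np

rng = np.random.default_rng(1)
T = 200
price = np.cumsum(rng.normal(0.0, 0.02, T)) + 1.0      # log price (synthetic)
income = np.cumsum(rng.normal(0.005, 0.01, T))         # log income (synthetic)
demand = -0.5 * price + 1.5 * income + rng.normal(0, 0.02, T)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Step 1: levels (cointegrating) regression -> long-run elasticities
X_lr = np.column_stack([np.ones(T), price, income])
b_lr = ols(X_lr, demand)
resid = demand - X_lr @ b_lr

# Step 2: differenced model with lagged residual -> short-run dynamics
X_sr = np.column_stack([np.diff(price), np.diff(income), resid[:-1]])
b_sr = ols(X_sr, np.diff(demand))

print(f"long-run  price {b_lr[1]:+.3f}, income {b_lr[2]:+.3f}")
print(f"short-run price {b_sr[0]:+.3f}, income {b_sr[1]:+.3f}, ECT {b_sr[2]:+.3f}")
```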
Change in running kinematics after cycling are related to alterations in running economy in triathletes.
Bonacci, Jason; Green, Daniel; Saunders, Philo U; Blanch, Peter; Franettovich, Melinda; Chapman, Andrew R; Vicenzino, Bill
Emerging evidence suggests that cycling may influence neuromuscular control during subsequent running, but the relationship between altered neuromuscular control and run performance in triathletes is not well understood. The aim of this study was to determine if a 45 min high-intensity cycle influences lower limb movement and muscle recruitment during running, and whether changes in limb movement or muscle recruitment are associated with changes in running economy (RE) after cycling. RE, muscle activity (surface electromyography) and limb movement (sagittal plane kinematics) were compared between a control run (no preceding cycle) and a run performed after a 45 min high-intensity cycle in 15 moderately trained triathletes. Muscle recruitment and kinematics during running after cycling were altered in 7 of 15 (46%) triathletes. Changes in kinematics at the knee and ankle were significantly associated with the change in VO(2) after cycling. These findings suggest that prior cycling alters muscle recruitment in some triathletes and that changes in kinematics, especially at the ankle, are closely related to alterations in running economy after cycling. Copyright 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Comparison of fractions of inactive modules between Run1 and Run2
Motohashi, Kazuki; The ATLAS collaboration
The fraction of inactive modules for each component of the ATLAS pixel detector at the end of Run 1 and the beginning of Run 2. A similar plot, based on the results of functionality tests during LS1, can be found in ATL-INDET-SLIDE-2014-388.
Weekly running volume and risk of running-related injuries among marathon runners
Rasmussen, Christina Haugaard; Nielsen, R.O.; Juul, Martin Serup
The purpose of this study was to investigate if the risk of injury declines with increasing weekly running volume before a marathon race.
Review of SKB's Code Documentation and Testing
Hicks, T.W.
SKB is in the process of developing the SR-Can safety assessment for a KBS 3 repository. The assessment will be based on quantitative analyses using a range of computational codes aimed at developing an understanding of how the repository system will evolve. Clear and comprehensive code documentation and testing will engender confidence in the results of the safety assessment calculations. This report presents the results of a review, undertaken on behalf of SKI, aimed at providing an understanding of how codes used in the SR 97 safety assessment and those planned for use in the SR-Can safety assessment have been documented and tested. Having identified the codes used by SKB, several codes were selected for review. Consideration was given to codes used directly in SKB's safety assessment calculations as well as to some of the less visible codes that are important in quantifying the different repository barrier safety functions. SKB's documentation and testing of the following codes were reviewed: COMP23 - a near-field radionuclide transport model developed by SKB for use in safety assessment calculations. FARF31 - a far-field radionuclide transport model developed by SKB for use in safety assessment calculations. PROPER - SKB's harness for executing probabilistic radionuclide transport calculations using COMP23 and FARF31. The integrated analytical radionuclide transport model that SKB has developed to run in parallel with COMP23 and FARF31. CONNECTFLOW - a discrete fracture network model/continuum model developed by Serco Assurance (based on the coupling of NAMMU and NAPSAC), which SKB is using to combine hydrogeological modelling on the site and regional scales in place of the HYDRASTAR code. DarcyTools - a discrete fracture network model coupled to a continuum model, recently developed by SKB for hydrogeological modelling, also in place of HYDRASTAR. ABAQUS - a finite element material model developed by ABAQUS, Inc., which is used by SKB to model repository buffer
TOPIC: a debugging code for torus geometry input data of Monte Carlo transport code
Iida, Hiromasa; Kawasaki, Hiromitsu.
TOPIC has been developed for debugging the torus geometry input data of the Monte Carlo transport code. The code has the following features: (1) It debugs the geometry input data of not only MORSE-GG but also MORSE-I, which is capable of treating torus geometry. (2) Its calculation results are shown in figures drawn by plotter or COM, so that regions that are undefined or doubly defined are easily detected. (3) It finds a multitude of input data errors in a single run. (4) The input data required by this code are few, so that it is readily usable on a time-sharing system of the FACOM 230-60/75 computer. Example TOPIC calculations in design studies of tokamak fusion reactors (JXFR, INTOR-J) are presented. (author)
User's manual for computer code RIBD-II, a fission product inventory code
Marr, D.R.
The computer code RIBD-II is used to calculate inventories, activities, decay powers, and energy releases for the fission products generated in a fuel irradiation. Changes from the earlier RIBD code are: the expansion to include up to 850 fission product isotopes, input in the user-oriented NAMELIST format, and run-time choice of fuels from an extensively enlarged library of nuclear data. The library that is included in the code package contains yield data for 818 fission product isotopes for each of fourteen different fissionable isotopes, together with fission product transmutation cross sections for fast and thermal systems. Calculational algorithms are little changed from those in RIBD. (U.S.)
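To illustrate the kind of calculation an inventory code performs, the sketch below evolves a hypothetical three-member decay chain with a matrix exponential; the chain and its half-lives are invented for illustration and are not RIBD-II library data.

```python
# Evolve a hypothetical decay chain A -> B -> C(stable) via dN/dt = A @ N.
import numpy as np
from scipy.linalg import expm

half_lives = np.array([3600.0, 7200.0, np.inf])             # seconds
lam = np.where(np.isinf(half_lives), 0.0, np.log(2.0) / half_lives)

# Decay matrix: each species decays and feeds the next one in the chain
A = np.diag(-lam) + np.diag(lam[:-1], k=-1)

N0 = np.array([1.0, 0.0, 0.0])                              # initial inventory
for t in (0.0, 3600.0, 86400.0):
    N = expm(A * t) @ N0
    print(f"t = {t:7.0f} s   inventory fractions = {np.round(N, 4)}")
```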
SURE: a system of computer codes for performing sensitivity/uncertainty analyses with the RELAP code
Bjerke, M.A.
A package of computer codes has been developed to perform a nonlinear uncertainty analysis on transient thermal-hydraulic systems which are modeled with the RELAP computer code. It has been applied to the analyses of experiments in the PWR-BDHT Separate Effects Program at Oak Ridge National Laboratory. The use of FORTRAN programs running interactively on the PDP-10 computer has made the system very easy to use and has provided great flexibility in the choice of processing paths. Several experiments simulating a loss-of-coolant accident in a nuclear reactor have been successfully analyzed. It has been shown that the system can easily be automated to further simplify its use, and that conversion of the entire system to a base code other than RELAP is possible
Running and Osteoarthritis: Does Recreational or Competitive Running Increase the Risk?
Exercise, like running, is good for overall health and, specifically, our hearts, lungs, muscles, bones, and brains. However, some people are concerned about the impact of running on long-term joint health. Does running lead to higher rates of arthritis in knees and hips? While many researchers find that running protects bone health, others are concerned that this exercise poses a high risk for age-related changes to hips and knees. A study published in the June 2017 issue of JOSPT suggests that the difference in these outcomes depends on the frequency and intensity of running. J Orthop Sports Phys Ther 2017;47(6):391. doi:10.2519/jospt.2017.0505.
Split-phase motor running as capacitor starts motor and as capacitor run motor
Yahaya Asizehi ENESI
In this paper, the input parameters of a single-phase split-phase induction motor are used to investigate the output performance characteristics of capacitor-start and capacitor-run induction motors. The values of these input parameters are used in the design characteristics of the capacitor-run and capacitor-start motor, with each motor connected to a rated or standard capacitor in series with the auxiliary (starting) winding for normal operating conditions. The magnitudes of the capacitors that develop maximum torque in the capacitor-start motor and the capacitor-run motor are investigated and determined by simulation. Each of these capacitors is connected to the auxiliary winding of the split-phase motor, thereby transforming it into a capacitor-start or capacitor-run motor. The starting current and starting torque of the split-phase motor (SPM), capacitor-run motor (CRM) and capacitor-start motor (CSM) are compared for their suitability in operational performance and applications.
Report number codes
Nelson, R.N. (ed.)
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
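As a small illustration of the STRN structure described above, the sketch below splits a report number into its report code and sequential number; the separator convention and the sample numbers are assumptions for illustration only.

```python
# Split an STRN into its report code and sequential number.
# Separator conventions vary; '--' and a final '-' are assumed here.
def split_strn(strn: str):
    if "--" in strn:                       # e.g. 'ANL--83-41' (hypothetical)
        code, seq = strn.split("--", 1)
    else:                                  # fall back to the last hyphen
        code, _, seq = strn.rpartition("-")
    return code, seq

for strn in ("ANL--83-41", "NUREG/CR-6210"):
    code, seq = split_strn(strn)
    print(f"{strn}: report code = {code!r}, sequential number = {seq!r}")
```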
Long-Run Neutrality and Superneutrality in an ARIMA Framework.
Fisher, Mark E; Seater, John J
The authors formalize long-run neutrality and long-run superneutrality in the context of a bivariate ARIMA model; show how the restrictions implied by long-run neutrality and long-run superneutrality depend on the orders of integration of the variables; apply their analysis to previous work, showing how that work is related to long-run neutrality and long-run superneutrality; and provide some new evidence on long-run neutrality and long-run superneutrality. Copyright 1993 by American Economic...
The arbitrary order design code Tlie 1.0
Zeijts, J. van; Neri, Filippo
We describe the arbitrary-order charged-particle transfer map code TLIE. This code is a general 6D relativistic design code with a MAD-compatible input language which, among other features, implements user-defined functions and subroutines and nested fitting and optimization. First we describe the mathematics and physics in the code. Aside from generating maps for all the standard accelerator elements, we describe an efficient method for generating nonlinear transfer maps for realistic magnet models. We have implemented the method to arbitrary order in our accelerator design code for cylindrical current-sheet magnets. We have also implemented a self-consistent space-charge approach as in CHARLIE. Subsequently we give a description of the input language and, finally, several examples from production runs, such as cases with stacked multipoles with overlapping fringe fields. (Author)
Recent advances in neutral particle transport methods and codes
Azmy, Y.Y.
An overview of ORNL's three-dimensional neutral-particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments of TORT and its companion codes to enhance its present capabilities and expand its range of applications are discussed. Speculation on the next generation of neutral-particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also offered.
Recent advances in the Poisson/superfish codes
Ryne, R.; Barts, T.; Chan, K.C.D.; Cooper, R.; Deaven, H.; Merson, J.; Rodenz, G.
We report on advances in the POISSON/SUPERFISH family of codes used in the design and analysis of magnets and rf cavities. The codes include preprocessors for mesh generation and postprocessors for graphical display of output and calculation of auxiliary quantities. Release 3 became available in January 1992; it contains many code corrections and physics enhancements, and it also includes support for PostScript, DISSPLA, GKS and PLOT10 graphical output. Release 4 will be available in September 1992; it is free of all bit packing, making the codes more portable and able to treat very large numbers of mesh points. Release 4 includes the preprocessor FRONT and a new menu-driven graphical postprocessor that runs on workstations under X-Windows and that is capable of producing arrow plots. We will present examples that illustrate the new capabilities of the codes. (author). 6 refs., 3 figs
COMPBRN III: a computer code for modeling compartment fires
Ho, V.; Siu, N.; Apostolakis, G.; Flanagan, G.F.
The computer code COMPBRN III deterministically models the behavior of compartment fires. This code is an improvement on the original COMPBRN codes. It employs a different air entrainment model and numerical scheme to estimate properties of the ceiling hot gas layer. Moreover, COMPBRN III incorporates a number of improvements in shape factor calculations and error checking which distinguish it from the COMPBRN II code. This report presents the ceiling hot gas layer model employed by COMPBRN III as well as several other modifications. Information necessary to run COMPBRN III, including descriptions of the required input and resulting output, is also presented. Simulations of experiments and a sample problem are included to demonstrate the usage of the code. 37 figs., 46 refs
Habituation contributes to the decline in wheel running within wheel-running reinforcement periods.
Belke, Terry W; McLaughlin, Ryan J
Habituation appears to play a role in the decline in wheel running within an interval. Aoyama and McSweeney [Aoyama, K., McSweeney, F.K., 2001. Habituation contributes to within-session changes in free wheel running. J. Exp. Anal. Behav. 76, 289-302] showed that when a novel stimulus was presented during a 30-min interval, wheel-running rates following the stimulus increased to levels approximating those earlier in the interval. The present study sought to assess the role of habituation in the decline in running that occurs over a briefer interval. In two experiments, rats responded on fixed-interval 30-s schedules for the opportunity to run for 45 s. Forty reinforcers were completed in each session. In the first experiment, the brake and chamber lights were repeatedly activated and inactivated after 25 s of a reinforcement interval had elapsed to assess the effect on running within the remaining 20 s. Presentations of the brake/light stimulus occurred during nine randomly determined reinforcement intervals in a session. In the second experiment, a 110 dB tone was emitted after 25 s of the reinforcement interval. In both experiments, presentation of the stimulus produced an immediate decline in running that dissipated over sessions. No increase in running following the stimulus was observed in the first experiment until the stimulus-induced decline dissipated. In the second experiment, increases in running were observed following the tone in the first session as well as when data were averaged over several sessions. In general, the results concur with the assertion that habituation plays a role in the decline in wheel running that occurs within both long and short intervals. (c) 2004 Elsevier B.V. All rights reserved.
Healthy Living Initiative: Running/Walking Club
Stylianou, Michalis; Kulinna, Pamela Hodges; Kloeppel, Tiffany
This study was grounded in the public health literature and the call for schools to serve as physical activity intervention sites. Its purpose was twofold: (a) to examine the daily distance covered by students in a before-school running/walking club throughout 1 school year and (b) to gain insights into the teachers' perspectives on the club…
The QCD Running Coupling and its Measurement
Altarelli, Guido
In this lecture, after recalling the basic definitions and facts about the running coupling in QCD, I present a critical discussion of the methods for measuring $\alpha_s$ and select those that appear to me as the most reliably precise
Daytime running lights : its safety evidence revisited.
Koornstra, M.J.
Retrospective in-depth accident studies from several countries confirm that human perception errors are the main causal factor in road accidents. The share of accident types which are relevant for the effect of daytime running lights (DRL), such as overtaking and crossing accidents, in the total of
105-KE Basin Pilot Run design plan
Sherrell, D.L.
This document identifies all design deliverables and procedures applicable to the 105-KE Basin Pilot Run. It also establishes a general design strategy, defines interface control requirements, and covers planning for mechanical, electrical, instrument/control system, and equipment installation design
The ATLAS collaboration
The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...
Collagen gene interactions and endurance running performance
to complete any of the individual components (3.8 km swim, 180 km bike or 42.2 km run) of the 226 km event. The major ... may affect normal collagen fibrillogenesis and alter the mechanical properties of ... using an XP Thermal Cycler (Block model XP-G, BIOER Technology Co., Japan). ... New insights into the function of.
Jet physics at CDF Run II
Safonov, A. (UC Davis)
The latest results on jet physics at CDF are presented and discussed. Particular attention is paid to studies of the inclusive jet cross section using 177 pb⁻¹ of Run II data. Also discussed is a study of gluon and quark jet fragmentation.
EMBL rescue package keeps bioinformatics centre running
Abott, A
The threat to the EBI arising from the EC refusal to fund its running costs seems to have been temporarily lifted. At a meeting in EMBL, Heidelberg, delegates agreed in principle to make up the shortfall of 5 million euros. A final decision will be taken at a special meeting of the EMBL council in March (1 page).
Measuring the running top-quark mass
Langenfeld, Ulrich; Uwer, Peter
In this contribution we discuss conceptual issues of current top-quark mass measurements performed at the Tevatron. In addition, we propose an alternative method which is theoretically much cleaner and to a large extent free from the problems encountered in current measurements. In detail, we discuss the direct determination of the top quark's running mass from the cross-section measurements performed at the Tevatron. (orig.)
Individualism, innovation, and long-run growth.
Gorodnichenko, Yuriy; Roland, Gerard
Countries having a more individualist culture have enjoyed higher long-run growth than countries with a more collectivist culture. Individualist culture attaches social status rewards to personal achievements and thus, provides not only monetary incentives for innovation but also social status rewards, leading to higher rates of innovation and economic growth.
Estimating Stair Running Performance Using Inertial Sensors
Lauro V. Ojeda
Stair running, both ascending and descending, is a challenging aerobic exercise that many athletes, recreational runners, and soldiers perform during training. Studying the biomechanics of stair running over multiple steps has been limited by the practical challenges presented while using optical-based motion tracking systems. We propose using foot-mounted inertial measurement units (IMUs) as a solution, as they enable unrestricted motion capture in any environment and without need for external references. In particular, this paper presents methods for estimating foot velocity and trajectory during stair running using foot-mounted IMUs. Computational methods leverage the stationary periods occurring during the stance phase and known stair geometry to estimate foot orientation and trajectory, ultimately used to calculate stride metrics. These calculations, applied to human participant stair running data, reveal performance trends through timing, trajectory, energy, and force stride metrics. We present the results of our analysis of experimental data collected on eleven subjects. Overall, we determine that for either ascending or descending, the stance time is the strongest predictor of speed as shown by its high correlation with stride time.
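As a rough illustration of the stance-phase idea described above, here is a minimal Python sketch of zero-velocity-update (ZUPT) integration. The array names, the threshold value, and the assumption that orientation has already been resolved into world-frame accelerations are all illustrative additions, not details taken from the paper.

```python
# Minimal ZUPT sketch, assuming gravity-compensated world-frame
# accelerations are already available (orientation estimation omitted).
import numpy as np

def estimate_velocity(acc_world, gyro_mag, dt, zupt_thresh=0.5):
    """Integrate acceleration to velocity, resetting to zero during stance.

    acc_world : (N, 3) gravity-compensated acceleration in m/s^2
    gyro_mag  : (N,) angular-rate magnitude in rad/s, used to detect stance
    dt        : sample period in seconds
    """
    vel = np.zeros_like(acc_world)
    for k in range(1, len(acc_world)):
        if gyro_mag[k] < zupt_thresh:        # stationary stance phase
            vel[k] = 0.0                     # zero-velocity update
        else:
            vel[k] = vel[k - 1] + acc_world[k] * dt
    return vel

# Stride trajectory follows by integrating velocity; known stair geometry
# (riser height, tread depth) can then correct residual drift per step.
```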
Numerical Modelling of Wave Run-Up
Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke
Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free surface.
Daytime running lights : costs or benefits?
Brouwer, R.F.T.; Janssen, W.H.; Theeuwes, J.; Alferdinck, J.W.A.M.; Duistermaat, M.
The present study deals with the possibility that road users in the vicinity of a vehicle with daytime running lights (DRL) would suffer from a decreased conspicuity because of the presence of that vehicle. In an experiment the primary effects of DRL on the conspicuity of other road users were
Running coupling constants of the Luttinger liquid
Boose, D.; Jacquot, J.L.; Polonyi, J.
We compute the one-loop expressions of two running coupling constants of the Luttinger model. The obtained expressions have a nontrivial momentum dependence with Landau poles. The reason for the discrepancy between our results and those of other studies, which find that the scaling laws are trivial, is explained
Wave run-up on sandbag slopes
Thamnoon Rasmeemasmuang
On occasions, sandbag revetments are temporarily applied to armour sandy beaches against erosion. Nevertheless, an empirical formula to determine the wave run-up height on sandbag slopes has not been available heretofore. In this study a wave run-up formula which considers the roughness of slope surfaces is proposed for the case of sandbag slopes. A series of laboratory experiments on the wave run-up on smooth slopes and sandbag slopes was conducted in a regular-wave flume, leading to the finding of empirical parameters for the formula. The proposed empirical formula is applicable to wave steepness ranging from 0.01 to 0.14 and to the thickness of placed sandbags relative to the wave height ranging from 0.17 to 3.0. The study shows that the wave run-up height computed by the formula for the sandbag slopes is 26-40% lower than that computed by the formula for the smooth slopes.
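To make the structure of such a formula concrete, here is a hedged sketch of an Iribarren-style run-up estimate with a roughness reduction factor. The coefficients a and b and the roughness factor value are illustrative placeholders, not the paper's fitted parameters.

```python
# Sketch of a surf-similarity run-up formula with a roughness factor;
# all coefficient values below are assumptions for illustration only.
import math

def runup_height(H, T, slope, gamma_f=0.6, a=1.65, b=4.0):
    """Return Ru2% (m) for wave height H (m), period T (s), structure slope
    tan(alpha), and roughness factor gamma_f (1.0 = smooth, <1 = sandbags)."""
    L0 = 9.81 * T**2 / (2 * math.pi)        # deep-water wavelength
    xi = slope / math.sqrt(H / L0)          # surf-similarity parameter
    return H * gamma_f * min(a * xi, b)     # capped linear growth in xi

print(runup_height(H=1.0, T=6.0, slope=1/3))
```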
The CDF Run II disk inventory manager
Hubbard, Paul; Lammel, Stephan
The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year. The duration of the run is expected to be over two years. One of the main data handling strategies of CDF for Run II is to hide all tape access from the user and to facilitate sharing of data and thus disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command-line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard
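The fileset-level staging idea can be caricatured as a cache with eviction: keep frequently accessed filesets on disk and stage others back from tape on demand. The class and method names below are hypothetical sketches, not the CDF API.

```python
# Illustrative fileset cache with least-recently-used eviction; this is a
# sketch of the concept, not the actual CDF disk inventory manager.
from collections import OrderedDict

class FilesetCache:
    def __init__(self, capacity, stage_from_tape):
        self.capacity = capacity            # max filesets resident on disk
        self.stage_from_tape = stage_from_tape
        self.resident = OrderedDict()       # fileset name -> path on disk

    def open(self, fileset):
        if fileset in self.resident:
            self.resident.move_to_end(fileset)   # mark as recently used
        else:
            if len(self.resident) >= self.capacity:
                victim, _ = self.resident.popitem(last=False)  # evict LRU
                print(f"evicting {victim} from disk")
            self.resident[fileset] = self.stage_from_tape(fileset)
        return self.resident[fileset]

cache = FilesetCache(2, stage_from_tape=lambda fs: f"/disk/{fs}")
cache.open("bphysics_01"); cache.open("jet_02"); cache.open("bphysics_01")
```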
Common Running Overuse Injuries and Prevention
Žiga Kozinc
Runners are particularly prone to developing overuse injuries. The most common running-related injuries include medial tibial stress syndrome, Achilles tendinopathy, plantar fasciitis, patellar tendinopathy, iliotibial band syndrome, tibial stress fractures, and patellofemoral pain syndrome. Two of the most significant risk factors appear to be injury history and weekly distance. Several trials have successfully identified biomechanical risk factors for specific injuries, with increased ground reaction forces, excessive foot pronation, hip internal rotation and hip adduction during stance phase being mentioned most often. However, evidence on interventions for lowering injury risk is limited, especially regarding exercise-based interventions. Biofeedback training for lowering ground reaction forces is one of the few methods proven to be effective. It seems that the best way to approach running injury prevention is through individualized treatment. Each athlete should be assessed separately and scanned for risk factors, which should be then addressed with specific exercises. This review provides an overview of most common running-related injuries, with a particular focus on risk factors, and emphasizes the problems encountered in preventing running-related injuries.
The running athlete: Roentgenograms and remedies
Pavlov, H.; Torg, J.S.
The authors have put together an atlas of radiographs of almost every conceivable running injury to the foot, ankle, leg, knee, femur, groin, and spine. Text material is limited to legends which describe the figures, and the remedies listed are brief. The text indicates conservative versus surgical treatment and, in some instances, recommends a surgical procedure
ATLAS Data Preparation in Run 2
Laycock, Paul; The ATLAS collaboration
In this presentation, the data preparation workflows for Run 2 are presented. Online data quality uses a new hybrid software release that incorporates the latest offline data quality monitoring software for the online environment. This is used to provide fast feedback in the control room during a data acquisition (DAQ) run, via a histogram-based monitoring framework as well as the online Event Display. Data are sent to several streams for offline processing at the dedicated Tier-0 computing facility, including dedicated calibration streams and an "express" physics stream containing approximately 2% of the main physics stream. This express stream is processed as data arrives, allowing a first look at the offline data quality within hours of a run end. A prompt calibration loop starts once an ATLAS DAQ run ends, nominally defining a 48 hour period in which calibrations and alignments can be derived using the dedicated calibration and express streams. The bulk processing of the main physics stream starts on expi...
The D0 run II trigger system
Schwienhorst, Reinhard; Michigan State U.
The D0 detector at the Fermilab Tevatron was upgraded for Run II. This upgrade included improvements to the trigger system in order to be able to handle the increased Tevatron luminosity and higher bunch crossing rates compared to Run I. The D0 Run II trigger is a highly flexible system to select events to be written to tape from an initial interaction rate of about 2.5 MHz. This is done in a three-tier pipelined, buffered system. The first tier (level 1) processes fast detector pick-off signals in a hardware/firmware-based system to reduce the event rate to about 1.5 kHz. The second tier (level 2) uses information from level 1 and forms simple physics objects to reduce the rate to about 850 Hz. The third tier (level 3) uses full detector readout and event reconstruction on a filter farm to reduce the rate to 20-30 Hz. The D0 trigger menu contains a wide variety of triggers. While the emphasis is on triggering on generic lepton and jet final states, there are also trigger terms for specific final state signatures. In this document we describe the D0 trigger system as it was implemented and is currently operating in Run II
Run-2 ATLAS Trigger and Detector Performance
Winklmeier, Frank; The ATLAS collaboration
The 2nd LHC run started in June 2015 with a pp centre-of-mass collision energy of 13 TeV, and ATLAS has taken first data at this new energy. In this talk the improvements made to the ATLAS experiment during the 2-year shutdown 2013/2014 will be discussed, and first detector and trigger performance results from Run-2 will be shown. In general, reconstruction algorithms for tracks, e/gamma, muons, taus, jets and flavour tagging have been improved for Run-2. The new reconstruction algorithms and their performance measured using the data taken in 2015 at sqrt(s) = 13 TeV will be discussed. Reconstruction efficiency, isolation performance, transverse momentum resolution and momentum scales are measured in various regions of the detector and in momentum intervals enlarged with respect to those measured in Run-1. This presentation will also give an overview of the upgrades to the ATLAS trigger system that have been implemented during the LHC shutdown in order to deal with the increased trigger rates (fact...
KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR
John A. Mercer
It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played a critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from analysis since the velocity ranges were not similar between conditions, resulting in the analysis of six subjects. The slope of the impact force vs. velocity relationship was different between conditions (PSL: 0.178 ± 0.16 BW/(m·s⁻¹); SL2.5: -0.003 ± 0.14 BW/(m·s⁻¹); p < 0.05). The slope of the impact attenuation vs. velocity relationship was different between conditions (PSL: 5.12 ± 2.88 %/(m·s⁻¹); SL2.5: 1.39 ± 1.51 %/(m·s⁻¹); p < 0.05). Stride length was an important factor
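The analysis pipeline described above (per-subject lines of best fit, then a paired comparison of slopes) is simple to express in code. Here is a minimal sketch; the numeric slope values are made-up placeholders, not the study's data.

```python
# Sketch of the slope-then-paired-t-test analysis; data are illustrative.
import numpy as np
from scipy import stats

def fitted_slope(velocity, impact_force):
    """Slope of the least-squares line of best fit (BW per m/s)."""
    return np.polyfit(velocity, impact_force, 1)[0]

# illustrative per-subject slopes for the two conditions; in the study
# each value would come from fitted_slope() on that subject's trials
slopes_psl  = np.array([0.21, 0.15, 0.30, 0.05, 0.19, 0.17])
slopes_sl25 = np.array([0.02, -0.10, 0.08, -0.05, 0.01, 0.02])

t, p = stats.ttest_rel(slopes_psl, slopes_sl25)   # paired t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```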
Cryptography cracking codes
While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption: essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.
Coded Splitting Tree Protocols
Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar
This paper presents a novel approach to multiple access control called the coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...
Transport theory and codes
Clancy, B.E.
This chapter begins with a neutron transport equation which includes the one-dimensional plane geometry problems, the one-dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers problems which can be solved; eigenvalue problems; the outer iteration loop; the inner iteration loop; and finite difference solution procedures. The input and output data for ANISN are also discussed. Two-dimensional problems such as the DOT code are given. Finally, an overview of the Monte Carlo methods and codes is elaborated on
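To make the outer-iteration structure concrete, here is a toy Python power iteration for the eigenvalue k_eff of a discretized two-group problem. The matrices are invented stand-ins, and a direct linear solve stands in for the inner transport sweeps of a real discrete-ordinates code.

```python
# Toy outer-iteration (power iteration) sketch for k_eff; the two-group
# matrices below are illustrative, not an ANISN-grade transport operator.
import numpy as np

A = np.array([[1.2, -0.1], [-0.3, 1.0]])   # losses (leakage + absorption)
F = np.array([[0.9, 0.5], [0.0, 0.0]])     # fission production

phi = np.ones(2)
k = 1.0
for outer in range(100):                   # outer iteration loop
    source = F @ phi / k
    phi_new = np.linalg.solve(A, source)   # stands in for inner iterations
    k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
    if abs(k_new - k) < 1e-10:
        break
    phi, k = phi_new, k_new

print(f"k_eff ~ {k:.6f} after {outer} outer iterations")
```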
Gravity inversion code
Burkhard, N.R.
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
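The "stabilized linear inverse theory" step that INVERT iterates on can be sketched as a Tikhonov-regularized least-squares solve. The forward operator, data, and damping value below are illustrative stand-ins, not details from the code's documentation.

```python
# Minimal sketch of stabilized (damped) linear inversion; all inputs are
# synthetic placeholders for illustration.
import numpy as np

def stabilized_inverse(G, d, damping=1e-2):
    """Solve min ||G m - d||^2 + damping * ||m||^2 for model m."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ d)

G = np.random.default_rng(0).normal(size=(50, 20))  # forward operator
m_true = np.zeros(20); m_true[8:12] = 1.0           # buried anomaly
d = G @ m_true + 0.01 * np.random.default_rng(1).normal(size=50)
m_est = stabilized_inverse(G, d)
print(np.round(m_est[6:14], 2))
```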
The MESORAD dose assessment model: Computer code
Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.
MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned. That volume will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs
The efficacy of downhill running as a method to enhance running economy in trained distance runners.
Shaw, Andrew J; Ingham, Stephen A; Folland, Jonathan P
Running downhill, in comparison to running on the flat, appears to involve an exaggerated stretch-shortening cycle (SSC) due to greater impact loads and higher vertical velocity on landing, whilst also incurring a lower metabolic cost. Therefore, downhill running could facilitate higher volumes of training at higher speeds whilst performing an exaggerated SSC, potentially inducing favourable adaptations in running mechanics and running economy (RE). This investigation assessed the efficacy of a supplementary 8-week programme of downhill running as a means of enhancing RE in well-trained distance runners. Nineteen athletes completed supplementary downhill (-5% gradient; n = 10) or flat (n = 9) run training twice a week for 8 weeks within their habitual training. Participants trained at a standardised intensity based on the velocity of lactate turnpoint (vLTP), with training volume increased incrementally between weeks. Changes in energy cost of running (EC) and vLTP were assessed on both flat and downhill gradients, in addition to maximal oxygen uptake (V̇O2max). No changes in EC were observed during flat running following downhill (1.22 ± 0.09 vs 1.20 ± 0.07 kcal·kg⁻¹·km⁻¹, P = .41) or flat run training (1.21 ± 0.13 vs 1.19 ± 0.12 kcal·kg⁻¹·km⁻¹). Moreover, no changes in EC during downhill running were observed in either condition (P > .23). vLTP increased following both downhill (16.5 ± 0.7 vs 16.9 ± 0.6 km·h⁻¹, P = .05) and flat run training (16.9 ± 0.7 vs 17.2 ± 1.0 km·h⁻¹, P = .05), though no differences in responses were observed between groups (P = .53). Therefore, a short programme of supplementary downhill run training does not appear to enhance RE in already well-trained individuals.
Accounting for Laminar Run & Trip Drag in Supersonic Cruise Performance Testing
Goodsell, Aga M.; Kennelly, Robert A.
An improved laminar run and trip drag correction methodology for supersonic cruise performance testing was derived. This method required more careful analysis of the flow visualization images which revealed delayed transition particularly on the inboard upper surface, even for the largest trip disks. In addition, a new code was developed to estimate the laminar run correction. Once the data were corrected for laminar run, the correct approach to the analysis of the trip drag became evident. Although the data originally appeared confusing, the corrected data are consistent with previous results. Furthermore, the modified approach, which was described in this presentation, extends prior historical work by taking into account the delayed transition caused by the blunt leading edges.
PP: A graphics post-processor for the EQ6 reaction path code
Stockman, H.W.
The PP code is a graphics post-processor and plotting program for EQ6, a popular reaction-path code. PP runs on personal computers, allocates memory dynamically, and can handle very large reaction path runs. Plots of simple variable groups, such as fluid and solid phase composition, can be obtained with as few as two keystrokes. Navigation through the list of reaction path variables is simple and efficient. Graphics files can be exported for inclusion in word processing documents and spreadsheets, and experimental data may be imported and superposed on the reaction path runs. The EQ6 thermodynamic database can be searched from within PP, to simplify interpretation of complex plots
Fulcrum Network Codes
Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
Supervised Convolutional Sparse Coding
Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter
This work addresses supervised convolutional sparse coding (CSC), which aims at learning discriminative dictionaries instead of purely reconstructive ones. A supervised regularization term is incorporated into the traditional unsupervised CSC objective to encourage the final dictionary elements
OCA Code Enforcement
Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...
The fast code
Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)
The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)
Code Disentanglement: Initial Plan
Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
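The levelization criterion (dependencies form a directed acyclic graph, each package using only lower levels) can be checked mechanically with a topological sort. The sketch below uses Kahn's algorithm to detect cycles and assign levels; the package names are hypothetical.

```python
# Sketch of a levelization check: assign a level to each package, or fail
# if the "uses" graph contains a cycle. Package names are made up.
from collections import defaultdict, deque

def levelize(uses):
    """uses: dict package -> set of lower-level packages it depends on.
    Returns dict package -> level, or raises if the graph has a cycle."""
    level = {}
    indeg = {p: len(deps) for p, deps in uses.items()}
    users = defaultdict(set)
    for p, deps in uses.items():
        for d in deps:
            users[d].add(p)
    ready = deque(p for p, n in indeg.items() if n == 0)
    while ready:
        p = ready.popleft()
        level[p] = 1 + max((level[d] for d in uses[p]), default=0)
        for u in users[p]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    if len(level) != len(uses):
        raise ValueError("package graph is not levelized (cycle found)")
    return level

print(levelize({"io": set(), "mesh": {"io"}, "hydro": {"mesh", "io"}}))
```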
Induction technology optimization code
Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.
A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
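The design sweep the abstract describes (evaluate total cost over the two free design choices, then pick an optimum) reduces to a grid search. The cost model below is a smooth stand-in, not the code's actual cost factors.

```python
# Sketch of a two-parameter design sweep; the cost surface is hypothetical.
import numpy as np

def system_cost(rise_time_ns, core_aspect_ratio):
    """Stand-in smooth cost surface with a single optimum."""
    return (rise_time_ns - 40) ** 2 / 100 + (core_aspect_ratio - 2.5) ** 2

rise_times = np.linspace(20, 80, 61)
aspect_ratios = np.linspace(1.0, 4.0, 31)
costs = np.array([[system_cost(rt, ar) for ar in aspect_ratios]
                  for rt in rise_times])
i, j = np.unravel_index(costs.argmin(), costs.shape)
print(f"optimum: rise time {rise_times[i]:.0f} ns, "
      f"aspect ratio {aspect_ratios[j]:.1f}")
```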
VT ZIP Code Areas
Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...
Bandwidth efficient coding
Anderson, John B
Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.
The description of reactor lattice codes is carried out using the example of the WIMSD-5B code. The WIMS code in its various versions is the most recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally the specific algorithm applied in fuel depletion calculations is outlined. (author)
Critical Care Coding for Neurologists.
Nuwer, Marc R; Vespa, Paul M
Lattice Index Coding
Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele
The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
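A noiseless caricature of the side-information idea: with K = 2 messages and each receiver already holding the other message, broadcasting the XOR of the two serves both receivers in a single transmission. This toy example is for intuition only; the paper studies the Gaussian broadcast setting with lattice codes.

```python
# Toy index coding instance: one XOR broadcast serves two receivers.
m1, m2 = 0b1011, 0b0110
broadcast = m1 ^ m2            # single coded transmission

# receiver 1 knows m2 as side information, receiver 2 knows m1
recovered_m1 = broadcast ^ m2
recovered_m2 = broadcast ^ m1
assert (recovered_m1, recovered_m2) == (m1, m2)
```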
Cracking the Gender Codes
Rennison, Betina Wolfgang
extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion... In leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve.
Post-test analysis of ROSA-III experiment RUNs 705 and 706
Koizumi, Yasuo; Soda, Kunihisa; Kikuchi, Osamu; Tasaka, Kanji; Shiba, Masayoshi
The purpose of the ROSA-III experiment with a scaled BWR test facility is to examine primary coolant thermal-hydraulic behavior and the performance of ECCS during a postulated loss-of-coolant accident of a BWR. The results provide information for verification and improvement of reactor safety analysis codes. RUNs 705 and 706 assumed a 200% double-ended break at the recirculation pump suction. RUN 705 was an isothermal blowdown test without initial power and initial core flow. In RUN 706, for an average core power and no ECCS, the main steam line and feed water line were isolated immediately on the break. Post-test analysis of RUNs 705 and 706 was made with the computer code RELAP4J. The agreement in system pressure between calculation and experiment was satisfactory. However, the calculated heater rod surface temperatures were significantly higher than the experimental ones. The calculated axial temperature profile was different in tendency from the experimental one. The calculated mixture level behavior in the core was different from the liquid void distribution observed in the experiment. The rapid rise of fuel rod surface temperature was caused by the reduction of the heat transfer coefficient attributed to the increase of quality. The need was indicated to improve the analytical model of void distribution in the core, to perform a characteristic test of the recirculation line under reverse flow, and to examine the core inlet flow rate experimentally and analytically. (author)
Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster
Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg
The trivially-parallel nature of Monte Carlo (MC) simulations makes them ideally suited for running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority and careful optimization of the parameters used to control job allocation to host machines. To maximize use of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)
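Scripted bulk submission of this kind is typically done by generating one HTCondor submit description per job. The sketch below varies priority with anticipated run-time; the priority scheme, file names, and wrapper script are illustrative assumptions, not the authors' actual tool.

```python
# Sketch of scripted bulk HTCondor submission; the run_mcnp.sh wrapper and
# the priority rule are hypothetical.
SUBMIT_TEMPLATE = """\
executable = {executable}
arguments  = {args}
priority   = {priority}
output     = logs/{name}.out
error      = logs/{name}.err
queue
"""

def write_submit_file(name, executable, args, expected_hours):
    # short jobs get higher priority so they backfill idle desktops
    priority = 10 if expected_hours < 1 else 0
    with open(f"{name}.sub", "w") as f:
        f.write(SUBMIT_TEMPLATE.format(executable=executable, args=args,
                                       priority=priority, name=name))

for i in range(3):
    write_submit_file(f"mcnp_{i:03d}", "run_mcnp.sh", f"input_{i:03d}.inp", 0.5)
```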
Calcaneus length determines running economy: implications for endurance running performance in modern humans and Neandertals.
Raichlen, David A; Armstrong, Hunter; Lieberman, Daniel E
The endurance running (ER) hypothesis suggests that distance running played an important role in the evolution of the genus Homo. Most researchers have focused on ER performance in modern humans, or on reconstructing ER performance in Homo erectus; however, few studies have examined ER capabilities in other members of the genus Homo. Here, we examine skeletal correlates of ER performance in modern humans in order to evaluate the energetics of running in Neandertals and early Homo sapiens. Recent research suggests that running economy (the energy cost of running at a given speed) is strongly related to the length of the Achilles tendon moment arm. Shorter moment arms allow for greater storage and release of elastic strain energy, reducing energy costs. Here, we show that a skeletal correlate of Achilles tendon moment arm length, the length of the calcaneal tuber, does not correlate with walking economy, but correlates significantly with running economy and explains a high proportion of the variance (80%) in cost between individuals. Neandertals had relatively longer calcaneal tubers than modern humans, which would have increased their energy costs of running. Calcaneal tuber lengths in early H. sapiens do not significantly differ from those of extant modern humans, suggesting Neandertal ER economy was reduced relative to contemporaneous anatomically modern humans. Endurance running is generally thought to be beneficial for gaining access to meat in hot environments, where hominins could have used pursuit hunting to run prey taxa into hyperthermia. We hypothesize that ER performance may have been reduced in Neandertals because they lived in cold climates.
Similar Running Economy With Different Running Patterns Along the Aerial-Terrestrial Continuum.
Lussiana, Thibault; Gindre, Cyrille; Hébert-Losier, Kim; Sagawa, Yoshimasa; Gimenez, Philippe; Mourot, Laurent
No unique or ideal running pattern is the most economical for all runners. Classifying the global running patterns of individuals into 2 categories (aerial and terrestrial) using the Volodalen method could permit a better understanding of the relationship between running economy (RE) and biomechanics. The main purpose was to compare the RE of aerial and terrestrial runners. Two coaches classified 58 runners into aerial (n = 29) or terrestrial (n = 29) running patterns on the basis of visual observations. RE, muscle activity, kinematics, and spatiotemporal parameters of both groups were measured during a 5-min run at 12 km/h on a treadmill. Maximal oxygen uptake (V̇O2max) and peak treadmill speed (PTS) were assessed during an incremental running test. No differences were observed between aerial and terrestrial patterns for RE, V̇O2max, and PTS. However, at 12 km/h, aerial runners exhibited earlier gastrocnemius lateralis activation in preparation for contact, less dorsiflexion at ground contact, higher coactivation indexes, and greater leg stiffness during stance phase than terrestrial runners. Terrestrial runners had more pronounced semitendinosus activation at the start and end of the running cycle, shorter flight time, greater leg compression, and a more rear-foot strike. Different running patterns were associated with similar RE. Aerial runners appear to rely more on elastic energy utilization with a rapid eccentric-concentric coupling time, whereas terrestrial runners appear to propel the body more forward rather than upward to limit work against gravity. Excluding runners with a mixed running pattern from analyses did not affect study interpretation.
Muscle injury after low-intensity downhill running reduces running economy.
Baumann, Cory W; Green, Michael S; Doyle, J Andrew; Rupp, Jeffrey C; Ingalls, Christopher P; Corona, Benjamin T
Contraction-induced muscle injury may reduce running economy (RE) by altering motor unit recruitment, lowering contraction economy, and disturbing running mechanics, any of which may have a deleterious effect on endurance performance. The purpose of this study was to determine if RE is reduced 2 days after performing injurious, low-intensity exercise in 11 healthy active men (27.5 ± 5.7 years; 50.05 ± 1.67 VO2peak). Running economy was determined at treadmill speeds eliciting 65 and 75% of the individual's peak rate of oxygen uptake (VO2peak) 1 day before and 2 days after injury induction. Lower extremity muscle injury was induced with a 30-minute downhill treadmill run (6 × 5-minute runs, 2 minutes rest, −12% grade, and 12.9 km·h⁻¹) that elicited 55% VO2peak. Maximal quadriceps isometric torque was reduced immediately and 2 days after the downhill run by 18 and 10%, and a moderate degree of muscle soreness was present. Two days after the injury, steady-state VO2 and metabolic work (VO2 in L·km⁻¹) were significantly greater (4-6%) during the 65% VO2peak run. Additionally, postinjury VCO2, VE and rating of perceived exertion were greater at 65% but not at 75% VO2peak, whereas whole blood-lactate concentrations did not change pre-injury to postinjury at either intensity. In conclusion, low-intensity downhill running reduces RE at 65% but not 75% VO2peak. The results of this study and other studies indicate the magnitude to which RE is altered after downhill running is dependent on the severity of the injury and intensity of the RE test.
PEAR code review
De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.
As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)
KENO-V code
The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general PN scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes
Code, standard and specifications
Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail
Radiography, like any other technique, needs standards. These standards are widely used, and the methods for applying them are well established. Radiographic testing is therefore only practical when it is based on the documented regulations. These regulations and guidelines are documented in codes, standards and specifications. In Malaysia, level-one and basic radiographers carry out radiography work based on instructions given by a level-two or level-three radiographer. These instructions are produced based on the guidelines mentioned in the documents, and a level-two radiographer must follow the specifications mentioned in the standard when writing an instruction. This makes clear that radiography is a type of work in which everything must follow the rules. For codes, radiography follows the code of the American Society of Mechanical Engineers (ASME), and the only code in Malaysia at this time is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in place, all radiography must automatically follow the rules and standards.
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...
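A generic fast-CU heuristic in the spirit of such work is early termination of the quadtree split: recurse only when the block's texture complexity is high relative to a QP-dependent threshold. The sketch below illustrates the idea; the variance criterion and the QP scaling are hypothetical, not the paper's actual mechanism.

```python
# Toy early-termination CU split decision; threshold scaling is invented.
import numpy as np

def split_cu(block, qp, min_size=8):
    """Return a list of (y, x, size) coding units for a square block."""
    size = block.shape[0]
    threshold = 4.0 * (qp / 22.0) ** 2        # hypothetical QP scaling
    if size <= min_size or block.var() < threshold:
        return [(0, 0, size)]                 # stop splitting here
    half = size // 2
    units = []
    for dy in (0, half):
        for dx in (0, half):
            sub = split_cu(block[dy:dy+half, dx:dx+half], qp, min_size)
            units += [(y + dy, x + dx, s) for y, x, s in sub]
    return units

frame_block = np.random.default_rng(0).integers(0, 255, (64, 64)).astype(float)
print(len(split_cu(frame_block, qp=32)), "coding units")
```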
Coupling a Basin Modeling and a Seismic Code using MOAB
Yan, Mi; Jordan, Kirk; Kaushik, Dinesh; Perrone, Michael; Sachdeva, Vipin; Tautges, Timothy J.; Magerlein, John
We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.
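A one-dimensional caricature of the coupling step MOAB performs is transferring a field from the basin code's mesh to the seismic code's mesh by interpolation. The mesh coordinates and field values below are made up for illustration.

```python
# 1D sketch of mesh-to-mesh field transfer for loose coupling; all values
# are synthetic placeholders.
import numpy as np

basin_nodes = np.linspace(0.0, 10.0, 11)        # coarse basin mesh
porosity = 0.30 - 0.015 * basin_nodes           # field computed by PBSM

seismic_nodes = np.linspace(0.0, 10.0, 101)     # finer seismic mesh
porosity_on_seismic = np.interp(seismic_nodes, basin_nodes, porosity)

# the seismic solver (FWI3D in the paper) would now consume the
# interpolated field to build its velocity model
print(porosity_on_seismic[::25])
```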
Recent developments in KTF. Code optimization and improved numerics
Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin
The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and BWRs. It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids) named CTF. In this paper, the investigations devoted to the enhancement of the code's numerics and software structure are presented and discussed. The gain in code speed is demonstrated with examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)
TRANSURANUS: A fuel rod analysis code ready for use
Lassmann, K; O'Carroll, C; Van de Laar, J [Commission of the European Communities, Karlsruhe (Germany). European Inst. for Transuranium Elements]; Ott, C [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]
The basic concepts of fuel rod performance codes are discussed. The TRANSURANUS code developed at the Institute for Transuranium Elements, Karlsruhe (Germany) is presented. It is a quasi two-dimensional (1½-D) code designed for treatment of a whole fuel rod for any type of reactor and any situation. The fuel rods found in the majority of test or power reactors can be analyzed for very different situations (normal, off-normal and accidental). The time scale of the problems to be treated may range from milliseconds to years. The TRANSURANUS code consists of a clearly defined mechanical/mathematical framework into which physical models can easily be incorporated. This framework has been extensively tested and the programming very clearly reflects this structure. The code is well structured and easy to understand. It has a comprehensive material data bank for different fuels, claddings, coolants and their properties. The code can be employed in a deterministic and a statistical version. It is written in standard FORTRAN 77. The code system includes: 2 preprocessor programs (MAKROH and AXORDER) for setting up new data cases; the post-processor URPLOT for plotting all important quantities as a function of the radius, the axial coordinate or the time; and the post-processor URSTART for evaluating statistical analyses. The TRANSURANUS code exhibits short running times. A new WINDOWS-based interactive interface is under development. The code is now in use in various European institutions and is available to all interested parties. 7 figs., 15 refs.
DESIGN IMPROVEMENT OF THE LOCOMOTIVE RUNNING GEARS
S. V. Myamlin
Purpose. To determine the dynamic qualities of mainline freight locomotives characterizing safe motion in tangent and curved track sections at all operational speeds, one needs a whole set of studies, which includes selection of the design scheme, development of the corresponding mathematical model of the locomotive's spatial fluctuations, construction of the computer calculation program, and conducting of theoretical and then experimental studies of the new designs. In this case, one should compare the results with existing designs. One of the necessary conditions for the qualitative improvement of the traction rolling stock is to define the parameters of its running gears. Among the issues related to this problem, an important place is occupied by the task of determining the locomotive dynamic properties at the design stage, taking into account the selected technical solutions in the running gear design. Methodology. The mathematical modeling studies are carried out by numerical integration of the dynamic loading for the mainline locomotive using the software package «Dynamics of Rail Vehicles» («DYNRAIL»). Findings. As a result of research for the improvement of locomotive running gear design it can be seen that the creation of a modern locomotive requires from engineers and scientists the realization of scientific and technical solutions: enhancing design speed with simultaneous improvement of the traction, braking and dynamic qualities; providing a simple and reliable design, especially in the running gear; reducing the costs for maintenance and repair; achieving low initial cost and operating costs for the whole service life; delivering high traction force when starting, as close as possible to the ultimate force of adhesion; and supporting work in multiple traction mode at sufficient design speed. Practical Value. The generalization of theoretical, scientific and methodological, experimental studies aimed
Run scenarios for the linear collider
M. Battaglia et al.
We have examined how a Linear Collider program of 1000 fb⁻¹ could be constructed in the case that a very rich program of new physics is accessible at √s ≤ 500 GeV. We have examined possible run plans that would allow the measurement of the parameters of a 120 GeV Higgs boson, the top quark, and could give information on the sparticle masses in SUSY scenarios in which many states are accessible. We find that the construction of the run plan (the specific energies for collider operation, the mix of initial state electron polarization states, and the use of special e⁻e⁻ runs) will depend quite sensitively on the specifics of the supersymmetry model, as the decay channels open to particular sparticles vary drastically and discontinuously as the underlying SUSY model parameters are varied. We have explored this dependence somewhat by considering two rather closely related SUSY model points. We have called for operation at a high energy to study kinematic end points, followed by runs in the vicinity of several two-body production thresholds once their location is determined by the end point studies. For our benchmarks, the end point runs are capable of disentangling most sparticle states through the use of specific final states and beam polarizations. The estimated sparticle mass precisions, combined from end point and scan data, are given in Table VIII and the corresponding estimates for the mSUGRA parameters are in Table IX. The precision for the Higgs boson mass, width, cross-sections, branching ratios and couplings are given in Table X. The errors on the top quark mass and width are expected to be dominated by the systematic limits imposed by QCD non-perturbative effects. The run plan devotes at least two thirds of the accumulated luminosity near the maximum LC energy, so that the program would be sensitive to unexpected new phenomena at high mass scales. We conclude that with a 1 ab⁻¹ program, expected to take the first 6-7 years of LC operation, one can do
The UK core performance code package
Hutt, P.K.; Gaines, N.; McEllin, M.; White, R.J.; Halsall, M.J.
Over the last few years work has been co-ordinated by Nuclear Electric, originally part of the Central Electricity Generating Board, with contributions from the United Kingdom Atomic Energy Authority and British Nuclear Fuels Limited, to produce a generic, easy-to-use and integrated package of core performance codes able to perform a comprehensive range of calculations for fuel cycle design, safety analysis and on-line operational support for Light Water Reactor and Advanced Gas Cooled Reactor plant. The package consists of modern rationalized generic codes for lattice physics (WIMS), whole reactor calculations (PANTHER), thermal hydraulics (VIPRE) and fuel performance (ENIGMA). These codes, written in FORTRAN 77, are highly portable and new developments have followed modern quality assurance standards. These codes can all be run "stand-alone", but they are also being integrated within a new UNIX-based interactive system called the Reactor Physics Workbench (RPW). The RPW provides an interactive user interface and a sophisticated data management system. It offers quality assurance features to the user and has facilities for defining complex calculational sequences. The paper reviews the current capabilities of these components and their integration within the package, and outlines future developments underway. Finally, the paper describes the development of an on-line version of this package which is now being commissioned on UK AGR stations. (author)
Ultra-obligatory running among ultramarathon runners.
Hoffman, Martin D; Krouse, Rhonna
Participants in the Ultrarunners Longitudinal TRAcking (ULTRA) Study were asked to answer "yes" or "no" to the question "If you were to learn, with absolute certainty, that ultramarathon running is bad for your health, would you stop your ultramarathon training and participation?" Among the 1349 runners, 74.1% answered "no". Compared with those answering "yes", they were younger and had higher life meaning (p = 0.0002) scores on the Motivations of Marathoners Scales. Despite a high health orientation, most ultramarathon runners would not stop running if they learned it was bad for their health, as it appears to serve their psychological and personal achievement motivations and their task orientation such that they must perceive enhanced benefits that are worth retaining at the risk of their health.
CMS Computing Operations During Run1
Gutsche, Oliver
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this presentation we will discuss the operational experience from the first run. We will present the workflows and data flows that were executed, we will discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. In this presentation we will also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.
Effects of intermittent hypoxia on running economy.
Burtscher, M; Gatterer, H; Faulhaber, M; Gerstgrasser, W; Schenk, K
We investigated the effects of two 5-wk periods of intermittent hypoxia on running economy (RE). 11 male and female middle-distance runners were randomly assigned to the intermittent hypoxia group (IHG) or to the control group (CG). All athletes trained for a 13-wk period starting at pre-season until the competition season. The IHG additionally spent 2 h at rest on 3 days/wk for the first and the last 5 weeks in normobaric hypoxia (15-11% FiO2). RE, haematological parameters and body composition were determined at low altitude (600 m) at baseline, and after the 5th, the 8th and the 13th week of training. RE, determined by the relative oxygen consumption during submaximal running, improved more in the IHG than in the CG (-2.3 ± 1.2 vs. -0.3 ± 0.7 ml/min/kg) over the training phase.
CMS computing operations during run 1
Adelman, J; Artieda, J; Bagliese, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzango, F; Fisk, I; Flix, J; Georges, A; Gi ffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahi , A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; Mccartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; S ligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.
28 CFR 544.34 - Inmate running events.
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Inmate running events. 544.34 Section 544... EDUCATION Inmate Recreation Programs § 544.34 Inmate running events. Running events will ordinarily not... available for all inmate running events. ...
Wave Run-up on the Zeebrugge Rubble Mound Breakwater
De Rouck, Julien; de Walle, Bjorn Van; Troch, Peter
Full-scale wave run-up measurements have been carried out on the Zeebrugge rubble mound breakwater in the framework of the EU-funded OPTICREST project. Wave run-up has been measured by a run-up gauge and by a so-called spiderweb system. The dimensionless wave run-up value Ru2%/Hm0 measured in Zeebrugge...
HUDU: The Hanford Unified Dose Utility computer code
Scherpelz, R.I.
The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs
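The kind of calculation a straight-line Gaussian model performs can be sketched in a few lines: a centerline air concentration, followed by an inhalation dose from a breathing rate and a dose conversion factor. All parameter values below are illustrative; HUDU's own dispersion coefficients, source-term library, and dose pathways differ.

```python
# Minimal Gaussian plume sketch, assuming a ground-level release and a
# ground-level centerline receptor; every number here is a placeholder.
import math

def centerline_concentration(Q, u, sigma_y, sigma_z):
    """Ground-level centerline concentration (Bq/m^3) for release rate Q
    (Bq/s), wind speed u (m/s), and dispersion parameters sigma_y,
    sigma_z (m) at the downwind distance of interest."""
    return Q / (math.pi * u * sigma_y * sigma_z)

Q, u = 1.0e9, 3.0                    # 1 GBq/s release, 3 m/s wind
sigma_y, sigma_z = 80.0, 40.0        # roughly 1 km downwind, neutral air
chi = centerline_concentration(Q, u, sigma_y, sigma_z)

breathing_rate = 3.3e-4              # m^3/s (adult, light activity)
dcf = 7.0e-9                         # Sv/Bq inhaled (nuclide-dependent)
exposure_s = 3600.0                  # one hour in the plume
dose_sv = chi * breathing_rate * dcf * exposure_s
print(f"chi = {chi:.2e} Bq/m^3, inhalation dose = {dose_sv:.2e} Sv")
```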
1987 DOE review: First collider run operation
Childress, S.; Crawford, J.; Dugan, G.
This review covers the operations of the first collider run of the 1.8 TeV superconducting Tevatron. The papers enclosed cover: PBAR source status, fixed target operation, Tevatron cryogenic reliability and capacity upgrade, Tevatron energy upgrade progress and plans, status of the D0 low beta insertion, 1.8 K and 4.7 K refrigeration for low-β quadrupoles, and progress and plans for the LINAC and booster, including near-term and long-term performance improvements
CERN Running Club – Sale of Items
CERN Running club
The CERN Running Club is organising a sale of items on 26 June from 11:30 to 13:00 in the entry area of Restaurant 2 (504 R-202). The items for sale are souvenir prizes from past Relay Races and comprise: backpacks, thermoses, towels, gloves & caps, lamps, long-sleeve winter shirts and windproof vests. All items will be sold at 5 CHF.
Analysis of Biomechanical Factors in Bend Running
Bing Zhang; Xinping You; Feng Li
Sprint running demonstrates comprehensive technical and tactical abilities under various conditions. However, whether it is fair to allocate tracks to short-distance athletes from different racetracks has been a hot topic. This study analyzes the forces involved, the differences between tracks and the influence of the bend, from the perspective of sports biomechanics. The results indicate that many disadvantages exist in inner tracks, middle tracks are the best and outer ones are inferior to midd...
Marathon Running for Amateurs: Benefits and Risks
Farhad Kapadia
The habitual level of physical activity of the human race has significantly and abruptly declined in the last few generations due to technological developments. The professional societies and government health agencies have published minimum physical activity requirement guidelines to educate the masses about the importance of exercise and to reduce cardiovascular (CV) and all-cause mortality at the population level. There is growing participation in marathon running by amateur, middle-aged c...
Forecasting Long-Run Electricity Prices
Hamm, Gregory; Borison, Adam
Estimation of long-run electricity prices is extremely important but it is also very difficult because of the many uncertainties that will determine future prices, and because of the lack of sufficient historical and forwards data. The difficulty is compounded when forecasters ignore part of the available information or unnecessarily limit their thinking about the future. The authors present a practical approach that addresses these problems. (author)
Comparison of computer codes related to the sodium oxide aerosol behavior in a containment building
Fermandjian, J.
In order to ensure that the problems of describing the physical behavior of sodium aerosols during hypothetical fast reactor accidents were adequately understood, a comparison of the computer codes (ABC/INTG, PNC, Japan; AEROSIM, UKAEA/SRD, United Kingdom; PARDISEKO IIIb, KfK, Germany; AEROSOLS/A2 and AEROSOLS/B1, CEA, France) was undertaken in the framework of a CEC exercise in which code users ran their own codes with a prearranged input
Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.
Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J
A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
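A minimal sketch of the run-to-run idea described here: between repetitions of the process, a scalar energy measure estimated from sparse measurements is used to update the input parameterization for the next run. run_process is a hypothetical stand-in for executing one scan and returning the measured energy; the authors' feedforward exact-inversion framework is not reproduced by this toy gradient update.

def run_to_run_optimize(run_process, theta, step=0.1, n_runs=20, eps=1e-3):
    """Run-to-run optimization sketch: after each repetition, nudge the
    input parameter theta downhill on the measured energy."""
    for _ in range(n_runs):
        e0 = run_process(theta)
        e1 = run_process(theta + eps)          # probe run for a finite difference
        grad = (e1 - e0) / eps
        theta -= step * grad                   # update applied before the next run
    return theta

# Toy usage: the "process" has its optimum at theta = 2.0
print(run_to_run_optimize(lambda t: (t - 2.0)**2, theta=0.0))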
Financial Performance of Health Insurers: State-Run Versus Federal-Run Exchanges.
Hall, Mark A; McCue, Michael J; Palazzolo, Jennifer R
Many insurers incurred financial losses in individual markets for health insurance during 2014, the first year of Affordable Care Act mandated changes. This analysis looks at key financial ratios of insurers to compare profitability in 2014 and 2013, identify factors driving financial performance, and contrast the financial performance of health insurers operating in state-run exchanges versus the federal exchange. Overall, the median loss of sampled insurers was -3.9%, no greater than their loss in 2013. Reduced administrative costs offset increases in medical losses. Insurers performed better in states with state-run exchanges than insurers in states using the federal exchange in 2014. Medical loss ratios are the underlying driver more than administrative costs in the difference in performance between states with federal versus state-run exchanges. Policy makers looking to improve the financial performance of the individual market should focus on features that differentiate the markets associated with state-run versus federal exchanges.
Run-off from roofing materials
In order to determine the run-off from roofing materials, a roof has been constructed with two different slopes (30 deg. and 45 deg.). 7Be and 137Cs have been used as tracers. Considering new roofing material, the pollution removed by run-off processes has been shown to be very different for the various materials. The pollution is much more easily removed from silicon-treated material than from porous red-tile material. Cesium is removed more easily than beryllium. The content of cesium in old roofing materials is greater in red tile than in other, less porous materials. However, the measured removal from new material does not correspond to the amount accumulated in the old. This could be explained by weathering and by saturation effects; the latter is probably the more important. The measurements on old material indicate a removal of 44-86% of the cesium pollution by run-off, whereas the measurements on new material showed a removal of only 31-50%. It has been demonstrated that the pollution concentration in run-off water can be very different from that in rainwater.
Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E
Management of the large volume of data collected by any large-scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called "runBrowser" makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions at...
Running vacuum cosmological models: linear scalar perturbations
Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)
In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions typically denoted by $\Lambda(H^2)$ or $\Lambda(R)$. Such models assume an equation of state for the vacuum given by $\bar{P}_\Lambda = -\bar{\rho}_\Lambda$, relating its background pressure $\bar{P}_\Lambda$ with its mean energy density $\bar{\rho}_\Lambda \equiv \Lambda/8\pi G$. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely $\bar{\rho}_\Lambda = \sum_i \bar{\rho}_{\Lambda i}$. Each $\Lambda_i$ vacuum component is associated and interacting with one of the $i$ matter components in both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the $\Lambda(H^2)$ scenario the vacuum is coupled with every matter component, whereas the $\Lambda(R)$ description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
The aerodynamic signature of running spiders.
Jérôme Casas
Many predators display two foraging modes, an ambush strategy and a cruising mode. These foraging strategies have been classically studied in energetic, biomechanical and ecological terms, without considering the role of signals produced by predators and perceived by prey. Wolf spiders are a typical example; they hunt in leaf litter either using an ambush strategy or by moving at high speed, taking over unwary prey. Air flow upstream of running spiders is a source of information for escaping prey, such as crickets and cockroaches. However, air displacement by running arthropods has not been previously examined. Here we show, using digital particle image velocimetry, that running spiders are highly conspicuous aerodynamically, due to substantial air displacement detectable up to several centimetres in front of them. This study explains the bimodal distribution of spiders' foraging modes in terms of sensory ecology and is consistent with the escape distances and speeds of cricket prey. These findings may be relevant to the large and diverse array of arthropod prey-predator interactions in leaf litter.
Running-related injuries in school-age children and adolescents treated in emergency departments from 1994 through 2007.
Mehl, Ann J; Nelson, Nicolas G; McKenzie, Lara B
Running for exercise is a popular way to motivate children to be physically active. Running-related injuries are well studied in adults but little information exists for children and adolescents. Through use of the National Electronic Injury Surveillance System database, cases of running-related injuries were selected by using activity codes for exercise (which included running and jogging). Sample weights were used to calculate national estimates. An estimated 225 344 children and adolescents 6 to 18 years old were treated in US emergency departments for running-related injuries. The annual number of cases increased by 34.0% over the study period. One third of the injuries involved a running-related fall and more than one half of the injuries occurred at school. The majority of injuries occurred to the lower extremities and resulted in a sprain or strain. These findings emphasize the need for scientific evidence-based guidelines for pediatric running. The high proportion of running-related falls warrants further research.
SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE
F.N. HASOON
A new code structure for spectral amplitude coding optical code division multiple access systems based on double weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Theoretical analysis and simulation show that the EDW code provides much better performance than existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes.
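A hedged illustration of the cross-correlation property discussed above: for spectral amplitude coding, what matters is how many '1' chips two codewords share in-phase. The codewords below are hypothetical weight-2 sequences in the spirit of a DW family, not the published EDW construction.

def cross_correlation(x, y):
    """In-phase cross-correlation of two binary (0/1) code sequences:
    the number of positions where both have a '1' chip."""
    return sum(a & b for a, b in zip(x, y))

# Hypothetical weight-2 codewords (illustrative only):
c1 = [1, 1, 0, 0, 0, 0]
c2 = [0, 0, 1, 1, 0, 0]
c3 = [0, 1, 1, 0, 0, 0]
print(cross_correlation(c1, c2), cross_correlation(c1, c3))  # 0 and 1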
Nuclear code abstracts (1975 edition)
Akanuma, Makoto; Hirakawa, Takashi
Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)
Some new ternary linear codes
Rumen Daskalov
Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
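To make the $[n,k,d]_q$ terminology concrete, the brute-force sketch below computes the minimum Hamming distance of a small ternary linear code from its generator matrix. The matrix is an illustrative example (the classical $[4,2,3]_3$ tetracode), not one of the paper's 22 new codes.

from itertools import product

def minimum_distance(G, q=3):
    """Brute-force minimum Hamming distance of the linear code generated
    by the rows of G over GF(q). Exponential in k, so only for small codes."""
    k, n = len(G), len(G[0])
    best = n
    for coeffs in product(range(q), repeat=k):
        if all(c == 0 for c in coeffs):
            continue
        word = [sum(c * g for c, g in zip(coeffs, col)) % q
                for col in zip(*G)]
        best = min(best, sum(1 for s in word if s != 0))
    return best

G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
print(minimum_distance(G))  # 3, i.e. a [4,2,3] ternary code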
The Relationship between Running Velocity and the Energy Cost of Turning during Running
Hatamoto, Yoichi; Yamada, Yosuke; Sagayama, Hiroyuki; Higaki, Yasuki; Kiyonaga, Akira; Tanaka, Hiroaki
Ball game players frequently perform changes of direction (CODs) while running; however, there has been little research on the physiological impact of CODs. In particular, the effect of running velocity on the physiological and energy demands of CODs while running has not been clearly determined. The purpose of this study was to examine the relationship between running velocity and the energy cost of a 180° COD and to quantify the energy cost of a 180° COD. Nine male university students (aged 18–22 years) participated in the study. Five shuttle trials were performed in which the subjects were required to run at different velocities (3, 4, 5, 6, 7, and 8 km/h). Each trial consisted of four stages with different turn frequencies (13, 18, 24 and 30 per minute), and each stage lasted 3 minutes. Oxygen consumption was measured during the trial. The energy cost of a COD significantly increased with running velocity (except between 7 and 8 km/h, p = 0.110). The relationship between running velocity and the energy cost of a 180° COD is best represented by a quadratic function (y = −0.012 + 0.066x + 0.008x², r = 0.994, p = 0.001), but is also well represented by a linear function (y = −0.228 + 0.152x, r = 0.991, p < 0.001). These results suggest that running velocities have relatively high physiological demands if the COD frequency increases, and that running velocity affects the physiological demands of CODs. These results also showed that the energy expenditure of a COD can be evaluated using only two data points. These results may be useful for estimating the energy expenditure of players during a match and designing shuttle exercise training programs. PMID:24497913
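Taking the abstract's own quadratic fit at face value, the energy cost of a single 180° COD can be evaluated directly as a function of running velocity; the sketch below simply plugs velocities into that reported equation (units as in the original study, treated here as illustrative only).

def cod_energy_cost(v_kmh):
    """Energy cost of a 180-degree turn vs. running velocity, using the
    quadratic fit reported in the abstract: y = -0.012 + 0.066x + 0.008x^2."""
    return -0.012 + 0.066 * v_kmh + 0.008 * v_kmh**2

for v in (3, 5, 8):
    print(v, round(cod_energy_cost(v), 3))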
Short-run and long-run effects of unemployment on suicides: does welfare regime matter?
Gajewski, Pawel; Zhukovska, Kateryna
Disentangling the immediate effects of an unemployment shock from the long-run relationship has a strong theoretical rationale. Different economic and psychological forces are at play in the first moment and after prolonged unemployment. This study suggests a diverse impact of short- and long-run unemployment on suicides in liberal and social-democratic countries. We take a macro-level perspective and simultaneously estimate the short- and long-run relationships between unemployment and suicide, along with the speed of convergence towards the long-run relationship after a shock, in a panel of 10 high-income countries. We also account for unemployment benefit spending, the share of the population aged 15-34, and the crisis effects. In the liberal group of countries, only a long-run impact of unemployment on suicides is found to be significant (P = 0.010). In social-democratic countries, suicides are associated with initial changes in unemployment (P = 0.028), but the positive link fades over time and becomes insignificant in the long run. Further, crisis effects are a much stronger determinant of suicides in social-democratic countries. Once the broad welfare regime is controlled for, changes in unemployment-related spending do not matter for preventing suicides. A generous welfare system seems efficient at preventing unemployment-related suicides in the long run, but societies in social-democratic countries might be less psychologically immune to sudden negative changes in their professional lives compared with people in liberal countries. Accounting for the different short- and long-run effects could thus improve our understanding of the unemployment-suicide link. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Test results of Run-1 and Run-2 in steam generator safety test facility (SWAT-3)
Kurihara, A.; Yatabe, Toshio; Tanabe, Hiromi; Hiroi, Hiroshi
Large leak sodium-water reaction tests were carried out using the SWAT-1 rig and SWAT-3 facility in the Power Reactor and Nuclear Fuel Development Corporation (PNC) O-arai Engineering Center to obtain data on the design of the prototype LMFBR Monju steam generator against a large leak accident. This report provides the results of SWAT-3 Runs 1 and 2. In Runs 1 and 2, the heat transfer tube bundle of the evaporator, fabricated by TOSHIBA/IHI, was used, and the pressure relief line was located at the top of the evaporator. The water injection rates in the evaporator were 6.7 kg/s and 14.2 (initial)-9.7 kg/s in Runs 1 and 2 respectively, which corresponded to 3.3 tubes and 7.1 (initial)-4.8 tubes failure in an actual size system according to iso-velocity modeling. Approximately two hundred measurement points were provided to collect data such as pressure, temperature, strain, sodium level, void, thrust load, acceleration, displacement, flow rate, and so on in each run. Initial spike pressures were 1.13 MPa and 2.62 MPa nearest to the injection point in Runs 1 and 2 respectively, and the maximum quasi-steady pressures in the evaporator were 0.49 MPa and 0.67 MPa in Runs 1 and 2. No secondary tube failure was observed. The rupture disc of the evaporator (RD601) burst at 1.1 s in Run-1 and at 0.7 s in Run-2 after water was injected, and the pressure relief system functioned well, though a few items for improvement were found. (author)
Yahaya Asizehi ENESI; Jacob TSADO; Mark NWOHU; Usman Abraham USMAN; Odu Ayo IMORU
In this paper, the input parameters of a single-phase split-phase induction motor are taken to investigate and study the output performance characteristics of capacitor-start and capacitor-run induction motors. The values of these input parameters are used in the design characteristics of the capacitor-run and capacitor-start motor, with each motor connected to a rated or standard capacitor in series with the auxiliary winding or starting winding respectively for the normal operational condition. The ma...
Changes in running kinematics, kinetics, and spring-mass behavior over a 24-h run.
Morin, Jean-Benoît; Samozino, Pierre; Millet, Guillaume Y
This study investigated the changes in running mechanics and spring-mass behavior over a 24-h treadmill run (24TR). Kinematics, kinetics, and spring-mass characteristics of the running step were assessed in 10 experienced ultralong-distance runners before, every 2 h, and after a 24TR using an instrumented treadmill dynamometer. These measurements were performed at 10 km·h⁻¹, and mechanical parameters were sampled at 1000 Hz for 10 consecutive steps. Contact and aerial times were determined from ground reaction force (GRF) signals and used to compute step frequency. Maximal GRF, loading rate, downward displacement of the center of mass, and leg length change during the support phase were determined and used to compute both vertical and leg stiffness. Subjects' running pattern and spring-mass behavior significantly changed over the 24TR with a 4.9% higher step frequency on average (because of a significantly 4.5% shorter contact time), a lower maximal GRF (by 4.4% on average), a 13.0% lower leg length change during contact, and an increase in both leg and vertical stiffness (+9.9% and +8.6% on average, respectively). Most of these changes were significant from the early phase of the 24TR (fourth to sixth hour of running) and could be speculated as contributing to an overall limitation of the potentially harmful consequences of such a long-duration run on subjects' musculoskeletal system. During a 24TR, the changes in running mechanics and spring-mass behavior show a clear shift toward a higher oscillating frequency and stiffness, along with lower GRF and leg length change (hence a reduced overall eccentric load) during the support phase of running. © 2011 by the American College of Sports Medicine
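For readers unfamiliar with the spring-mass quantities used here, a minimal sketch: vertical stiffness is peak vertical GRF divided by the downward displacement of the centre of mass, and leg stiffness is peak GRF divided by the leg-length change during contact. The numbers below are assumed illustrative values; the study's full treadmill processing pipeline is not reproduced.

def spring_mass_stiffness(f_max_n, delta_y_m, delta_l_m):
    """Vertical and leg stiffness in the classical spring-mass model of
    running: k_vert = Fmax / COM displacement, k_leg = Fmax / leg-length
    change during the support phase."""
    k_vert = f_max_n / delta_y_m
    k_leg = f_max_n / delta_l_m
    return k_vert, k_leg

# Illustrative numbers for one step (assumed values):
print(spring_mass_stiffness(f_max_n=1600.0, delta_y_m=0.055, delta_l_m=0.16))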
EPIC: an Error Propagation/Inquiry Code
Baker, A.L.
The use of a computer program EPIC (Error Propagation/Inquiry Code) will be discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in the Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple straightforward calculations assuming a process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background. However, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated using a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs
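A minimal sketch of EPIC's final step as the abstract describes it: once a variance has been assigned to each measurement point, the materials-balance variance is the simple algebraic sum of those variances. The six point-level variance equations themselves are not reproduced here, and the point variances below are hypothetical.

import math

def mba_variance(variances):
    """Total materials-balance variance as the algebraic sum of the
    per-measurement-point variances."""
    return sum(variances)

# Three hypothetical measurement points with known variances (kg^2):
point_vars = [0.04, 0.09, 0.01]
total = mba_variance(point_vars)
print(total, math.sqrt(total))  # variance and standard deviation of the MBA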
ACE - Manufacturer Identification Code (MID)
Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identification codes for manufacturers. A manufacturer is identified on an...
Algebraic and stochastic coding theory
Kythe, Dave K
Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
Optical coding theory with Prime
Kwong, Wing C
Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct
Adjustments with running speed reveal neuromuscular adaptations during landing associated with high mileage running training.
Verheul, Jasper; Clansey, Adam C; Lake, Mark J
It remains to be determined whether running training influences the amplitude of lower limb muscle activations before and during the first half of stance and whether such changes are associated with joint stiffness regulation and usage of stored energy from tendons. Therefore, the aim of this study was to investigate neuromuscular and movement adaptations before and during landing in response to running training across a range of speeds. Two groups of high mileage (HM; >45 km/wk, n = 13) and low mileage (LM; joint stiffness might predominantly be governed by tendon stiffness rather than muscular activations before landing. Estimated elastic work about the ankle was found to be higher in the HM runners, which might play a role in reducing weight acceptance phase muscle activation levels and improve muscle activation efficiency with running training. NEW & NOTEWORTHY Although neuromuscular factors play a key role during running, the influence of high mileage training on neuromuscular function has been poorly studied, especially in relation to running speed. This study is the first to demonstrate changes in neuromuscular conditioning with high mileage training, mainly characterized by lower thigh muscle activation after touch down, higher initial knee stiffness, and greater estimates of energy return, with adaptations being increasingly evident at faster running speeds. Copyright © 2017 the American Physiological Society.
The Aster code
Delbecq, J.M.
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
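A minimal sketch of the syndrome-based encoder described here: the source block is compressed by transmitting only its syndrome under a binary parity-check matrix (plus, in the paper's scheme, some uncoded doping bits). The matrix is random for illustration, and the sum-product decoder that recovers the source from the syndrome and side information is omitted.

import numpy as np

def encode_syndrome(H, x):
    """Slepian-Wolf style encoder sketch: transmit only the syndrome
    s = H x (mod 2) of the binary source block x."""
    return (H @ x) % 2

# Toy example with a random parity-check matrix (rate-1/2 compression):
rng = np.random.default_rng(0)
H = rng.integers(0, 2, size=(4, 8))
x = rng.integers(0, 2, size=8)
print(encode_syndrome(H, x))   # 4 syndrome bits stand in for 8 source bits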
Preventing running injuries. Practical approach for family doctors.
Johnston, C. A. M.; Taunton, J. E.; Lloyd-Smith, D. R.; McKenzie, D. C.
OBJECTIVE: To present a practical approach for preventing running injuries. QUALITY OF EVIDENCE: Much of the research on running injuries is in the form of expert opinion and comparison trials. Recent systematic reviews have summarized research in orthotics, stretching before running, and interventions to prevent soft tissue injuries. MAIN MESSAGE: The most common factors implicated in running injuries are errors in training methods, inappropriate training surfaces and running shoes, malalign...
Speech coding code- excited linear prediction
Bäckström, Tom
This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech/audio or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource Information: What information is available, and how can it be useful? Resource Platform: What kind of platforms are we working with, and what are their capabilities/restrictions? This includes computational, memory and acoustic properties, and the transmission capacity of devices used. The book goes on to address Solutions: Which solutions have been proposed, and how can they be used to reach the stated goals and ...
Run-Time and Compiler Support for Programming in Adaptive Parallel Environments
Guy Edjlali
For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of High Performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
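As a minimal sketch of one piece of such run-time support, the function below recomputes block-distribution loop bounds when the number of available processes changes; the real library additionally redistributes the data and rebuilds communication schedules, which this sketch omits.

def block_bounds(n, nprocs, rank):
    """New loop bounds for one process under a standard block distribution
    of n iterations over nprocs processes. Returns the half-open
    interval [lo, hi) owned by this rank."""
    base, rem = divmod(n, nprocs)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

# Redistributing 100 iterations when the machine shrinks from 8 to 5 processes:
print([block_bounds(100, 8, r) for r in range(8)])
print([block_bounds(100, 5, r) for r in range(5)])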
Changes in foot and shank coupling due to alterations in foot strike pattern during running.
Pohl, Michael B; Buckley, John G
Determining if and how the kinematic relationship between adjacent body segments changes when an individual's gait pattern is experimentally manipulated can yield insight into the robustness of the kinematic coupling across the associated joint(s). The aim of this study was to assess the effects on the kinematic coupling between the forefoot, rearfoot and shank during ground contact of running with alteration in foot strike pattern. Twelve subjects ran over-ground using three different foot strike patterns (heel strike, forefoot strike, toe running). Kinematic data were collected of the forefoot, rearfoot and shank, which were modelled as rigid segments. Coupling at the ankle-complex and midfoot joints was assessed using cross-correlation and vector coding techniques. In general good coupling was found between rearfoot frontal plane motion and transverse plane shank rotation regardless of foot strike pattern. Forefoot motion was also strongly coupled with rearfoot frontal plane motion. Subtle differences were noted in the amount of rearfoot eversion transferred into shank internal rotation in the first 10-15% of stance during heel strike running compared to forefoot and toe running, and this was accompanied by small alterations in forefoot kinematics. These findings indicate that during ground contact in running there is strong coupling between the rearfoot and shank via the action of the joints in the ankle-complex. In addition, there was good coupling of both sagittal and transverse plane forefoot with rearfoot frontal plane motion via the action of the midfoot joints.
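For readers unfamiliar with the vector coding technique mentioned above, a minimal sketch: the coupling angle between two segment-angle time series is the orientation of each frame-to-frame step on the angle-angle plot. The traces below are assumed toy values, not the study's data or its full protocol.

import math

def coupling_angles(theta1, theta2):
    """Vector coding sketch: orientation (degrees, 0-360) of each
    frame-to-frame vector on the theta1-theta2 angle-angle plot."""
    return [math.degrees(math.atan2(b2 - a2, b1 - a1)) % 360.0
            for (a1, b1), (a2, b2) in zip(zip(theta1, theta1[1:]),
                                          zip(theta2, theta2[1:]))]

# Toy rearfoot eversion vs. shank rotation traces (degrees, assumed values):
rearfoot = [0.0, 2.0, 4.0, 5.0, 4.5]
shank    = [0.0, 1.5, 3.2, 4.1, 3.9]
print(coupling_angles(rearfoot, shank))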
Spatially coded backscatter radiography
Thangavelu, S.; Hussein, E.M.A.
Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography
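A minimal sketch of the encode/decode idea, under stated assumptions: each open mask element projects a shifted copy of the scene onto the detector (a cyclic convolution), and correlating the detector image with a ±1 version of the mask recovers the source position when the mask comes from a cyclic difference set. The 7-element quadratic-residue mask below is an illustration, not the mask used in the reported Monte Carlo work.

import numpy as np

def encode(scene, mask):
    """Detector image from a 1-D coded aperture: every open mask element
    projects a shifted copy of the scene (cyclic convolution)."""
    n = len(scene)
    return np.array([sum(scene[(i - j) % n] * mask[j] for j in range(n))
                     for i in range(n)])

def decode(m, mask):
    """Correlation decoding with the +/-1 mask g = 2*mask - 1; for a
    difference-set mask the sidelobes are flat, so the peak marks the
    source position."""
    n = len(m)
    g = [2 * v - 1 for v in mask]
    return np.array([sum(m[i] * g[(i - k) % n] for i in range(n))
                     for k in range(n)])

# Mask from the (7,3,1) quadratic-residue difference set {1,2,4}:
mask = [0, 1, 1, 0, 1, 0, 0]
scene = [0, 0, 5, 0, 0, 0, 0]      # a single scatter indication at position 2
print(decode(encode(scene, mask), mask))  # peak of 15 at index 2, flat -5 elsewhere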
Aztheca Code; Codigo Aztheca
Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)
The Coding Question.
Gallistel, C R
Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question - how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.
Revised SRAC code system
Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.
Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)
Code query by example
Vaucouleur, Sebastien
We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.
The correspondence between projective codes and 2-weight codes
Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.
The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points in PC with multiplicity γ(w), where w is the weight of
Visualizing code and coverage changes for code review
Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.
One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.
Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...
Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...
Western diet increases wheel running in mice selectively bred for high voluntary wheel running.
Meek, T H; Eisenmann, J C; Garland, T
Mice from a long-term selective breeding experiment for high voluntary wheel running offer a unique model to examine the contributions of genetic and environmental factors in determining the aspects of behavior and metabolism relevant to body-weight regulation and obesity. Starting with generation 16 and continuing through to generation 52, mice from the four replicate high runner (HR) lines have run 2.5-3-fold more revolutions per day as compared with four non-selected control (C) lines, but the nature of this apparent selection limit is not understood. We hypothesized that it might involve the availability of dietary lipids. Wheel running, food consumption (Teklad Rodent Diet (W) 8604, 14% kJ from fat; or Harlan Teklad TD.88137 Western Diet (WD), 42% kJ from fat) and body mass were measured over 1-2-week intervals in 100 males for 2 months starting 3 days after weaning. WD was obesogenic for both HR and C, significantly increasing both body mass and retroperitoneal fat pad mass, the latter even when controlling statistically for wheel-running distance and caloric intake. The HR mice had significantly less fat than C mice, explainable statistically by their greater running distance. On adjusting for body mass, HR mice showed higher caloric intake than C mice, also explainable by their higher running. Accounting for body mass and running, WD initially caused increased caloric intake in both HR and C, but this effect was reversed during the last four weeks of the study. Western diet had little or no effect on wheel running in C mice, but increased revolutions per day by as much as 75% in HR mice, mainly through increased time spent running. The remarkable stimulation of wheel running by WD in HR mice may involve fuel usage during prolonged endurance exercise and/or direct behavioral effects on motivation. Their unique behavioral responses to WD may render HR mice an important model for understanding the control of voluntary activity levels.
The Robust Running Ape: Unraveling the Deep Underpinnings of Coordinated Human Running Proficiency
In comparison to other mammals, humans are not especially strong, swift or supple. Nevertheless, despite these apparent physical limitations, we are among Nature's most superbly well-adapted endurance runners. Paradoxically, notwithstanding this evolutionarily bestowed proficiency, running-related injuries, and overuse syndromes in particular, are widely pervasive. The term 'coordination' is similarly ubiquitous within contemporary coaching, conditioning, and rehabilitation cultures. Various theoretical models of coordination exist within the academic literature. However, the specific neural and biological underpinnings of 'running coordination,' and the nature of their integration, remain poorly elaborated. Conventionally, running is considered a mundane, readily mastered coordination skill. This illusion of coordinative simplicity, however, is founded upon a platform of immense neural and biological complexity. This extensive complexity presents extreme organizational difficulties yet, simultaneously, provides a multiplicity of viable pathways through which the computational and mechanical burden of running can be proficiently dispersed amongst expanded networks of conditioned neural and peripheral tissue collaborators. Learning to adequately harness this available complexity, however, is a painstakingly slowly emerging, practice-driven process, greatly facilitated by innate evolutionary organizing principles serving to constrain otherwise overwhelming complexity to manageable proportions. As we accumulate running experiences, persistent plastic remodeling customizes networked neural connectivity and biological tissue properties to best fit our unique neural and architectural idiosyncrasies, and personal histories: thus neural and peripheral tissue plasticity embeds coordination habits. When, however, coordinative processes are compromised—under the integrated influence of fatigue and/or accumulative cycles of injury, overuse
Code of Medical Ethics
. SZD-SZZ
The Code was approved on December 12, 1992, at the 3rd regular meeting of the General Assembly of the Medical Chamber of Slovenia and revised on April 24, 1997, at the 27th regular meeting of the General Assembly of the Medical Chamber of Slovenia. The Code was updated and harmonized with the Medical Association of Slovenia and approved on October 6, 2016, at the regular meeting of the General Assembly of the Medical Chamber of Slovenia.
Affara, Lama Ahmed
Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.
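A conceptual sketch of the objective described above, under stated assumptions: the usual CSC reconstruction-plus-sparsity cost is augmented with a supervised term, here a logistic loss on crudely pooled code features. This illustrates the idea of adding a discriminative regularizer; it is not the authors' exact formulation or optimization scheme, and all names below are hypothetical.

import numpy as np

def supervised_csc_loss(x, dicts, codes, y, w, lam=0.1, mu=1.0):
    """Supervised CSC objective sketch: reconstruction error + l1 sparsity
    + a logistic classification loss on pooled code features (y in {-1,+1})."""
    recon = sum(np.convolve(d, z, mode="full")[:len(x)]
                for d, z in zip(dicts, codes))
    feats = np.array([np.abs(z).sum() for z in codes])   # crude feature pooling
    margin = y * float(feats @ w)
    return (0.5 * np.sum((x - recon) ** 2)
            + lam * sum(np.abs(z).sum() for z in codes)
            + mu * np.log1p(np.exp(-margin)))

x = np.sin(np.linspace(0, 4 * np.pi, 64))
dicts = [np.hanning(9), np.diff(np.hanning(10))]
codes = [np.zeros(64), np.zeros(64)]
print(supervised_csc_loss(x, dicts, codes, y=1, w=np.ones(2)))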
CONCEPT computer code
Delene, J.
CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated
Principles of speech coding
Ogunfunmi, Tokunbo
It is becoming increasingly apparent that all forms of communication (including voice) will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. It outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the
Evaluation Codes from an Affine Veriety Code Perspective
Geil, Hans Olav
Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.
The Effects of Backwards Running Training on Forward Running Economy in Trained Males.
Ordway, Jason D; Laubach, Lloyd L; Vanderburgh, Paul M; Jackson, Kurt J
Backwards running (BR) results in greater cardiopulmonary response and muscle activity compared with forward running (FR). BR has traditionally been used in rehabilitation for disorders such as stroke and lower leg extremity injuries, as well as in short bursts during various athletic events. The aim of this study was to measure the effects of sustained backwards running training on forward running economy in trained male athletes. Eight highly trained, male runners (26.13 ± 6.11 years, 174.7 ± 6.4 cm, 68.4 ± 9.24 kg, 8.61 ± 3.21% body fat, 71.40 ± 7.31 ml·kg(-1)·min(-1)) trained with BR while harnessed on a treadmill at 161 m·min(-1) for 5 weeks following a 5-week BR run-in period at a lower speed (134 m·min(-1)). Subjects were tested at baseline, postfamiliarized, and post-BR training for body composition, a ramped VO2max test, and an economy test designed for trained male runners. Subjects improved forward running economy by 2.54% (1.19 ± 1.26 ml·kg(-1)·min(-1), p = 0.032) at 215 m·min(-1). VO2max, body mass, lean mass, fat mass, and % body fat did not change (p > 0.05). Five weeks of BR training improved FR economy in healthy, trained male runners without altering VO2max or body composition. The improvements observed in this study could be a beneficial form of training to an already economical population to improve running economy.
Is There an Economical Running Technique? A Review of Modifiable Biomechanical Factors Affecting Running Economy.
Moore, Isabel S
Running economy (RE) has a strong relationship with running performance, and modifiable running biomechanics are a determining factor of RE. The purposes of this review were to (1) examine the intrinsic and extrinsic modifiable biomechanical factors affecting RE; (2) assess training-induced changes in RE and running biomechanics; (3) evaluate whether an economical running technique can be recommended; and (4) discuss potential areas for future research. Based on current evidence, the intrinsic factors that appeared beneficial for RE were using a preferred stride length range, which allows for stride length deviations up to 3% shorter than preferred stride length; lower vertical oscillation; greater leg stiffness; low lower limb moment of inertia; less leg extension at toe-off; larger stride angles; alignment of the ground reaction force and leg axis during propulsion; maintaining arm swing; low thigh antagonist-agonist muscular coactivation; and low activation of lower limb muscles during propulsion. Extrinsic factors associated with a better RE were a firm, compliant shoe-surface interaction and being barefoot or wearing lightweight shoes. Several other modifiable biomechanical factors presented inconsistent relationships with RE. Running biomechanics during ground contact appeared to play an important role, specifically those during propulsion. Therefore, this phase has the strongest direct links with RE. Recurring methodological problems exist within the literature, such as cross-comparisons, assessing variables in isolation, and acute to short-term interventions. Therefore, recommending a general economical running technique should be approached with caution. Future work should focus on interdisciplinary longitudinal investigations combining RE, kinematics, kinetics, and neuromuscular and anatomical aspects, as well as applying a synergistic approach to understanding the role of kinetics.
Ground reaction forces in shallow water running are affected by immersion level, running speed and gender.
Haupenthal, Alessandro; Fontana, Heiliane de Brito; Ruschel, Caroline; dos Santos, Daniela Pacheco; Roesler, Helio
To analyze the effect of depth of immersion, running speed and gender on ground reaction forces during water running. Controlled laboratory study. Twenty adults (ten male and ten female) participated by running at two levels of immersion (hip and chest) and two speed conditions (slow and fast). Data were collected using an underwater force platform. The following variables were analyzed: vertical force peak (Fy), loading rate (LR) and anterior force peak (Fx anterior). Three-factor mixed ANOVA was used to analyze data. Significant effects of immersion level, speed and gender on Fy were observed, without interaction between factors. Fy was greater when females ran fast at the hip level. There was a significant increase in LR with a reduction in the level of immersion regardless of the speed and gender. No effect of speed or gender on LR was observed. Regarding Fx anterior, significant interaction between speed and immersion level was found: in the slow condition, participants presented greater values at chest immersion, whereas, during the fast running condition, greater values were observed at hip level. The effect of gender was only significant during fast water running, with Fx anterior being greater in the men group. Increasing speed raised Fx anterior significantly irrespective of the level of immersion and gender. The magnitude of ground reaction forces during shallow water running are affected by immersion level, running speed and gender and, for this reason, these factors should be taken into account during exercise prescription. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes
Baratta, A.J.
To perform a complete analysis of a reactor transient, not only the primary system response but the containment response must also be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing are determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
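A minimal sketch of the time-step coupling approach described here, with hypothetical wrapper functions standing in for the real system and containment codes: at each step the primary-system solver sees the current containment back-pressure, and the containment solver sees the break mass and energy flows, so feedback is captured without the old iterative two-pass procedure.

def coupled_transient(primary_step, containment_step, t_end, dt):
    """Time-step coupling sketch: exchange break flow and back-pressure
    between two codes once per step. primary_step and containment_step
    are hypothetical wrappers around the real solvers."""
    t, p_containment = 0.0, 1.0e5        # initial back-pressure [Pa]
    while t < t_end:
        mdot, hdot = primary_step(dt, p_containment)      # break flow out
        p_containment = containment_step(dt, mdot, hdot)  # feedback in
        t += dt
    return p_containment

# Toy stand-ins: a blowdown throttled by back-pressure, and a simple accumulator
p = coupled_transient(
    primary_step=lambda dt, p: (max(0.0, 50.0 - p / 1e4), 1.0e5),
    containment_step=lambda dt, m, h: 1.0e5 + 500.0 * m,
    t_end=10.0, dt=0.1)
print(p)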
Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD
Trambauer, K. [GRS, Garching (Germany)]
The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, as well as fission product and aerosol release from the core and their transport in the Reactor Coolant System. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of several basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, and initial and boundary conditions, (2) initialization of derived quantities, (3) steady state calculation or input of restart data, and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. The first is the conservation of mass and energy in the different subsystems (fluid, structures, and fission products and aerosols). The second is the convergence of the numerical solution and the stability of the calculation. The third aspect is related to code performance and running time.
Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Design and Development of RunForFun Mobile Application
Anci Anthony
Full Text Available. Five- and ten-kilometre races have been trending recently in many places in Indonesia, especially in Surabaya, where there were at least 11 race run events. The number of participants has also increased significantly compared to previous years. However, among these events, some tended to be replicative and monotonous, while participants expressed a need for a greater fun factor. RunForFun is a mobile application designed to give participants a new experience when taking part in a race run event. The application runs on Android OS. Its development followed the Reverse Waterfall method and used the Ionic Framework, which utilizes Cordova as its base to deploy to smartphone devices. Subsequently, RunForFun was tested on 10 participants, and the test shows a significant increase in the fun factor among race run participants.
Common formation mechanism of basin of attraction for bipedal walking models by saddle hyperbolicity and hybrid dynamics
Ippei Obayashi 1, Shinya Aoi 2,3, Kazuo Tsuchiya 2,3 & Hiroshi Kokubu 3,4
Japan Journal of Industrial and Applied Mathematics, volume 32, pages 315–332 (2015)
In this paper, we investigate the mathematical structures and mechanisms of bipedal walking from a dynamical systems viewpoint. In particular, we focus on the basin of attraction, since it determines the stability of bipedal walking. We treat two similar but different bipedal walking models (passive and active dynamic walking models) and examine the common mathematical structure of these models. We find that saddle hyperbolicity and the hybrid nature of the system play important roles in shaping the basin of attraction in both models; these features are quite common in more general bipedal models and are important for understanding the stability mechanism of bipedal walking.
In this paper, we study bipedal walking using mathematical models. In particular, we focus on the basin of attraction of stable walking. In a bipedal walking model, a limit cycle corresponds to stable walking, and the size and shape of the basin of attraction of the limit cycle determine the robustness of walking against various noises and disturbances. The study of the stability of walking will contribute to the design of biped robots and walking support systems, as well as to the understanding of human walking.
A study using a mathematical model often faces the question of whether the results of the analysis are essential to the phenomenon or specific to the model. One way to answer this question is to compare various models and find a common mechanism, which may suggest an essential feature of the phenomenon beyond particular mathematical models. In this paper, we compare two models (passive and active dynamic walking models) and search for a common mechanism in the formation of the basin of attraction of their stable walking.
Passive dynamic walking was proposed by McGeer [10]: such a walker walks down a shallow slope without any actuator or controller. To investigate the linear stability, the simplest walking model was introduced by Garcia et al. [6], and its basin of attraction was computed by Schwab and Wisse [16] (we use this model as the passive dynamic walking model). They showed that the basin of attraction is very small and thin and has a fractal-like shape. However, why the basin of attraction has such a shape remained an open question. In [13], we introduced some new ideas about the mechanism forming the shape of the basin of attraction and showed that the saddle hyperbolicity of the upright equilibrium point plays an important role in forming the basin of attraction.
In this paper, in addition to the passive dynamic walking model, we consider an active dynamic walking model. Different from the passive dynamic walking model, it walks on even ground using an actuator controlled by a phase oscillator, inspired by the central pattern generator [2, 17]. This paper shows that the basic formation mechanism of the basin of attraction of the passive dynamic walking model in [13] is common to the active dynamic walking model, despite these differences between the models.
In the study of bipedal walking, the following two facts are important:
Bipedal walking is a typical hybrid system due to foot contact and foot off
The center of mass of the human body moves like an inverted pendulum (the inverted pendulum mechanism [9, 14])
Modeling bipedal walking should reflect these facts. More specifically, during human walking there are two states, called the single support phase and the double support phase. In the single support phase, one leg (called the stance leg) supports the body while the other leg (called the swing leg) swings from back to front. When the swing leg contacts the ground, the state switches to the double support phase, where both legs support the body. The former stance leg then lifts off the ground and the state switches to the single support phase again. The physical conditions in the single and double support phases are quite different, and these states are governed by different equations of motion. This means that the dynamical system of bipedal walking is a hybrid system.
When we observe human walking more closely, we find that the stance leg is almost straight, and it rotates around the foot contact point like an inverted pendulum. Therefore, the center of mass is at its highest position during the midstance phase and at its lowest position during the double support phase. In contrast, the locomotion speed is lowest during the midstance phase and highest during the double support phase. This means that humans produce efficient walking through the pendular exchange of potential and kinetic energy while conserving mechanical energy [3–5]. This is called the inverted pendulum mechanism [9, 14], and inverted pendulums have been widely used as the simplest model for the movement of the center of mass, when investigating the underlying mechanism in human walking [1, 7, 8, 11, 12].
In the present study, we aim to clarify the mechanism that determines the geometric characteristics of the basin of attraction for our bipedal walking models by considering the theory of dynamical systems and focusing on the saddle point or the saddle periodic orbit inherent in the governing dynamics related to the inverted pendulum. We show that the basin of attraction is quite thin, and that the reason is common to the two models. In [13], we showed that the saddle property (hyperbolicity) of the upright equilibrium point and the \(\lambda \)-lemma, one of the most basic consequences of hyperbolicity in dynamical systems, are important for the thin basin of attraction in the passive dynamic walking model.
In this paper, we show that the saddle-center periodic orbit of the active dynamic walking model plays a similar role. Indeed, we show that the basic formation mechanism of the basin of attraction is common between the two models, although the detailed structure is rather different. We have already shown that Poincaré sections and center-stable/center-unstable manifolds play important roles for such a shape to be formed by the passive dynamic walking model [13]. In this paper, a similar formation mechanism is also found in the active dynamic walking model.
Because the saddle property is embedded in general locomotor systems (that are not limited to our models), our results may contribute not only to elucidating the stability mechanism in our bipedal walking models, but also to improving the understanding of the stability mechanism in human walking and thus to producing design principles for the control of biped robots and walking-support systems.
In this paper, we use a simple compass-type bipedal walking model (Fig. 1). This model has two legs (rigid links), each of length l, connected by a frictionless hip joint. Let \(\theta _1\) be the angle of the stance leg with respect to the slope normal, and \(\theta _2\) the angle between the stance leg and the swing leg. The mass is located only at the hip and the feet; the hip mass is M and the foot mass is m, and g is the acceleration due to gravity. This model walks on a slope of angle \(\gamma \) and is controlled by the input torque u at the hip.
Bipedal walking model. The passive dynamic walking model walks on a slope (\(\gamma > 0\)) without any input torque (\(u=0\)), while the active dynamic walking model walks on an even plane (\(\gamma = 0\)) with input torque (\(u \not = 0\))
In this model, the typical walking behavior is as follows. A new step starts when both feet are on the slope, just after the swing leg makes contact with the slope. The front leg is the stance leg, and the other leg is the swing leg. The stance foot is fixed on the slope, and the stance leg rotates freely without friction. The stance and swing legs move as a double pendulum. The swing leg swings forward, and the swing foot contacts the slope. In this model, the collision is assumed to be fully inelastic (no-slip, no-bound). The swing leg immediately becomes the new stance leg, and vice versa (the double support duration is infinitesimal).
The passive dynamic walking model describes walking down the slope (\(\gamma > 0\)) without any torque (\(u=0\)) by balancing the energy dissipation due to foot contact with the energy generation due to the gravitational potential energy. In contrast, the active dynamic walking model walks on even ground (\(\gamma =0\)) controlled by the input torque (\(u \not =0\)).
We note that since the legs are rigid links, the swing leg collides with the slope when the stance leg is nearly vertical. We can avoid this foot scuffing by adding complications to the model, such as passive knees. However, for simplicity, we ignore the foot scuffing and allow the leg to pass through the slope in our models.
Equations of motion for the single support phase
The configuration of the mechanical model is described by two variables \((\theta _1, \theta _2)\). The equations of motion are given by a Lagrangian equation:
$$\begin{aligned}&\begin{bmatrix} Ml^2+2ml^2(1-\cos \theta _2) & ml^2(-1+\cos \theta _2) \\ ml^2(-1+\cos \theta _2) & ml^2 \end{bmatrix} \begin{bmatrix} \ddot{\theta }_1 \\ \ddot{\theta }_2 \end{bmatrix} \\&\qquad + \begin{bmatrix} ml^2(2\dot{\theta }_1 - \dot{\theta }_2)\sin \theta _2 \\ -ml^2\dot{\theta }_1^2\sin \theta _2 \end{bmatrix} + \begin{bmatrix} -gMl\sin (\theta _1-\gamma ) \\ 0 \end{bmatrix} \\&\qquad + \begin{bmatrix} gml[\sin (\theta _1-\gamma ) + \sin (\theta _2-\theta _1 + \gamma )] \\ gml\sin (\theta _2 - \theta _1 + \gamma ) \end{bmatrix} \\&\quad = \begin{bmatrix} 0 \\ u \end{bmatrix}. \end{aligned}$$
For the passive dynamic walking model, we take the torque \(u=0\). For the active dynamic walking model, we use a phase oscillator for generating a torque, whose phase is \(\phi \), to control the model. The oscillator phase follows the dynamics
$$\begin{aligned} \dot{\phi } = \omega \end{aligned}$$
where \(\omega \) is the frequency and we simply determine the input torque u by
$$\begin{aligned} u = A \cos \phi \end{aligned}$$
where A is the amplitude.
In the passive dynamic walking model, the phase space is four dimensional with variables \((\theta _1, \theta _2, \dot{\theta }_1, \dot{\theta }_2)\). On the other hand, in the active dynamic walking model the phase space is five dimensional with \((\theta _1, \theta _2, \dot{\theta }_1, \dot{\theta }_2, \phi )\).
After appropriate rescaling, we have the following non-dimensionalized equations.
$$\begin{aligned}&\begin{bmatrix} 1+2\beta (1-\cos \theta _2)&\beta (-1+\cos \theta _2) \\ -1+\cos \theta _2&1 \end{bmatrix} \begin{bmatrix} \ddot{\theta }_1 \\ \ddot{\theta }_2 \end{bmatrix} + \begin{bmatrix} \beta (2\dot{\theta }_1 - \dot{\theta }_2)\sin \theta _2 \\ -\dot{\theta }_1^2\sin \theta _2 \end{bmatrix} \nonumber \\&\qquad + \begin{bmatrix} -\sin (\theta _1 - \gamma ) + \beta [\sin (\theta _1 - \gamma ) + \sin (\theta _2-\theta _1 + \gamma )] \\ \sin (\theta _2 - \theta _1+ \gamma ) \end{bmatrix} \nonumber \\&= \begin{bmatrix} 0 \\ A_0 \cos \phi \end{bmatrix} \nonumber \\ \dot{\phi }&= \omega _0 \end{aligned}$$
where \(\beta = m{/}M\), \(A_0 = A{/}(Ml^2\beta )\), and \(\omega _0 = \omega \sqrt{l {/} g}\) (\(A_0 =0\) for the passive dynamic walking model).
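For readers who wish to experiment numerically, the following Python sketch (our own illustration, not the authors' code) implements the right-hand side of the non-dimensionalized equations above, solving the mass matrix for the angular accelerations. The parameter tuple is assumed to be (beta, gamma, A0, omega0).

```python
import numpy as np

def rhs(t, y, beta, gamma, A0, omega0):
    """Right-hand side of the non-dimensionalized single-support dynamics;
    y = (theta1, theta2, dtheta1, dtheta2, phi). A0 = 0 gives the passive model."""
    th1, th2, dth1, dth2, phi = y
    M = np.array([[1.0 + 2.0 * beta * (1.0 - np.cos(th2)),
                   beta * (-1.0 + np.cos(th2))],
                  [-1.0 + np.cos(th2), 1.0]])
    C = np.array([beta * (2.0 * dth1 - dth2) * np.sin(th2),
                  -dth1**2 * np.sin(th2)])
    G = np.array([-np.sin(th1 - gamma)
                  + beta * (np.sin(th1 - gamma) + np.sin(th2 - th1 + gamma)),
                  np.sin(th2 - th1 + gamma)])
    u = np.array([0.0, A0 * np.cos(phi)])
    ddth1, ddth2 = np.linalg.solve(M, u - C - G)  # M * ddtheta = u - C - G
    return [dth1, dth2, ddth1, ddth2, omega0]     # dphi/dt = omega0
```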
Foot contact
The swing foot contacts the slope when the following conditions are satisfied:
$$\begin{aligned} 2 \theta _1 - \theta _2&= 0 \end{aligned}$$
$$\begin{aligned} \theta _1&< 0 \end{aligned}$$
$$\begin{aligned} 2\dot{\theta }_1 - \dot{\theta }_2&< 0. \end{aligned}$$
Conditions (3) and (4) are used to ignore the foot scuffing when the swing leg moves forward.
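In an event-based ODE integrator, conditions (2)–(4) translate into an event function; the sketch below follows SciPy's `solve_ivp` conventions and is our own illustration. The event is left non-terminal so that scuffing contacts with \(\theta_1 > 0\) are passed through, as the models do; condition (3) is then checked on the recorded event states.

```python
def foot_contact_event(t, y, *args):
    """Zero of 2*theta1 - theta2 signals a candidate foot contact (condition (2))."""
    return 2.0 * y[0] - y[1]

# direction = -1 triggers only while 2*dtheta1 - dtheta2 < 0 (condition (4));
# non-terminal so the leg may pass through the slope on scuffing contacts,
# with condition (3) (theta1 < 0) checked afterwards on the event states.
foot_contact_event.terminal = False
foot_contact_event.direction = -1
```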
We assume that foot contact is a fully inelastic collision (no-slip, no-bound) and that the stance foot lifts off the slope as soon as the swing foot hits the slope. The relationship between the state just before foot contact \((\theta _1^-, \theta _2^-)\) and the state just after foot contact \((\theta _1^+, \theta _2^+)\) is as follows:
$$\begin{aligned} \theta _1^+&= -\theta _1^- \nonumber \\ \theta _2^+&= -\theta _2^-. \end{aligned}$$
Due to the collision, the angular velocities change discontinuously at the moment of foot contact. We assume that the stance leg does not interact with the slope when it lifts off and that the input torque does not act at that instant. Under these assumptions, the conservation of angular momentum yields the following relationships:
$$\begin{aligned} \dot{\theta }_1^+&= \frac{2\dot{\theta }_1^- \cos \theta _2^-}{2+\beta (1-\cos 2\theta _2^-)} \\ \dot{\theta }_2^+&= \frac{2\cos \theta _2^-(1- \cos \theta _2^-)\dot{\theta }_1^-}{2+\beta (1-\cos 2\theta _2^-)}. \end{aligned}$$
Since the roles of the legs are swapped at the collision so that \(\theta _2\) varies as (5), we change the oscillator phase: \(\phi ^+ = \phi ^- - \pi \) so that \(u^+ = -u^-\).
When we use \(2\theta _1^- - \theta _2^- = 0\) from (2), we have
$$\begin{aligned} \begin{bmatrix} \theta _1^+ \\ \theta _2^+ \\ \dot{\theta }_1^+ \\ \dot{\theta }_2^+ \\ \phi ^+ \end{bmatrix} = \begin{bmatrix} -\theta _1^- \\ -2\theta _1^- \\ \frac{2\dot{\theta }_1^-\cos 2\theta _1^-}{2+\beta (1-\cos 4\theta _1^-)} \\ \frac{2\cos 2\theta _1^-(1-\cos 2\theta _1^-) \dot{\theta }_1^-}{2+\beta (1-\cos 4\theta _1^-)} \\ \phi ^- - \pi \end{bmatrix}. \end{aligned}$$
Note that the state just after foot contact depends only on \((\theta _1^-, \dot{\theta }_1^-, \phi ^-)\) and is independent of \((\theta _2^-, \dot{\theta }_2^-)\). It follows that, in the passive dynamic walking model, the states just after foot contact form a two-dimensional submanifold of the four-dimensional phase space (since \(\phi \) is ignored), while in the active dynamic walking model they form a three-dimensional submanifold of the five-dimensional phase space.
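The reset map (6) translates directly into code. The sketch below (our own illustration) applies the jump to a pre-contact state on H, including the oscillator phase shift \(\phi^+ = \phi^- - \pi\).

```python
import math
import numpy as np

def jump(y_minus, beta):
    """Reset map T of eq. (6), using theta2^- = 2*theta1^- on the section H."""
    th1, th2, dth1, dth2, phi = y_minus
    denom = 2.0 + beta * (1.0 - math.cos(4.0 * th1))
    return np.array([
        -th1,                                      # theta1^+
        -2.0 * th1,                                # theta2^+
        2.0 * dth1 * math.cos(2.0 * th1) / denom,  # dtheta1^+
        2.0 * math.cos(2.0 * th1) * (1.0 - math.cos(2.0 * th1)) * dth1 / denom,
        phi - math.pi,                             # phase shift so that u^+ = -u^-
    ])
```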
Structure of phase space by hybrid dynamics
The models are hybrid systems composed of the continuous dynamics during the single support phase and the discontinuous dynamics at foot contact. The hybrid dynamics determines the structure of the phase space, as shown in Fig. 2a. H is the section of foot contact defined by the conditions (2)–(4). T is the jump in the phase space from the state just before foot contact to the state just after foot contact, defined by the relationship (6). Therefore, the image of T, T(H), is the region representing all states just after foot contact and a new step starts from T(H). U is the map from the start of a step to the foot contact. In other words, U is the map from T(H) to H, defined by the equations of motion (1). The Poincaré map S is defined by \(S = T \circ U:T(H) \rightarrow T(H)\) on the Poincaré section T(H). This Poincaré map represents one gait cycle, and an attractor of the Poincaré map represents stable walking.
Note that the structures of the phase spaces of the passive and active dynamic walking models are quite similar, but the number of dimensions is different. In the passive dynamic walking model, the phase space is four dimensional, the section is three dimensional, and T(H) is two dimensional. On the other hand, in the active dynamic walking model, the phase space is five dimensional, the section is four dimensional, and T(H) is three dimensional.
Structure of the phase space. a Foot contact condition (section) H bounded by two conditions (orange lines), the jump by foot contact T, the state just after foot contact event T(H), the map from T(H) to H by the equations of motion for the swing phase U, and the Poincaré map S defined by \(S = T \circ U\) on the Poincaré section T(H). b Domain D (red region) bounded by the backward orbits of two boundaries of H by the equations of motion for the swing phase (red lines)
To investigate the basin of attraction, the domain of T is important. The map S is not defined for all T(H), since the model may fall down from some initial conditions. We define the domain, D, by the collection of initial conditions on which the model takes at least one step. In other words, if \(x \not \in D\), the orbit from x does not reach the section H, and the model falls down. D is in T(H) and bounded, as shown in Fig. 2b. H has two boundaries (orange lines) defined by \(\theta _1=0\) and \(2\dot{\theta }_1 - \dot{\theta }_2=0\) from the conditions (3) and (4), and the backward flows of these boundaries by the equations of motion (1) determine the boundaries of D (red lines).
We also consider the sequence of inverse images of D, \(S^{-n}(D)\ (n=1,2,\ldots )\). The region \(S^{-n}(D)\) is the collection of initial conditions from which the model takes at least \(n+1\) steps. The sequence approximates the basin of attraction, and we investigate the mechanism by which the shape of the basin of attraction is formed from the geometric structure of these inverse images.
In the following sections, we numerically compute \(D, S^{-1}(D), S^{-2}(D), \ldots \). Numerical computation of the region D is rather straightforward. We take many initial points in T(H), numerically integrate the equations of motion from these initial points, and check whether the orbit takes one step or the model falls down. We use a fall-down threshold at \(\theta _1 = \pm \pi /2\). T(H) is two dimensional and parameterized by the two variables \(\theta _1, \dot{\theta }_1\) in the passive dynamic walking model, while T(H) is three dimensional and parameterized by the three variables \(\theta _1, \dot{\theta }_1, \phi \) in the active dynamic walking model. Therefore we use these variables to compute and display the numerical results. We can compute \(S^{-1}(D), S^{-2}(D), \ldots \) in a similar way. Because \(S^{-n}(D)\) for sufficiently large n is considered a sharp approximation of the basin of attraction, we compute \(S^{-n}(D)\) for \(n=50\) and \(n=200\) and take it as the basin of attraction if the two numerical results are the same (see Footnote 1).
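A minimal sketch of this grid computation, reusing the `rhs`, `foot_contact_event`, and `jump` sketches above, might look as follows; the tolerances, integration horizon, and helper names are our illustrative choices, not the authors' settings.

```python
import math
import numpy as np
from scipy.integrate import solve_ivp

def fall_event(t, y, *args):
    """Fall-down threshold |theta1| = pi/2 used in the text."""
    return abs(y[0]) - math.pi / 2.0

fall_event.terminal = True

def step_once(y0, params, t_max=50.0):
    """One application of S = T o U: integrate the swing phase (U), then apply
    the reset map (T) at the first valid contact; returns None if the model falls."""
    sol = solve_ivp(rhs, (0.0, t_max), y0, args=params,
                    events=[foot_contact_event, fall_event],
                    rtol=1e-9, atol=1e-9)
    for y_ev in sol.y_events[0]:
        if y_ev[0] < 0.0:                # condition (3): theta1 < 0 at contact
            return jump(y_ev, params[0])
    return None

def lift_to_TH(th1, dth1, phi):
    """Coordinates on T(H): eq. (6) gives theta2 = 2*theta1 and
    dtheta2 = (1 - cos(2*theta1)) * dtheta1 for post-contact states."""
    return np.array([th1, 2.0 * th1, dth1,
                     (1.0 - math.cos(2.0 * th1)) * dth1, phi])

def takes_n_steps(th1, dth1, phi, params, n):
    """D is the set taking >= 1 step; S^{-n}(D) is the set taking >= n+1 steps."""
    y = lift_to_TH(th1, dth1, phi)
    for _ in range(n):
        y = step_once(y, params)
        if y is None:
            return False
    return True
```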
In this paper, we choose the following parameters to analyze the models. In the passive dynamic walking model, we use \(A_0 = 0, \beta = 0, \gamma = 0.011\), where we consider a limit case in which the foot mass is much smaller than the hip mass as in [6]. With this parameter, the corresponding Poincaré map has a unique attracting fixed point at \((\theta _1, \dot{\theta }_1) \approx (0.214, -0.212)\) on the Poincaré section. In the active dynamic walking model, we use \(A_0 = -0.29135, \beta = 0.15, \gamma = 0, \omega _0 = 1.1191\). We use these parameters so that the corresponding Poincaré map has a unique attracting fixed point at \((\theta _1, \dot{\theta }_1, \phi ) \approx (0.144, -0.179, 0.0535)\) on the Poincaré section. More complicated cases for the passive dynamic walking model (for example, the Poincaré map has a chaotic attractor) were analyzed in the paper [13].
Center-stable and center-unstable manifolds
The equations of motion (1) for the passive dynamic walking model have an equilibrium point \((\theta _1, \dot{\theta }_1, \theta _2, \dot{\theta }_2) = (\gamma , 0, 0, 0)\), where the legs remain upright. The equilibrium point is deeply related to the geometric structure of the basin of attraction, as explained in Sect. 3. The eigenvalues of the linearized equations of motion at the equilibrium point are \(\pm 1\) and \(\pm i\), and the equilibrium point is a saddle-center with one stable direction, one unstable direction, and two neutral directions.
The equations of motion (1) for the active dynamic walking model (at the selected parameters) have a \(2\pi /\omega _0\)-periodic orbit (see Footnote 2). We can find this periodic orbit using Newton's method. The time-\(2\pi {/}\omega _0\) map of the equations of motion from \(\phi = 0\) to \(2\pi \) can be considered as a map from \({\mathbb {R}}^4\) to itself, which has a saddle-center fixed point at \((\theta _1, \dot{\theta }_1, \theta _2, \dot{\theta }_2) \approx (-0.041, 0.000, 0.891 , 0.000)\). The eigenvalues of the linearization of the map are approximately \(316.8, 0.003157, 0.4021 \pm 0.9156i\): one is greater than 1, another is positive and less than 1, and the remaining two are complex conjugates on the unit circle, which shows that the periodic orbit is of the saddle-center type.
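A sketch of this computation, reusing the `rhs` function above: Newton iteration on a fixed point of the stroboscopic (time-\(2\pi/\omega_0\)) map, with a finite-difference Jacobian whose eigenvalues classify the orbit. Step sizes and tolerances are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

def strobe_map(x, params):
    """Time-(2*pi/omega0) map of the swing-phase flow, taking phi from 0 to 2*pi;
    x = (theta1, theta2, dtheta1, dtheta2) in the ordering used by rhs."""
    omega0 = params[3]
    y0 = np.append(x, 0.0)
    sol = solve_ivp(rhs, (0.0, 2.0 * np.pi / omega0), y0, args=params,
                    rtol=1e-11, atol=1e-11)
    return sol.y[:4, -1]

def newton_periodic_orbit(x0, params, tol=1e-10, h=1e-6, max_iter=50):
    """Newton iteration for a fixed point x = P(x) of the stroboscopic map P;
    the eigenvalues of DP at the solution classify the periodic orbit."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        F = strobe_map(x, params) - x
        J = np.empty((4, 4))                 # finite-difference Jacobian DP
        for j in range(4):
            e = np.zeros(4)
            e[j] = h
            J[:, j] = (strobe_map(x + e, params) - strobe_map(x - e, params)) / (2.0 * h)
        if np.linalg.norm(F) < tol:
            break
        x = x - np.linalg.solve(J - np.eye(4), F)
    return x, np.linalg.eigvals(J)
```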
The equilibrium point for the passive dynamic walking model and the periodic orbit for the active dynamic walking model have a codimension one center-stable manifold (\(W^{cs}\)) and a codimension one center-unstable manifold (\(W^{cu}\)). Fig. 3a, b show these structures.
Phase diagram of the passive (a) and active (b) dynamic walking models. The passive dynamic walking model has a saddle-center equilibrium point (open dot in a) and the active dynamic walking model has a saddle-center periodic orbit (black arrow in b). a The center-stable and center-unstable manifolds of the equilibrium point are represented by the green and blue curves, and b the center-stable and center-unstable manifolds of the periodic orbit are represented by the green and blue surfaces. b Some orbits on the center-stable and center-unstable manifolds are drawn by the green and blue arrows. The red arrows a, b represent stable walking (attracting periodic orbit). The curved and straight red arrows represent U (motion in single support phase) and T (jump at foot contact), respectively
Basin of attraction in passive dynamic walking model
In this section, we briefly summarize results on the passive dynamic walking model given in [13] for comparison with the active dynamic walking model.
Geometric characteristics
Figure 4a shows the domain D and the basin of attraction B on T(H). Both D and B are very thin in the space of \((\theta _1, \dot{\theta }_1)\). Fig. 4b is a zoom-in view. To clearly see the geometrical details, we use \(\theta _1 + \dot{\theta }_1\) and \(\theta _1 - \dot{\theta }_1\) for the axis in Fig. 4c. The intersection of the center-stable manifold \(W^{cs}\) and T(H) is shown by a green line in Fig. 4b, c. We showed that D had the following properties:
D is a long, thin region in \((\theta _1, \dot{\theta }_1)\)-space;
Two boundary curves of D are almost parallel, and one of them is very close to \(W^{cs}\).
We also showed the following properties for B:
B is located inside D and is thinner than D;
B is V-shaped;
There are fractal-like slits in B and a stripe pattern in the cusp of the V-shaped region.
To investigate how these geometric characteristics of B are generated from D, we calculated the inverse images of D, \(S^{-n}(D) \ (n=1,2,\ldots )\). Figure 5 shows D, \(S^{-1}(D)\), and \(S^{-2}(D)\), which showed the following:
\(S^{-1}(D)\) is contained in D and is V-shaped;
\(S^{-2}(D)\) is located inside \(S^{-1}(D)\), is V-shaped, and has a slit.
Geometric characteristics of the basin of attraction for the passive dynamic walking model. a Domain D and basin of attraction B on \((\theta _1, \dot{\theta }_1)\). b Magnified view (blue box in a). c Rotated view using \(\theta _1 + \dot{\theta }_1\) and \(\theta _1 - \dot{\theta }_1\) for the axes
Domain D and inverse images \(S^{-1}(D)\) and \(S^{-2}(D)\)
The formation mechanism of thin domain
The domain D is very thin, as shown in Fig. 4a. This fact is related to the so-called "\(\lambda \)-lemma", one of the most important theorems in the theory of dynamical systems. From this theorem, we can say the following (see the textbook by Robinson [15] for the exact statement of the theorem and its proof):
A region intersecting the unstable manifold of a saddle equilibrium point moves toward the stable manifold of the saddle under the backward flow (time-reversal of the flow);
When the region comes close to the stable manifold under the backward flow, the region becomes thinner due to the hyperbolicity of the dynamics near the saddle.
Figure 6 illustrates how a region X moves and is deformed into thinner regions Y and Z under the backward flow. Because the equilibrium point of our model is a saddle-center and not a saddle equilibrium point, we cannot apply this theorem directly. However, a region intersecting the center unstable manifold near the equilibrium point moves close to the center stable manifold under the backward flow of the equations of motion, in a way similar to Fig. 6 in the stable and unstable directions. As shown in Fig. 2b, D is obtained as the intersection of T(H) and the backward orbits of initial points in H. Therefore, D becomes thin along the center stable manifold, as shown in Figs. 4a, b and 6. This explains why the domain is very thin.
\(\lambda \)-lemma. Region X moves and is deformed to thinner regions Y and Z by the backward flow. In our model, the foot contact section H and domain D correspond to the regions X and Z, respectively
The formation mechanism of V-shaped region and slits
Since the sequence of the inverse images \(S^{-n}(D)\ (n=1,2,\ldots )\) converges to the basin of attraction, it is important to clarify how the geometric structure of the inverse images is formed, and hence the shape of the basin of attraction. In particular, we show why the inverse images are V-shaped and why they have slits and stripe patterns.
First, \(S^{-1}(D)\) is V-shaped in the thin region D, as shown in Fig. 5. The formation of this shape is explained as follows (Fig. 7a, b). D is a thin region along the center stable manifold, as described in Sect. 3.2. Since \(S=T\circ U\), we have \(S^{-1}(D) = U^{-1}(T^{-1}(D))\), where \(T^{-1}(D)\) is contained in H (green region in Fig. 7a). Since D is a thin strip, \(T^{-1}(D)\) is also a thin strip. Since \(U^{-1}\) is given by the backward flow of the equations of motion, \(T^{-1}(D)\) is strongly expanded along the direction of the stable manifold and strongly contracted along the direction of the unstable manifold by \(U^{-1}\), as shown in Fig. 6. In addition, the flow of \(\theta _1(t)\) becomes slow near the equilibrium point due to the saddle hyperbolicity and this causes nonuniform expansion by \(U^{-1}\) near \(W^{cu}\). Therefore, \(S^{-1}(D)\) becomes V-shaped, as shown in Fig. 7b.
We can also explain why the inverse image \(S^{-2}(D)\) has a slit in the same way (Fig. 7c, d). We can also give a similar explanation for the stripe pattern, which is formed by the repeated expansion of nested regions.
This explains how the slits and stripe patterns in the basin of attraction are formed. The relative positions of \(T^{-1}(D)\) and the center unstable manifold and the hyperbolicity near the saddle determine the geometric characteristics of the basin of attraction.
This figure illustrates how the inverse images of the domain are V-shaped and have slits. a \(T^{-1}(D)\) is obtained by the inverse image of D and becomes a thin region in H. b \(T^{-1}(D)\) is moved and deformed by the backward flow to \(S^{-1}(D)=U^{-1}(T^{-1}(D))\). c \(T^{-1}(S^{-1}(D))\) is obtained by the inverse image of \(S^{-1}(D)\). d \(T^{-1}(S^{-1}(D))\) is moved and deformed by the backward flow to \(S^{-2}(D)=U^{-1}(T^{-1}(S^{-1}(D)))\)
Basin of attraction in active dynamic walking model
In this section, we investigate the basin of attraction for the active dynamic walking model in comparison with the passive dynamic walking model. We will see the same mechanisms as in Sect. 3 for this model.
Figure 8 shows the two dimensional slices of the basin of attraction and the domain of the Poincaré map at \(\phi = -0.342\) and 0.0535 (in 3D space with coordinate variables \(\theta _1, \dot{\theta }_1\), and \(\phi \)) and their rotated views. The domain is thin in the three dimensional space along the center-stable manifold of the saddle-center periodic orbit in the figure. This fact is common to the passive dynamic walking model. On the other hand, the basin of attraction in the active dynamic walking model has a horn-like shape, and the shape is quite different from the passive dynamic walking model. Furthermore, these two slices of the basin of attraction have different shapes. The slice at \(\phi = -0.342\) (Fig. 8a, c) looks like two horns without a head. On the other hand, the slice at \(\phi = 0.0535\) (Fig. 8b, d) looks like two horns with an animal head. These two structures coexist in the same phase space.
2D slices of the basin of attraction and the domain for the active dynamic walking model at \(\phi = -0.342\) (a), and \(\phi = 0.0535\) (b) and the rotated views of a (c) and b (d). An attracting fixed point exists on the slice at \(\phi = 0.0535\)
To investigate these structures, we calculate the inverse images of D, \(S^{-n}(D) \ (n=1,2,\ldots )\), as in Sect. 3. Figure 9a, b shows D, \(S^{-1}(D)\), and \(S^{-2}(D)\) at \(\phi = -0.342\) and 0.0535. These two figures are quite different from that of the passive dynamic walking model, and they also differ from each other.
Domain D and inverse images \(S^{-1}(D)\) and \(S^{-2}(D)\) for the active dynamic walking model on the slices at \(\phi = -0.342\) (a) and 0.0535 (b)
The formation mechanism of the basin of attraction
The thin domain in the active dynamic walking model arises for the same reason as in the passive dynamic walking model: the region H deforms into a thin region under the time-backward flow of the equations of motion because of the contraction and expansion properties of the saddle-center periodic orbit, as shown in Fig. 6.
Figures 10 and 11 explain why the basin of attraction looks like horns. When \(T^{-1}(D)\) intersects the center unstable manifold as in Figs. 10a and 11a, the green regions cannot leave the center unstable manifold, since the manifold is invariant under the equations of motion. As a result, the regions are deformed as in Figs. 10b and 11b. In addition, these figures explain why Fig. 8a, b have different shapes: the difference in the relative position of \(W^{cu}\) and \(T^{-1}(D)\) produces the difference between the two figures. Figures 10c, d and 11c, d explain \(S^{-2}(D)\) for these two slices. The repeated application of this mechanism forms the "horn-like" shape of the basin of attraction. These two types of structures (and also other types) coexist in the three dimensional space T(H).
Formation mechanism of Fig. 9a
These arguments show that the two different walking models share the common formation mechanism of the basin of attraction studied in this paper.
Formation mechanism of Fig. 9b
Transition between "two horns without head" and "two horns with head"
When the slicing position changes from \(\phi = -0.342\) to \(\phi = 0.0535\), the slice of the basin of attraction changes from "two horns without a head" to "two horns with a head". Figure 12a, b shows the slices of the basin of attraction at \(\phi = -0.0780\) and \(\phi = -0.0720\). Between these two slices, the horn-like regions merge and the geometric structure changes drastically.
2D slices of the basin of attraction and the domain at \(\phi = -0.0780\) (a) and \(\phi = -0.0720\) (b)
As mentioned in Sect. 4.2, the difference between Fig. 8c and d can be explained by the geometric change of \(S^{-1}(D)\). Therefore, we focus on the geometric change of \(S^{-1}(D)\) as the slicing position changes. Figure 13 shows the change of \(S^{-1}(D)\) as the slicing position changes gradually. From these figures, we observe that the two horns approach each other from Fig. 13a to b, merge between Fig. 13b and c, and \(S^{-1}(D)\) becomes larger from Fig. 13c to d.
2D slices of D and \(S^{-1}(D)\) at \(\phi = -0.321\) (a), \(\phi = -0.255\) (b), \(\phi = -0.249\) (c), and \(\phi = -0.177\) (d)
The change of the shape of \(S^{-1}(D)\) comes from the transition from Fig. 10a to Fig. 11a. Between Fig. 13b and c, the boundary of \(S^{-1}(D)\) becomes tangent to the center unstable manifold, and the shape changes drastically. \(S^{-2}(D), S^{-3}(D),\ldots \) also change similarly as the slicing position changes, and finally the shape of the basin of attraction changes drastically as in Fig. 12a, b.
We created a movie (see Footnote 3) showing the structural changes of D, \(S^{-n}(D)\) for \(n=1,\ldots ,5\), and the basin of attraction. The details of how the shapes of these regions change can be seen in the movie.
Conclusion and future works
In the present study, we clarified the formation mechanism of the basin of attraction for two bipedal walking models by focusing on the intrinsic hyperbolicity in the governing dynamics and based on the viewpoint of the theory of dynamical systems. We showed that the formation mechanism of the basin of attraction for the passive dynamic walking model in [13] is not specific to that model, but is applicable also to the active dynamic walking model studied in this paper.
The thin basin of attraction of the passive dynamic walking model is closely related to the one-dimensional instability of the upright equilibrium. The thin basin of attraction of the active dynamic walking model is likewise closely related to the one-dimensional instability of the periodic orbit. In both models, there is a codimension one center-stable manifold, and the one-dimensional instability comes from the saddle hyperbolicity of an inverted pendulum. Although the present study focuses on these two models, our analysis strongly suggests that the results are not specific to them, but are widely applicable to more general bipedal walking models due to the intrinsic saddle hyperbolicity of bipedal walking.
The detailed structures of the basin of attraction are, however, rather different between the passive and active dynamic walking models. We showed that the difference comes from the relative position of the center-unstable manifold and \(T^{-1}(S^{-n}(D))\ (n=0,1,\ldots )\). This formation mechanism, based on the relative position of the center-unstable manifold and the inverse images, can explain various shapes of basins of attraction. Furthermore, we can use the mechanism to deform the basin of attraction by regulating the relative position of those objects with additional controllers and support systems. The present study will therefore hopefully contribute to improving the understanding of the stability mechanism in human walking and to producing design principles for the control of biped robots and walking support systems. This will be a subject of our future work.
This method does not work in some cases, for example, if the Poincaré map is bistable. However, in this paper, all orbits whose initial points are in the numerically computed basin of attraction converge to the unique attracting fixed point under iteration of the Poincaré map. This fact has been checked numerically.
The movie of this periodic orbit is shown at: https://www.math.kyoto-u.ac.jp/%7eobayashi/bipedal-walking/periodic.gif.
https://www.math.kyoto-u.ac.jp/%7eobayashi/bipedal-walking/slices.mpeg.
Alexander, R.: Mechanics of bipedal locomotion. In: Spencer-Davies, P. (ed.) Perspectives in Experimental Biology, vol. 1, pp. 493–504. Pergamon Press, Oxford (1980)
Aoi, S., Ogihara, N., Funato, T., Sugimoto, Y., Tsuchiya, K.: Evaluating functional roles of phase resetting in generation of adaptive human bipedal walking with a physiologically based model of the spinal pattern generator. Biol. Cybern. 102(5), 373–387 (2010). doi:10.1007/s00422-010-0373-y
Cavagna, G.A., Heglund, N.C., Taylor, C.R.: Mechanical work in terrestrial locomotion: two basic mechanisms for minimizing energy expenditure. Am. J. Physiol. Regul. Integr. Comp. Physiol. 233(5), R243–R261 (1977)
Cavagna, G.A., Margaria, R.: Mechanics of walking. J. Appl. Physiol. 21(1), 271–278 (1966)
Cavagna, G.A., Saibene, F.P., Margaria, R.: External work in walking. J. Appl. Physiol. 18(1), 1–9 (1963)
Garcia, M., Chatterjee, A., Ruina, A., Coleman, M.: The simplest walking model: stability, complexity, and scaling. ASME J. Biomech. Eng. 120(2), 281–288 (1998). doi:10.1115/1.2798313
Kuo, A.D.: Energetics of actively powered locomotion using the simplest walking model. ASME J. Biomech. Eng. 124(1), 113–120 (2001). doi:10.1115/1.1427703
Kuo, A.D.: A simple model of bipedal walking predicts the preferred speed-step length relationship. ASME J Biomech. Eng. 123(3), 264–269 (2001). doi:10.1115/1.1372322
Kuo, A.D.: The six determinants of gait and the inverted pendulum analogy: a dynamic walking perspective. Hum. Mov. Sci. 26(4), 617–656 (2007)
McGeer, T.: Passive dynamic walking. Int. J. Robotics Res. 9(2), 62–82 (1990). doi:10.1177/027836499000900206
Mochon, S., McMahon, T.A.: Ballistic walking. J. Biomech. 13(1), 49–57 (1980)
Mochon, S., McMahon, T.A.: Ballistic walking: an improved model. Math. Biosci. 52(3–4), 241–260 (1980)
Obayashi, I., Aoi, S., Tsuchiya, K., Kokubu, H.: Construction mechanism of a basin of attraction for passive dynamic walking induced by intrinsic hyperbolicity. Preprint (2014). arXiv:1407.5720
Ogihara, N., Aoi, S., Sugimoto, Y., Tsuchiya, K., Nakatsukasa, M.: Forward dynamic simulation of bipedal walking in the Japanese macaque: investigation of causal relationships among limb kinematics, speed, and energetics of bipedal locomotion in a nonhuman primate. Am. J. Phys. Anthropol. 145(4), 568–580 (2011). doi:10.1002/ajpa.21537
Robinson, C.: Dynamical systems: stability, symbolic dynamics, and chaos. In: Studies in Advanced Mathematics. CRC Press, Boca Raton (2008)
Schwab, A.L., Wisse, M.: Basin of attraction of the simplest walking model. In: ASME Design Engineering Technical Conferences (2001)
Taga, G., Yamaguchi, Y., Shimizu, H.: Self-organized control of bipedal locomotion by neural oscillators in unpredictable environment. Biol. Cybern. 65(3), 147–159 (1991). doi:10.1007/BF00198086
Ippei Obayashi
Present address: Advanced Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai, 980-8577, Japan
Shinya Aoi & Kazuo Tsuchiya: Department of Aeronautics and Astronautics, Graduate School of Engineering, Kyoto University, Kyoto daigaku-Katsura, Nishikyo-ku, Kyoto, 615-8540, Japan
Shinya Aoi, Kazuo Tsuchiya & Hiroshi Kokubu: JST, CREST, 5 Sanbancho, Chiyoda-ku, Tokyo, 102-0075, Japan
Hiroshi Kokubu: Department of Mathematics, Graduate School of Science, Kyoto University, Kitashirakawa Oiwakecho, Sakyo-ku, Kyoto, 606-8502, Japan
Correspondence to Hiroshi Kokubu.
Supplementary material 1 (mpeg 8004 KB)
Supplementary material 2 (gif 479 KB)
Obayashi, I., Aoi, S., Tsuchiya, K. et al. Common formation mechanism of basin of attraction for bipedal walking models by saddle hyperbolicity and hybrid dynamics. Japan J. Indust. Appl. Math. 32, 315–332 (2015). https://doi.org/10.1007/s13160-015-0181-9
Revised: 31 January 2015
Issue Date: July 2015
Bipedal walking
Basin of attraction
Hyperbolicity
Hybrid system
Mathematics Subject Classification
37N25 Dynamical systems in biology
34K34 Hybrid systems
EBITDAR Definition
What Is EBITDAR?
Earnings before interest, taxes, depreciation, amortization, and restructuring or rent costs (EBITDAR) is a non-GAAP tool used to measure a company's financial performance. Although EBITDAR does not appear on a company's income statement, it can be calculated using information from the income statement.
The Formula for EBITDAR Is
EBITDAR = EBITDA + Restructuring/Rental Costs
where EBITDA = earnings before interest, taxes, depreciation, and amortization
What Does EBITDAR Tell You?
EBITDAR is a metric used primarily to analyze the financial health and performance of companies that have gone through restructuring within the past year. It is also useful for businesses such as restaurants or casinos that have unique rent costs. It exists alongside earnings before interest and tax (EBIT) and earnings before interest, tax, depreciation, and amortization (EBITDA).
Using EBITDAR in analysis helps to reduce variability from one company's expenses to the next, in order to focus only on costs that are related to operations. This is helpful when comparing peer companies within the same industry.
EBITDAR doesn't take rent or restructuring into account because this metric seeks to measure a company's core operational performance. For example, imagine an investor comparing two restaurants, one in New York City with expensive rent and the other in Omaha with significantly lower rent. To compare those two businesses effectively, the investor excludes their rent costs, as well as interest, tax, depreciation, and amortization.
Similarly, an investor may exclude restructuring costs when a company has gone through a restructuring and has incurred costs from the plan. These costs, which are included on the income statement, are usually seen as nonrecurring and are excluded from EBITDAR to give a better idea of the company's ongoing operations.
EBITDAR is a profitability measure, like EBIT or EBITDA, but it's better for casinos, restaurants, and other companies that have non-recurring or highly variable rent or restructuring costs.
EBITDAR gives analysts a view of a company's core operational performance apart from expenses unrelated to operations, such as taxes, rent, restructuring costs, and non-cash expenses.
Using EBITDAR allows for easier comparison of one firm to another by minimizing unique variables that don't relate directly to operations.
Example of How to Use EBITDAR
EBITDAR is most often calculated for internal purposes only, as it is not a required financial reporting metric for public companies. A firm might calculate it each quarter to isolate and review operational expenses without having to consider fluctuating costs such as restructuring, or rent costs that may differ within various subsidiaries of the company or among the firm's competitors.
The starting point is earnings before interest and tax (EBIT), also referred to as operating income. This metric excludes interest and taxes. The next step is to exclude costs associated with depreciation, amortization, rent or restructuring, to arrive at EBITDAR.
For example, imagine the XYZ company earns $1 million in a year, and it has $400,000 in total operating expenses. Subtracting operating expenses from revenue results in $600,000 of EBIT, or operating income ($1 million revenue - $400,000 operating expenses) = $600,000.
The operating expenses do not include interest and tax expenses, as the company chooses to show them further down on the income statement, after EBIT.
Included in the firm's $400,000 operating expenses is depreciation of $15,000, amortization of $10,000, and rent of $50,000. To arrive at EBITDAR, an analyst excludes depreciation, amortization and rent ($15,000 + $10,000 + $50,000) from the calculation by starting with EBIT and adding back the amounts as follows:
EBITDAR = $600,000 EBIT + ($15,000+$10,000+$50,000) = $675,000
Note that rent is excluded for the EBITDAR metric only.
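The worked example above is easy to verify in code. The following Python snippet (our own illustration, not from the source) reproduces the XYZ calculation: EBIT from revenue and operating expenses, then the add-backs for depreciation, amortization, and rent.

```python
def ebitdar(ebit, depreciation, amortization, rent_or_restructuring):
    """Add back non-operational items to EBIT, per the formula above."""
    return ebit + depreciation + amortization + rent_or_restructuring

revenue, operating_expenses = 1_000_000, 400_000
ebit = revenue - operating_expenses               # $600,000 operating income
print(ebitdar(ebit, 15_000, 10_000, 50_000))      # 675000, matching the example
```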
The Difference Between EBITDAR and EBITDA
The difference between EBITDA and EBITDAR is that the latter excludes restructuring or rent costs. However, both metrics are utilized to compare the financial performance of two companies without considering their taxes or non-cash expenses such as depreciation and amortization. When a business amortizes or depreciates an asset, it writes off a portion of the asset's cost each year over the course of several years, although it may have actually paid for the asset all in one year.
While essential for tax returns and accounting ledgers, these numbers may cloud the picture of a business's current financial state. As a result, investors want to consider the performance of a company without taking non-operational expenses into account as they may look quite different from one company to the next.
What the Debt/EBITDA Ratio Tells You
Debt/EBITDA is a ratio measuring the amount of income generation available to pay down debt before deducting interest, taxes, depreciation, and amortization.
Understanding EBITDAX
EBITDAX is an indicator of financial performance used when reporting earnings for oil and mineral exploration companies.
Earnings Before Interest, Taxes, Depreciation and Amortization – EBITDA Definition
EBITDA, or earnings before interest, taxes, depreciation and amortization, is a measure of a company's overall financial performance and is used as an alternative to simple earnings or net income in some circumstances.
What Does EBITDA Margin Mean?
EBITDA margin measures a company's profit as a percentage of revenue. EBITDA stands for earnings before interest, taxes, depreciation, and amortization.
Why Operating Margins Matter
The operating margin measures how much profit a company makes on a dollar of sales, after paying for variable costs of production, such as wages and raw materials, but before paying interest or tax.
Earnings Before Interest and Taxes – EBIT Definition
Earnings before interest and taxes is an indicator of a company's profitability and is calculated as revenue minus expenses, excluding taxes and interest.
Geometry & Topology
VOL. 2 · NO. 1 | 1998
Einstein metrics and smooth structures
Dieter Kotschick
Geom. Topol. 2 (1), 1-10, (1998) DOI: 10.2140/gt.1998.2.1
KEYWORDS: Einstein metric, smooth structure, 4–manifold, 57R55, 57R57, 53C25, 14J29
We prove that there are infinitely many pairs of homeomorphic non-diffeomorphic smooth 4–manifolds, such that in each pair one manifold admits an Einstein metric and the other does not. We also show that there are closed 4–manifolds with two smooth structures which admit Einstein metrics with opposite signs of the scalar curvature.
The symmetry of intersection numbers in group theory
Geom. Topol. 2 (1), 11-29, (1998) DOI: 10.2140/gt.1998.2.11
KEYWORDS: ends, amalgamated free products, trees, 20F32, 20E06, 20E07, 20E08, 57M07
For suitable subgroups of a finitely generated group, we define the intersection number of one subgroup with another subgroup and show that this number is symmetric. We also give an interpretation of this number.
A natural framing of knots
Michael T Greene, Bert Wiest
KEYWORDS: knot, link, knot invariant, framing, natural framing, torus knot, Cayley graph, 57M25, 20F05
Given a knot K in the 3–sphere, consider a singular disk bounded by K and the intersections of K with the interior of the disk. The absolute number of intersections, minimised over all choices of singular disk with a given algebraic number of intersections, defines the framing function of the knot. We show that the framing function is symmetric except at a finite number of points. The symmetry axis is a new knot invariant, called the natural framing of the knot. We calculate the natural framing of torus knots and some other knots, and discuss some of its properties and its relations to the signature and other well-known knot invariants.
Group negative curvature for 3–manifolds with genuine laminations
David Gabai, William H Kazez
KEYWORDS: lamination, essential lamination, genuine lamination, group negatively curved, word hyperbolic, 57M50, 57R30, 57M07, 20F34, 20F32, 57M30
We show that if a closed atoroidal 3–manifold M contains a genuine lamination, then it is group negatively curved in the sense of Gromov. Specifically, we exploit the structure of the non-product complementary regions of the genuine lamination and then apply the first author's Ubiquity Theorem to show that M satisfies a linear isoperimetric inequality.
Flag manifolds and the Landweber–Novikov algebra
Victor M Buchstaber, Nigel Ray
Geom. Topol. 2 (1), 79-101, (1998) DOI: 10.2140/gt.1998.2.79
KEYWORDS: complex cobordism, double cobordism, flag manifold, Schubert calculus, toric variety, Landweber–Novikov algebra, 57R77, 14M15, 14M25, 55S25
We investigate geometrical interpretations of various structure maps associated with the Landweber–Novikov algebra S^* and its integral dual S_*. In particular, we study the coproduct and antipode in S_*, together with the left and right actions of S^* on S_* which underlie the construction of the quantum (or Drinfeld) double D(S^*). We set our realizations in the context of double complex cobordism, utilizing certain manifolds of bounded flags which generalize complex projective space and may be canonically expressed as toric varieties. We discuss their cell structure by analogy with the classical Schubert decomposition, and detail the implications for Poincaré duality with respect to double cobordism theory; these lead directly to our main results for the Landweber–Novikov algebra.
Symplectic fillings and positive scalar curvature
Paolo Lisca
Geom. Topol. 2 (1), 103-116, (1998) DOI: 10.2140/gt.1998.2.103
KEYWORDS: contact structures, monopole equations, Seiberg–Witten equations, positive scalar curvature, symplectic fillings, 53C15, 57M50, 57R57
Let X be a 4–manifold with contact boundary. We prove that the monopole invariants of X introduced by Kronheimer and Mrowka vanish under the following assumptions: (i) a connected component of the boundary of X carries a metric with positive scalar curvature and (ii) either b_2^+(X) > 0 or the boundary of X is disconnected. As an application we show that the Poincaré homology 3–sphere, oriented as the boundary of the positive E8 plumbing, does not carry symplectically semi-fillable contact structures. This proves, in particular, a conjecture of Gompf, and provides the first example of a 3–manifold which is not symplectically semi-fillable. Using work of Frøyshov, we also prove a result constraining the topology of symplectic fillings of rational homology 3–spheres having positive scalar curvature metrics.
Intersections in hyperbolic manifolds
Igor Belegradek
KEYWORDS: hyperbolic manifold, intersection form, representation variety, 30F40, 53C23, 57R20, 22E40, 32H20, 51M10
We obtain some restrictions on the topology of infinite volume hyperbolic manifolds. In particular, for any n and any closed negatively curved manifold M of dimension ≥3, only finitely many hyperbolic n–manifolds are total spaces of orientable vector bundles over M.
Completions of $\mathbb{Z}/(p)$–Tate cohomology of periodic spectra
Matthew Ando, Jack Morava, Hal Sadofsky
KEYWORDS: root invariant, Tate cohomology, periodicity, formal groups, 55N22, 55P60, 14L05
We construct splittings of some completions of the ℤ∕(p)–Tate cohomology of E(n) and some related spectra. In particular, we split (a completion of) tE(n) as (a completion of) a wedge of E(n−1)s as a spectrum, where t is shorthand for the fixed points of the ℤ∕(p)–Tate cohomology spectrum (ie the Mahowald inverse limit invlim_k (P_{−k} ∧ ΣE(n))). We also give a multiplicative splitting of tE(n) after a suitable base extension.
A new algorithm for recognizing the unknot
Joan S Birman, Michael D Hirsch
KEYWORDS: knot, unknot, Braid, Foliation, algorithm, 57M25, 57M50, 68Q15, 57M15, 68U05
The topological underpinnings are presented for a new algorithm which answers the question: "Is a given knot the unknot?" The algorithm uses the braid foliation technology of Bennequin and of Birman and Menasco. The approach is to consider the knot as a closed braid, and to use the fact that a knot is unknotted if and only if it is the boundary of a disc with a combinatorial foliation. The main problems which are solved in this paper are: how to systematically enumerate combinatorial braid foliations of a disc; how to verify whether a combinatorial foliation can be realized by an embedded disc; how to find a word in the braid group whose conjugacy class represents the boundary of the embedded disc; how to check whether the given knot is isotopic to one of the enumerated examples; and finally, how to know when we can stop checking and be sure that our example is not the unknot.
The structure of pseudo-holomorphic subvarieties for a degenerate almost complex structure and symplectic form on $S^1 \times B^3$
Clifford Henry Taubes
KEYWORDS: 4–manifold invariants, symplectic geometry, 53C07, 52C15
A self-dual harmonic 2–form on a 4–dimensional Riemannian manifold is symplectic where it does not vanish. Furthermore, away from the form's zero set, the metric and the 2–form give a compatible almost complex structure and thus pseudo-holomorphic subvarieties. Such a subvariety is said to have finite energy when the integral over the variety of the given self-dual 2–form is finite. This article proves a regularity theorem for such finite energy subvarieties when the metric is particularly simple near the form's zero set. To be more precise, this article's main result asserts the following: Assume that the zero set of the form is non-degenerate and that the metric near the zero set has a certain canonical form. Then, except possibly for a finite set of points on the zero set, each point on the zero set has a ball neighborhood which intersects the subvariety as a finite set of components, and the closure of each component is a real analytically embedded half disk whose boundary coincides with the zero set of the form.
Correction to "The symmetry of intersection numbers in group theory"
Theorem 3.1 of "The symmetry of intersection numbers in group theory" is false: the error occurs in the proof of Lemma 3.6. A counterexample is given.
Optimal price subsidies for appropriate malaria testing and treatment behaviour
Kristian Schultz Hansen 1,2,
Tine Hjernø Lesner 3 &
Lars Peter Østerdal 4
Malaria continues to be a serious public health problem, particularly in Africa. Many people infected with malaria do not access effective treatment because of its high price. At the same time, many individuals receiving malaria drugs do not suffer from malaria because of the common practice of presumptive diagnosis. A global subsidy on artemisinin-based combination therapy (ACT) has recently been suggested to increase access to the most effective malaria treatment.
Following the World Health Organization recommendation that parasitological testing should be performed before treatment and ACT prescribed to confirmed cases only, this paper investigates whether a subsidy on malaria rapid diagnostic tests (RDTs) should be incorporated. A model is developed consisting of a representative individual with fever suspected to be malaria, seeking care at a specialized drug shop where RDTs, ACT medicines, and cheap, less effective anti-malarials are sold. Assuming that the individual holds certain beliefs about the accuracy of the RDT and the probability that the fever is malaria, the model predicts the diagnosis-treatment behaviour of the individual. Subsidies on RDTs and ACT are introduced to incentivize appropriate behaviour: choose an RDT before treatment and purchase ACT only if the test is positive.
Solving the model numerically suggests that a combined subsidy on both RDT and ACT is cost-minimizing and improves the diagnosis-treatment behaviour of individuals. For certain beliefs, such as low trust in RDT accuracy and a strong belief that a fever is malaria, subsidization is not sufficient to incentivize appropriate behaviour.
A combined subsidy on both RDT and ACT rather than a single subsidy is likely required to improve diagnosis-treatment behaviour among individuals seeking care for malaria in the private sector.
Malaria continues to be a major cause of mortality and morbidity with 214 million cases and 438,000 deaths worldwide in 2014. The majority of all deaths (90%) occurred in Africa, with 74% of these in children below 5 years [1]. Malaria deaths are largely avoidable, as a broad range of effective and cost-effective tools for prevention and cure of malaria exists. The cost of prevention per disability-adjusted life year averted ranges from US$27 to US$143 [2]. The current manufacturer price of artemisinin-based combinations, the most effective anti-malarials on the market, is about US$2 for an adult course and US$0.50 for a treatment course for a child under five, while the less effective chloroquine costs US$0.05–0.15 [3]. Huge investments by governments and international donors over the last 10 years have contributed to the decrease in malaria mortality rates by 25% globally and 33% in Africa [4].
One major obstacle to bringing the disease burden further down is the widespread problem of inappropriate treatment of malaria. Many people infected with malaria do not receive an effective anti-malarial (the access problem) while a large proportion of people receiving treatment for malaria does not suffer from malaria (the targeting problem).
Public health sectors in many countries offer free malaria treatment services, but access is impaired by frequent stock-outs of drugs, short opening hours, long travel distances and prescribed anti-malarials are not always artemisinin-based combinations [5–8]. Therefore, it is common behaviour in many African countries to seek malaria treatment in the private sector especially at small, specialized drug shops and general stores [9, 10]. The price of artemisinin-based combination therapy (ACT) may be 10–15 times higher than other anti-malarials and many customers instead buy cheaper but much less effective monotherapies, sub-therapeutic doses or no anti-malarials at all [5, 11–14]. Common anti-malarial monotherapies include chloroquine, sulfadoxine-pyrimethamine (SP) and quinine [5].
Targeting effective drugs to those who are truly suffering from malaria is hampered by the widespread use in many countries of presumptive diagnosis rather than more accurate parasitological tests leading to overdiagnosis of malaria and underdiagnosis of other diseases [15, 16]. The proportion of parasitological testing among patients treated for malaria was estimated to be 47% in the public sector in the African Region in 2011 [8] with a considerably lower testing rate in the private sector—possibly one-third of the public sector and even less frequently in drug shops [17]. Studies across different countries and settings have documented that between 30 and 80% of people treated with an anti-malarial do not have malaria parasites in their blood [18–25].
With an objective of improving access to high quality ACT medicines, both in the public and private sectors, a global subsidy paid directly to accredited ACT manufacturers was proposed in the early 2000s and subsequently operationalized under the name of 'the Affordable Medicines Facility-malaria (AMFm)' and hosted by the Global Fund [26, 27]. Pilot tests in several malaria endemic countries found that such a subsidy achieved considerable success in terms of increasing availability of ACT, hugely reducing the retail price differences between ACT and older, less effective monotherapies in the private sector and increasing the sales volume of ACT medicines [28–31].
Subsidizing ACT medicines may increase access but it may also lead to increased treatment of patients not suffering from malaria. The World Health Organization now recommends that all suspected malaria cases should be confirmed with a parasitological test before treatment and that positive cases should be treated with an ACT [32]. Accurate rapid diagnostic tests (RDTs) for malaria have recently been developed which are easy to use with immediate result, require only limited training of providers and could feasibly be sold and performed in drug shops and other private sector outlets [8, 33, 34].
The AMFm idea of a global subsidy on ACT has recently been abandoned in favour of alternative, possibly more cost-effective interventions, including an increased focus on introducing RDTs. In the meantime, individual malaria-stricken countries may still apply for funds to finance ACT medicines and even RDTs from the Global Fund [35]. Cohen et al. [22] conducted a randomized controlled trial in rural Kenya to assess the impact of changing both RDT and ACT prices through the use of subsidies. They found that ACT use increased by 59% in the presence of a 90% subsidy, but only 56% of those buying ACT tested positive for malaria. However, they also found that targeting increased to 81% when the subsidy for ACT was slightly reduced (from 90 to 80%) and the freed resources were directed to an RDT subsidy of 85% instead. This increased the testing rate by more than 50% and had no significantly negative effect on ACT uptake.
In this paper the characteristics of an optimal subsidy policy are investigated when a health planner has the objective that suspected malaria patients should be diagnosed and treated according to WHO guidelines. The focus is on the private sector, in particular private drug retailers. These are an extremely important source of anti-malarial treatment, and inappropriate treatment of malaria is common there: less effective drugs (non-artemisinins) are frequently sold, and parasitological testing is the exception rather than the rule. An analytical framework is developed based on expected utility theory where a representative individual with suspected malaria has to make a choice at a drug shop regarding purchasing an RDT and a type of anti-malarial. The framework also contains a health planner who can influence the prices of RDTs and ACT at drug shops using subsidies. Optimal subsidy levels for RDTs and ACT are explored within this framework and supplemented by numerical simulations to investigate the influence of key factors such as the prior belief of the individual that the fever is due to malaria as well as his/her trust in the accuracy of RDTs. The results from this framework suggest that exclusively subsidizing ACT, as proposed by the AMFm approach, is in general not sufficient for incentivizing the individual to behave as desired by the health planner. A price reduction on RDTs is necessary as well, and the optimal use of subsidy funds is a combined subsidy on RDT and ACT. The present paper complements the paper by Cohen et al. [22] by explicitly modelling both the subsidy choices of a public health planner and the treatment-seeking decisions of households. This framework enables a search for an 'optimal' combination of RDT and ACT subsidy levels.
Model of individual behaviour in malaria treatment-seeking in the private sector
A simple decision model is developed where a representative febrile individual can choose among different strategies involving choice of drugs and whether to take a parasitological test before treatment. The focus is here on malaria treatment and testing strategies, and there are two possible health states: The individual either has malaria or does not. \(V_{m}\) is the utility of having malaria and \(V_{nm}\) is the utility of not having malaria with \(V_{nm} > V_{m}\). The utilities \(V_{m}\) and \(V_{nm}\) may be thought of as expressing monetary values so that \(V_{nm} - V_{m}\) is the willingness to pay to avoid malaria. The individual does not know for certain whether the fever is malaria or not but holds a belief p (a subjective probability) that the fever is malaria. This belief is affected by the result of an RDT. Define \(p_{p}\) as the belief that a fever is malaria having observed that the RDT result is positive, whereas \(p_{n}\) is the belief that a fever is malaria having observed that the RDT result is negative. It is assumed that \(p_{n} < p < p_{p}\), so that a positive RDT result will increase the individual's belief that the fever is malaria while a negative RDT result will decrease the belief that the fever is caused by malaria. If the individual has complete confidence in the accuracy of the test, i.e. believes that there are no false positive or false negative test results, then \(p_{p}\) will be equal to 1 and \(p_{n}\) will be equal to 0. Let us call \(p^{*}\) the individual's belief that the test result will be positive. Consistency of the beliefs \(p\), \(p_{p}\) and \(p_{n}\) requires \(p = p^{*} p_{p} + \left( {1 - p^{*} } \right)p_{n}\), and therefore
$$p^{*} = (p - p_{n} )/(p_{p} - p_{n} ).$$
The belief \(p^{*}\) may not necessarily be equal to \(p\) if, for instance, the individual is concerned that the RDT will occasionally miss positive malaria cases (false negatives), in which case \(p^{*}\) will be lower than \(p\). Similarly, the individual holds beliefs \(E_{MT}\) and \(E_{ACT}\) that the two available types of drugs, monotherapy and ACT, will cure malaria, where \(E_{ACT} > E_{MT}\). The retail prices of the drugs are denoted \(C_{MT}\) and \(C_{ACT}\), where \(C_{ACT} > C_{MT}\), and the price of the test is denoted \(C_{RDT}\). The beliefs \(p\), \(p_{n}\), \(p_{p}\), \(E_{MT}\) and \(E_{ACT}\) fall between 0 and 1, while the prices of drugs and RDT are positive.
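To make the belief algebra concrete, the short Python sketch below computes \(p^{*}\) from the relation above. The numerical values are illustrative choices of ours (loosely echoing the low-belief, low-trust scenario simulated later), not parameters taken from the paper.

```python
def prob_positive_test(p: float, p_n: float, p_p: float) -> float:
    """Belief p* that the RDT will be positive, from p = p* p_p + (1 - p*) p_n."""
    assert p_n < p < p_p, "the model assumes p_n < p < p_p"
    return (p - p_n) / (p_p - p_n)

if __name__ == "__main__":
    # Hypothetical beliefs: low prior that the fever is malaria, low trust in a
    # negative RDT result, high trust in a positive result.
    p, p_n, p_p = 0.20, 0.15, 0.97
    print(f"belief that the RDT will be positive: {prob_positive_test(p, p_n, p_p):.3f}")
```

Note that a wide gap between \(p_{n}\) and \(p_{p}\) (high trust in the test) makes \(p^{*}\) track \(p\) closely, while a narrow gap makes \(p^{*}\) very sensitive to small changes in the prior belief.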
One possible strategy for the individual is to do nothing about the fever if it is believed to be self-resolving, a strategy that will be denoted \(S_{NO}\), another is that the individual seeks treatment at a drug shop or another private health provider if he believes the fever to be caused by malaria. While it is a possibility that the fever is caused by a serious non-malarial disease, the focus is here on whether it is malaria or not and it is assumed that the individual will seek care at formal providers in case a fever is expected to be a serious non-malarial disease. In the drug shop, the individual faces the following options: (a) buy cheap, less effective antimalarial monotherapy such as chloroquine or SP, strategy \(S_{MT}\), (b) buy more effective but also more expensive ACT, strategy \(S_{ACT}\) or (c) buy a rapid diagnostic test (RDT) and let the subsequent decision of buying an ACT medicine, monotherapy or no drugs depend on the result of the test. The decision to purchase an RDT will lead to nine possible strategies. One example of a strategy is that the individual purchases a cheap anti-malarial monotherapy if the RDT is positive and does not buy any anti-malarials if the RDT is negative, strategy \(S_{(MT,NO)}^{RDT}\). The possible strategies of the individual are represented graphically in Fig. 1.
Diagnosis-treatment strategies
All possible strategies involve risky outcomes and it is assumed that the individual chooses the strategy with the highest expected utility U. The expected utility of buying no drugs and without having a test is:
$$U(S_{NO} ) = pV_{m} + (1 - p)V_{nm}$$
The expected utility of not purchasing a test or drugs is therefore the belief that the fever is malaria times the utility of having malaria plus the belief that the fever is not malaria times the utility of being free of malaria. The expected utility of buying a cheap anti-malarial monotherapy without having a test is:
$$U(S_{MT} ) = pE_{MT} V_{nm} + p(1 - E_{MT} )V_{m} + (1 - p)V_{nm} - C_{MT}$$
The expected utility is the probability of being cured for malaria after taking monotherapy times the utility of being malaria free (first term) plus the probability of monotherapy not working times the utility of having malaria (second term) plus the belief of the fever not being malaria times the utility of being malaria free (third term). In addition, the retail price of anti-malarial monotherapy must be subtracted. Likewise the expected utility of buying an ACT medicine without having a test is:
$$U(S_{ACT} ) = pE_{ACT} V_{nm} + p(1 - E_{ACT} )V_{m} + (1 - p)V_{nm} - C_{ACT}$$
with a similar interpretation as above.
The utility of a strategy of buying first an RDT followed by the purchase of a course of ACT if the test is positive and not purchase any drugs if the test is negative is:
$$\begin{aligned} U\left( {S_{{(ACT,NO)}}^{{RDT}} } \right) &= p^{*} \left[ p_{p} E_{{ACT}} V_{{nm}} + p_{p} \left( {1 - E_{{ACT}} } \right)V_{m} \right. \\ &\; \qquad \left. + \left( {1 - p_{p} } \right)V_{{nm}} - C_{{ACT}} \right] \\& \quad + \left( {1 - p^{*} } \right)\left[ {\left( {1 - p_{n} } \right)V_{{nm}} + p_{n} V_{m} } \right] - C_{{RDT}} \\ \end{aligned}$$
The first component of the expected utility consists of the belief that the RDT will be positive, \(p^{*}\), times the utility of taking a course of ACT and with a belief that the fever is malaria adjusted upwards from \(p\) to \(p_{p}\). The second component is the belief that the RDT will be negative, \((1 - p^{*} )\), times the utility of not taking any anti-malarials and with a belief that the fever is malaria adjusted downwards from \(p\) to \(p_{n}\). Finally, the third component is the RDT price, \(C_{RDT}\), which must be subtracted. The expected utility function for the remaining eight RDT-strategies arising from the decision tree in Fig. 1 can be written in a similar fashion (Additional file 1).
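As a sketch of how these expected utilities can be evaluated numerically, the Python below encodes the strategy values written out so far. It assumes the normalization \(V_{nm} = 1\) and \(V_{m} = 0\), so that prices must be expressed in willingness-to-pay units, consistent with the price conversion described later; all function and field names are ours, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class Params:
    p: float      # prior belief that the fever is malaria
    p_n: float    # belief that it is malaria after a negative RDT
    p_p: float    # belief that it is malaria after a positive RDT
    e_mt: float   # believed cure probability of monotherapy
    e_act: float  # believed cure probability of ACT
    c_mt: float   # monotherapy price, in willingness-to-pay units
    c_act: float  # ACT price, in willingness-to-pay units
    c_rdt: float  # RDT price, in willingness-to-pay units
    v_m: float = 0.0   # utility of having malaria (normalization)
    v_nm: float = 1.0  # utility of not having malaria (normalization)

def u_no(q: float, pr: Params) -> float:
    """U(S_NO), evaluated at a generic belief q that the fever is malaria."""
    return q * pr.v_m + (1 - q) * pr.v_nm

def u_drug(q: float, e: float, c: float, pr: Params) -> float:
    """Expected utility of buying a drug with cure probability e at price c."""
    return q * e * pr.v_nm + q * (1 - e) * pr.v_m + (1 - q) * pr.v_nm - c

def u_rdt_act_no(pr: Params) -> float:
    """U(S^RDT_(ACT,NO)): test first, ACT if positive, nothing if negative."""
    p_star = (pr.p - pr.p_n) / (pr.p_p - pr.p_n)
    return (p_star * u_drug(pr.p_p, pr.e_act, pr.c_act, pr)
            + (1 - p_star) * u_no(pr.p_n, pr)
            - pr.c_rdt)
```

The remaining RDT strategies from Fig. 1 follow the same two-branch pattern, substituting the appropriate post-test belief and drug choice in each branch.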
Some of the possible strategies are not rational. Consider a strategy consisting of first purchasing an RDT associated with a decision to purchase an ACT medicine irrespective of the test result (\(S_{(ACT,ACT)}^{RDT}\)). It would make more sense to save the money for purchasing an RDT and instead go directly to acquiring an ACT medicine: The strategy \(S_{(ACT,ACT)}^{RDT}\) is dominated by the strategy \(S_{ACT}\). It also seems irrational to choose a strategy of buying the most effective and expensive drug only when the test is negative like the strategy \(S_{(MT,ACT)}^{RDT}\). It can be shown formally that six such strategies are suboptimal (see Additional file 2 for details). Consequently, a rational individual will choose from the remaining six strategies: \(S_{ACT}\), \(S_{MT}\), \(S_{NO}\), \(S_{(ACT,NO)}^{RDT}\), \(S_{(MT,NO)}^{RDT}\) and \(S_{(ACT,MT)}^{RDT}\).
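As a quick worked check of the first dominance claim (Additional file 2 gives the general proofs): substituting \(p = p^{*} p_{p} + \left( {1 - p^{*} } \right)p_{n}\) into the expected utility of \(S_{(ACT,ACT)}^{RDT}\) collapses the two branches back to the pre-test belief, so that

$$U\left( {S_{(ACT,ACT)}^{RDT} } \right) = pE_{ACT} V_{nm} + p\left( {1 - E_{ACT} } \right)V_{m} + \left( {1 - p} \right)V_{nm} - C_{ACT} - C_{RDT} = U(S_{ACT} ) - C_{RDT}$$

which falls strictly below \(U(S_{ACT})\) whenever the RDT carries a positive price.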
The objective of the health planner
A health policy planner is now introduced who wants the current malaria treatment guidelines as recommended by WHO to be followed: All suspected malaria cases must be diagnosed with a parasitological test before treatment and patients with confirmed malaria should be treated with an ACT while patients with a negative test should not receive an anti-malarial [32]. An individual visiting a drug shop does not necessarily behave according to the guidelines. For instance, if the expected utility for the individual of strategy \(S_{ACT}\) is higher than the expected utility of strategy \(S_{(ACT,NO)}^{RDT}\) then the individual will purchase an ACT directly rather than following the strategy advised by the health planner. However, the health planner could potentially reverse the ranking of these two strategies by changing the relative prices of ACT and RDT through subsidies. This will be the case if a combination of subsidies can be found such that the utility of strategy \(S_{(ACT,NO)}^{RDT}\) is higher than the utility of strategy \(S_{ACT}\), when the prices are reduced due to the subsidies. Similar conditions are needed to ensure that the utility of strategy \(S_{(ACT,NO)}^{RDT}\) is higher than the remaining four non-eliminated strategies. There are therefore five conditions which are presented in Additional file 3 as inequalities (1)–(5).
There may be more than one combination of ACT and RDT subsidy levels ensuring that the individual prefers strategy \(S_{(ACT,NO)}^{RDT}\) to all other strategies. The health planner therefore has as an objective that the total subsidy cost should be minimized subject to the constraint that the treatment guidelines are followed. In general, total subsidy cost for the health planner of a combination of subsidy levels is:
$${\text{Total subsidy cost}} = \beta^{ACT} \cdot \tilde{p} + \beta^{RDT} \tag{6}$$
where \(\beta^{ACT}\) is the subsidy cost per ACT course, \(\beta^{RDT}\) is the subsidy cost per RDT and \(\tilde{p}\) is the probability that an RDT will be positive which depends on the malaria parasite prevalence among individuals visiting drug shops and the accuracy of the RDT. Positive RDT results will include both true and false positives and \(\tilde{p}\) can be written as
$$\tilde{p} = \bar{p} \cdot ss^{RDT} + \left( {1 - \bar{p}} \right) \cdot (1 - sp^{RDT} )$$
where \(\bar{p}\) is the malaria parasite prevalence among febrile individuals visiting drug shops while \(ss^{RDT}\) and \(sp^{RDT}\) are the sensitivity (probability of a positive test for an infected person) and specificity (probability of a negative test result for an uninfected person) respectively of the RDT.
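The planner's side of the model is just as mechanical to compute. The sketch below evaluates \(\tilde{p}\) and the total subsidy cost (6); the prevalence, sensitivity and specificity figures in the example are placeholders of ours, not the values of Table 1.

```python
def prob_rdt_positive(prev: float, sens: float, spec: float) -> float:
    """Probability of a positive RDT: true positives plus false positives."""
    return prev * sens + (1 - prev) * (1 - spec)

def total_subsidy_cost(beta_act: float, beta_rdt: float, tilde_p: float) -> float:
    """Eq. (6): every client gets the RDT subsidy, but the ACT subsidy is only
    paid out for the share of clients whose test comes back positive."""
    return beta_act * tilde_p + beta_rdt

if __name__ == "__main__":
    # Hypothetical inputs: 15% parasite prevalence, 95% sensitivity and specificity.
    tilde_p = prob_rdt_positive(prev=0.15, sens=0.95, spec=0.95)
    print(tilde_p)  # 0.185
    print(total_subsidy_cost(beta_act=1.50, beta_rdt=0.45, tilde_p=tilde_p))
```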
The decision problem of the health planner consists of minimizing total subsidy cost (6) over subsidy levels for ACT and RDT subject to inequalities (1)–(5) listed in Additional file 3 being simultaneously obeyed.
Searching for optimal RDT and ACT subsidies: individual beliefs and numerical simulations
There is no general solution to this optimization problem as it will depend on specific values of prices and parameters. Therefore, an approach is followed where a series of numerical examples will give indications on what combinations of subsidies on ACT and RDT incentivize appropriate behaviour and have the lowest total subsidy cost for the health planner. Two sets of numerical assumptions are applied related to (1) prices and RDT accuracy and (2) beliefs of the individual with fever.
(1) The retail prices, drug effectiveness and RDT accuracy listed in Table 1 are intended to represent 'the average' or 'a common' situation in sub-Saharan Africa. It is assumed in the numerical examples that subsidies will directly change retail prices corresponding to an assumption that the subsidy is perfectly passed on to individuals visiting private sector providers. For instance, if a subsidy is 75%, then the individual will pay only 25% of the pre-subsidy price. This approach to subsidization in the analysis may therefore be interpreted as a subsidy on the retail prices facing the individual in contrast to the AMFm approach where the subsidy is given to ACT medicine manufacturers at the top of the supply chain [28].
Table 1 Retail prices excluding subsidies and parameter values used in numerical simulations
In the model above, monetary (US$) retail prices are converted into prices comparable to the utility model using a linear transformation in which the monetary prices are divided by the individual's willingness to pay (WTP) for avoiding malaria illness. Unfortunately, an empirical estimate of such a WTP does not exist. Instead, the analysis relies on a contingent valuation survey from Uganda, which found an average WTP for an adult course of ACT of US$2.05 among drug shop customers who were asked their valuation of a course of ACT after having purchased an RDT that turned out positive [36]. Because this is a WTP for a specific drug to cure malaria and not as such a WTP to avoid malaria in the first place, the estimate of US$2.05 is considered a lower bound and a WTP of US$3.00 is used as the best guess.
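To illustrate the conversion: with the best-guess WTP of US$3.00, a hypothetical adult ACT course priced at US$2.10 enters the model as \(C_{ACT} = 2.10/3.00 = 0.70\) in utility units, and the RDT and monotherapy prices are divided by the same US$3.00. The placeholder prices used in the code sketches in this paper follow that convention.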
(2) An individual may hold different beliefs with respect to the fever being malaria (\(p\)) and change this belief after RDT testing (\(p_{n}\) and \(p_{p}\)). Large differences between \(p\) on the one hand and \(p_{n}\) and \(p_{p}\) on the other indicate high trust in the RDT result. There is evidence that people have strong beliefs in a positive test result but the belief in a negative test result typically varies and can be quite low [37, 38]. Methods have recently been developed that may be used to elicit empirical values of \(p\), \(p_{n}\) and \(p_{p}\) from population members in specific settings as has been done in western Kenya [39]. For the numerical examples, individual beliefs from low to high are used except in the case of a positive RDT result where the individual always has high trust in the test. Total subsidy cost (6) is influenced by the extent of the malaria problem among individuals visiting drug shops so the impact of different malaria prevalences on subsidy levels of RDT and ACT is also investigated.
To gain intuition on the subsidy sizes that are needed to fulfil the health planner's objective for a range of different beliefs of the individual, a series of numerical examples or simulations are developed using the parameter values described above. The calculations are performed using linear programming methods to ensure that the costs of the health planner are minimized by finding the minimum subsidy levels of ACT and RDT that at the same time ensure that incentive constraints (1)–(5) listed in Additional file 3 hold for an individual with given beliefs and a given set of parameter values (from Table 1) (Footnote 1).
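The authors implemented this search as a linear program in Excel (see Footnote 1 below). To make the incentive logic transparent, here is a self-contained brute-force Python sketch of the same idea: it scans a grid of subsidy pairs, keeps those under which the recommended strategy \(S_{(ACT,NO)}^{RDT}\) has the highest expected utility among the six rational strategies, and reports the cheapest pair. All numerical inputs are placeholders of ours, so its output should not be read as reproducing Table 2.

```python
import itertools

V_M, V_NM = 0.0, 1.0  # utility normalization; prices in willingness-to-pay units

def utilities(p, p_n, p_p, e_mt, e_act, c_mt, c_act, c_rdt):
    """Expected utility of each of the six non-dominated strategies."""
    p_star = (p - p_n) / (p_p - p_n)
    def no(q):
        return q * V_M + (1 - q) * V_NM
    def drug(q, e, c):
        return q * e * V_NM + q * (1 - e) * V_M + (1 - q) * V_NM - c
    return {
        "NO": no(p),
        "MT": drug(p, e_mt, c_mt),
        "ACT": drug(p, e_act, c_act),
        "RDT(ACT,NO)": p_star * drug(p_p, e_act, c_act) + (1 - p_star) * no(p_n) - c_rdt,
        "RDT(MT,NO)": p_star * drug(p_p, e_mt, c_mt) + (1 - p_star) * no(p_n) - c_rdt,
        "RDT(ACT,MT)": (p_star * drug(p_p, e_act, c_act)
                        + (1 - p_star) * drug(p_n, e_mt, c_mt) - c_rdt),
    }

def cheapest_incentivizing_subsidy(c_act0, c_rdt0, tilde_p, beliefs):
    """Scan 0-100% subsidy pairs; return (cost, s_rdt, s_act), or None if no
    subsidy pair makes the recommended strategy come out on top."""
    best = None
    grid = [i / 100 for i in range(101)]
    for s_rdt, s_act in itertools.product(grid, grid):
        u = utilities(c_act=c_act0 * (1 - s_act),
                      c_rdt=c_rdt0 * (1 - s_rdt), **beliefs)
        if max(u, key=u.get) == "RDT(ACT,NO)":                # desired behaviour
            cost = s_act * c_act0 * tilde_p + s_rdt * c_rdt0  # eq. (6)
            if best is None or cost < best[0]:
                best = (cost, s_rdt, s_act)
    return best

if __name__ == "__main__":
    beliefs = dict(p=0.20, p_n=0.15, p_p=0.97,        # hypothetical beliefs
                   e_mt=0.30, e_act=0.95, c_mt=0.05)  # hypothetical drug values
    print(cheapest_incentivizing_subsidy(c_act0=0.70, c_rdt0=0.20,
                                         tilde_p=0.185, beliefs=beliefs))
```

The brute force makes the two binding tensions visible: the desired strategy must beat buying ACT directly (which limits how cheap ACT may become) and the no-test strategies (which limits how expensive the RDT may remain).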
Results: optimal subsidies for RDT and ACT
Figure 2 presents a situation where an individual has a low belief that the fever is malaria, a low trust in a negative RDT result, a high trust in a positive RDT result and with low parasite prevalence among individuals visiting drug shops. Such an individual may be incentivized always to purchase an RDT before treatment and buy an ACT medicine only in the case of a positive RDT result if the combined subsidies on RDT and ACT are on the solid line. For example, the individual will behave appropriately if the RDT subsidy is 93% and the ACT subsidy is 81% and also if the RDT subsidy is 97% and the ACT subsidy is 54%. Note that even if the RDT is free (100% subsidy) a positive ACT subsidy is required. In addition, the individual will only behave appropriately if the RDT subsidy is at least 93%; any RDT subsidy below this value will lead to inappropriate behaviour irrespective of the level of subsidy on the ACT—if the RDT is too expensive relative to ACT, the individual will go directly to buying ACT medicines without taking an RDT first.
Optimal combination of RDT and ACT subsidies ensuring appropriate behaviour for a representative individual. Individual characterized by low belief that a fever is malaria (\(p = 0.20\)), low trust in negative RDT result (\(p_{n} = 0.15\)), high trust in positive RDT result (\(p_{p} = 0.97\)) and low malaria prevalence (\(\bar{p} = 0.15\))
The dotted line in Fig. 2 shows combinations of ACT and RDT subsidies giving equal total subsidy cost for the health planner (the sum of subsidy cost of ACT and RDT). The further to the south-west this line is situated, the lower the total subsidy cost. The optimal combination of subsidies from the health planner's point of view is the point of tangency between the two lines at 96% subsidy on the RDT and 54% subsidy on the ACT since this will at the same time ensure appropriate behaviour of the individual and the lowest possible subsidy cost of the health planner. The total subsidy cost at this point is US$2.08 per individual.
Optimal subsidy combinations in situations of different beliefs of the individual and malaria prevalence are presented in Table 2. Among the beliefs investigated, it is not possible to ensure appropriate behaviour by subsidizing only ACT or RDT. The subsidy policy must be a combined subsidy on both commodities characterized by a high subsidy on the RDT of 80–96% of the retail price and a more moderate subsidy on the ACT in the range 54–76%. The intuition behind such a subsidy pattern is that in this model a low price of RDT is required to ensure that the individual is willing to purchase a test before treatment combined with a moderately reduced ACT price still high enough to ensure adherence to the RDT result. If the ACT price is too low, the individual may decide always to purchase an ACT medicine even if the RDT is negative and if the ACT price is too high, the individual may choose to purchase monotherapy even when the RDT is positive.
Table 2 Optimal combinations of RDT and ACT subsidies for different beliefs of a representative individual and malaria parasite prevalence
For some combinations of beliefs of the individual, there are no solutions to the problem meaning that no subsidies can be found to incentivize the individual to behave appropriately. This was found to be the case if the individual has a strong prior belief in being malaria positive (40% and above) and at the same time a weak belief in a negative test result.
The calculations performed further suggest that low confidence in a negative RDT result requires a higher RDT subsidy compared to high confidence, while the ACT subsidy is not affected. No clear associations are apparent between the RDT and ACT subsidy levels on the one hand and either the degree of belief that the fever is malaria or the malaria prevalence among individuals visiting drug shops on the other. Finally, the total subsidy costs are higher for increasing malaria prevalence and for decreasing belief in a negative RDT result.
Sensitivity analyses are performed using the lower and upper bound parameter values in Table 1. Higher retail prices of ACT medicines and RDTs lead to higher subsidy costs (Table 3). However, a higher price on anti-malarial monotherapy may actually lead to lower required subsidies on RDT and ACT and lower subsidy cost, as a higher price on monotherapy makes it less attractive to follow strategies involving buying these drugs. Note that using the lower-bound estimate of ACT prices means that ACT should in fact be taxed and not subsidized to incentivize optimal behaviour.
Table 3 Sensitivity analysis of prices of ACT, RDT and monotherapy
It is also investigated how changes in monotherapy effectiveness affect the results (Table 4). A lower effectiveness of monotherapy will, all else equal, make it less attractive for the individual to buy monotherapy and thus easier for the health planner to incentivize the use of ACT. However, the effect of monotherapy effectiveness on RDT uptake is not straightforward as a higher relative (perceived) effectiveness of ACT means that the individual needs a larger incentive to buy an RDT before buying an ACT medicine. The beliefs of the relative effectiveness of the different treatment types are, therefore, also important for reducing total subsidy costs.
Table 4 Sensitivity analysis of monotherapy effectiveness
The simulations using the framework developed suggested that irrespective of the beliefs of the representative individual, the optimal subsidy policy of the health planner would involve a shared subsidy on RDT and ACT. In other words, the individual with fever would not be incentivized to behave appropriately through a subsidy on the RDT or the ACT alone. Even in a situation where the individual has high trust in both positive and negative RDT results, it would still be necessary to subsidize both the RDT and the ACT. Simulations further found that the optimal policy incorporated a high subsidy on RDT and a more moderate subsidy on ACT (Table 2). Previous empirical research has provided some support for a combined subsidy. Cohen et al. [21] provided subsidized RDTs to drug shops in Uganda but no subsidy on ACT treatment and found that among customers buying RDTs only 32% of RDT-positive patients purchased an ACT. Contrary to this, the introduction of both subsidized RDTs and ACT medicines in Ugandan drug shops resulted in high willingness to purchase an RDT before treatment and with almost all RDT-positive customers also buying an ACT and RDT-negative patients not buying an anti-malarial [40]. A similar study involving a combined subsidy among Kenyan drug shops also improved appropriate behaviour among drug shop customers but to a lesser extent [22]. These studies therefore point to different subsidy recommendations than the original AMFm approach which proposed subsidizing only ACT at a very high percentage of up to 95% of the manufacturer price [27]. The main objective of the latter was improving access to high quality ACT medicines and less concern for overprescription of ACT to patients with no malaria parasites in their blood [26].
It was found that a solution to the decision problem could not be identified in all situations including if the individual was highly convinced that his fever was malaria even before considering a test and at the same time had a very high distrust in a negative RDT result. Such an individual would prefer to purchase an anti-malarial without first taking a test as was also confirmed for some settings in a model-based study involving six African countries [41]. Qualitative research has confirmed that some patients and child caregivers indeed have confidence in their own ability to recognize malaria symptoms [37, 38]. In addition, perceived benefits of parasitological diagnosis among customers in the private sector are negatively affected when the risk of taking anti-malarials is perceived to be minimal, the concerns for delayed treatment of the true cause of fever if not malaria are minimal or believing more in an approach where different drugs are taken until one proves effective (diagnosis-by-treatment) [37, 38, 42]. If such perceptions are common, the subsidy instrument must be supplemented by a behaviour change communication campaign addressing unfortunate behaviours in a particular community.
A key parameter influencing the optimal subsidy structure in the present model is the degree of belief in a negative RDT result. The higher the mistrust in a negative RDT result, the higher a subsidy on RDTs is required. Mistrust in negative test results has been a matter of great concern for a long time in malaria care and several studies have indeed demonstrated a significant tendency to disregard negative RDT and microscopy results both among health providers and patients [22, 37, 43]. However, more recent studies indicate a higher belief in negative test results e.g. Mbonye et al. [40]: Following an information campaign on the advantages of RDTs and ACT treatment for malaria, RDTs were introduced in drug shops in an area of Uganda. The study found a high willingness to purchase a subsidized RDT among drug shop customers with fever and a nearly complete acceptance of negative RDT results as measured by the finding that almost all RDT-negative customers did not buy an ACT. This is encouraging since a high level of belief in the accuracy of the RDT will in the model require a lower RDT subsidy and lead to a lower overall subsidy cost.
The model developed for the presented analysis is a simplification in at least three respects. It was assumed that there is only one representative individual (or many identical individuals), that there are no drugs for non-malarial fevers offered at drug shops and that drug shops have a very simplified behaviour limited to wanting to sell RDTs and anti-malarials at the market price or at the subsidized price without any other considerations such as maximizing their own profit. A first possible expansion of the model could be to allow for many individuals with heterogeneous beliefs in, for instance, negative RDT results or in the conviction that their fever is malaria. This would not change the health planner's decision problem in principle, but instead of one set of constraints ensuring that the representative individual prefers the appropriate treatment strategy to any of the other strategies, it would require a set of constraints for each type of individual. It is also likely that the health planner is not able to find an RDT and ACT subsidy allocation that will simultaneously ensure appropriate behaviour in all drug shop customers. As shown above, individuals with certain beliefs cannot be incentivized into appropriate behaviour through the use of subsidies. The health planner will therefore have to decide on the minimum acceptable share of drug shop customers behaving appropriately.
Another possible extension to the model is assuming that a wider range of drugs relevant for fevers are available at drug shops such as antipyretics and antibiotics. Such an extension to the model would lead to an increase in the possible strategies of the individual due to a higher number of drugs and possibly also diagnostic tests. Identifying the optimal subsidy strategy is a significantly more complicated decision problem and will require further research.
A third possible extension to the model is allowing more realistic drug shop behaviour, involving, for instance, considerations of how to maximize profit. Drug shop behaviour may also be analysed under the different market conditions facing drug shops in the community, including monopoly, a situation with few competitors, or many drug shops leading to perfect competition. Such extensions are likely to affect the assumption that the entire subsidy amount is passed on to customers. As a result, the health planner's problem will be much more complicated to solve since it must be determined first what shares of the subsidies are passed on to the customers before the optimal combination of RDT and ACT subsidies can be identified.
Footnote 1: In practice, a linear optimization model was set up in Excel and the solver's LP simplex function was applied.
WHO. World malaria report 2015. Geneva: World Health Organization; 2015.
White MT, Conteh L, Cibulskis R, Ghani AC. Costs and cost-effectiveness of malaria control interventions—a systematic review. Malar J. 2011;10:337.
Management Sciences for Health. International drug price indicator guide 2009. Cambridge: Management Sciences for Health; 2010.
O'Connell KA, Gatakaa H, Poyer S, Njogu J, Evance I, Munroe E, et al. Got ACTs? Availability, price, market share and provider knowledge of anti-malarial medicines in public and private sector outlets in six malaria-endemic countries. Malar J. 2011;10:326.
Kangwana BB, Njogu J, Wasunna B, Kedenge SV, Memusi DN, Goodman CA, et al. Malaria drug shortages in Kenya: a major failure to provide access to effective treatment. Am J Trop Med Hyg. 2009;80:737–8.
Zurovac D, Tibenderana JK, Nankabirwa J, Ssekitooleko J, Njogu JN, Rwakimari JB, et al. Malaria case-management under artemether–lumefantrine treatment policy in Uganda. Malar J. 2008;7:181.
Goodman C, Brieger W, Unwin A, Mills A, Meek S, Greer G. Medicine sellers and malaria treatment in sub-Saharan Africa: what do they do and how can their practice be improved? Am J Trop Med Hyg. 2007;77(Suppl 6):203–18.
Patouillard E, Hanson KG, Goodman CA. Retail sector distribution chains for malaria treatment in the developing world: a review of the literature. Malar J. 2010;9:50.
Whitty CJM, Chandler C, Ansah E, Leslie T, Staedke SG. Deployment of ACT antimalarials for treatment of malaria: challenges and opportunities. Malar J. 2008;7(Suppl 1):S7.
Wafula FN, Miriti EM, Goodman CA. Examining characteristics, knowledge and regulatory practices of specialized drug shops in Sub-Saharan Africa: a systematic review of the literature. BMC Health Serv Res. 2012;12:223.
Mbonye AK, Lal S, Cundill B, Hansen KS, Clarke S, Magnussen P. Treatment of fevers prior to introducing rapid diagnostic tests for malaria in registered drug shops in Uganda. Malar J. 2013;12:131.
Palafox B, Patouillard E, Tougher S, Goodman C, Hanson K, Kleinschmidt I, et al. Prices and mark-ups on antimalarials: evidence from nationally representative studies in six malaria-endemic countries. Health Policy Plan. 2016;31:148–60.
Reyburn H, Mbatia R, Drakeley C, Carneiro I, Mwakasungula E, Mwerinde O, et al. Overdiagnosis of malaria in patients with severe febrile illness in Tanzania: a prospective study. BMJ. 2004;329:1212.
Perkins MD, Bell DR. Working without a blindfold: the critical role of diagnostics in malaria control. Malar J. 2008;7(Suppl 1):S5.
ACTwatch. Results and publications. Undated. http://www.actwatch.info/publications.
Amexo M, Tolhurst R, Barnish G, Bates I. Malaria misdiagnosis: effects on the poor and vulnerable. Lancet. 2004;364:1896–8.
Mwanziva C, Shekalaghe S, Ndaro A, Mengerink B, Megiroo S, Mosha F, et al. Overuse of artemisinin-combination therapy in Mto wa Mbu (river of mosquitoes), an area misinterpreted as high endemic for malaria. Malar J. 2008;7:232.
Ansah EK, Epokor M, Whitty CJM, Yeung S, Hansen KS. Cost-effectiveness analysis of introducing RDTs for malaria diagnosis as compared to microscopy and presumptive diagnosis in central and peripheral public health facilities in Ghana. Am J Trop Med Hyg. 2014;89:724–36.
Cohen J, Fink G, Berg K, Aber F, Jordan M, Maloney K, et al. Feasibility of distributing rapid diagnostic tests for malaria in the retail sector: evidence from an implementation study in Uganda. PLoS ONE. 2012;7:e48296.
Cohen J, Dupas P, Schaner SG. Price subsidies, diagnostic tests, and targeting of malaria treatment: evidence from a randomized controlled trial. Am Econ Rev. 2015;105:609–45.
Kachur SP, Schulden J, Goodman CA, Kassala H, Elling BF, Khatib RA, et al. Prevalence of malaria parasitemia among clients seeking treatment for fever or malaria at drug stores in rural Tanzania 2004. Trop Med Int Health. 2006;11:441–51.
Schellenberg D, Reyburn H, Yeung S, Bosman A, Snow S, Lansang MA, et al. Consultation on the economics and financing of universal access to parasitological confirmation of malaria. Geneva: The Global Fund; 2010. www.theglobalfund.org/documents/amfm/AMFm_EconFinancePreread_Appendix02_en.
Briggs MA, Kalolella A, Bruxvoort K, Wiegand R, Lopez G, Festo C, et al. Prevalence of malaria parasitemia and purchase of artemisinin-based combination therapies (ACTs) among drug shop clients in two regions in Tanzania with ACT subsidies. PLoS ONE. 2014;9:e94074.
Arrow K, Panosian C, Gelband H, editors. Saving lives, buying time: economics of malaria drugs in an age of resistance. Washington (DC): National Academies Press; 2004.
Gelband H, Seiter A. A global subsidy for antimalarial drugs. Am J Trop Med Hyg. 2007;77(Suppl 6):219–21.
Tougher S, Ye Y, Amuasi JH, Kourgueni IA, Thomson R, Goodman C, et al. Effect of the Affordable Medicines Facility—malaria (AMFm) on the availability, price, and market share of quality-assured artemisinin-based combination therapies in seven countries: a before-and-after analysis of outlet survey data. Lancet. 2012;380:1916–26.
Sabot OJ, Mwita A, Cohen JM, Ipuge Y, Gordon M, Bishop D, et al. Piloting the global subsidy: the impact of subsidized artemisinin-based combination therapies distributed through private drug shops in rural Tanzania. PLoS ONE. 2009;4:e6857.
Kangwana BP, Kedenge S, Noor AM, Alegana VA, Nyandigisi AJ, Pandit J, et al. The impact of retail-sector delivery of artemether–lumefantrine on malaria treatment of children under five in Kenya: a cluster randomized controlled trial. PLoS Med. 2011;8:e1000437.
Fink G, Dickens WT, Jordan M, Cohen JL. Access to subsidized ACT and malaria treatment—evidence from the first year of the AMFm program in six districts in Uganda. Health Policy Plan. 2014;29:517–27.
WHO. Guidelines for the treatment of malaria. 2nd ed. Geneva: World Health Organization; 2010.
de Oliveira AM, Skarbinski J, Ouma PO. Performance of malaria rapid diagnostic tests as part of routine malaria case management in Kenya. Am J Trop Med Hyg. 2009;80:470–4.
Baiden F, Webster J, Tivura M, Delimini R, Berko Y, Amenga-Etego S, et al. Accuracy of rapid tests for malaria and treatment outcomes for malaria and non-malaria cases among under-five children in rural Ghana. PLoS ONE. 2012;7:e34073.
The Global Fund. Board approves integration of AMFm into core global fund grant processes. 2012. http://www.theglobalfund.org/en/mediacenter/newsreleases/2012-11-15_Board_Approves_Integration_of_AMFm_into_Core_Global_Fund_Grant_Processes.
Hansen KS, Pedrazzoli D, Mbonye A, Clarke S, Cundill B, Magnussen P, et al. Willingness-to-pay for a rapid malaria diagnostic test and artemisinin-based combination therapy from private drug shops in Mukono district, Uganda. Health Policy Plan. 2013;28:185–96.
Chandler CIR, Hall-Clifford R, Asaph T, Magnussen P, Clarke S, Mbonye AK. Introducing malaria rapid diagnostic tests at registered drug shops in Uganda: limitations of diagnostic testing in the reality of diagnosis. Soc Sci Med. 2011;72:937–44.
Cohen J, Cox A, Dickens W, Maloney K, Lam F, Fink G. Determinants of malaria diagnostic uptake in the retail sector: qualitative analysis from focus groups in Uganda. Malar J. 2015;14:89.
Prudhomme O'Meara W, Laktabai J, Mohanan M, Turner E, Maffioli E, Platt A, et al. Targeting antimalarial subsidies to confirmed cases in the retail sector—testing a diagnosis-dependent voucher scheme in western Kenya. Annual Meeting of the American Society of Tropical Medicine and Hygiene 2015 (Abstract 1187); 2015.
Mbonye A, Magnussen P, Lal S, Hansen K, Cundill B, Chandler C, et al. Cluster randomised trial introducing rapid diagnostic tests into the private health sector in Uganda: the impact on appropriate targeting of malaria treatment. PLoS ONE. 2015;10:e0129545.
Basu S, Modrek S, Bendavid E. Comparing decisions for malaria testing and presumptive treatment: a net health benefit analysis. Med Decis Mak. 2014;34:996–1005.
Mbonye AK, Ndyomugyenyi R, Turinde A, Magnussen P, Clarke S, Chandler C. The feasibility of introducing rapid diagnostic tests for malaria in drug shops in Uganda. Malar J. 2010;9:367.
Hamer D, Ndhlovu M, Zurovac D, Fox M. Improved diagnostic testing and malaria treatment practices in Zambia. JAMA. 2007;297:2227–31.
Goodman C, Kachur SP, Abdulla S, Bloland P, Mills A. Concentration and drug prices in the retail market for malaria treatment in rural Tanzania. Health Econ. 2009;18:727–42.
Morel CM, Lauer JA, Evans DB. Cost-effectiveness analysis of strategies to combat malaria in developing countries. BMJ. 2005;331:1299.
Mueller O, Razum O, Traore C, Kouyate B. Community effectiveness of chloroquine and traditional remedies in the treatment of young children with falciparum malaria in rural Burkina Faso. Malar J. 2004;3:36.
Sinclair D, Zani B, Donegan S, Olliaro P, Garner P. Artemisinin-based combination therapy for treating uncomplicated malaria. Cochrane Database Syst Rev. 2009;(3):CD007483.
Björkman A, Mårtensson A. Risks and benefits of targeted malaria treatment based on rapid diagnostic test results. Clin Infect Dis. 2010;51:512–4.
Bisoffi Z, Sirima SB, Menten J, Pattaro C, Angheben A, Gobbi F, et al. Accuracy of a rapid diagnostic test on the diagnosis of malaria infection and of malaria-attributable fever during low and high transmission season in Burkina Faso. Malar J. 2010;9:192.
Medicines for Malaria Venture. Understanding the antimalarials market: Uganda 007—an overview of the supply side. Geneva: Medicines for Malaria Venture; 2008.
KSH and LPØ conceived the research and all authors participated in developing the model, solving the model, creating the excel sheet used for performing the numerical simulations, writing the paper and approving the final version of the paper. All authors read and approved the final manuscript.
We would like to thank Catherine Goodman, Charles M. Harvey, Baptiste Leurent, Troels Martin Range and Ulrika Enemark for helpful comments to previous versions of this paper.
No primary data were collected as part of this research.
All authors give consent to publish the paper in its present form.
All input into the model developed for this paper were summary results obtained from published research studies and no additional information from individuals was collected. No ethics approval of the study or consent from individuals was therefore necessary.
Financial support for this study was provided by the ACT Consortium, through a grant from the Bill & Melinda Gates Foundation to the London School of Hygiene and Tropical Medicine (Grant Number 39640). The funders had no role in study design, data collection, analysis, decision to publish or preparation of the manuscript.
Department of Global Health and Development, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, UK
Kristian Schultz Hansen
Department of Public Health, University of Copenhagen, Øster Farimagsgade 5, 1014, Copenhagen, Denmark
Department of Business and Economics, and Centre of Health Economics Research (COHERE), University of Southern Denmark, Campusvej 55, 5230, Odense M, Denmark
Tine Hjernø Lesner
Department of Economics, Copenhagen Business School, Porcelænshaven 16A, 2000, Frederiksberg, Denmark
Lars Peter Østerdal
Correspondence to Kristian Schultz Hansen.
Additional file 1. List of expected utility functions for all possible combinations of treatment after an individual has decided to purchase an RDT.
Additional file 2. Formal proof that six of the possible diagnosis-treatment strategies of an individual are dominated or suboptimal.
Additional file 3. Derivation of the conditions under which an individual will always choose to purchase an RDT followed by buying an ACT only if the test is positive.
Hansen, K.S., Lesner, T.H. & Østerdal, L.P. Optimal price subsidies for appropriate malaria testing and treatment behaviour. Malar J 15, 534 (2016) doi:10.1186/s12936-016-1582-1
Treatment-seeking
August 2014, 19(6): 1523-1548. doi: 10.3934/dcdsb.2014.19.1523
Symmetric periodic orbits in three sub-problems of the $N$-body problem
Nai-Chia Chen
School of Mathematics, University of Minnesota, Minneapolis, MN 55455, United States
Received November 2013 Revised March 2014 Published June 2014
We study three sub-problems of the $N$-body problem that have two degrees of freedom, namely the $n$-pyramidal problem, the planar double-polygon problem, and the spatial double-polygon problem. We prove the existence of several families of symmetric periodic orbits, including "Schubart-like" orbits and brake orbits, by using topological shooting arguments.
Keywords: Three-body problem, periodic orbits, n-body problem.
Mathematics Subject Classification: Primary: 70F07; Secondary: 37C2.
Citation: Nai-Chia Chen. Symmetric periodic orbits in three sub-problems of the $N$-body problem. Discrete & Continuous Dynamical Systems - B, 2014, 19 (6) : 1523-1548. doi: 10.3934/dcdsb.2014.19.1523
R. Broucke, On the isosceles triangle configuration in the planar general three body problem, Astron. Astrophys., 73 (1979), 303-313. Google Scholar
N. C. Chen, Periodic brake orbits in the planar isosceles three-body problem, Nonlinearity, 26 (2013), 2875-2898. doi: 10.1088/0951-7715/26/10/2875. Google Scholar
J. Delgado and C. Vidal, The tetrahedral $4$-body problem, J. Dynam. Differential Equations, 11 (1999), 735-780. doi: 10.1023/A:1022667613764. Google Scholar
D. Ferrario and A. Portaluri, On the dihedral $n$-body problem, Nonlinearity, 21 (2008), 1307-1321. doi: 10.1088/0951-7715/21/6/009. Google Scholar
R. Martínez, On the existence of doubly symmetric "Schubart-like" periodic orbits, Discrete Contin. Dyn. Syst. Ser. B, 17 (2012), 943-975. doi: 10.3934/dcdsb.2012.17.943. Google Scholar
R. Martínez, Families of double symmetric 'Schubart-like' periodic orbits, Celest. Mech. Dyn. Astr., 117 (2013), 217-243. doi: 10.1007/s10569-013-9509-4. Google Scholar
R. McGehee, Triple collision in the collinear three-body problem, Invent. Math., 27 (1974), 191-227. doi: 10.1007/BF01390175. Google Scholar
R. Moeckel, R. Montgomery and A. Venturelli, From brake to syzygy, Arch. Ration. Mech. Anal., 204 (2012), 1009-1060. doi: 10.1007/s00205-012-0502-y. Google Scholar
R. Moeckel and C. Simó, Bifurcation of spatial central configurations from planar ones, SIAM J. Math. Anal., 26 (1995), 978-998. doi: 10.1137/S0036141093248414. Google Scholar
R. Moeckel, A topological existence proof for the Schubart orbits in the collinear three-body problem, Discrete Contin. Dyn. Syst. Ser. B, 10 (2008), 609-620. doi: 10.3934/dcdsb.2008.10.609. Google Scholar
J. Schubart, Numerische Aufsuchung periodischer Lösungen im Dreikörperproblem, Astron. Nachr., 283 (1956), 17-22. doi: 10.1002/asna.19562830105. Google Scholar
M. Shibayama, Minimizing periodic orbits with regularizable collisions in the $n$-body problem, Arch. Ration. Mech. Anal., 199 (2011), 821-841. doi: 10.1007/s00205-010-0334-6. Google Scholar
C. Simó, Analysis of triple collision in the isosceles problem, in Classical Mechanics and Dynamical Systems, (eds. R L Devaney and Z H Nitecki ), New York: Marcel Dekker, (1981), 203-224. Google Scholar
C. Simó and R. Martínez, Qualitative study of the planar isosceles three-body problem, Celest. Mech. Dyn. Astr., 41: 179. doi: 10.1007/BF01238762. Google Scholar
Climate Dynamics
Evaluation of CMIP5 ability to reproduce twentieth century regional trends in surface air temperature and precipitation over CONUS
Jinny Lee · Duane Waliser · Huikyo Lee · Paul Loikith · Kenneth E. Kunkel
The ability of the 5th phase of the Coupled Model Intercomparison Project (CMIP5) to reproduce twentieth-century climate trends over the seven CONUS regions of the National Climate Assessment is evaluated. This evaluation is carried out for summer and winter for three time periods: 1895–1939, 1940–1979, and 1980–2005. It includes all 206 CMIP5 historical simulations from 48 unique models and their multi-model ensemble (MME), as well as a gridded in situ dataset of surface air temperature and precipitation. Analysis is performed on both individual members and the MME, and considers whether members reproduce the correct sign of the observed trends as well as the trend values themselves. While the MME exhibits some trend bias in most cases, it reproduces historical temperature trends with reasonable fidelity for summer for all time periods and all regions, including at the CONUS scale, except the Northern Great Plains from 1895 to 1939 and the Southeast during 1980–2005. Likewise, for DJF, the MME reproduces historical temperature trends across all time periods over all regions, including at the CONUS scale, except the Southeast from 1895 to 1939 and the Midwest during 1940–1979. Model skill was highest across all seven regions during JJA and DJF for the 1980–2005 period. The quantitatively best result is seen during DJF in the Southwest region, with at least 74% of the ensemble members correctly reproducing the observed trend across all of the time periods. No clear trends in MME precipitation were identified at these scales due to high model precipitation variability.
Keywords: CMIP5 · Model evaluation · Surface air temperature · Multi-model ensemble
The online version of this article (https://doi.org/10.1007/s00382-019-04875-1) contains supplementary material, which is available to authorized users.
We would like to acknowledge the World Climate Research Programme's Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups (listed in "Appendix B" of this paper) for producing and making available their model output. For CMIP the U.S. Department of Energy's Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. The primary author would also like to acknowledge California State University, Los Angeles NASA DIRECT-STEM program and director, Dr. Hengchun Ye for funding and support. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Support for this project was provided by NASA National Climate Assessment 11-NCA 11-0028. Kenneth Kunkel was supported by NOAA through the Cooperative Institute for Climate and Satellites—North Carolina under Cooperative Agreement NA14NES432003.
Supplementary material 1: 382_2019_4875_MOESM1_ESM.docx (DOCX 886 kb)
Methodology equations
For each NCA region, the seasonal mean time series from the reference data is represented as:
$$x\left( t \right), \quad \left( {t = 1, 2, \ldots , m} \right)$$
where \(t\) is the year index counted from the starting year of each time block and \(m\) is the number of years in the time block.
The regionally-averaged time series from ensemble member i is defined as:
$$y_{i} \left( t \right), \quad \left( {t = 1, 2, \ldots , m} \right)$$
In addition, the regionally-averaged time series from the MME is represented as:
$$Y\left( t \right), \quad \left( {t = 1, 2, \ldots , m} \right)$$
where the ensemble average of N simulations is calculated using the following equation:
$$Y\left( t \right) = \frac{1}{N}\mathop \sum \limits_{i = 1}^{N} y_{i} \left( t \right)$$
See Table 2 for the number of simulations (N) used to calculate Y(t) in each time block.
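To make the equal weighting concrete, the following minimal sketch (not the authors' code) computes the MME mean from a stacked array of member series; the array name `y`, its synthetic contents, and the use of NumPy are illustrative assumptions.

```python
# Minimal sketch of the equal-weight MME mean (Eq. 4), assuming the member
# series are stacked in a NumPy array `y` of shape (N, m).
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=(206, 45))   # illustrative stand-in: N=206 members, m=45 years

Y = y.mean(axis=0)               # Y(t) = (1/N) * sum_i y_i(t), one value per year
```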
The seasonal mean trend for the reference data, \(\alpha_{ref}\) [K year\(^{-1}\)], is defined as the least-squares fit of a linear regression model:
$$x = \alpha_{ref} \times t + \beta_{ref}$$
The linear trend, \(\alpha_{yi}\) [K year\(^{-1}\)], for ensemble member \(y_{i} \left( t \right)\), and the ensemble linear trend, \(\alpha_{Y}\), for the MME, \(Y\left( t \right)\), are calculated in the same manner for the three time blocks (1895–1939, 1940–1979, 1980–2005). The choice of the time blocks is based on the observed warming and cooling trends, and closely mimics those in Kunkel et al. (2006).
The performance metrics for the simulated trends in each region are:
(a) trend bias of the MME, \(\alpha_{Y} - \alpha_{ref}\),
(b) trend biases of ensemble members, \(\alpha_{yi} - \alpha_{ref}\),
(c) percentage of the ensemble members reproducing the same sign (±) trend as the observed trend, and
(d) percentage of the ensemble members whose trend biases are small relative to the standard errors of the observed and simulated trends.
For (a) and (b), the following null hypotheses are tested per time block and per region.
$$H_{o} : \alpha_{ref} = \alpha_{Y} \quad {\text{for}}\; ( {\text{a}}).$$
$$H_{o} : \alpha_{ref} = \alpha_{yi} \quad {\text{for}}\; ( {\text{b}}).$$
For the reference linear-trend calculation, the standard error of \(\alpha_{ref}\) is defined as (Hogg and Tanis 2009):
$$s_{ref} = \sqrt {\frac{{\mathop \sum \nolimits_{k = 1}^{m} \left( {x_{k} - \hat{x}_{k} } \right)^{2} /\left( {m - 2} \right)}}{{\mathop \sum \nolimits_{k = 1}^{m} \left( {t_{k} - \bar{t}} \right)^{2} }}}$$
$$\hat{x}_{k} = \alpha_{ref} \times k + \beta_{ref}$$
$$\bar{t} = \frac{1}{m}\mathop \sum \limits_{k = 1}^{m} k$$
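As a concrete illustration of the trend and standard-error formulas above, the sketch below fits the least-squares line and evaluates \(s_{ref}\); the function name and the use of `np.polyfit` are illustrative choices, not the authors' implementation.

```python
# Minimal sketch: least-squares trend alpha of a series x(t), t = 1..m, and
# the standard error of the trend (Hogg and Tanis 2009), per the formulas above.
import numpy as np

def trend_and_stderr(x):
    x = np.asarray(x, dtype=float)
    m = len(x)
    t = np.arange(1, m + 1)
    alpha, beta = np.polyfit(t, x, 1)                 # fit x ~ alpha*t + beta
    x_hat = alpha * t + beta                          # fitted values
    resid_var = np.sum((x - x_hat) ** 2) / (m - 2)    # residual variance
    s = np.sqrt(resid_var / np.sum((t - t.mean()) ** 2))
    return alpha, s
```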
It should be noted that (a), the trend bias of the MME, depends strongly on models that contribute many ensemble members. For instance, some models contribute as few as one simulation while others contribute as many as 25 different ensemble members. Because each simulation is weighted equally when calculating the MME (Eq. 4), a model contributing many ensemble simulations bears greater weight in the overall regional mean. Considering the unequal weights of models in \(\alpha_{Y}\), the standard error of \(\alpha_{Y}\) was computed by randomly selecting N individual model trends with replacement (bootstrapping) and computing the mean of that selection. We repeated this sampling 1000 times, took the standard deviation across the 1000 random ensemble trends, and used it as \(\alpha_{Y}\)'s standard error \((s_{Y})\). We compared \(Y\)'s bias \((\alpha_{Y} - \alpha_{ref})\) with \(s_{ref}\) and \(s_{Y}\) to test the null hypothesis for (a).
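A hedged sketch of this bootstrap follows; `member_trends` (the array of individual trends \(\alpha_{yi}\)) and its synthetic values are assumptions for illustration.

```python
# Bootstrap standard error of the MME trend: resample the N member trends
# with replacement 1000 times; the standard deviation of the resampled means
# is used as s_Y.
import numpy as np

rng = np.random.default_rng(0)
member_trends = rng.normal(0.01, 0.005, size=206)     # illustrative alpha_yi values

boot_means = np.array([
    rng.choice(member_trends, size=member_trends.size, replace=True).mean()
    for _ in range(1000)
])
s_Y = boot_means.std(ddof=1)                          # standard error of alpha_Y
```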
To assess the statistical significance of (b), the trend bias for simulation \(i\), \((\alpha_{yi} - \alpha_{ref})\), it is reasonable to assume that \(\alpha_{ref}\) and \(\alpha_{yi}\) have unequal variances. Therefore, Welch's t-test statistic \(\left( {T_{i} } \right)\) is used to estimate the statistical significance of \((\alpha_{yi} - \alpha_{ref})\). \(T_{i}\) is defined as (Hogg and Tanis 2009):
$$T_{i} = \frac{{\alpha_{yi} - \alpha_{ref} }}{{\sqrt {\frac{{s_{ref}^{2} + s_{yi}^{2} }}{m - 2}} }}$$
Using the Welch–Satterthwaite equation, the degrees of freedom, \(f_{i}\), for \(T_{i}\) can be approximated by:
$$f_{i} \approx \frac{{\left( {\frac{{s_{ref}^{2} + s_{yi}^{2} }}{m - 2}} \right)^{2} }}{{\frac{{s_{ref}^{4} + s_{yi}^{4} }}{{\left( {m - 2} \right)^{2} \left( {m - 3} \right)^{2} }}}}$$
Let \(C^{{f_{i} }}\) be the cumulative distribution function of a Student's t-distribution with \(f_{i}\) degrees of freedom. Then,
$$p_{i} = C^{{f_{i} }} \left( {T_{i} } \right)$$
and using \(p_{i}\), the confidence level \((d_{i} )\) of \(\alpha_{yi} - \alpha_{ref}\) can be calculated and used as a metric:
$$d_{i} = \begin{cases} \left( {1 - 2p_{i} } \right) \times 100\left[ \% \right] & {\text{when}}\;p_{i} < 0.5 \\ \left( {2p_{i} - 1} \right) \times 100\left[ \% \right] & {\text{when}}\;p_{i} > 0.5 \\ 0\left[ \% \right] & {\text{when}}\;p_{i} = 0.5\;\left( {\alpha_{ref} = \alpha_{yi} } \right) \end{cases}$$
The null hypothesis \((H_{o})\) is rejected if \(T_{i}\) and hence \(p_{i}\) are too small (indicating \(\alpha_{yi} \ll \alpha_{ref}\)) or too large (indicating \(\alpha_{yi} \gg \alpha_{ref}\)). In this case, \(\alpha_{yi}\) is statistically different from \(\alpha_{ref}\) at a confidence level of \(d_{i}\). We calculated \(d_{i}\) for each of the 206 simulations for each period and region, and report the fraction of simulations whose trend biases are not statistically significant at the 90% confidence level. In other words, the fraction represents how many simulations reproduce the observed trends given the standard errors of the trends.
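For clarity, a minimal sketch of this test follows; the helper name and the use of SciPy's t-distribution are assumptions, not the authors' code.

```python
# Welch statistic T_i, Welch-Satterthwaite degrees of freedom f_i, and the
# confidence level d_i of a member's trend bias, per the formulas above.
import numpy as np
from scipy import stats

def confidence_level(alpha_yi, s_yi, alpha_ref, s_ref, m):
    pooled = (s_ref**2 + s_yi**2) / (m - 2)
    T = (alpha_yi - alpha_ref) / np.sqrt(pooled)
    f = pooled**2 / ((s_ref**4 + s_yi**4) / ((m - 2)**2 * (m - 3)**2))
    p = stats.t.cdf(T, df=f)            # p_i = C^{f_i}(T_i)
    return abs(2.0 * p - 1.0) * 100.0   # d_i in percent; 0 when p_i = 0.5
```

A member would then count toward the reported fraction when its \(d_{i}\) is below 90%.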
Part (c) calculates the total percentage of the N simulations in which \(\alpha_{yi}\) and \(\alpha_{ref}\) have the same sign. If the product of the two trends is non-negative, then the two carry the same warming (cooling) trend. The total tally count is divided by the number of simulations and multiplied by 100 to produce a percentage as follows:
$$f = \frac{{\mathop \sum \nolimits_{i = 1}^{N} {\text{X}}_{i} }}{N}*100\;{\text{where}}\;{\text{X}}_{i} = \left\{ {\begin{array}{*{20}l} 1 \hfill & {{\text{if}}\;\alpha_{yi} \cdot \alpha_{ref} \ge 0} \hfill \\ 0 \hfill & { {\text{if}}\;\alpha_{yi} \cdot \alpha_{ref} < 0} \hfill \\ \end{array} } \right.$$
In a similar manner, part (d) produces a fraction that examines the magnitude of the warming (cooling) trend; a sketch of both tallies follows the formula below. If the range of a given simulation's trend ± one standard deviation intersects the range of the reference trend ± its standard error [as calculated with the equation from Hogg and Tanis (2009)], a tally is given. The total tally count is divided by the number of simulations and multiplied by 100 to produce a percentage as follows:
$$f = \frac{{\mathop \sum \nolimits_{i = 1}^{N} {\text{X}}_{i} }}{N}*100\;{\text{where}}\;{\text{X}}_{i} = \left\{ {\begin{array}{*{20}l} 0 \hfill & { {\text{if}}\;(\alpha_{yi} \pm 1\sigma ) \cap (\alpha_{ref} \pm SE) = \emptyset } \hfill \\ 1 \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.$$
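The two tally metrics reduce to a few lines of array arithmetic; the sketch below, with assumed array names over the N members, is illustrative rather than the authors' implementation.

```python
# Metric (c): percentage of members whose trend has the same sign as the
# observed trend. Metric (d): percentage whose trend +/- one standard
# deviation overlaps the observed trend +/- its standard error.
import numpy as np

def same_sign_pct(alpha_y, alpha_ref):
    return 100.0 * np.mean(alpha_y * alpha_ref >= 0)

def overlap_pct(alpha_y, s_y, alpha_ref, se_ref):
    lo = np.maximum(alpha_y - s_y, alpha_ref - se_ref)   # greatest lower bound
    hi = np.minimum(alpha_y + s_y, alpha_ref + se_ref)   # least upper bound
    return 100.0 * np.mean(lo <= hi)                     # intervals intersect
```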
Summary of CMIP5 historical simulation dataset

Modeling center (country): model(s) — reference

- Commonwealth Scientific and Industrial Research Organization / Bureau of Meteorology (Australia): ACCESS1-0 (r1i1p1) — Collier and Uhe (2012)
- Beijing Climate Center (China): bcc-csm1-1, bcc-csm1-1-m — Wu et al. (2014)
- Beijing Normal University (China): BNU-ESM — Ji et al. (2014)
- Canadian Center for Climate Modeling and Analysis (Canada): CanCM4 (r10i1p1), CanESM2 — Chylek et al. (2011)
- National Center for Atmospheric Research (USA): CCSM4 — Collins et al. (2004); CESM1-BGC, CESM1-CAM5, CESM1-FASTCHEM, CESM1-WACCM — Marsh et al. (2013)
- Centro Euro-Mediterraneo sui Cambiamenti Climatici (Italy): CMCC-CESM — Fogli and Iovino (2014); CMCC-CM — Scoccimarro et al. (2011); CMCC-CMS
- Centre National de Recherches Météorologiques / Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (France): CNRM-CM5, CNRM-CM5-2 — Voldoire et al. (2013)
- Commonwealth Scientific and Industrial Research Organization / Queensland Climate Change Centre of Excellence (Australia): CSIRO-Mk3-6-0 — Gordon et al. (2010)
- EC-EARTH Consortium, published at the Irish Centre for High-End Computing (Netherlands/Ireland): EC-EARTH — Hazeleger et al. (2012)
- Institute of Atmospheric Physics (China): FGOALS-g2 — Li et al. (2013)
- The First Institute of Oceanography, SOA (China): FIO-ESM — Qiao et al. (2013)
- Geophysical Fluid Dynamics Laboratory (USA): GFDL-CM2p1 — Delworth et al. (2006); GFDL-CM3 — Donner et al. (2011); GFDL-ESM2G, GFDL-ESM2M — Dunne et al. (2013)
- NASA/GISS (USA): GISS-E2-H, GISS-E2-H-CC, GISS-E2-R (r1i1p121), GISS-E2-R-CC — Schmidt et al. (2014)
- Met Office Hadley Centre (UK): HadCM3 — Pope et al. (2000); HadGEM2-CC, HadGEM2-ES
- National Institute of Meteorological Research / Korea Meteorological Administration (South Korea): HadGEM2-AO — Baek et al. (2013)
- Institute of Numerical Mathematics (Russia): inmcm4 — Volodin et al. (2010)
- Institut Pierre Simon Laplace (France): IPSL-CM5A-LR, IPSL-CM5A-MR, IPSL-CM5B-LR — Dufresne et al. (2013)
- Atmosphere and Ocean Research Institute (The University of Tokyo) / National Institute for Environmental Studies / Japan Agency for Marine-Earth Science and Technology (Japan): MIROC-ESM, MIROC-ESM-CHEM — Watanabe et al. (2011); MIROC4h — Sakamoto et al. (2012); MIROC5 — Watanabe et al. (2010)
- Max Planck Institute for Meteorology (Germany): MPI-ESM-LR, MPI-ESM-MR, MPI-ESM-P — Stevens et al. (2013)
- Meteorological Research Institute (Japan): MRI-CGCM3 — Yukimoto et al. (2012); MRI-ESM1 — Yukimoto (2011)
- Bjerknes Centre for Climate Research / Norwegian Meteorological Institute (Norway): NorESM1-M, NorESM1-ME — Bentsen et al. (2013)
Baek H-J, Lee J, Lee H-S, Hyun Y-K, Cho C, Kwon W-T, Marzin C, Gan SY, Kim MJ, Choi DH, Lee J, Lee J, Boo K-O, Kang H-S, Byun Y-H (2013) Climate change in the 21st century simulated by HadGEM2-AO under representative concentration pathways. Asia Pac J Atmos Sci 49(5):603–618. https://doi.org/10.1007/s13143-013-0053-7
Barnett TP, Pierce DW, Hidalgo HG, Bonfils C, Santer BD, Das T, Bala G, Wood AW, Nozawa T, Mirin AA, Cayan DR, Dettinger MD (2008) Human-induced changes in the hydrology of the western United States. Science 319(5866):1080–1083. Retrieved from http://science.sciencemag.org/content/319/5866/1080.abstract
Bentsen M, Bethke I, Debernard JB, Iversen T, Kirkevåg A, Seland Ø, Drange H, Roelandt C, Seierstad IA, Hoose C, Kristjánsson JE (2013) The Norwegian earth system model, NorESM1-M—part 1: description and basic evaluation of the physical climate. Geosci Model Dev 6(3):687–720. https://doi.org/10.5194/gmd-6-687-2013
Bukovsky MS (2012) Temperature trends in the NARCCAP regional climate models. J Clim 25(11):3985–3991. https://doi.org/10.1175/JCLI-D-11-00588.1
Cayan DR, Tyree M, Kunkel KE, Castro C, Gershunov A, Barsugli J, Ray AJ, Overpeck J, Anderson A, Russell J, Rajagopalan B, Rangwala I, Duffy P, Barlow M (2013) Future climate: projected average. In: Garfin G, Jardine A, Merideth R, Black M, LeRoy S (eds) Assessment of climate change in the southwest United States: a report prepared for the national climate assessment. Island Press, Washington, DC, pp 101–125. https://doi.org/10.5822/978-1-61091-484-0_6
Chylek P, Li J, Dubey MK, Wang M, Lesins G (2011) Observed and model simulated 20th century Arctic temperature variability: Canadian earth system model CanESM2. Atmos Chem Phys Discuss 11(8):22893–22907. https://doi.org/10.5194/acpd-11-22893-2011
Collier M, Uhe P (2012) CMIP5 datasets from the ACCESS1.0 and ACCESS1.3 coupled climate models. CAWCR technical report no. 059. ISBN: 978-1-922173-29-4
Collins WD, Rasch PJ, Boville BA, Hack JJ, Williamson DL, Kiehl JT, Briegleb B, Bitz C, Lin SJ, Zhang M, Dai Y (2004) Description of the NCAR community atmosphere model (CAM 3.0). NCAR/TN-464+STR (June), 214 pp
Collins WJ, Bellouin N, Doutriaux-Boucher M, Gedney N, Halloran P, Hinton T, Hughes J, Jones CD, Joshi M, Liddicoat S, Martin G, O'Connor F, Rae J, Senior C, Sitch S, Totterdell I, Wiltshire A, Woodward S (2011) Development and evaluation of an earth-system model—HadGEM2. Geosci Model Dev 4(4):1051–1075. https://doi.org/10.5194/gmd-4-1051-2011
Crowley TJ (2000) Causes of climate change over the past 1000 years. Science 289(5477):270–277. Retrieved from http://science.sciencemag.org/content/289/5477/270.abstract
Delworth TL, Broccoli AJ, Rosati A, Stouffer RJ, Balaji V, Beesley JA, Cooke WF, Dixon KW, Dunne J, Dunne KA, Durachta JW, Findell KL, Ginoux P, Gnanadesikan A, Gordon CT, Griffies SM, Gudgel R, Harrison MJ, Held IM, Hemler RS, Horowitz LW, Klein SA, Knutson TR, Kushner PJ, Langenhorst AR, Lee H-C, Lin S-J, Lu J, Malyshev SL, Milly PCD, Ramaswamy V, Russell J, Schwarzkopf MD, Shevliakova E, Sirutis JJ, Spelman MJ, Stern WF, Winton M, Wittenberg AT, Wyman B, Zeng F, Zhang R (2006) GFDL's CM2 global coupled climate models. Part I: formulation and simulation characteristics. J Clim 19(5):643–674. https://doi.org/10.1175/JCLI3629.1
Donner LJ, Wyman BL, Hemler RS, Horowitz LW, Ming Y, Zhao M, Golaz J-C, Ginoux P, Lin S-J, Schwarzkopf MD, Austin J, Alaka G, Cooke WF, Delworth TL, Freidenreich SM, Gordon CT, Griffies SM, Held IM, Hurlin WJ, Klein SA, Knutson TR, Langenhorst AR, Lee H-C, Lin Y, Magi BI, Malyshev SL, Milly PCD, Naik V, Nath MJ, Pincus R, Ploshay JJ, Ramaswamy V, Seman CJ, Shevliakova E, Sirutis JJ, Stern WF, Stouffer RJ, Wilson RJ, Winton M, Wittenberg AT, Zeng F (2011) The dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component AM3 of the GFDL global coupled model CM3. J Clim 24(13):3484–3519. https://doi.org/10.1175/2011JCLI3955.1
Dufresne J-L, Foujols M-A, Denvil S, Caubel A, Marti O, Aumont O, Balkanski Y, Bekki S, Bellenger H, Benshila R, Bony S, Bopp L, Braconnot P, Brockmann P, Cadule P, Cheruy F, Codron F, Cozic A, Cugnet D, de Noblet N, Duvel J-P, Ethé C, Fairhead L, Fichefet T, Flavoni S, Friedlingstein P, Grandpeix J-Y, Guez L, Guilyardi E, Hauglustaine D, Hourdin F, Idelkadi A, Ghattas J, Joussaume S, Kageyama M, Krinner G, Labetoulle S, Lahellec A, Lefebvre M-P, Lefevre F, Levy C, Li ZX, Lloyd J, Lott F, Madec G, Mancip M, Marchand M, Masson S, Meurdesoif Y, Mignot J, Musat I, Parouty S, Polcher J, Rio C, Schulz M, Swingedouw D, Szopa S, Talandier C, Terray P, Viovy N, Vuichard N (2013) Climate change projections using the IPSL-CM5 earth system model: from CMIP3 to CMIP5. Clim Dyn 40(9):2123–2165. https://doi.org/10.1007/s00382-012-1636-1
Dunne JP, John JG, Shevliakova E, Stouffer RJ, Krasting JP, Malyshev SL, Milly PC, Sentman LT, Adcroft AJ, Cooke W, Dunne KA, Griffies SM, Hallberg RW, Harrison MJ, Levy H, Wittenberg AT, Phillips PJ, Zadeh N (2013) GFDL's ESM2 global coupled climate–carbon earth system models. Part II: carbon system formulation and baseline simulation characteristics. J Clim 26(7):2247–2267. https://doi.org/10.1175/JCLI-D-12-00150.1
Easterling DR, Kunkel KE, Arnold JR, Knutson T, LeGrande AN, Leung LR, Vose RS, Waliser DE, Wehner MF (2017) Precipitation change in the United States. In: Wuebbles DJ, Fahey DW, Hibbard KA, Dokken DJ, Stewart BC, Maycock TK (eds) Climate science special report: fourth national climate assessment, vol I. U.S. Global Change Research Program, pp 207–230. https://doi.org/10.7930/j0h993cc
Fogli PG, Iovino D (2014) CMCC–CESM–NEMO: toward the new CMCC earth system model. In: CMCC research paper, no 248. https://doi.org/10.2139/ssrn.2603176
Gordon H, Farrell SO, Collier M, Dix M, Rotstayn L, Kowalczyk E, Hirst T, Watterson I (2010) The CSIRO Mk3.5 Climate Model. In: CAWCR Technical Report, no 21. CAWCR, Melbourne, pp 1–74
Hazeleger W, Wang X, Severijns C, Ştefănescu S, Bintanja R, Sterl A, Wyser K, Semmler T, Yang S, Van den Hurk B (2012) EC-Earth V2.2: description and validation of a new seamless earth system prediction model. Clim Dyn 39(11):2611–2629
Hogg RV, Tanis EA (2009) Probability and statistical inference. Pearson Educational International, London
IPCC (2013) Climate change 2013: the physical science basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge
Janssen E, Wuebbles D, Kunkel K (2014) Observational and model based trends and projections of extreme precipitation over the contiguous United States. Earth's Future. https://doi.org/10.1002/2013EF000185
Janssen E, Sriver RL, Wuebbles DJ, Kunkel KE (2016) Seasonal and regional variations in extreme precipitation event frequency using CMIP5. Geophys Res Lett 43(10):5385–5393. https://doi.org/10.1002/2016GL069151
Ji D, Wang L, Feng J, Wu Q, Cheng H, Zhang Q, Yang J, Dong W, Dai Y, Gong D, Zhang RH, Wang X, Liu J, Moore JC, Chen D, Zhou M (2014) Description and basic evaluation of Beijing Normal University earth system model (BNU-ESM) version 1. Geosci Model Dev 7(5):2039–2064. https://doi.org/10.5194/gmd-7-2039-2014
Karl TR, Knight RW, Easterling DR, Quayle RG (1996) Indices of climate change for the United States. Bull Am Meteorol Soc 77(2):279–292. https://doi.org/10.1175/1520-0477(1996)077%3c0279:IOCCFT%3e2.0.CO;2
Knutson TR, Delworth TL, Dixon KW, Held IM, Lu J, Ramaswamy V, Schwarzkopf MD, Stenchikov G, Stouffer RJ (2006) Assessment of twentieth-century regional surface temperature trends using the GFDL CM2 coupled models. J Clim 19(9):1624–1651. https://doi.org/10.1175/JCLI3709.1
Knutson TR, Zeng F, Wittenberg AT (2013) Multimodel assessment of regional surface temperature trends: CMIP3 and CMIP5 twentieth-century simulations. J Clim 26(22):8709–8743. https://doi.org/10.1175/JCLI-D-12-00567.1
Kumar S, Kinter J III, Dirmeyer PA, Pan Z, Adams J (2013a) Multidecadal climate variability and the "warming hole" in North America: results from CMIP5 twentieth- and twenty-first-century climate simulations. J Clim 26(11):3511–3527. https://doi.org/10.1175/JCLI-D-12-00535.1
Kumar S, Merwade V, Kinter JL, Niyogi D (2013b) Evaluation of temperature and precipitation trends and long-term persistence in CMIP5 twentieth-century climate simulations. J Clim 26(12):4168–4185. https://doi.org/10.1175/JCLI-D-12-00259.1
Kunkel KE, Liang XZ (2005) GCM simulations of the climate in the central United States. J Clim 18(7):1016–1031. https://doi.org/10.1175/JCLI-3309.1
Kunkel KE, Easterling DR, Redmond K, Hubbard K (2003) Temporal variations of extreme precipitation events in the United States: 1895–2000. Geophys Res Lett. https://doi.org/10.1029/2003GL018052
Kunkel KE, Liang X-Z, Zhu J, Lin Y (2006) Can CGCMs simulate the twentieth-century "warming hole" in the central United States? J Clim 19(17):4137–4153. https://doi.org/10.1175/JCLI3848.1
Kunkel K, Stevens LE, Stevens SE, Sun L, Janssen E, Wuebbles D, Dobson JG (2013) Regional climate trends and scenarios for the U.S. National Climate Assessment part 9. Climate of the contiguous United States (January), 77 pp. Retrieved from http://www.nesdis.noaa.gov/technical_reports/149_Climate_Scenarios.html
Kunkel KE, Vose RS, Stevens LE, Knight RW (2015) Is the monthly temperature climate of the United States becoming more extreme? Geophys Res Lett 42(2):629–636. https://doi.org/10.1002/2014GL062035
Li L, Lin P, Yu Y, Wang B, Zhou T, Liu L, Liu J, Bao Q, Xu S, Huang W, Xia K, Pu Y, Dong L, Shen S, Liu Y, Hu N, Liu M, Sun W, Shi X, Zheng W, Wu B, Song M, Liu H, Zhang X, Wu G, Xue W, Huang X, Yang G, Song Z, Qiao F (2013) The flexible global ocean-atmosphere-land system model, grid-point version 2: FGOALS-g2. Adv Atmos Sci 30(3):543–560. https://doi.org/10.1007/s00376-012-2140-6
Marsh DR, Mills MJ, Kinnison DE, Lamarque J-F, Calvo N, Polvani LM (2013) Climate change from 1850 to 2005 simulated in CESM1(WACCM). J Clim 26(19):7372–7391. https://doi.org/10.1175/JCLI-D-12-00558.1
Meehl GA, Tebaldi C (2004) More intense, more frequent, and longer lasting heat waves in the 21st century. Science 305(5686):994–997. Retrieved from http://science.sciencemag.org/content/305/5686/994.abstract
Meehl GA, Arblaster JM, Branstator G (2012) Mechanisms contributing to the warming hole and the consequent U.S. east–west differential of heat extremes. J Clim 25(18):6394–6408. https://doi.org/10.1175/JCLI-D-11-00655.1
Melillo JM, Richmond TC, Yohe GW (eds) (2014) Climate change impacts in the United States: the third national climate assessment. US Global Change Research Program, 841 pp. https://doi.org/10.7930/j0z31WJ2
Menne MJ, Durre I, Vose RS, Gleason BE, Houston TG (2012) An overview of the global historical climatology network-daily database. J Atmos Ocean Technol 29(7):897–910. https://doi.org/10.1175/JTECH-D-11-00103.1
Pope VD, Gallani ML, Rowntree PR, Stratton RA (2000) The impact of new physical parametrizations in the Hadley Centre climate model: HadAM3. Clim Dyn 16(2):123–146. https://doi.org/10.1007/s003820050009
Qiao F, Song Z, Bao Y, Song Y, Shu Q, Huang C, Zhao W (2013) Development and evaluation of an earth system model with surface gravity waves. J Geophys Res Oceans 118(9):4514–4524
Sakamoto TT, Komuro Y, Nishimura T, Ishii M, Tatebe H, Shiogama H, Hasegawa A, Toyoda T, Mori M, Suzuki T, Imada Y, Nozawa T, Takata K, Mochizuki T, Ogochi K, Emori S, Hasumi H, Kimoto M (2012) MIROC4h—a new high-resolution atmosphere-ocean coupled general circulation model. J Meteorol Soc Jpn Ser II 90(3):325–359. https://doi.org/10.2151/jmsj.2012-301
Schmidt GA, Kelley M, Nazarenko L, Ruedy R, Russell GL, Aleinov I, Bauer M, Bauer SE, Bhat MK, Bleck R, Canuto V, Chen Y-H, Cheng Y, Clune TL, Del Genio A, de Fainchtein R, Faluvegi G, Hansen JE, Healy RJ, Kiang NY, Koch D, Lacis AA, LeGrande AN, Lerner J, Lo KK, Matthews EE, Menon S, Miller RL, Oinas V, Oloso AO, Perlwitz JP, Puma MJ, Putman WM, Rind D, Romanou A, Sato M, Shindell DT, Sun S, Syed RA, Tausnev N, Tsigaridis K, Unger N, Voulgarakis A, Yao M-S, Zhang J (2014) Configuration and assessment of the GISS ModelE2 contributions to the CMIP5 archive. J Adv Model Earth Syst 6(1):141–184. https://doi.org/10.1002/2013MS000265
Scoccimarro E, Gualdi S, Bellucci A, Sanna A, Fogli PG, Manzini E, Vichi M, Oddo P, Navarra A (2011) Effects of tropical cyclones on ocean heat transport in a high-resolution coupled general circulation model. J Clim 24(16):4368–4384. https://doi.org/10.1175/2011JCLI4104.1
Stevens B, Giorgetta M, Esch M, Mauritsen T, Crueger T, Rast S, Salzmann M, Schmidt H, Bader J, Block K, Brokopf R, Fast I, Kinne S, Kornblueh L, Lohmann U, Pincus R, Reichler T, Roeckner E (2013) Atmospheric component of the MPI-M earth system model: ECHAM6. J Adv Model Earth Syst 5(2):146–172. https://doi.org/10.1002/jame.20015
Taylor KE, Stouffer RJ, Meehl GA (2012) An overview of CMIP5 and the experiment design. Bull Am Meteorol Soc 93(4):485–498. https://doi.org/10.1175/BAMS-D-11-00094.1
Voldoire A, Sanchez-Gomez E, Salas y Mélia D, Decharme B, Cassou C, Sénési S, Valcke S, Beau I, Alias A, Chevallier M, Déqué M, Deshayes J, Douville H, Fernandez E, Madec G, Maisonnave E, Moine M-P, Planton S, Saint-Martin D, Szopa S, Tyteca S, Alkama R, Belamari S, Braun A, Coquart L, Chauvin F (2013) The CNRM-CM5.1 global climate model: description and basic evaluation. Clim Dyn 40(9):2091–2121. https://doi.org/10.1007/s00382-011-1259-y
Volodin EM, Dianskii NA, Gusev AV (2010) Simulating present-day climate with the INMCM4.0 coupled model of the atmospheric and oceanic general circulations. Izv Atmos Ocean Phys 46(4):414–431. https://doi.org/10.1134/S000143381004002X
Vose RS, Applequist S, Squires M, Durre I, Menne CJ, Williams CN, Fenimore C, Gleason K, Arndt D (2014) Improved historical temperature and precipitation time series for U.S. climate divisions. J Appl Meteorol Climatol 53(5):1232–1251. https://doi.org/10.1175/JAMC-D-13-0248.1
Watanabe M, Suzuki T, O'ishi R, Komuro Y, Watanabe S, Emori S, Takemura T, Chikira M, Ogura T, Sekiguchi M, Takata K, Yamazaki D, Yokohata T, Nozawa T, Hasumi H, Tatebe H, Kimoto M (2010) Improved climate simulation by MIROC5: mean states, variability, and climate sensitivity. J Clim 23(23):6312–6335. https://doi.org/10.1175/2010JCLI3679.1
Watanabe S, Hajima T, Sudo K, Nagashima T, Takemura T, Okajima H, Nozawa T, Kawase H, Abe M, Yokohata T, Ise T, Sato H, Kato E, Takata K, Emori S, Kawamiya M (2011) MIROC-ESM 2010: model description and basic results of CMIP5-20c3m experiments. Geosci Model Dev 4(4):845–872. https://doi.org/10.5194/gmd-4-845-2011
Wu T, Song L, Li W, Wang Z, Zhang H, Xin X, Zhang L, Li J, Wu F, Liu Y, Zhang F, Shi X, Chu M, Zhang J, Fang Y, Wang F, Lu Y, Liu X, Wei M, Liu Q, Zhou W, Dong M, Zhao Q, Ji J, Li L, Zhou M (2014) An overview of BCC climate system model development and application for climate change studies. J Meteorol Res 28(1):34–56. https://doi.org/10.1007/s13351-014-3041-7
Wuebbles DJ, Kunkel K, Wehner M, Zobel Z (2014) Severe weather in United States under a changing climate. Eos Trans Am Geophys Union 95(18):149–150. https://doi.org/10.1002/2014EO180001
Wuebbles DJ, Fahey DW, Hibbard KA, Dokken DJ, Stewart BC, Maycock TK (eds) (2017) Climate science special report: fourth national climate assessment, vol I. US Global Change Research Program, Washington, DC, USA. https://doi.org/10.7930/j0j964j6
Yukimoto S (2011) Meteorological research institute earth system model version 1 (MRI-ESM1): model description
Yukimoto S, Adachi Y, Hosaka M, Sakami T, Yoshimura H, Hirabara M, Tanaka TY, Shindo E, Tsujino H, Deushi M, Mizuta R, Yabu S, Obata A, Nakano H, Koshiro T, Ose T, Kitoh A (2012) A new global climate model of the meteorological research institute: MRI-CGCM3—model description and basic performance. J Meteorol Soc Jpn Ser II 90A:23–64. https://doi.org/10.2151/jmsj.2012-A02
© Springer-Verlag GmbH Germany, part of Springer Nature 2019
1. Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, USA
2. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA
3. Department of Geography, Portland State University, Portland, USA
4. Cooperative Institute for Climate and Satellites – North Carolina, North Carolina State University, Asheville, USA
Lee, J., Waliser, D., Lee, H. et al. Clim Dyn (2019). https://doi.org/10.1007/s00382-019-04875-1