This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. math exams: July 2011 GRE Practice Questions -3 If $m=\frac{\sqrt{5}-3}{\sqrt{2}+1}$, then for which one of the following equals $m-3$ $\sqrt{10}-3\sqrt{2}-\sqrt{5}+3$ $\sqrt{10}-3\sqrt{2}-\sqrt{5}$ $\sqrt{10}-3\sqrt{2}+\sqrt{5} $ $\sqrt{10}-3\sqrt{2}-\sqrt{5}-6 $ $\sqrt{10}+3\sqrt{2}-\sqrt{3} $ If $y < 0$ and $x$ is 7 more than the square of $y$, which one of the following expresses $y$ in terms of $x$? $y=-\sqrt{x-7}$ $y=\sqrt{x-7} $ $y=\sqrt{x+7} $ $y=\sqrt{x^{2}-7}$ $y=-\sqrt{x^{2}-7}$ If $\sqrt[m]{125}=5^{3m}$ and $4^{m} > \frac{1}{2}$, then what is the value of $m$? $-\frac{1}{5}$ $\frac{1}{5}$ $1$ $x^{2}(x^{5})^{2}$ $(x^{4})^{3}$ $17^{\frac{1}{x}-\frac{1}{y}}$ $0 < x < y$ $17^{x-y}$ GMAT Practice Questions -5 If $x \ne 3$, then $\frac{3x^{2}+18x+27}{(x+3)^{2}}$ $ 9$ $ 27$ $81 $ If $\frac{x+5}{x-5}=y$, what is the value of $x$ in terms of $y$? $-5-y$ $\frac{5}{y} $ $\sqrt{y^{2}+5} $ $\frac{-5y-5}{1-y} $ $\frac{-5y+5}{1-y} $ $\frac{1-\frac{1}{3}}{2}$ $\frac{2}{3} $ $\frac{\frac{1}{x}}{\frac{1}{y}-z}$ $\frac{xy}{y-xyz}$ $\frac{1}{xy-xyz} $ $\frac{y}{xyz+x} $ $\frac{y}{x-xyz} $ $\frac{x-xyz}{y} $ The average of $x$, $\frac{1}{x}$ and $\frac{1}{x^{2}}$ is $\frac{1+x^{2}}{3x}$ $\frac{1+x^{2}+x^{3}}{3x^{2}} $ $ \frac{1+x+x^{2}}{3x^{2}}$ $\frac{1-x+x^{2}}{3x} $ $\frac{1+x^{2}+x^{3}}{3} $ $\frac{1}{5}$ of $.01$ percent equals : $.00002$ $.0002 $ $.002 $ $.02 $ $.2 $ $\frac{2^{a+1}-2^{a-1}}{2^{a+1}+2^{a-1}}$ If $x$ is $\frac{50}{51}$ of $\frac{51}{52}$ and $y=\frac{50}{51}$, then $\frac{x}{y}=$ $\frac{50}{51}$ $\frac{50}{52} $ $\frac{2550}{2500}$ The decimal $.01$ is how many times greater than the decimal $(.0001)^{4}$ $10^{6}$ $10^{8} $ $10^{10} $ Let $a=.79$, $b=\sqrt{.79}$ and $c=(.79)^{2}$, then which of the following is true? $a < b < c$ $c < b < a$ $c < a < b$ $b < a < c$ Posted by Ssafini : at 9:56 AM 0 comments If $n$ is a positive integer and $(n+3)(n+5)$ is odd, then $(n+4)(n+6)$ must be a multiple of which one of the following? The number of prime numbers divisible par 2 plus the number of prime numbers divisible by 5 is If $13x+17=0$, then $-13|x|$ equals which one of the following? $-\frac{17}{13}$ $-17 $ Which one of the following is divisible by both 2 and 3? $ 3096$ $1616 $ Which one of the following equals the product of exactly two prime numbers? $13.6$ $17.21$ If $m$, $n$, and $p$ are different prime numbers, then the least common multiple of the the three numbers must equal which one of the following? $mn(p+n)$ $m+n+p$ $m+np$ $m+n-p$ $pnm$ Each of the positive integer $a$ and $b$ ends with the digit 3. With which one of the following numbers does $a-b$ ends? If $p-10$ is divisible by 4, then which one of the following must be divisible by 4? $p$ $p-2$ $p+3$ If $a+5a$ is 6 less than $b+5b$, then $a-b=$ If $w \ne 0$, $w=5x=\sqrt{5}y$, what is the value of $w-x$ in terms of $y$? $5y$ $\frac{\sqrt{5}}{5}y$ $\sqrt{5y}$ $\frac{5}{4\sqrt{5}}y$ $\frac{4\sqrt{5}}{5}y $ If $(a-1)(a+5)(a-7)=0$, and $a < 0$, then $a=$ A aytem of equations is as shown below $x-l=8$ $x+m=7 $ $x-p=6 $ $x+q=5 $ What is the value of $l+m+p+q$? 
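As a worked example, the closing system-of-equations item can be solved directly (reading the question as asking for the sum $l+m+p+q$ as written): the four equations give $l=x-8$, $m=7-x$, $p=x-6$ and $q=5-x$, so $l+m+p+q=(x-8)+(7-x)+(x-6)+(5-x)=-2$, independent of $x$.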
If $\frac{a^{2}-25}{20a}=\frac{a-5}{a+5}$, $a=5 \ne 0$, and $a \ne 0$, then $a=$ If $a$, $b$, $c$,and $d$ are not equal to 0 or 1, and if $a^{x}=b$, $b^{y}=c$, $c^{z}=d$ and $d^{t}=a$, then $xyzt=$ $ abc$ $abcd $ $a^{b^{c^{d}}} $ If $(x-3y)(x+3y)=-9$ and $(3x-y)(3x+y)=-1$, then $\frac{x^{2}+y^{2}}{x^{2}-y^{2}}=$ $-1 $ If $p-q=5$ and $pq=11$, then is the value of $\frac{1}{p^{2}}+\frac{1}{q^{2}}$? $\frac{25}{121}$ $-\frac{47}{121} $ $\frac{5}{11}$ $-\frac{5}{11}$ If n is an integer, which of the following CANNOT be an integer? $\frac{n+2}{2}$ $\sqrt{n+1} $ $\frac{3}{n+2} $ $\sqrt{n^{2}+5} $ $\sqrt{\frac{1}{n^{2}+3}} $ If n is an integer, which one of the following is an odd integer? $n^{2}$ $\frac{n+3}{2} $ $-2n-8 $ $n^{2}-3 $ If $x$, $y$, $z$ and $t$ are positive integers such that $x < y < z < t$ and $x+y+z+t=10$, then what is the value of $t$? The remainder when the positive integer $m$ is divided by $n$ is r. What is the remainder when $3m$ is divided by $3n$? $r$ $3r$ $3n$ $m-3n$ $3(m-nr)$ If $(x-5)(x+4)=(x-4)(x+5)$, then $x=$ $ -4$ If $(3x-1)^{2}=121$, then which one of the following COULD equal x? $\frac{10}{3}$ $\frac{13}{3} $ $-\frac{10}{3} $ (The average of 5 consecutive integers starting from 17)-(The average of 6 consecutive integers starting from 17)= If $n^{3}+n^{2}-n-2=-1$, then which one of the following could be the value of $n$ Solve the the system of equations given? $x+3y=8$ $-1,4$ $1,3$ $1,-3$ If $(a-b)(a+b)=7 \times 3$ then $a$ and $b$ equals respectively? $-5,-2$ $5,3 $ $-3,-10 $ $4x+3=-1$, then $x-1=$ $\frac{7}{\frac{1}{6}+1}$ $32^{17}=2^{3a+4}$, then $a=$ $ 243$ $a=5b$, $b^{2}=3c$ and $5c=d$, then $\frac{a^{2}}{d}=$ $15$ $(x-3)(x+2)=(x+4)(x-5)$ There is no solution $\frac{1}{2}$ of $0.02$ percent equals $0.1$ $0.01$ $0.001$ $0.0001$ If the average of $2x$, $3x$ and $5x$ is 3, then $x=$ $-2^{5}-(1-x^{2})^{2}$ $-x^{4}+2x^{2}+31$ $-x^{4}+2x^{2}-31$ $-x^{4}-2x^{2}+33$ A truckmaker sells six models of cars, and each model comes with 7 options. How many different types of trucks does the truckmaker sell? If $a$, $b$, and $c$ are consecutive integers and $a < b < c$, which of the following must be true? $b^{3}$ is a prime number $\frac{a+c}{2}=b$ $\frac{c-a}{2}=1$ is odd $\frac{ab}{3}$ $c-a=b$ If $C=\frac{7r}{3s}$ and $s \ne 0$, what is the value of C? $r=6s$ $r=\frac{3}{7}$ If $7a=5b$ and $ab \ne 0$, what is the ratio of $\frac{a}{5}$ to $\frac{b}{7}$ If the diameter of a circle is $18$, then the area of the circle is $9 \pi$ $18 \pi $ $324 \pi $ For any numbers $a$ and $b$, $a*b=ab(5-b)$. If $a$ and $a*b$ both represent positive numbers, which of the following could be a value of $b$? $\sqrt{7^{2}+5^{2}-10}$ $4\sqrt{2}$ $\sqrt{117} $ What is the perimeter of a square with area $\frac{7p}{4}$? $\frac{7p}{4}$ $\frac{7p^{2}}{4} $ $7p$ $7p^{2}$ $\sqrt{169}+\sqrt{256}+\sqrt{361}=$ If $45$ percent of $500$ is $50$ percent of x, then $x=$ $350$ $1,200$ $(5^{3}-1)(5^{3}+1)(5^{6}+1)(5^{12}+1)$ $(5^{24}-1)$ $(5^{24}+1)$ $5^{3}(5^{24}-1) $ How many minutes does it take to travel $160$ miles at $200$ miles per hour? If $5x+7=11$, then $5x-4=0$ The smallest prime number greter than 50 is $\sqrt{(120-39)(68+13)}$ $(9^{y})^{3}$ $9^{y+3}$ $9^{6y} $ $3^{3y^{2}} $ $9^{y^{3}} $ $27^{13}=3^{a}$, then $a=$ $x-y=h$, then $5x^{2}-10xy+5y^{2}=$ $5h^{2}$ $10h$ $h^{2}$ $5h$ $10h^{2} $ $27x^{2}-12$ $3(3x-2)(3x+2)$ $(9x-2)(9x+2)$ If the average of $3x$ and $9x$ is 6, then $x=$ What percent of $5x$ is $10y$ if $x=2y$? 
$20 \%$ SAT Practice Questions - 7 If $3x+7=5x+1$ $3.5 $ $ 4.5$ What is the next term in the sequence: 6, 3, 10, 7, 14, 11, ...? The area of the basis of Cylinder A is 8 times the area of the basis of Cylinder B. What is the radius of Cylinder A in terms of the radius of Cylinder B? $r_{A}=\frac{r_{B}}{8}$ $r_{A}=8r_{B} $ $r_{A}=2\sqrt{2}r_{B} $ $r_{A}=\frac{r_{B}}{4} $ If $x^{2}-2xy+y^{2}=121$, $x-y=$ If c is equal to the sum b and twice of a, which of the following is the average of b and c? $a$ $b $ $c $ $a+b $ $b+c $ $f(x)=4x+8$, $f(c+3)=8$, $f(c)=$ $5^{n}.125^{m}=78,125$, $n+3m=$ $\frac{3b^{2}}{a^{3}}=27a^{2}$ $3a^{3}$ $9a^{3} $ $\frac{1}{9a^{3}} $ $\frac{1}{a^{3}} $ Which of the following statements must be true about the x and y coordinates that satisfy the equation $ay-ax=0$, $a \ne 0$, $x \ne 0$ ,$y \ne 0$ $x>y$ $xy=1 $ $x=-y $ $y>x $ $x=y $ What is the length of the side of a cube whose volume is 125 cubic units? If $\frac{1}{2}$ of a number is 3, what is $\frac{1}{3}$ of the number? If $x=-1$, then $x^{5}+x^{4}+x^{3}+x^{2}-5=$ $-10$ If $f(x)=2^{x}+7x$, then $f(4)=$ If $x-3=y$, then $(y-x)^{3}=$ $ -54$ If $a>b$, and $\frac{a}{b}>0$, which of the following is true? $a>0$ $b>0$ $ab>0$ I only II only III only I and II only I and III only Which of the following is equal to $(\frac{x^{-7}y^{-5}}{x^{-3}y^{3}})^{-2}$ $x^{8}y^{16}$ $\frac{x^{8}}{y^{16}} $ $\frac{y^{16}}{x^{8}} $ $x^{4}y^{8}$ What is the slope of the line passing through the points (-1,7) and (3,5)? $-\frac{1}{2} $ The symbol $\otimes$ represents a binary operation defined as $a \otimes b=3^{a}+2^{b}$, what is the value of $(-2)\otimes (-3)$ $ \frac{17}{72}$ If $\sqrt{\frac{49}{x}}=\frac{7}{3}$ A bike that originally sold for $150 \$$ was on sale for $120 \$$. What was the rate of discount? $25 \% $ If $ 0.10 < x < 0.12$, which of the following could be a value of $x$? $9 \%$ If $\frac{xyz}{t}=w$ and $x$ and $t$ are doubled, what happens to the value of w The value of $w$ is two times smaller. The value of $w$ is halved. The value of $w$ is four times greater. The value of $w$ is doubled The value of $w$ remains the same. What is the tenth term of the pattern below? $\frac{3}{2}$, $\frac{9}{4}$, $\frac{27}{8}$, $\frac{81}{16}$,... $\frac{3}{2^{10}}$ $(\frac{3}{2})^{10}$ $\frac{3^{10}}{2}$ $\frac{300}{200}$ If $a > 0$ and $b < 0$, which of the following is always negative? $-ab$ $a+b$ $|a|-|b|$ $\frac{a}{b}$ $b^{a}$ Which of the following number pairs is in the ratio $3:7$? $\frac{1}{3}$,$\frac{1}{7}$ $7$,$\frac{1}{3}$ If $x=-\frac{1}{4}$, then $(-x)^{-3}+(\frac{1}{x})^{2}=$ For which of the following values of $x$ is the relationship $x < x^{2} < x^{3}$ true? $x^{2}+2xy+y^{2}=169$, $-|-(x+y)|=$ How many distincts factors does 900 have? more than $5$ If $x=-\frac{1}{7}$, then which of the following is always positive for $n > 0$? $x^{n}$ $n^{x}$ $nx$ $n-x$ $\frac{x}{n} $ For how many positive integers, $n$, is true that $n^{2} \leq 3n$ If $a^{4}=16$, then $3^{a}$ $\sqrt{20}\sqrt{5}=$ $5\sqrt{10}$ $10\sqrt{5}$ The sum of three positive consecutive even integers is x. What is the value of the middle of the three integers? $\frac{x}{3}-1$ $\frac{x}{3}+2$ $3x$ $\frac{x-2}{3}$ $\frac{x}{3}$ What is the average of $5^{10}$, $5^{20}$, $5^{30}$, $5^{40}$ and $5^{50}$? $5^{9}+5^{19}+5^{29}+5^{39}+5^{49}$ $5^{30}$ $5^{149}$ Which of the following is equal to $(5^{6} \times 5^{9})^{10}$? $25^{150}$ What is the value of $3^{\frac{1}{3}} \times 3^{\frac{2}{3}} \times 3^{\frac{3}{3}}$? 
How many integers satisfy the inequality $|x| < 2 \pi$. What is the average of $5^{a} \times 5^{b}=5^{300}$ If $5^{a}5^{b}=\frac{5^{c}}{5^{d}}$, what is d in terms of $a$, $b$ and $c$? $\frac{c}{a+b}$ $c+ab$ $c-a-b$ $c+a-b$ $\frac{b}{ac}$ SAT Practice Questions - 3 & Answer Key Which of the following is equivalent to $5^{9}$ $5^{4}+5^{4}+5^{1}$ $5^{2} \times 5^{4} \times 5^{3}$ $\frac{10^{9}}{2^{10}}$ $(5^{4})^{5}$ $\frac{5^{5}}{5^{4}}$ Which of the following is equivalent to $\sqrt{289}$ Which of the following is a perfect square? Which of the following is equivalent to $3\sqrt{10}$ $3\sqrt{5} \times \sqrt{5}$ $\sqrt{90}$ $3\sqrt{5} + 3\sqrt{2}$ $3\sqrt{5}+3\sqrt{5}$ $\frac{3\sqrt{2}}{\sqrt{5}}$ Which of the following is equivalent to $10^{\frac{2}{5}}$ $\sqrt[5]{5}$ $\sqrt[5]{10}$ $\sqrt[5]{100}$ $\sqrt[5]{1000}$ Which of the following fractions is equivalent to $\frac{3}{6} \times \frac{2}{5}$? Which of the following expressions is equivalent to $\frac{7}{6} \div \frac{5}{2}$? $\frac{7}{30}+\frac{2}{30}$ $\frac{9}{6}+\frac{9}{5}$ $\frac{1}{7}+\frac{2}{35}$ If $3^{x}=729$, what is $x^{3}$? What is the value of $||4|-|-7||$ What is the value of $(\sqrt{3}+\sqrt{5})^{2}-(\sqrt{8})^{2}$ Posted by Ssafini : at 10:10 AM 0 comments Solve $15x-32=18-10x$ Solve $\frac{x}{8}=\frac{x-2}{4}$ Which of the following are the factors of $t^{2}+8t+16$ $(t-4)(t-4)$ $(t+8)(t+2)$ $(t+1)(t+16)$ Solve for a in term of b, if $6a+12b=24$ $24-12b$ $2-\frac{1}{2}b$ $4-2b$ $2b-4$ If $ax+2b=5c-dx$, what does x equal in terms of a, b, c, and d? $5c-d-2b-a$ $a-d$ $(5c-2b)(a-d)$ $\frac{5c-d-2b}{a}$ $\frac{5c-2b}{a-d}$ If $(z-9)(z+3)=0$, what are the two possible values of z? $z=-9$ abd $z=3$ $z=9$ abd $z=0$ $z=0$ abd $z=-3$ $z=-12$ abd $z=12$ If $z^{2}-6z=16$, which of the following could be a value of $z^{2}+6z$? If $3\sqrt{ a}-10=2$, what is the value of a? Given $\frac{(x+6)(x^{2}-2x-3)}{x^{2}+3x-18}=10$, find the value of x. Solve the equation $\frac{5x}{8}-\frac{3x}{5}=2$. CLEP Precalculus Practice Questions -1 Evaluate the expression: $1000(2^{-1.5})$ $2828,427$ $2000.00$ $353.55$ Evaluate the expression: $\log_{49}7$ Place into standard form: $(5+i)-(7-7i)$ $-2+8i$ $2+8i$ $12+8i$ $2-8i$ Find the domaine of the function: $f(x)=\sqrt{-6x+12}$ $x \geq 2$ $x \leq -2$ $x \leq 2$ What is the value of: $3\ln e^{6}$ What is the value of: $\csc (150 deg)$ Solve the equation: $x^{2}-10x+50=0$ $5+5i$ or $5-5i$ $5+i$ or $5-i$ What is the value of $x$: $\log_{10}x=-3$ Factor the expression: $x^{2}-3ix-2$ $(x+i)(x+2i)$ $(x+i)(x-2i)$ $(x-i)(x-2i)$ $(-x-i)(x-2i)$ $(x-1)(x-2)$ Identify the horizontal and vertical asymptotes for: $\frac{5x^{2}}{x^{2}-9}$ $y=5$, $x=-3$, $x=3$ $y=-5$, $x=-3$ $y=5$, $x=3$ $y=5$, $x=-3$ $y=-5$, $x=3$, $x=-3$ CLEP College Algebra Practice Questions -- Functions and their properties If $f(x)=-x^{3}+2x+1$ what is $f(-3x)$ $27x^{3}-6x+1$ $-27x^{3}-6x+1 $ $-27x^{3}+6x+1 $ $27x^{3}-6x-1 $ $-3x^{3}-6x+1 $ If $f(x)=9x+3$ then $f^{-1}(x)=$ $\frac{1}{9}x-\frac{1}{3}$ $\frac{1}{3}+\frac{1}{9} $ $9x-3 $ $\frac{1}{3}x+1 $ $x-\frac{1}{9} $ If $f(x,y)=\frac{x \log x}{y \log y}$ then $f(8,2)=$ $ \frac{3}{2}$ $\log 2 $ $\log_{5}(\frac{1}{125})$ The function $f$ is defined by $f(x)=\frac{1}{1-x}$. For what values of $x$ is $f(f(x))$ undefined? $\{0\}$ $\{1\} $ $\{-1,2\} $ $\{0,1\} $ If $f(x)=\frac{7x-5}{2}$, find the solution set for $f(x)>3x$ $\{x/ x>5 \}$ $\{x/ x<5 \} $ $\{ x/ x>3 \} $ $\{ x/ x \leq -3 \} $ If $f(x)=3x+7$ and $g(x)=2x-1$, what is $f(g(1))$? 
Find the equation for the line passing through $(4,2)$ and $(-1,3)$ $5y-x=14$ $5y+x=-14$ $-5y+x=14$ $y+5x=14$ $5y+x=14$
Solve $2^{7x}=4^{2x-1}$
Find $\log_{3} 81$
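Worked answers for the two closing items, as a quick check: from $2^{7x}=4^{2x-1}=2^{2(2x-1)}$ we get $7x=4x-2$, hence $x=-\frac{2}{3}$; and $\log_{3}81=\log_{3}3^{4}=4$.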
ICHEP 2014 Timetable (all) Conference Home Page Proceedings - Accepted papers info@ichep2014.es Neutrino Physics Download current session: Neutrino Physics: Neutrinoless Double Beta Decay silvia capelli (universita` degli Studi e Sezione INFN di Milano Bicocca) Neutrino Physics: Three-Neutrino Oscillations, Part I Steve Brice (Fermilab) Neutrino Physics: Three-Neutrino Oscillations, Part II Jun Cao () Neutrino Physics: Neutrino Mass, and Neutrinos from the Cosmos and at Colliders Amol Dighe (Tata Institute of Fundamental Research) Neutrino Physics: Beyond Three-Neutrino Oscillations Michel Sorel (IFIC (CSIC - U. Valencia)) Neutrino Physics: Neutrinos and Nuclei Allocated time includes time for questions as follows: 15 (13+2), 20 (17+3), 30 (25+5) Contribution list Timetable 849. The GERDA Experiment for the Search of Neutrinoless Double Beta Decay Mr. Giovanni Benato (University of Zurich) The search for neutrinoless double beta decay ($0\nu\beta\beta$) is playing since two decades a major role in astroparticle physics. The discovery of this process would demonstrate the violation of lepton number conservation and the presence of a Majorana term in the neutrino mass. The GERmanium Detector Array ({\sc{gerda}}) experiment, located at the Gran Sasso underground laboratory in... 657. Latest results from KamLAND-Zen second phase Dr. Yoshihito Gando (Research Center for Neutrino Science, Tohoku Univ.) KamLAND-Zen is an experiment for neutrinoless double beta decay search with xenon 136 based on large liquid scintillator detector KamLAND. The first phase of the experiment was operated from Oct. 12, 2011 to June 14, 2012 and we set lower limit for the neutrino-less double beta decay half-life , T1/2(0nu) > 1.9*10^{25} yr. The combined result of KamLAND-Zen and EXO data give T1/2(0nu) >... 733. Neutrinoless double beta decay with EXO-200 Prof. Michelle Dolinski (Drexel University) See attached PDF for proper formatting. EXO-200 is one of the most sensitive searches for neutrinoless double beta decay in the world. The experiment uses 175 kg of enriched liquid xenon in an ultralow background time projection chamber installed at the Waste Isolation Pilot Plant, a salt mine with a 1600 m water equivalent overburden. This detector has demonstrated excellent energy... 148. Status of the NEXT experiment juan jose gomez cadenas (IFIC) Neutrinos may be Majorana particles. If so, neutrinoless double beta decay processes could be observed by the next-generation bb0nu experiments. I will briefly discuss one of the most promising ideas in the field, the use of a High Pressure Gas Xenon TPC (HPGXe) with electroluminescence gain and optical readout. A 100 kg incarnation of such a device will start operations at the Canfranc... 631. The SNO+ Experiment for Neutrinoless Double Beta Decay Dr. Valentina Lozza (TU Dresden) The SNO+ experiment will employ 780 tonnes of liquid scintillator to explore a variety of fundamental physics. Chief amongst these will be a sensitive search for neutrinoless double beta decay, to be achieved by loading 130Te into the scintillator using a new technique. A combination of purification, passive shielding and background tagging is expected to leave 8B solar neutrinos and 2nbb... 61. Status of the CUORE and results from the CUORE-0 neutrinoless double beta decay experiments Dr. Monica Sisti (Università degli Studi di Milano-Bicocca and INFN Milano-Bicocca) CUORE is a 741 kg array of TeO2 bolometers for the search of neutrinoless double beta decay of Te-130. 
The detector is being constructed at the Laboratori Nazionali del Gran Sasso, Italy, where it will start taking data in 2015. If the target background of 0.01 counts/(keV kg y) will be reached, in five years of data taking CUORE will have an half life sensitivity of about 10^26 y. CUORE-0 is... 443. Current status and perspectives of the LUCIFER experiment Dr. Filippo Orio (INFN - Sezione di Roma) The quest for Neutrinoless Double Beta Decay ($0\nu$DBD) represents one of the most promising ways to investigate the neutrino mass nature, Dirac or Majorana. A convincing detection claim demands for detectors with excellent energy resolution and almost zero background in the energy region of interest. These features can be obtained with the approach of the LUCIFER project, funded by an... 445. Scintillating bolometers based on ZnMoO4 and Zn100MoO4 crystals to search for 0nu2b decay of 100Mo (LUMINEU project): first tests at the Modane Underground Laboratory Dr. Denys Poda (Centre de Sciences Nucléaires et de Sciences de la Matière (CSNSM)) Neutrinoless double beta (0nu2b) decay is a powerful tool to investigate neutrino properties, weak interaction, and effects beyond the Standard Model of particle physics. The main aim of the LUMINEU project (Luminescent Underground Molybdenum Investigation for NEUtrino mass and nature) is to realize a pilot experiment to search for 0nu2b decay of 100Mo with the help of zinc molybdate (ZnMoO4)... 247. Latest results of NEMO-3 experiment and present status of SuperNEMO Dr. Hector GOMEZ MALUENDA (Laboratoire de l'Accélérateur Linéaire (LAL)) The NEMO-3 experiment looked for neutrinoless double beta decay processes from 2003 to 2011 at the Modane Underground Laboratory. Seven isotopes were studied by the simultaneous recording of the energy and track of the event, standing out 100-Mo and 82-Se since they were the most massive ones. No evidence for neutrinoless double beta decay has been observed, leading to set limits on the... 354. Updated three-neutrino oscillation parameters from global fits Dr. Mariam Tórtola (IFIC, Valencia) In this work we present an updated global fit to neutrino oscillation data within the three-flavour framework. The most recent data from solar and atmospheric neutrino experiments are included in our analysis together with the latest results from the long-baseline accelerator experiments T2K and MINOS and the recent measurements of reactor neutrino disappearance reported by Double Chooz,... 789. Latest results on nu_mu -> nu_tau oscillations from the OPERA experiment Dr. Masahiro Komatsu (Nagoya University, JAPAN) The OPERA experiment is designed to prove neutrino oscillations in the nu_mu to nu_tau channel through the direct observation of the tau lepton in tau neutrino charged current interactions. The experiment has accumulated data for five years, from 2008 to 2012, with the CERN Neutrinos to Gran Sasso (CNGS), an almost pure nu_mu beam. In the last two years, a very large amount of the data... 479. Precision measurement of muon neutrino disappearance by T2K Alexander Himmel (Duke University) Please see attached file. 478. Initial probe of delta_CP by T2K with muon neutrino disappearance and electron neutrino appearance Mrs. Lorena Escudero (IFIC) Please see attached file 502. Commissioning and Early Performance of the NOvA Detectors Prof. 
Jim Musser (Indiana university) NOvA, the NuMI Off-Axis electron Neutrino Appearance experiment is designed to carry out studies of numu->nue oscillation, char- acterized by the mixing angle theta-13. A complementary pair of detectors have been constructed roughly 14 mrad off the beam axis to optimize the purity of the electron neutrino signal at the far detector against neutral current backgrounds. The far detector is... 855. The need for an early anti neutrino run for NOvA Prof. Sankagiri Umasankar (Indian Institute of Technology Bombay) The moderately large value of theta13, measured recently by reactor experiments, is very welcome news for the future neutrino experiments. In particular, the NOvA experiment, with 3 years of neutrino run followed by an equal anti-neutrino run, will be able to determine the mass hierarchy if one of the following two favourable combinations is true: normal hierarchy with the CP phase in... 512. LBNE in the Precision Era of Neutrino Oscillation: Status and Schedule Dr. Zelimir Djurcic (Argonne National Laboratory) LBNE (Long-Baseline Neutrino Experiment) is an accelerator-based neutrino oscillation experiment. LBNE will produce a muon-neutrino beam using protons from Fermilab's Main Injector and will detect electron-neutrino appearance and muon-neutrino disappearance using a Liquid Argon TPC located at a distance of 1300 km at Sanford Underground Research Facility in South Dakota. The primary physics... 762. The LAGUNA/LBNO neutrino observatory in Europe Vyacheslav Galymov (IPNL, CNRS/IN2P3) The LAGUNA and LAGUNA-LBNO consortia have performed two detailed design studies from 2008 to 2014 to define the optimal combination of baseline and detector technology for the next generation neutrino observatory. Starting from seven sites and three detector technologies we have prioritized our options and selected the Pyhäsalmi mine in Finland, 2300 km from CERN at 1400 m depth, using a... 919. Hyper-Kamiokande: A next generation neutrino observatory to search for CP violation in the lepton sector Mr. hirohisa tanaka (UBC/IPP) Hyper-Kamiokande (Hyper-K), a proposed one-megaton water cherenkov detector to be built in Japan, is the logical continuation of the highly successful program of neutrino (astro)physics and proton decay using the water Cherenkov technique. Hyper-K will search for CP violation in neutrino oscillations associated with the irreducible phase delta in the lepton mixing matrix using the neutrino... 122. Intense Neutrino Super Beam Experiment for Leptonic CP Violation Discovery based on the European Spallation Source Linac Dr. Marcos Dracos (IPHC/IN2P3 Strasbourg) The European Spallation Source (ESS) linac with 5 MW proton power has the potential to become the proton driver of - in addition to the world's most intense pulsed spallation neutron source - the world's most intense neutrino beam. The physics performance of that neutrino Super Beam in conjunction with a megaton Water Cherenkov neutrino detector installed 1000 m down in a mine at a distance of... 154. Solar neutrinos in Super-Kamiokande Dr. Hiroyuki Sekiya (ICRR, University of Tokyo) Recently the concern with the effect of matter on the neutrino oscillation has been growing, because the possibility of mass hierarchy determination by next generation experiments through the matter effect has been recognized. We report an indication that the elastic scattering rate of solar B8 neutrinos with electrons in the Super-Kamiokande detector is larger when the neutrinos pass... 959. 
Current status of the Double Chooz experiment Julia Haser (Max-Planck-Institut fuer Kernphysik) The Double Chooz reactor antineutrino experiment aims for a precision measurement of the neutrino mixing angle $\theta_{13}$. Located at the Chooz nuclear power plant in France, it observes an energy dependent deficit in the neutrino spectrum, currently with one detector filled with gadolinium-loaded liquid scintillator at a baseline of 1.05\,km. The past Double Chooz publications featured... 783. The latest oscillation results from the Daya Bay reactor neutrino experiment Dr. Wei Wang (College of William and Mary, Sun Yat-Sen University) The Daya Bay reactor neutrino experiment (Daya Bay) is one of the three current-generation short-baseline reactor neutrino experiments designed to measure the lastly known neutrino mixing angle theta13. Its unique design of eight identical 20t liquid scintillator (LS) antineutrino detectors (AD) at the three near and far experimental sites does not only make it the most sensitive theta13... 481. JUNO: A Next Generation Reactor Antineutrino Experiment Dr. Liang Zhan (Institute of High Energy Physics) After the discovery of the large neutrino mixing angle 13, the next generation neutrino experiments focus on the measurement of the neutrino mass hierarchy and the leptonic CP violating phase. JUNO, a next generation reactor antineutrino experiment, was proposed to determine the neutrino mass hierarchy independent of the CP phase. We studied the sensitivity and found the mass hierarchy can be... 971. Statistical issues in neutrino mass ordering sensitivity Dr. Mattias Blennow (KTH Royal Institute of Technology) During the last two years, there has been some confusion in the field on how to assess the sensitivity of future neutrino oscillation experiments to the neutrino mass ordering. A factor of two difference to the common approach has been proposed. We resolve the situation by going back to the basic statistical definitions and apply the results to compare future possibilities of experiments... 529. Neutrino oscillation study with atmospheric neutrinos in Super-Kamiokande Masato SHIOZAWA (Kamioka Observatory, ICRR, University of Tokyo) Atmospheric neutrinos have been playing important roles in understanding neutrino properties. In Super-Kamiokande, we have been performing precise measurement of the 2-3 mixing angle and mass squared difference predominantly by muon neutrino disappearance. In addition to that, muon to tau neutrino oscillation channel was established by confirming tau neutrino appearance in the atmospheric... 817. PINGU and the Neutrino Mass Hierarchy Dr. Kenneth Clark (University of Toronto) The Precision IceCube Next Generation Upgrade (PINGU) is a proposed IceCube in-fill array designed to measure the neutrino mass hierarchy using atmospheric neutrino interactions in the ice at the South Pole. PINGU will have a neutrino energy threshold of a few GeV with a multi-megaton effective volume. We present PINGU's expected sensitivity to the hierarchy with an optimized geometry and... 499. India-Based Neutrino Observatory (INO) Project Prof. Sanjib Kumar Agarwalla (Institute of Physics, Bhubaneswar, India) India-based Neutrino Observatory (INO) is a proposed underground facility in the southern part of India. The project envisage the construction of an underground laboratory with a large cavern of dimensions 132m X 26m X 20m to house a 50 kton magnetized iron tracking calorimeter detector (ICAL) to study atmospheric neutrinos. 
In addition, two smaller caverns will also be constructed to host... 468. The KATRIN Neutrino Mass Experiment Dr. Noah Oblath (Massachusetts Institute of Technology) The Karlsruhe Tritium Neutrino (KATRIN) Experiment aims to measure the neutrino mass using tritium beta decays. KATRIN will carefully determine the shape of the tritium beta-decay spectrum near the endpoint. After collecting three years of data, KATRIN will be able to discover a neutrino mass as small as 350~meV (5$\sigma$), or place an upper limit at 200~meV (90\% CL). The experiment is... 1004. Neutrino mass experiments with Ho Dr. Elena Ferri (Uiversità Milano-Bicocca) Neutrino oscillation experiments have proven that neutrinos are massive particles, nevertheless the assessment of their absolute mass scale is still an outstanding challenge in today particle physics and cosmology. The experiments dedicated to effective electron-neutrino mass determination are the ones based on the study of nuclear processes involving neutrino, like single beta decay... 190. Borexino: recent solar and terrestrial neutrino results Sandra Zavatarelli (INFN - Genova) The first phase of the Borexino experiment, currently running at the Laboratori del Gran Sasso in Italy, has been completed in 2010, and after a successful purification campaign which have further brought down the background levels, a second data taking phase is now in progress, started in October 2011. In this talk the, after recalling the main features of the detector, the final results of... 882. Supernovae Neutrinos: Oscillation and Phenomenology. Dr. Sovan Chakraborty (Max plank for Physics) Supernovae (SN) are one of the highest energetic astrophysical events. Almost all the enormous energy (10^(53) ergs) released during such an event is emitted in terms of neutrinos. These neutrinos while free streaming out of the SN will undergo flavor oscillations. Apart from the usual MSW oscillations the SN neutrinos will have nonlinear flavor evolution due to neutrino-neutrino interactions.... 904. Core-Collapse Supernova Neutrino Detection Dr. Alexander Himmel (Duke University) This talk will briefly survey the capabilities of current detectors sensitive to supernova neutrino bursts. It will then cover recent progess in development of supernova neutrino detection techniques as well as prospects for specific future experiments. 516. Underground Physics with LBNE Prof. Giles Barr (Oxford University) The Long-Baseline Neutrino Experiment plans a 34-kton (fiducial mass) liquid argon time projection chamber to be sited at 4850 ft depth at the Sanford Underground Research Facility in South Dakota. The significant overburden at this site gives LBNE significant physics reach for several non-beam physics topics. These include neutrino oscillation studies with atmospheric neutrinos, for which... 851. Heavy neutrino hunting in Higgs- and Z decays at high luminosity Higgs and Z factory. Alain Blondel (UNIGE) With the discovery of the Higgs H(126) boson at the LHC, the Standard Model of particle physics is still lacking an understanding of the generation and nature of neutrino masses. Dirac mass term? Majorana mass term? The favorite theoretical scenario is that both mass terms are present, leading to the existence of heavy partners of the light neutrinos, presumably more massive and nearly... 929. Minimal SO(10) unification at two loops Dr. 
Michal Malinsky (Charles University in Prague) I will review the current status of the minimal SO(10) GUT and comment on the new results of a dedicated two-loop analysis and their phenomenological implications, focusing, in particular, on the complementarity of the constraints from LHC physics and those from future proton decay searches. 191. SOX: Short Distance Neutrino Oscillations with Borexino Dr. David Bravo (Virginia Tech (United States)) The Borexino detector has convincingly shown its outstanding performance in the low-energy, sub-MeV regime through its unprecedented accomplishments in solar and geoneutrino detection. This performance makes it the ideal tool for a state-of-the-art experiment able to test unambiguously the long-standing issue of the existence of a sterile neutrino, as suggested by the... 864. Recent results from the ICARUS experiment Alessandro Menegolli (Università di Pavia) ICARUS is the largest liquid argon TPC detector ever built (~600 ton LAr mass). It was operated smoothly underground at the LNGS laboratory in Gran Sasso from summer 2010 to June 2013, collecting data with the CNGS beam and with cosmics. Liquid argon TPCs are really "electronic bubble chambers", providing completely uniform imaging and calorimetry with unprecedented accuracy on... 921. Short-baseline neutrino physics at Fermilab Dr. Wesley Ketchum (Los Alamos National Laboratory) The existing Booster Neutrino Beam (BNB) and the exceptional reconstruction capabilities of the liquid argon TPC detector technology provide an opportunity to execute a world-leading short-baseline neutrino physics program at Fermilab. The MicroBooNE detector, located 470 m from the beamline target, is set to begin operation in 2014. The Liquid Argon Near Detector, LAr1-ND, is a proposed new... 151. The NESSiE way for sterile neutrinos Dr. Luca Stanco (INFN - Padova) Neutrino physics is nowadays receiving more and more attention as a possible source of information for the long-standing problem of new physics beyond the Standard Model. The recent measurement of the third mixing angle θ13 in the standard mixing oscillation scenario encourages us to pursue the still-missing results on leptonic CP violation and absolute neutrino masses. However, several... 345. Neutrinos from STORed Muons, nuSTORM Dr. Jean-Baptiste Lagrange (Imperial College, London) Neutrino beams produced from the decay of muons in a racetrack-like decay ring (the so-called Neutrino Factory) provide a powerful way to study neutrino oscillation physics and in addition provide unique beams for neutrino interaction studies. The Neutrinos from STORed Muons (nuSTORM) facility is a neutrino-factory-like facility designed for short-baseline neutrino oscillation and neutrino... 365. Searching for Sterile Neutrinos and CP Violation: The IsoDAR and Daedalus Experiments Prof. Michael Shaevitz (Columbia University) The IsoDAR experiment uses a novel isotope decay-at-rest (DAR) source of electron antineutrinos using protons from a 60 MeV cyclotron. Paired with a large neutrino detector (such as KamLAND or WATCHMAN), the experiment can observe hundreds of thousands of inverse beta-decay events and perform a decisive test of the current hints for a sterile neutrino. Daedalus is a phased program leading to a... 779. Constraining new physics scenarios in neutrino oscillations Dr.
Davide Meloni (Dipartimento Matematica e Fisica, Università di Roma Tre) We consider the disappearance data of the Daya Bay experiment to constrain the parameter space of models where sterile neutrinos can propagate in a large compactified extra dimension (LED) and models where non-standard interactions affect neutrino production and detection (NSI). I will show that the compactification radius R in LED scenarios can be constrained at the level of 0.57 μm for normal... 372. The KTY formalism and nonadiabatic contributions to the neutrino oscillation probability Osamu Yasuda (Tokyo Metropolitan University) It is shown that it is possible to obtain the analytical expression for the effective mixing angle in matter using the formalism which was developed by Kimura, Takamura and Yokomakura for the neutrino oscillation probability in matter with constant density. If we assume that the imaginary part of the integral of the difference of the energy eigenvalues of the two levels at... 369. Neff in low-scale seesaw models versus the lightest neutrino mass Ms. Marija Kekic (University of Valencia) We evaluate the contribution to the effective number of relativistic degrees of freedom of extra sterile states at their decoupling temperatures in Type I seesaw models with two and three extra sterile states as a function of the seesaw and the light neutrino mass. 937. Constraints on heavy neutrinos and applications Mr. Francisco Escrihuela (AHEP, University of Valencia) Several models of neutrino masses predict the existence of neutral heavy leptons. Here, we review current constraints on heavy neutrinos and apply them to inverse and linear seesaw models. We discuss the effect of a fourth heavy neutrino in oscillation experiments. 526. Quasi-elastic scattering, RPA, 2p2h and neutrino energy reconstruction Dr. Juan M Nieves (IFIC, CSIC-UV) We discuss some nuclear effects, RPA correlations and 2p2h (multinucleon) mechanisms, on charged-current neutrino-nucleus reactions that do not produce a pion in the final state. We study a wide range of neutrino energies, from a few hundred MeV up to 10 GeV. We also examine the influence of 2p2h mechanisms on the neutrino energy reconstruction. 1014. Quasi-Elastic Scattering in MINERvA Dr. Heidi Schellman (Northwestern University (United States)) MINERvA (Main INjector Experiment for ν-A) has recently measured neutrino and antineutrino quasi-elastic cross-sections on plastic (CH) scintillator. These results will provide insight into neutrino and anti-neutrino cross sections off of nuclear targets, which are important for neutrino oscillation experiments and the probing of the nuclear medium. We will focus on these results and how they... 1015. Studies of the Nuclear Environment in MINERvA Dr. Richard Gran (University of Minnesota Duluth) MINERvA (Main INjector ExpeRiment for neutrino-A) is a few-GeV neutrino-nucleus scattering experiment at Fermilab that probes the nuclear environment both in inclusive charged-current interactions off various targets, and by studying in detail the process of pion production on carbon, which itself is sensitive to the nuclear environment through final-state interactions. An analysis of the... 16. Effective Spectral Function for Quasielastic Scattering on Nuclei Prof.
Arie Bodek (University of Rochester) Spectral functions that are used in modeling of quasi elastic scattering in neutrino event generators such as GENIE, NEUT, NUANCE and NUWRO, and GiBUU include Fermi gas, local Fermi gas, Bodek-Ritche Fermi gas with high momentum tail, and the Benhar Fantoni two dimensional spectral function. We find that the $\frac{d\sigma}{d\nu}$ predictions for these models are in disagreement with... 1032. Analysis of muon and electron neutrino charged current interactions in the T2K near detectors Dr. Anthony Hillairet (University of Victoria) We present the updated measurement of the muon neutrino interaction rates and spectrum at the T2K near detector complex, ND280, located at the JPARC accelerator facility in Tokai, Japan, 280 meters downstream from the target. The measurements are obtained using all the data collected until 2014. The spectrum measured at ND280 off-axis detector constrains the flux and cross section... 753. Measurement of Reactor Antineutrino Flux and Spectrum at Daya Bay Dr. Weili Zhong (Institute of High Energy Physics) Electron antineutrinos from six 2.9 GW$_{th}$ reactors are detected with six detectors deployed in two near and one far underground experimental halls at Daya Bay. Using 217 days of data, more than 300,000 antineutrino candidates were detected in the three halls. In this talk, a measurement of absolute reactor antineutrino flux and spectrum will be described, including comparisons of the... 56. Electromagnetic interactions of neutrinos: a window to new physics Prof. Alexander Studenikin (Moscow State University and JINR-Dubna) A wide review on neutrino electromagnetic properties and interactions is presented, both theoretical and experimental aspects of the problem are discussed. It is shown that these studies open a window to new physics. The talk is based on the recent wide review on the subject available on web since March 25, 2014: C.Giunti, A.Studenikin, "Electromagnetic interactions of neutrinos: a window... 461. Neutrinos and Nuclear Astrophysics at LUNA Dr. Carlo Gustavino (INFN-Roma) The LUNA experiment plays an important role in understanding open issue of neutrino physics. As an example, two key reactions of the solar p-p chain $^3He(^3He,2p)^4He$ and $^3He(^4He,\gamma)^7Be$ have been studied at low energy with LUNA, providing an accurate experimental input to the Standard Solar Model and consequently to the study of the neutrino mixing parameters. The LUNA collaboration... Building timetable...
Evaluating cell lines as models for metastatic breast cancer through integrative analysis of genomic data Ke Liu ORCID: orcid.org/0000-0003-3190-16561,2, Patrick A. Newbury ORCID: orcid.org/0000-0002-3756-17471,2, Benjamin S. Glicksberg3, William Z. D. Zeng3, Shreya Paithankar4, Eran R. Andrechek5 & Bin Chen ORCID: orcid.org/0000-0001-8858-874X1,2 Nature Communications volume 10, Article number: 2138 (2019) Cite this article Cancer models Computational biology and bioinformatics Cell lines are widely-used models to study metastatic cancer although the extent to which they recapitulate the disease in patients remains unknown. The recent accumulation of genomic data provides an unprecedented opportunity to evaluate the utility of them for metastatic cancer research. Here, we reveal substantial genomic differences between breast cancer cell lines and metastatic breast cancer patient samples. We also identify cell lines that more closely resemble the different subtypes of metastatic breast cancer seen in the clinic and show that surprisingly, MDA-MB-231 cells bear little genomic similarities to basal-like metastatic breast cancer patient samples. Further comparison suggests that organoids more closely resemble the transcriptome of metastatic breast cancer samples compared to cell lines. Our work provides a guide for cell line selection in the context of breast cancer metastasis and highlights the potential of organoids in these studies. Cancer cell lines were initially derived from tumors and cultured in a two-dimensional environment. Due to the merit of cell culture, they have been widely used as models to study cancer biology and test drug candidates1. However, the fact that many drugs with promising preclinical evidence fail in the clinic urges the reinvestigation of cell lines as tumor models2. The differences between cell lines and tumors have raised the critical question to what extent cell lines recapitulate the biology of tumor samples3,4. The emergence of large-scale genomic data provides an unprecedented opportunity to quantify the biological differences between cancer cell lines and human tumors. The Cancer Genome Atlas (TCGA) project characterized both genomic and transcriptomic profiles for more than 10,000 human tumor samples across over 32 tumor types5. The Cancer Cell Line Encyclopedia (CCLE) characterized both genomic and transcriptomic profiles for more than 1000 cell lines6. Domcke et al.7 performed a comprehensive comparison of molecular profiles between 47 ovarian cancer cell lines and ovarian tumor samples and showed that several rarely used cell lines more closely resembled high-grade serous ovarian tumor samples than popular cell lines. We examined the transcriptome similarity between hepatocellular carcinoma (HCC) cell lines and HCC tumor samples and demonstrated that nearly half of the HCC cell lines did not resemble HCC tumor samples8. Jiang et al.9 conducted a comprehensive comparison of molecular portraits between breast cancer cell lines and primary breast cancer samples, and uncovered both similar and dissimilar molecular features. Cancer metastasis is the most common cause of cancer-related death, thus there is an urgent need of new drugs for treating cancer metastasis10,11. Previous cell line evaluation analysis was mainly performed in reference to primary tumors. It remains unknown whether cell lines closely resemble metastatic cancer and thus are appropriately used in translational research. 
Robinson et al.12 performed whole-exome and transcriptome sequencing on 500 adult patients with metastatic solid tumors and recently released their dataset (MET500). This large-scale genomic profiling combined with existing genomic data allows the evaluation of the utility of cell lines as models for metastatic cancer. Using breast cancer as a case study, we comprehensively compare multiple types of molecular features between breast cancer cell lines and metastatic breast cancer samples (Fig. 1a–e). Based on our analyses, we identify cell lines that closely resemble the transcriptome of different subtypes of metastatic breast cancer samples. In addition, we evaluate patient-derived organoids and show their potential in preclinical studies. Our work provides useful guidance for choosing cell lines in metastasis-related translational research and could be easily extended to other cancer types. Overall research design. a Data sources and sample types used in this study. b–e Evaluations performed in this study: b comparison of genomic profiles, c transcriptome correlation analysis, d comparison between cell lines and organoids, and e characterization of transcriptome differences. TC analysis: transcriptome correlation analysis; DE analysis: differential gene expression analysis; DA analysis: gene set differential activity analysis Comparison of genomic profiles We first compared somatic mutation profiles between MET500 breast cancer samples and CCLE breast cancer cell lines. Whole-exome sequencing was performed for MET500 breast cancer samples, while hybrid capture sequencing was performed for CCLE cell lines. We thus only focused on the 1630 genes genotyped in both studies. We were particularly interested in two types of genes that may play important roles in breast cancer metastasis: genes that are highly mutated in metastatic breast cancer and genes that are differentially mutated between metastatic and primary breast cancers. Consistent with previous research, we identified a long-tailed mutation spectrum of the 1630 genes in MET500 breast cancer samples and 69 of them were highly mutated (mutation frequency >0.05; Supplementary Fig. 1a). The five most-altered genes were TP53 (0.67), PIK3CA (0.35), TTN (0.29), OBSCN (0.19), and ESR1 (0.14). We identified 19 differentially mutated genes between MET500 and TCGA samples (FDR < 0.001) and the five most significant genes were ESR1, TNK2, OBSCN, CAMKK2, and CLK1 (Supplementary Fig. 1b). Interestingly, all of these 19 differentially mutated genes had higher mutation frequency in MET500 than TCGA, which is consistent with a previous study showing that metastatic cancer has increased mutation burden compared to primary cancer12. Sixty-eight percent of them were also among the 69 highly mutated genes mentioned above. After merging the two gene lists, 75 unique genes remained (Fig. 2a and Supplementary Data 1). The median mutation frequency of the 75 genes across CCLE breast cancer cell lines was 0.07 and only 9% of them (PRKDC, MAP3K1, TTN, ADGRG4, TP53, FN1, and AKAP9) were mutated in at least 50% of cell lines, suggesting that majority of these gene mutations could be recapitulated by only a few cell lines. Surprisingly, 9 of the 75 genes (ESR1, GNAS, PIKFYVE, FFAR2, RNF213, MYBL2, KAT6A, MAP4K4, and FMO4) were not mutated in any cell line. 
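The screens behind these two gene lists reduce to a per-gene 2 × 2 contingency test (mutated versus non-mutated samples in MET500 and TCGA) followed by multiple-testing correction. The following sketch illustrates the idea in base R on simulated mutation calls; the cohort sizes, mutation rates and the FDR < 0.001 cutoff are placeholders for illustration, not the authors' exact pipeline.

## Per-gene differential mutation test between two cohorts (simulated binary calls).
set.seed(1)
genes  <- paste0("gene", seq_len(1630))
met500 <- matrix(rbinom(1630 * 90, 1, 0.06), nrow = 1630, dimnames = list(genes, NULL))
tcga   <- matrix(rbinom(1630 * 1000, 1, 0.03), nrow = 1630, dimnames = list(genes, NULL))

p.values <- vapply(genes, function(g) {
  tab <- matrix(c(sum(met500[g, ]), ncol(met500) - sum(met500[g, ]),
                  sum(tcga[g, ]),   ncol(tcga)   - sum(tcga[g, ])),
                nrow = 2, byrow = TRUE)
  fisher.test(tab)$p.value
}, numeric(1))

fdr <- p.adjust(p.values, method = "fdr")
differentially.mutated <- names(which(fdr < 0.001))   # analogue of the FDR < 0.001 list above
length(differentially.mutated)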
Notably, ESR1 has been identified as a driver gene of cancer metastasis and associated mutations can cause endocrine resistance of metastatic breast cancer cells13,14, yet none of the cell lines could be used to appropriately model it. Comparison of genomic profiles between MET500 breast cancer samples and CCLE breast cancer cell lines. a Somatic mutation profile of the 75 genes across MET500 breast cancer samples and CCLE breast cancer cell lines. The top-side color bar indicates data sources (MET500 or CCLE) and the right-side color bar indicates mutation frequency. b Comparison of CNV profiles between MET500 breast cancer samples and CCLE breast cancer cell lines. c Comparison of CNV profiles between MET500 breast cancer samples and the 33 primary-site derived CCLE breast cancer cell lines. d Comparison of CNV profiles between MET500 breast cancer samples and the 24 metastatic-site-derived CCLE breast cancer cell lines. In panels b–d, each dot is a gene, with y-axis representing its median CNV value across MET500 breast cancer samples, and x-axis representing its median CNV value across CCLE breast cancer cell lines. In panels c and d, genes with high copy-number-gain in MET500 breast cancer samples were marked by red. Source data are provided as a Source Data file We next asked whether there were genes specifically hypermutated in breast cancer cell lines. To address this question, we examined the mutation spectrum of the 32 genes that were mutated in at least 50% of the breast cancer cell lines. Surprisingly, 25 of them had low mutation frequency (<0.05) in MET500 breast cancer samples. Further analysis of somatic mutation profiles of the 25 genes in TCGA breast cancer samples confirmed their hypermutations were specific to breast cancer cell lines (Supplementary Fig. 1c). In addition to somatic mutation spectrum, we also compared copy number variation (CNV) profiles between MET500 breast cancer samples and CCLE breast cancer cell lines. We observed a high correlation of median CNV values across the 1630 commonly genotyped genes (spearman rank correlation = 0.81; Fig. 2b). However, we also noticed that the gain-of-copy-number events in cell lines appeared to resemble metastatic breast cancer while loss-of-copy-number events did not. For the 711 genes showing copy-number-loss in CCLE breast cancer cell lines (median CNV < 0), their cell line derived median CNV values were significantly lower than that from MET500 breast cancer samples; however, no statistically significant difference was detected in the 919 genes with copy-number-gain (Supplementary Fig. 1d). Out of the 57 CCLE breast cancer cell lines, 24 were derived from metastatic sites (Supplementary Data 2). We further divided the cell lines into two groups (according to whether derived from metastatic sites or not) and then compared the CNV profile of each group with MET500 breast cancer samples. We found cell lines derived from metastatic sites more closely resembled the CNV status of the 109 genes with high copy-number-gain (median CNV ≥ 0.4) in MET500 breast cancer samples (Fig. 2c, d and Supplementary Fig. 1e). Correlating cell lines with MET500 patient samples Transcriptome correlation analysis (TC analysis) is proven to be an effective approach to evaluate cell lines for research purpose7,8,15. Therefore, we performed TC analysis and ranked all 1019 CCLE cell lines according to their transcriptome similarity with MET500 breast cancer samples (see Methods). 
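In outline, this ranking is a correlation screen: each cell line's expression profile is correlated (Spearman) with the metastatic tumor samples over a fixed gene set, and cell lines are sorted by the resulting similarity score. A minimal sketch on simulated log-expression data is shown below; using the 1,000 most variable tumor genes and summarizing per-sample correlations by their median are illustrative assumptions (the exact procedure is given in the paper's Methods), and the toy cohort sizes are kept small so the code runs quickly.

## Transcriptome correlation (TC) analysis sketch on simulated data (genes x samples).
set.seed(2)
n.genes <- 5000
expr.tumour <- matrix(rnorm(n.genes * 20), nrow = n.genes)            # metastatic samples (toy)
expr.lines  <- matrix(rnorm(n.genes * 100), nrow = n.genes,
                      dimnames = list(NULL, paste0("cellline", 1:100)))  # real analysis: 1,019 lines

## Assumed comparison set: the 1,000 most variable genes in the tumour cohort.
gene.set <- order(apply(expr.tumour, 1, var), decreasing = TRUE)[1:1000]

## Similarity of a cell line = median Spearman correlation with the individual tumour samples.
similarity <- apply(expr.lines[gene.set, ], 2, function(cl)
  median(apply(expr.tumour[gene.set, ], 2, function(s) cor(cl, s, method = "spearman"))))

head(sort(similarity, decreasing = TRUE))   # cell lines ranked by transcriptome similarity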
The top 20 cell lines were all breast cancer cell lines, suggesting metastatic breast cancer cells retain the transcriptomic signature from the tissue they originated in and cell lines have the potential to resemble the transcriptome of them (Fig. 3a). MDA-MB-415 and HMC18 were the two breast cancer cell lines that had highest and lowest transcriptome similarity, respectively (Spearman rank correlation of 0.415 and of 0.087). TC analysis between MET500 breast cancer samples and CCLE breast cancer cell lines. a In total, 1019 CCLE cell lines are ranked according to their transcriptome similarity with MET500 breast cancer samples. Each dot is a CCLE cell line and breast cancer cell lines are marked by red. b Metastatic-site-specific TC analysis results are highly correlated between liver and lymph node. Each dot is a CCLE breast cancer cell line, with x-axis representing its transcriptome similarity with the nine lymph-node-derived MET500 breast cancer samples, and y-axis representing its transcriptome similarity with the 27 liver-derived MET500 breast cancer samples. c t-SNE plot of MET500 breast cancer samples. Metastatic-sites are labeled by color and subtypes are labeled by shape. d Pair-wise comparison of subtype-specific TC analysis results. In the lower-left plots, each dot is a CCLE breast cancer cell line, with the two axis representing transcriptome similarity of the cell line with MET500 breast cancer samples of the two intersecting subtypes. The upper-right shaded values are the corresponding pair-wise spearman rank correlation values of each pair. Source data are provided as a Source Data file We next assessed whether cell lines resembling the transcriptome of samples from different metastatic sites were identical. We were only able to consider liver and lymph node (the two sites which have at least nine samples) due to the lack of enough samples from other sites in the MET500 dataset. For each of them, we performed metastatic-site-specific TC analysis (i.e., compute transcriptome similarity of cell lines with samples derived from a specific metastatic site) and found the results were highly correlated (Fig. 3b) with MDA-MB-415 being the most-correlated cell line for both sites. In addition, we detected no statistically significant difference in expression correlation (with MDA-MB-415) between the two sites (Supplementary Fig. 2a). Given the genomic heterogeneity of breast cancer, we further asked whether cell lines resembling the transcriptome of metastatic breast cancer of different subtypes were identical. To address this question, we first determined the PAM50 subtype of MET500 breast cancer samples with R package genefu and then applied t-SNE to visualize them (Fig. 3c). We found Basal-like samples clustered together and separated from other subtypes; additionally, the majority of LuminalA/LuminalB/Her2-enriched/Normal-like samples were mixed together except two skin-derived samples. HER2-enriched samples seemed to be separated from LuminalA/LuminalB samples but the boundary was not clear. These results suggested that subtype information was well maintained in metastatic breast cancer samples and additionally confirmed the feasibility of using PAM50 for subtyping metastatic breast cancer though it was initially developed using primary breast cancer data. We further confirmed the subtyping results by performing the same analysis on a combined dataset which contains both MET500 and TCGA breast cancer samples (Supplementary Fig. 2b). 
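Subtype assignment here relies on the PAM50 model as implemented in the R package genefu, with t-SNE used only for visualization. The underlying logic of a PAM50-style call is nearest-centroid classification over the 50-gene signature; the schematic below reproduces that logic on toy data (random centroids and samples, Spearman correlation as the similarity measure) and is a stand-in to show the mechanics, not the genefu implementation.

## Schematic nearest-centroid subtyping (stand-in for the PAM50 call made via genefu).
set.seed(3)
subtypes  <- c("LuminalA", "LuminalB", "Her2", "Basal", "Normal")
sig.genes <- paste0("g", 1:50)                                    # PAM50 uses a fixed 50-gene signature
centroids <- matrix(rnorm(50 * 5), nrow = 50,
                    dimnames = list(sig.genes, subtypes))         # toy centroids
samples   <- matrix(rnorm(50 * 10), nrow = 50,
                    dimnames = list(sig.genes, paste0("s", 1:10)))  # toy tumour samples

call.subtype <- function(x) {
  sims <- apply(centroids, 2, function(centroid) cor(x, centroid, method = "spearman"))
  names(which.max(sims))                         # label of the closest centroid
}
assigned <- apply(samples, 2, call.subtype)
table(assigned)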
Next, we performed subtype-specific TC analysis (i.e., compute transcriptome similarity of cell lines with samples of a specific subtype) and found high correlation within LuminalA/LuminalB/Her2-enriched subtypes, in contrast to their relatively lower correlation to Basal-like subtype (Fig. 3d). To confirm the robustness of our TC analysis results derived from the comparison between CCLE and MET500 RNA-Seq data, we downloaded CCLE gene expression data profiled by microarray and then searched the GEO database and assembled a microarray dataset containing the expression values of another 103 independent metastatic breast cancer samples. We repeated the TC analysis with microarray data and found results obtained from the two platforms were highly consistent with each other. First, there was a large overlap of the top-ranked cell lines. Out of the 10 cell lines having highest transcriptome similarity with the 103 metastatic breast cancer samples, 6 of them were within the 10 cell lines having highest transcriptome similarity with MET500 breast cancer samples. Second, both metastatic-site-specific and subtype-specific TC analysis results showed high correlations (Supplementary Fig. 3). Due to such high consistency, it is not surprising that we observed similar correlation trends in metastatic-site-specific (and subtype-specific) TC analysis results (Supplementary Figs. 4 and 5). About 24% of the 103 samples in the microarray dataset was derived from bone. Remarkably, the metastatic-site-specific TC analysis result of bone showed lower correlation with other sites (Supplementary Fig. 4). To exclude the possibility that this was caused by tumor purity issues, we applied ESTIMATE16 on the microarray dataset and found the tumor purity of bone-derived samples was not significantly lower than that of liver, lymph node, and lung (Supplementary Fig. 6). Our results may not be too surprising given the fact that bone provides a very unique microenvironment including enriched expression of osteolytic genes17; however, this result needs to be confirmed in the future as more data become available. Subtype-specific cell line evaluation We attempted to identify cell lines which closely resemble the transcriptome of different subtypes of metastatic breast cancer based on the results of subtype-specific TC analysis. Given a subtype, we noticed that for a random CCLE cell line, its transcriptome similarity with MET500 breast cancer samples of that subtype approximately followed a normal distribution (Supplementary Fig. 7). Therefore, those breast cancer cell lines showing significantly higher transcriptome similarity were of our interest. Driven by this finding, for each subtype we first fit a normal distribution (the null distribution) with the transcriptome similarity values (derived from subtype-specific TC analysis) of all non-breast-cancer cell lines and then assigned each of the 57 breast cancer cell lines a right-tailed p-value. The most significant cell lines for LuminalA, LuminalB, Her2-enriched, and Basal-like subtypes were MDA-MB-415 (p-value = 3.59e-05), BT483 (p-value = 2.22e-07), EFM192A (p-value = 0.11e-03), and HCC70 (p-value = 0.40e-03), respectively. Using criteria of FDR ≤ 0.01, we identified 20, 28, and 24 significant cell lines for LuminalA, LuminalB, and Her2-enriched subtypes, respectively. Notably, most of these significant cell lines were derived from metastatic sites and 18 were shared by the three subtypes. Surprisingly, no cell line passed the criterion for Basal-like subtype. 
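The significance assignment itself is straightforward to reproduce: the similarity scores of all non-breast-cancer cell lines define the null, a normal distribution is fitted to them, and each breast cancer cell line receives a right-tailed p-value that is then FDR-adjusted. A sketch on simulated similarity scores, assuming a simple mean/standard-deviation fit (the fitting procedure is not spelled out in this excerpt):

## Right-tailed significance of breast cancer cell lines against a fitted normal null.
set.seed(4)
sim.null   <- rnorm(962, mean = 0.20, sd = 0.05)   # 1,019 - 57 non-breast-cancer cell lines
sim.breast <- rnorm(57,  mean = 0.28, sd = 0.05)   # 57 breast cancer cell lines

mu    <- mean(sim.null)
sigma <- sd(sim.null)
p.right <- pnorm(sim.breast, mean = mu, sd = sigma, lower.tail = FALSE)
fdr     <- p.adjust(p.right, method = "fdr")
sum(fdr <= 0.01)   # count of cell lines passing the FDR cutoff used for the LuminalA/LuminalB/Her2 calls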
We further examined whether this was due to the limited number of Basal-like MET500 breast cancer samples, but found that the number of LuminalA samples was even smaller than that of Basal-like samples. After relaxing the FDR cutoff to 0.05, we obtained 22 significant cell lines for the Basal-like subtype. All statistical testing results are listed in Supplementary Data 3. We next searched PubMed to examine the popularity of the 57 breast cancer cell lines (see Methods and Supplementary Data 2). MCF7 is the most commonly used cell line in metastatic breast cancer research (43.6% of total PubMed citations). In our analysis, although it was a significant cell line (according to the criterion FDR ≤ 0.01) for the LuminalB subtype, it was less correlated with LuminalB MET500 breast cancer samples than BT483 (Supplementary Fig. 8a). Following MCF7 is MDA-MB-231 (40.2% of total PubMed citations); however, it was not a significant cell line for any subtype. The third most commonly used cell line was T47D (3.9% of total PubMed citations), which was a significant cell line for the LuminalA and Her2-enriched subtypes. It did not show significantly lower correlation with LuminalA MET500 breast cancer samples than MDA-MB-415 (Supplementary Fig. 8b); however, compared to EFM192A, it was significantly less correlated with Her2-enriched MET500 breast cancer samples (Supplementary Fig. 8c). We further explored the cell line MDA-MB-231, one of the most widely used triple-negative cell lines in metastatic breast cancer research. We ranked all of the 1019 CCLE cell lines according to their transcriptome similarity with Basal-like MET500 breast cancer samples, and the rank of MDA-MB-231 was 583. Consistent with this, MDA-MB-231 was significantly less correlated with Basal-like MET500 breast cancer samples than HCC70 (Fig. 4a). We observed similar patterns with CNV data (Fig. 4b). We also examined how well MDA-MB-231 recapitulated the somatic mutation spectrum of Basal-like metastatic breast cancer samples and found that only three of the 25 highly mutated genes (mutation frequency ≥ 0.1 in Basal-like MET500 breast cancer samples) were mutated in MDA-MB-231 (Fig. 4c). Since the CCLE data for MDA-MB-231 were generated in vitro, to confirm our finding we obtained another independent microarray dataset which profiled the gene expression of MDA-MB-231 cell lines derived from lung metastasis in vivo18. We found, however, that even these in vivo MDA-MB-231 cell lines did not most closely resemble the transcriptome of lung metastasis breast cancer samples. The breast cancer cell line which showed the highest correlation with lung metastasis breast cancer samples was EFM192A (Fig. 4d). MDA-MB-231 has substantial genomic differences from Basal-like metastatic breast cancer samples. a The left panel shows the ranking of all 1019 CCLE cell lines according to their transcriptome similarity with Basal-like MET500 breast cancer samples. The top-left scatter plot within the first panel shows the expression of the 1000 genes used in TC analysis, with the x-axis representing expression value in MDA-MB-231, and the y-axis representing median expression value across Basal-like MET500 breast cancer samples. The boxplot on the right shows the distribution of expression correlation with Basal-like MET500 breast cancer samples for MDA-MB-231 and HCC70. In each box, the central line represents the median value and the bounds represent the 25th and 75th percentiles (interquartile range). The whiskers encompass 1.5 times the interquartile range. 
b The left panel shows the ranking of all 1019 CCLE cell lines according to their CNV similarity with Basal-like MET500 breast cancer samples; the boxplot on the right shows the distribution of CNV correlation with Basal-like MET500 breast cancer samples for MDA-MB-231 and HCC70. In each box, the central line represents the median value and the bounds represent the 25th and 75th percentiles (interquartile range). The whiskers encompass 1.5 times the interquartile range. c Somatic mutation profile of the 25 highly mutated genes across MDA-MB-231 and Basal-like MET500 breast cancer samples. d Boxplot of expression correlation between cell lines and lung-derived metastatic breast cancer samples. This includes CCLE breast cancer cell lines and lung-metastasis-derived MDA-MB-231 (colored red). In each box, the central line represents the median value and the bounds represent the 25th and 75th percentiles (interquartile range). The whiskers encompass 1.5 times the interquartile range. e Ranking of CCLE breast cancer cell lines as well as an additional seven MDA-MB-231 cell lines according to their transcriptome similarity with MET500 Basal-like breast cancer samples. Each dot is a cell line; the x-axis represents rank and the y-axis represents transcriptome similarity with Basal-like MET500 breast cancer samples. MDA-MB-231 cell lines are colored red. f Boxplot of KRT14 expression in MET500 breast cancer samples and MDA-MB-231 cell lines. The p-value is computed with the two-sided Wilcoxon rank-sum test. In each box, the central line represents the median value and the bounds represent the 25th and 75th percentiles (interquartile range). The whiskers encompass 1.5 times the interquartile range. Source data are provided as a Source Data file. To further confirm the low transcriptome similarity of MDA-MB-231 with Basal-like MET500 breast cancer samples, we re-processed the RNA-Seq data of CCLE breast cancer cell lines (with the pipeline used to process MET500 RNA-Seq data); in addition, we also re-processed the RNA-Seq data of another seven MDA-MB-231 cell line samples collected from the SRA database. We re-performed TC analysis between the breast cancer cell lines (with re-processed data) and Basal-like MET500 breast cancer samples and drew a similar conclusion to our previous analysis (Fig. 4e). Recently, Nguyen et al.19 performed single cell RNA-Seq on human breast epithelial cells and confirmed that KRT14 expression is a hallmark of Basal cells. Strikingly, we found that KRT14 expression in the eight MDA-MB-231 cell line samples was significantly lower than that of Basal-like MET500 breast cancer samples (p-value = 0.0007); however, such significant KRT14 differential expression was not detected between MDA-MB-231 and MET500 breast cancer samples of non-Basal-like subtypes (Fig. 4f). Our analysis indicates that although MDA-MB-231 was classified as Basal-like subtype and bears the triple-negative phenotype, its cell type of origin may not be the Basal cell, which could partially explain why its genomic profile is substantially different from that of Basal-like MET500 breast cancer samples. Comparing cell lines with organoids Owing to the advancement of three-dimensional (3D) culture technology, more and more tumor patient-derived organoids have been established and widely used in translational research20,21. However, their utility for modeling metastatic cancer has not been comprehensively evaluated with large-scale genomic data. 
To fill this gap, we performed additional TC analysis on 26 patient-derived breast cancer organoids using RNA-Seq data. The aforementioned subtype-specific TC analysis showed that the Basal-like subtype had relatively lower correlation with other subtypes, and we also observed a similar trend in organoids (Fig. 5a). We next asked whether organoids outperformed cell lines in resembling the transcriptome of metastatic breast cancer. For each of the non-Basal-like organoids, we computed its transcriptome similarity with non-Basal-like MET500 breast cancer samples and found that organoids had significantly higher transcriptome similarity than CCLE breast cancer cell lines (Fig. 5b, left panel). The superiority of organoids was also observed in the TC analysis of the Basal-like subtype (Fig. 5b, right panel). The previous analysis revealed that MDA-MB-415, BT483, and EFM192A were the three most significant cell lines for the LuminalA, LuminalB, and Her2-enriched subtypes, respectively. Interestingly, for all three subtypes MMC01031 was the organoid showing the highest transcriptome similarity, and it had significantly higher correlation with MET500 breast cancer samples than the corresponding most significant cell line. Organoid W1009 had the highest transcriptome similarity with Basal-like MET500 breast cancer samples, and its expression correlation values were also significantly higher than those of HCC70, the triple-negative cell line that is most significant for the Basal-like subtype (Fig. 5c). Comparing CCLE breast cancer cell lines with patient-derived organoids using gene expression data. a Pair-wise comparison of subtype-specific TC analysis results. In the lower-left plots, each dot is an established organoid, with the two axes representing the transcriptome similarity of the organoid with MET500 breast cancer samples of the two intersecting subtypes. The upper-right shaded values are the corresponding pair-wise Spearman rank correlation values of each pair. b Boxplot of transcriptome similarity (with MET500 breast cancer samples of different subtypes) of CCLE breast cancer cell lines and organoids. P-values are computed with the two-sided Wilcoxon rank-sum test. In each box, the central line represents the median value and the bounds represent the 25th and 75th percentiles (interquartile range). The whiskers encompass 1.5 times the interquartile range. c For each subtype, the most-correlated organoid has significantly higher expression correlation with MET500 breast cancer samples of that subtype than the most-correlated cell line. P-values are computed with the two-sided Wilcoxon rank-sum test. In each box, the central line represents the median value and the bounds represent the 25th and 75th percentiles (interquartile range). The whiskers encompass 1.5 times the interquartile range. Source data are provided as a Source Data file. Transcriptome differences between models and patients Our TC analysis has shown that in vitro models such as cell lines and organoids can resemble the transcriptome of metastatic breast cancer to some extent. However, they are still different in many aspects. To characterize such differences, we performed differential gene expression analysis among MET500 breast cancer samples, CCLE breast cancer cell lines, and organoids (Supplementary Fig. 9). For non-Basal-like subtypes, 2348 genes (2143 up-regulated, 205 down-regulated) were identified as differentially expressed in both the MET500-vs-CCLE and MET500-vs-organoids comparisons. 
For the Basal-like subtype, there were 1378 common differentially expressed (DE) genes (1117 up-regulated, 261 down-regulated). After intersecting the above two common DE gene lists, we finally obtained 1017 subtype-and-model-independent DE genes (947 up-regulated, 70 down-regulated) and then performed GO enrichment analysis. For the 947 up-regulated ones, 29 GO terms were identified as significant (FDR < 0.001) and most of them were immune-related, illustrating the large gap between culture media and the tumor microenvironment (Supplementary Data 4). The two terms "platelet degranulation" and "chemotaxis" were also detected as significant. Besides the microenvironment, our results also implicated differences in intrinsic characteristics between metastatic breast cancer cells and in vitro models. For example, the enrichment of "steroid metabolic process" suggested that both cell lines and organoids may not sufficiently resemble the reprogrammed metabolism of metastatic breast cancer. Surprisingly, for the 70 down-regulated subtype-and-model-independent DE genes, no GO terms passed the FDR < 0.001 criterion, which could be due to the small gene number. We decreased the FDR cutoff to 0.1 and observed four significant terms, with "DNA replication" being the most significant (FDR = 0.037). To further confirm that batch effects were not dominating the DE analysis results, we used RUVg22 to infer the values of hidden factors (k = 1), re-performed the DE analysis mentioned above, and identified 749 subtype-and-model-independent DE genes, all of which were among the previously identified 1017 subtype-and-model-independent DE genes. We further performed gene set differential activity (DA) analysis on the 50 MSigDB hallmark gene sets to characterize differences with respect to specific biological processes (Fig. 6a, Supplementary Fig. 10). For the non-Basal-like subtype, we identified 35 and 32 significant gene sets in the MET500-vs-CCLE and MET500-vs-organoids comparisons, respectively (FDR < 0.001; Supplementary Data 5). There were 26 significant gene sets in common, and for 23 of them the p-values derived from the MET500-vs-CCLE comparison were lower than those derived from the MET500-vs-organoids comparison, which may be unsurprising given that organoids more closely resemble the transcriptome of metastatic breast cancer samples (Fig. 6b, left panel). We also performed DA analysis for the Basal-like subtype, identifying 19 and 24 significant gene sets in the MET500-vs-CCLE and MET500-vs-organoids comparisons, respectively (Fig. 6b, right panel). For each of the subtypes, we classified the 50 hallmark gene sets into four categories according to the DA analysis results:
Category 1: Only significant in the MET500-vs-organoids comparison (e.g., ANDROGEN RESPONSE).
Category 2: Only significant in the MET500-vs-CCLE comparison (e.g., E2F TARGETS).
Category 3: Significant in both the MET500-vs-organoids and MET500-vs-CCLE comparisons (e.g., COMPLEMENT).
Category 4: Not significant in either comparison (e.g., FATTY ACID METABOLISM).
Comparison of ssGSEA scores of the 50 MSigDB hallmark gene sets. a Visualization of ssGSEA scores across non-Basal-like CCLE breast cancer cell lines, MET500 breast cancer samples, and organoids. b DA analysis results of different breast cancer subtypes. Each dot is a hallmark gene set, with the x-axis representing −log10(FDR) derived from the MET500-vs-organoids comparison, and the y-axis representing −log10(FDR) derived from the MET500-vs-CCLE comparison. c Boxplot of ssGSEA scores of the four representative gene sets. 
In each box, the central line represents the median value and the bounds represent the 25th and 75th percentiles (interquartile range). The whiskers encompass 1.5 times the interquartile range. Source data are provided as a Source Data file. Interestingly, 27 gene sets could be consistently classified into one specific category, regardless of the subtype. Figure 6c shows the distribution of ssGSEA scores of the representative gene set for each category. In cancer research, cell lines have traditionally been used to test drug candidates and study disease mechanisms. The genomic profile comparison showed that breast cancer cell lines poorly recapitulated the somatic mutation patterns of metastatic breast cancer samples, while their CNV profiles were more consistent. Moreover, it is worth noting that cell lines carried many specific genomic alterations, possibly due to culture effects. Examples included the 25 genes presenting cell-line-specific hypermutation. Such large genomic differences and variations revealed by the comparison indicate the importance of selecting cell lines to represent heterogeneous metastatic cancer samples. This study investigated two important factors (i.e., metastatic site and cancer subtype) that need to be considered during cell line selection. Metastatic sites have distinct microenvironments that have a large impact on shaping the genomic profiles of metastatic cancer cells. However, the metastatic-site-specific TC analysis did not identify cell lines with metastatic-site-specific utility, which seems not to reflect the impact of the microenvironment. Further differential gene expression analysis revealed higher expression of immune-related genes in metastatic cancer samples (compared to cell lines), suggesting that the media used to culture cancer cell lines did not model the tumor microenvironment appropriately. Therefore, we conclude that the cell lines evaluated in this study do not carry indicative genomic signatures shaped by the microenvironment of individual metastatic sites, and that might be the reason why we did not find metastatic-site-specific cell lines. Breast cancer is quite heterogeneous, and we showed that PAM50 subtypes were maintained in metastatic breast cancer cells. Considering the large genomic difference between Basal-like and other subtypes, it is not surprising that in the subtype-specific TC analysis the Basal-like subtype showed lower correlation with the others. Our analysis reveals the importance and necessity of subtype-specific cell line selection. In the future, as data continue to accumulate, more factors can be considered for appropriate cell line selection and we can start building an ad-hoc mapping algorithm: the inputs would be the characteristics of the metastatic cancer samples (subtype, metastatic site, even age, race, stage, etc.) as well as the specific scientific question of interest, and the output would be a list of appropriate cell lines. Surprisingly, we found that MDA-MB-231, the widely used triple-negative cell line in metastatic breast cancer research, was dramatically different from Basal-like metastatic breast cancer samples. According to our analysis, HCC70 seems to be a better model, but this does not mean it can be directly employed to study cancer metastasis, as many other criteria are needed for the assessment. Triple-negative breast cancer is itself highly heterogeneous. In a recent study, Nguyen et al.19 showed that breast epithelial cell populations corresponded to breast cancer subtypes. 
Previous researchers have associated MDA-MB-231 with the Claudin-low subtype, and our analysis suggested that the cell type of origin of the MDA-MB-231 cell line may not be Basal cells23. In the future, as more single-cell RNA-Seq data become available, it would be valuable to accurately determine the cell type of origin of MDA-MB-231 and optimize its usage in cancer metastasis-related studies. Organoids are a more recently established type of model, generated using 3D culture methods. Our analysis suggested that, compared to cell lines, they resemble the transcriptome of patient samples more closely, which is a critical characteristic in drug testing. It is also important to note that the cell lines evaluated in our study were established much earlier than the organoids. During the culturing process, they could have accumulated additional genomic alterations, which may partially explain why organoids are more tightly correlated with patient samples. Recent studies have shown that organoids preserve the histological architecture, gene expression, and genomic landscape of the original tumor24. Together with our comparative studies, we conclude that although the value of organoids in translational research has not yet been fully recognized, their high genomic similarity with patient samples warrants further investigation. Prior to this study, researchers had proposed different computational methods to measure the similarity between cell lines and patient samples7,9. It has been shown that gene expression is one of the most informative features for predicting drug response, and that weighting cell lines based on their transcriptome similarity with patient samples increases predictive power in gene expression-based drug discovery8,25,26,27. Therefore, in this paper we ranked cell lines according to TC analysis results. Note that different expression features could be employed to measure transcriptome similarity depending on the question. For example, for researchers who focus specifically on the molecular mechanisms of cancer metastasis, it may not be necessary to require the cell line model (such as MDA-MB-231) to resemble the whole transcriptome of patient samples as long as it can mimic the key biological processes in cancer metastasis. In such a scenario, ranking cell lines according to their invasiveness might be more appropriate. In summary, by leveraging publicly available genomic data, we comprehensively evaluated the utility of breast cancer cell lines as models for metastatic breast cancer. Our study introduces a simple framework for cell line selection which can be easily extended to other cancer types. Although there are concerns about data quality and discrepancies between different studies/platforms, our large-scale analysis and cross-platform validation hopefully address these concerns and demonstrate the power of leveraging open data to gain biological insights into cancer metastasis. We hope that the recommendations in this study may facilitate improved precision in selecting relevant cell lines for modeling metastatic breast cancer, which may accelerate translational research. The raw RNA-Seq data of MET500 breast cancer samples (diagnosed as "Breast Invasive Ductal Carcinoma") were downloaded from dbGaP (under accession number phs000673.v2.p1) and further processed using RSEM28,29. FPKM values were used as the gene expression measure. To keep consistent with other RNA-Seq datasets, only the RNA-Seq samples profiled with the PolyA protocol were considered. 
The somatic mutation and CNV data of MET500 samples were downloaded from the MET500 web portal (https://met500.path.med.umich.edu/downloadMet500DataSets). All CCLE pre-processed data (including gene expression profiled by RNA-Seq and microarray, somatic mutation, and CNV) were downloaded from the CCLE data portal (https://portals.broadinstitute.org/ccle). The raw RNA-Seq data of 55 CCLE breast cancer cell lines were downloaded from the GDC Legacy Archive (https://portal.gdc.cancer.gov/legacy-archive/search/f; HMEL and HS274T were missing for unknown reasons) and further processed by RSEM. For each CCLE breast cancer cell line, we computed the Spearman rank correlation between gene expression values quantified by the CCLE pipeline and by RSEM. Supplementary Fig. 11a shows the distribution of the derived Spearman rank correlation values. The median of the distribution is 0.9, suggesting that the gene expression values quantified by the two pipelines are highly consistent. As an example, Supplementary Fig. 11b shows such consistency in the MCF7 cell line. We noticed that for MET500 cohorts both tumor and matched normal DNA were profiled by exome sequencing, while for CCLE cell lines somatic mutations were called by MuTect2 using a mode that does not require matched normal DNA6,30. Therefore, in our analysis we used the filtered version of the CCLE somatic mutation MAF file (CCLE_hybrid_capture1650_hg19_NoCommonSNPs_NoNeutralVariants_CDS_2012.05.07.maf), in which common polymorphism variants have been excluded. For TCGA Breast Invasive Ductal Carcinoma samples, somatic mutation calling results were downloaded from cBioPortal31,32 on 12 April 2018 (using the R package cgdsr) and RSEM-processed gene expression data were downloaded from the UCSC Xena data portal (https://xena.ucsc.edu/)33. The RNA-Seq data of patient-derived organoids were from the BC Organoids Biobank34. We searched GEO and manually assembled a microarray dataset containing gene expression values of 103 metastatic breast cancer samples35,36,37,38. The GEO accession numbers used were GSE11078, GSE14017, GSE14018, and GSE54323. The gene expression data of lung-metastasis-derived MDA-MB-231 were downloaded from GEO under accession number GSE2603. The RNA-Seq data of non-CCLE MDA-MB-231 were downloaded from SRA, and the accession numbers are ERR1982279, ERR1982280, ERR2022825, SRR2532366, SRR4822549, SRR6451704, and SRR6451705. Detailed statistics of the above datasets are listed in Supplementary Data 6. Gene filtering We downloaded the list of 1650 genes covered by CCLE hybrid capture sequencing from the CCLE data portal (https://data.broadinstitute.org/ccle_legacy_data/hybrid_capture_sequencing/CCLE_hybrid_capture1650_HGNC_info_2012.02.20.txt). Then, we applied the following steps to get the final 1630 highly confident genes. Ten gene symbols which do not have associated HGNC records were removed. Nine gene symbols which do not have associated RefSeq records (downloaded from the UCSC genome browser, hg19) were removed. One Y-chromosome-located gene, PRKY, was removed. 
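A toy illustration of the three filtering steps just listed; the gene sets below are stand-ins, since the real inputs are the CCLE 1650-gene table, the HGNC symbol list, and the UCSC hg19 RefSeq table referenced above.

```python
# Toy stand-ins for the real inputs; gene names here are illustrative only.
ccle_1650 = {"TP53", "PIK3CA", "GATA3", "PRKY", "WITHDRAWN_SYMBOL"}
hgnc_symbols = {"TP53", "PIK3CA", "GATA3", "PRKY"}
refseq_hg19_symbols = {"TP53", "PIK3CA", "GATA3", "PRKY"}

confident = ccle_1650 & hgnc_symbols     # step 1: keep symbols with an HGNC record
confident &= refseq_hg19_symbols         # step 2: keep symbols with a RefSeq record
confident -= {"PRKY"}                    # step 3: drop the Y-chromosome gene PRKY
print(sorted(confident))                 # the surviving "highly confident" genes
```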
Identification of differentially mutated genes between MET500 and TCGA samples Given a gene, we computed the right-tailed p-value to test whether it has significantly higher mutation frequency in metastatic breast cancer samples as follows: $$p = 1 - \sum_{i = 0}^{n} \Pr(i; N, \hat{q}),$$ where $\Pr$ is the probability mass function of the binomial distribution, $N$ is the number of genotyped MET500 breast cancer cohorts, $n$ is the number of MET500 breast cancer cohorts in which the gene is mutated, and $\hat{q}$ is the mutation frequency of the gene in the TCGA dataset (for genes with zero mutation frequency, we used the minimum mutation frequency across all genes). Similarly, we computed a left-tailed p-value (1 − p) to test whether a gene has significantly lower mutation frequency in metastatic breast cancer samples. To control the FDR, we applied the Benjamini–Hochberg procedure to the left-tailed and right-tailed p-values, respectively39. We noticed that somatic mutations of MET500 cohorts were called by Varscan2 (ref. 40) while TCGA somatic mutation data hosted on cBioPortal were called by MuTect30. To exclude the possibility that the differences inferred between MET500 and TCGA primary tumors were due to pipeline batch effects, we downloaded Varscan2-processed TCGA somatic mutation data from the GDC portal (https://portal.gdc.cancer.gov/) and found that gene mutation frequencies were highly consistent between the two databases (Supplementary Fig. 11c). In addition, the p-values computed in the gene differential mutation analysis (with the formula mentioned above) were also highly correlated (Supplementary Fig. 11d). TC analysis with RNA-Seq and microarray data To perform TC analysis with RNA-Seq data, we first rank-transformed gene RPKM values for each CCLE cell line and then ranked all the genes according to their rank variation across all CCLE cell lines. The 1000 most-varied genes were kept as "marker genes" (we tried different gene-set sizes in early preliminary analysis and did not find large variation in the results, so we chose the 1000 most-varied genes in this study). Given the RNA-Seq profiles of a cell line (or an organoid) and several patient samples, we computed the Spearman rank correlation (across the 1000 marker genes) between the cell line (organoid) and each sample, and the median of the computed Spearman rank correlation values was defined as the transcriptome similarity of the cell line (organoid) with the patient samples. For microarray data, a similar procedure was applied and the 1000 most-varied probe sets were used to compute correlation values. We also extended the above method to compute CNV similarity. Instead of selecting "marker genes", all of the 1630 commonly genotyped genes were used. PAM50 subtyping and t-SNE visualization The genefu package was used to determine breast cancer subtypes41,42. To visualize tumor samples with t-SNE, we first computed the pair-wise distance between every two samples as 1 minus the Spearman rank correlation across the PAM50 genes and then applied the function Rtsne to perform 2D reduction43. PubMed search The number of PubMed abstracts or full texts mentioning a CCLE breast cancer cell line was determined using the PubMed Search feature on 10 May 2018 (https://www.ncbi.nlm.nih.gov/pubmed/). For each cell line, we searched with the keyword "[cell line name] metastasis". We repeated this step for the terms "metastatic", "breast cancer", and "metastatic breast". 
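A minimal sketch of how such PubMed counts could be collected programmatically, here using Biopython's Entrez interface; this tooling choice is an assumption for illustration (the original counts were taken from the PubMed web interface, and results depend on the search date).

```python
from Bio import Entrez

# NCBI asks for a contact e-mail with every Entrez request; replace as needed.
Entrez.email = "your.name@example.org"

def pubmed_count(term):
    """Number of PubMed records matching a query string."""
    handle = Entrez.esearch(db="pubmed", term=term)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for cell_line in ["MCF7", "MDA-MB-231", "T47D"]:
    query = f'"{cell_line}" metastasis'
    print(cell_line, pubmed_count(query))
```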
These searches returned highly correlated results, so we used the search term that returned the most results: "[cell line name] metastasis". Identification of differentially expressed genes and differentially activated gene sets DESeq2 was used to identify differentially expressed genes (FDR < 0.001 and abs(log2FC) > 1) and the DAVID bioinformatics server was used to perform Gene Ontology enrichment analysis44,45. In our DE analysis, only protein-coding genes were considered. In the DA analysis, we first used the R package GSVA to compute ssGSEA scores for the 50 MSigDB hallmark gene sets (http://software.broadinstitute.org/gsea/msigdb/)46,47,48,49. Then, for each gene set the two-sided Wilcoxon rank-sum test was used to assign the p-value in the comparison of ssGSEA scores between MET500 samples and cell lines (or organoids). Software tools and statistical methods All of the analysis was conducted in R. The ggplot2 and ComplexHeatmap packages were used for data visualization50,51. Tumor purity was estimated using ESTIMATE16. CNTools was used to map the segmented CNV data to genes52. If not specified, the two-sided Wilcoxon rank-sum test was used to compute the p-value in hypothesis testing. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. The authors declare that all data supporting the findings of this study are available within the article and its supplementary information files or from the corresponding author upon reasonable request. The data are also available at SYNAPSE (https://www.synapse.org/) under accession number syn18403108. The source data underlying Figs. 2–6 and Supplementary Figs. S1–S8, S10, S11 are provided as a Source Data file. Code availability The code is available at GitHub: https://github.com/Bin-Chen-Lab/MetaBreaCellLine. Iorio, F. et al. A landscape of pharmacogenomic interactions in cancer. Cell 166, 740–754 (2016). Wilding, J. L. & Bodmer, W. F. Cancer cell lines for drug discovery and development. Cancer Res. 74, 2377–2384 (2014). Ertel, A., Verghese, A., Byers, S. W., Ochs, M. & Tozeren, A. Pathway-specific differences between tumor cell lines and normal and tumor tissue cells. Mol. Cancer 5, 55 (2006). Gillet, J.-P. et al. Redefining the relevance of established cancer cell lines to the study of mechanisms of clinical anti-cancer drug resistance. Proc. Natl. Acad. Sci. USA 108, 18708–18713 (2011). Weinstein, J. N. et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nat. Genet. 45, 1113–1120 (2013). Barretina, J. et al. The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity (vol 483, pg 603, 2012). Nature 492, 290 (2012). Domcke, S., Sinha, R., Levine, D. A., Sander, C. & Schultz, N. Evaluating cell lines as tumour models by comparison of genomic profiles. Nat. Commun. 4, 2126 (2013). Chen, B., Sirota, M., Fan-Minogue, H., Hadley, D. & Butte, A. J. Relating hepatocellular carcinoma tumor samples and cell lines using gene expression data in translational research. BMC Med. Genomics 8, S5 (2015). Jiang, G. et al. Comprehensive comparison of molecular portraits between cell lines and tumors in breast cancer. BMC Genomics 17, 525 (2016). Lambert, A. W., Pattabiraman, D. R. & Weinberg, R. A. Emerging biological principles of metastasis. Cell 168, 670–691 (2017). Mehlen, P. & Puisieux, A. Metastasis: a question of life or death. Nat. Rev. Cancer 6, 449–458 (2006). Robinson, D. R. et al. 
Integrative clinical genomics of metastatic cancer. Nature 548, 297–303 (2017). Lefebvre, C. et al. Mutational profile of metastatic breast cancers: a retrospective analysis. PLoS Med. 13, e1002201 (2016). Bartels, S. et al. Estrogen receptor (ESR1) mutation in bone metastases from breast cancer. Mod. Pathol. 31, 56–61 (2018). Sandberg, R. & Ernberg, I. Assessment of tumor characteristic gene expression in cell lines using a tissue similarity index (TSI). Proc. Natl. Acad. Sci. USA 102, 2052–2057 (2005). Yoshihara, K. et al. Inferring tumour purity and stromal and immune cell admixture from expression data. Nat. Commun. 4, 2612 (2013). Kang, Y. B. et al. A multigenic program mediating breast cancer metastasis to bone. Cancer Cell 3, 537–549 (2003). Minn, A. J. et al. Genes that mediate breast cancer metastasis to lung. Nature 436, 518–524 (2005). Nguyen, Q. H. et al. Profiling human breast epithelial cells using single cell RNA sequencing identifies cell diversity. Nat. Commun. 1–12. https://doi.org/10.1038/s41467-018-04334-1 (2018). Weeber, F., Ooft, S. N., Dijkstra, K. K. & Voest, E. E. Tumor organoids as a pre-clinical cancer model for drug discovery. Cell Chem. Biol. 24, 1092–1100 (2017). Drost, J. & Clevers, H. Organoids in cancer research. Nat. Rev. Cancer 18, 407–418 (2018). Risso, D., Ngai, J., Speed, T. P. & Dudoit, S. Normalization of RNA-seq data using factor analysis of control genes or samples. Nat. Biotechnol. 32, 896 (2014). Prat, A. et al. Phenotypic and molecular characterization of the claudin-low intrinsic subtype of breast cancer. Breast Cancer Res. 12, R68 (2010). Broutier, L. et al. Human primary liver cancer-derived organoid cultures for disease modeling and drug screening. Nat. Med. 23, 1424+ (2017). Chen, B. et al. Computational discovery of niclosamide ethanolamine, a repurposed drug candidate that reduces growth of hepatocellular carcinoma cells in vitro and in mice by inhibiting cell division cycle 37 signaling. Gastroenterology, 2022–2036. https://doi.org/10.1053/j.gastro.2017.02.039 (2017). Chen, B. et al. Reversal of cancer gene expression correlates with drug efficacy and reveals therapeutic targets. Nat. Commun. 8, 16022 (2017). Wang, Y. et al. Systematic identification of non-coding pharmacogenomic landscape in cancer. Nat. Commun. https://doi.org/10.1038/s41467-018-05495-9. Li, B. & Dewey, C. N. RSEM: Accurate transcript quantification from RNA-Seq data with or without a reference genome. BMC Bioinformatics 12, 323 (2011). Li, B., Ruotti, V., Stewart, R. M., Thomson, J. A. & Dewey, C. N. RNA-Seq gene expression estimation with read mapping uncertainty. Bioinformatics 26, 493–500 (2010). Cibulskis, K. et al. Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nat. Biotechnol. 31, 213 (2013). Gao, J. et al. Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal. Sci. Signal. 6, pl1 (2013). Cerami, E. et al. The cBio Cancer Genomics Portal: an open platform for exploring multidimensional cancer genomics data (vol 2, pg 401, 2012). Cancer Discov. 2, 960 (2012). Vivian, J. et al. Toil enables reproducible, open source, big biomedical data analyses. Nat. Biotechnol. 35, 314–316 (2017). Sachs, N. et al. A living biobank of breast cancer organoids captures disease heterogeneity. Cell 172, 373+ (2018). Landemaine, T. et al. A six-gene signature predicting breast cancer lung metastasis. Cancer Res. 68, 6092–6099 (2008). Xu, J. et al. 
14-3-3ζ turns TGF-β's function from tumor suppressor to metastasis promoter in breast cancer by contextual changes of Smad partners from p53 to Gli2. Cancer Cell 27, 177–192 (2015). Zhang, X. H.-F. et al. Latent bone metastasis in breast cancer tied to Src-dependent survival signals. Cancer Cell 16, 67–78 (2009). Foukakis, T. et al. Gene expression profiling of sequential metastatic biopsies for biomarker discovery in breast cancer. Mol. Oncol. 9, 1384–1391 (2015). Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate—a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 57, 289–300 (1995). Koboldt, D. C. et al. VarScan 2: somatic mutation and copy number alteration discovery in cancer by exome sequencing. Genome Res. 22, 568–576 (2012). Bernard, P. S. et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 27, 1160–1167 (2009). Gendoo, D. M. A. et al. Genefu: an R/Bioconductor package for computation of gene expression-based signatures in breast cancer. Bioinformatics 32, 1097–1099 (2016). Van Der Maaten, L. J. P. & Hinton, G. E. Visualizing high-dimensional data using t-sne. J. Mach. Learn. Res. 9, 2579–2605 (2008). Love, M. I., Huber, W. & Anders, S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 15, 550 (2014). Huang, D. W., Sherman, B. T. & Lempicki, R. A. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat. Protoc. 4, 44–57 (2009). Subramanian, A. et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl. Acad. Sci. USA 102, 15545–15550 (2005). Liberzon, A. et al. Molecular signatures database (MSigDB) 3.0. Bioinformatics 27, 1739–1740 (2011). Liberzon, A. et al. The Molecular Signatures Database hallmark gene set collection. Cell Syst. 1, 417–425 (2015). Haenzelmann, S., Castelo, R. & Guinney, J. GSVA: gene set variation analysis for microarray and RNA-Seq data. BMC Bioinformatics 14, 7 (2013). The ComplexHeatmap package. https://bioconductor.org/packages/release/bioc/html/ComplexHeatmap.html. The ggplot2 package. https://cran.r-project.org/web/packages/ggplot2/index.html. Jianhua Zhang. The CNTools package. https://bioconductor.org/packages/release/bioc/html/CNTools.html. We thank Dr. Nijman of Utrecht Medical Center for providing us organoid RNA-Seq data. We thank Li Huang for helping us create Fig. 1a–e. The research is supported by R21 TR001743 and K01 ES028047 and the MSU Global Impact Initiative. The content is solely the responsibility of the authors and does not necessarily represent the official views of sponsors. Department of Pediatrics and Human Development, College of Human Medicine, Michigan State University, Grand Rapids, 49503, MI, USA Ke Liu, Patrick A. Newbury & Bin Chen Department of Pharmacology and Toxicology, College of Human Medicine, Michigan State University, Grand Rapids, 49503, MI, USA Bakar Computational Health Sciences Institute, University of California San Francisco, San Francisco, 94158, CA, USA Benjamin S. Glicksberg & William Z. D. Zeng Health Informatics and Bioinformatics, School of Computing and Information Systems, Grand Valley State University, Grand Rapids, 49504, MI, USA Shreya Paithankar Department of Physiology, Michigan State University, East Lansing, 48824, MI, USA Eran R. Andrechek Ke Liu Patrick A. Newbury Benjamin S. Glicksberg William Z. D. Zeng Bin Chen B.C. and K.L. conceived the study. K.L. 
performed the majority of the computational analysis with help from S.P., P.A.N., B.S.G., W.Z.D.Z., E.R.A., and B.C. B.C. supervised the study. All authors contributed to writing, reviewing, and editing the manuscript and approved the manuscript. Correspondence to Bin Chen. The authors declare no competing interests. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Description of Additional Supplementary Files Supplementary Data 1 Source data Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Liu, K., Newbury, P.A., Glicksberg, B.S. et al. Evaluating cell lines as models for metastatic breast cancer through integrative analysis of genomic data. Nat. Commun. 10, 2138 (2019). https://doi.org/10.1038/s41467-019-10148-6 This article is cited by: Prognostic effect of a novel long noncoding RNA signature and comparison with clinical staging systems for patients with hepatitis B virus-related hepatocellular carcinoma after hepatectomy, Tian Tian Liu, Yue Min Feng, Xiao Yu Xie, Xiao Nan Su, Jian Ni Qi, Qiang Zhu & Cheng Yong Qin, Journal of Digestive Diseases (2020). Optimal, Large-Scale Propagation of Mouse Mammary Tumor Organoids, Emma D. Wrenn, Breanna M. Moore, Erin Greenwood, Margaux McBirney & Kevin J. Cheung, Journal of Mammary Gland Biology and Neoplasia (2020). 'Omics Approaches to Explore the Breast Cancer Landscape, Joseph Parsons & Chiara Francavilla, Frontiers in Cell and Developmental Biology (2020). OCTAD: an open workspace for virtually screening therapeutics targeting precise cancer patient groups using gene expression features, Billy Zeng, Benjamin S. Glicksberg, Patrick Newbury, Evgeny Chekalin, Jing Xing, Ke Liu, Anita Wen, Caven Chow & Bin Chen, Nature Protocols (2020). Histone deacetylase (HDAC) inhibitors and doxorubicin combinations target both breast cancer stem cells and non-stem breast cancer cells simultaneously, Ling-Wei Hii, Felicia Fei-Lei Chung, Jaslyn Sian-Siu Soo, Boon Shing Tan, Chun-Wai Mai & Chee-Onn Leong, Breast Cancer Research and Treatment (2020).
base rate fallacy positive predictive value Consider the $2\times2$ table below, where testing positive or negative corresponds to rejecting or not rejecting H$_{0}$, and the truth being positive or negative means that H$_{0}$ is false or true, respectively. The lower prevalence there is of a trait in a studied population, the greater the chance that a test will return a false positive. Why is training regarding the loss of RAIM given so much more emphasis than training regarding the loss of SBAS? Serology tests could provide epidemiologists with vital data on how COVID-19 is spreading through a community, and also lead to the issuing of "immunity passports" for individuals who have beaten back the infection. However, it is important to remember that a highly accurate test may not be as comforting as it first appears, and therefore the results of such assays should always be viewed with thoughtful reflection. Confronted with this data, I still believe there is a low chance that my friend has ESP because my prior probability was so low. Required fields are marked *. The correct answer to the question, 0.0909, is called in medical science the positive-predictive value of the test. The base rate fallacy is a tendency to focus on specific information over general probabilities. the probability that we made a true rejection) is sensitive to the base rate of cancer drugs that actually work. In case it is still not completely clear that the base rate fallacy is indeed a fallacy, lets employ a thought experiment with an extreme case. Put another way, there is an almost 70 percent probability in that case that the test will falsely indicate a person has antibodies. If so, how do they cope with it? MathJax reference. The confidence that we should have in an antibody test depends on the base rate of the coronavirus, a key factor which is often ignored. Your email address will not be published. Say we have setup a hypothesis test to check if the average height differs between males and females for a specific sample we collected. If Jedi weren't allowed to maintain romantic relationships, why is it stressed so much that the Force runs strong in the Skywalker family? Almost half said 95%, with the average answer being 56%. At the normative level, the base rate fallacy should be rejected because few tasks map unambiguously into the narrow framework that is held up as the standard of good decision making. In the table, the null hypothesis being true is the left column, and $\alpha$ (your willingness to reject the null when the null is true) is the number of false negatives over the total truly negative (or one minus the specificity of the test). 10 Here, this fallacy is described as "people's tendency to ignore base rates in favor of, e.g., individuating information (when such is available), rather than integrate the two" (p. 211). It then calculates a hundred hypothesis tests and concludes that. "One in a thousand people have a prevalence for a particular heart disease. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. False negative rate of 7.5% The prosecutor's fallacy would say that since the false positive rate is 0.1%, the positive test means that the suspect was 99.9% likely to have actually committed the crime (or at least, something close to this amount). 
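The $2\times2$ table referred to at the start of this passage is not reproduced in this copy. A standard version of it (a reconstruction, not the original figure) has the test outcome as rows (positive, i.e., reject H$_{0}$; negative, i.e., do not reject H$_{0}$) and the truth as columns (H$_{0}$ false; H$_{0}$ true), so the four cells are the true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). In that notation, sensitivity is TP/(TP+FN), specificity is TN/(TN+FP), and the positive predictive value is TP/(TP+FP); the last of these falls as the condition becomes rarer in the tested population, which is the point this passage keeps returning to.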
But the predictive value of an antibody test with 90 percent accuracy could be as low as 32 percent if the base rate of infection in the population is 5 percent. Criminal Intent Prescreening and the Base Rate Fallacy. What is the difference between policy and consensus when it comes to a Bitcoin Core node validating scripts? Although immunological assays appear to offer a promising path forward, does a positive test mean you should feel confident to work, shop, and socialise without getting sick or infecting others? I.e. Is there a way to notate the repeat of a larger section that itself has repeats in it? It's called the base rate fallacy and it's counter-intuitive, to say the least. For manyyears, the so-called base rate fallacy, with its distinctive name and arsenal of catchy Altman, D. G. and Bland, J. M. (1994). In a classic and widely-referenced study, the following question was put to 60 students and staff at Harvard Medical School. Even deploying more accurate tests cannot change the statistical reality when the base rate of infection is very low. Why is frequency not measured in db in bode's plot? Altman, D. G. and Bland, J. M. (1994). Does a regular (outlet) fan work for drying the bathroom? The concepts of sensitivity and specificity, positive and negative predictive value, and the base rate fallacy are discussed. Even deploying more accurate tests cannot change the statistical reality when the base rate of infection is very low. revealed that 16% of positive results would be false even when using a test with 99% sensitivity and specificity. Base-rate Fallacy Example. Additionally, a recent study published in the journal Public Health revealed that 16% of positive results would be false even when using a test with 99% sensitivity and specificity. The samples? If the base rate is lowered (that vertical line shifts left), you can see that true positives shrink relative to false positives and therefore the PPV gets smaller (i.e. Suppose I am testing a hundred potential cancer medications. I have clarified the contents of the table in a new paragraph. "I think we're going to see [antibody testing] explode," commented Mitchell Grayson, chief of allergy and immunology at Nationwide Children's Hospital and Ohio State University in Columbus. To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Does false discovery rate depend on the p-value or only on the alpha level? In a notional population of 100,000 individuals, 950 people will therefore be incorrectly informed they have had the infection. The truncation value is usually 40 but I have seen 45. " —Fannie Hurst (1889–1968) " Time, force, and death Do to this body what extremes you can, That's right, you have to know how many people test positive in the population as a whole before you can judge the predictive value of a test. Because the base rate of effective cancer drugs is so low – only 10% of our hundred trial drugs actually work – most of the tested drugs do not work, and we have many opportunities for false positives. Empirical research on base rate usage has been domi­ nated by the perspective that people ignore base rates and that it is an errorto do so. Effects of Different Levels of Base Rate, Sensitivity, and Specificity on Classification Accuracy. Famous quotes containing the words fallacy, base and/or rate: " It would be a fallacy to deduce that the slow writer necessarily comes up with superior work. 5) + ( 8) × (. But this is another example of the base rate fallacy. 
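A minimal sketch of the positive-predictive-value arithmetic behind figures like those quoted in this passage; the function and the numbers plugged in below are illustrative and are not code or data from the sources being quoted.

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# An antibody test with 90% sensitivity and 95% specificity, screening a
# population in which 5% have actually been infected, gives a PPV near 49%.
print(round(ppv(0.05, 0.90, 0.95), 2))   # 0.49

# The classic screening question: 1-in-1000 prevalence, perfect sensitivity,
# 5% false positive rate -> roughly 2%, not the 95% most respondents guess.
print(round(ppv(0.001, 1.00, 0.95), 3))  # 0.02
```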
The Bayes Theorem is named after Reverend Thomas Bayes (1701–1761) whose manuscript reflected his solution to the inverse probability problem: computing the posterior conditional probability of an event given known prior probabilities related to the event and relevant conditions. Either my friend has ESP, which is why he was able to correctly predict all 10 flips, or my friend doesn't have ESP and was lucky. Many people who answer the question focus on the 5% false positive rate and exclude the general statistic that 999 out of 1000 students are innocent. I am skeptical, so I think there is an extremely small possibility that my friend has ESP. Diagnostic tests 1: sensitivity and specificity. I.e. If you imagine that the area in each quadrant of the table is proportional to the number in each quadrant, and further, imagine that the vertical line down the center of the $2 \times2$ table represents the base rate (e.g. Shuster is trying to have his cake and eat it in his criticism of statistics in clinical practice.1 He highlights that breast cancer screening is a "bad" test (by which I think he means it has a low positive predictive value), but it is precisely because we can calculate this probability that we know the relative utility of the test. By contrast, the $p$-value is the probability of observing your data, if in fact the null hypothesis is true. At this same disease prevalence, the CDC found that a test with 90% sensitivity and 95% specificity would yield a positive predictive value (PPV) of 49%. The inability of intelligent minds to apply simple mathematical reasoning and arrive at the correct value of 2% clearly demonstrates the aforementioned base rate fallacy. Methods The concepts of sensitivity and specificity, positive and negative predictive value, and the base rate fallacy are discussed. It only takes a minute to sign up. Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization. 2) × (. Powered by Tom, Hamish & Aaron. these findings mean that we are all at risk of getting infected and spreading the virus, even if we've had a positive antibody test. [6] Conjunction fallacy – the assumption that an outcome simultaneously satisfying multiple conditions is more probable than … The Base Rate Fallacy: why we should be cautious with anti-body testing results. "In other words, less than half of those testing positive will truly have antibodies," according to the agency. how does this apply to a single hypothesis test performed on a single sample? Koehler: Base rate fallacy superiority of the nonnative rule reduces to an untested empirical claim. At this same disease prevalence, the CDC found that a test with 90% sensitivity and 95% specificity would yield a positive predictive value (PPV) of 49%. By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. In the case of a single hypothesis test: (1) Reject H$_{0}$ height of men equals height of women; (2) pose the questions (i) what is the prevalence of. 5) (. The positive predictive value is sometimes called the positive predictive agreement, and the negative predictive value is sometimes called the negative predictive agreement. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. just because you rejected the null hypothesis for a drug means that you still probably made a false rejection). 
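Because the passage keeps stressing that the answer is driven by prevalence, here is a small illustrative sweep using the same 90%/95% test characteristics; the prevalence values below are arbitrary and chosen only to show the trend.

```python
def ppv(prevalence, sensitivity=0.90, specificity=0.95):
    """Positive predictive value for a test applied at a given prevalence."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for prevalence in (0.001, 0.01, 0.05, 0.20, 0.50):
    print(f"prevalence {prevalence:>6.1%} -> PPV {ppv(prevalence):5.1%}")
# The same "90/95" test yields a PPV below 2% at 0.1% prevalence but above
# 90% once half of the screened population actually has the condition.
```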
Information and translations of base rate fallacy in the most comprehensive dictionary definitions resource on the web. the probability that we made a true rejection) is sensitive to the base rate of cancer drugs that actually work. In the context of coronavirus infection, the predictive value of a test with 90% accuracy could be as low as 32% if the true population prevalence is 5%. The margins sum the rows and columns, and the sum of row margins equals the sum of column margins equals the total number of tests. @redblackbit As an example, suppose I am interested in trying to determine whether or not my friend has ESP. This simple fact is essential to understanding the accuracy of serology-based testing. Probability of correctly predicting disorder= (base rate of disorder) × (true positive rate) (base rate of disorder × true positive rate) + (1- base rate of disorder) × (false positive rate) For this example, the result is: Probability of correctly predicting disorder = (. A generic information about how frequently an event occurs naturally. There seems to be scant relationship between prolificness and quality. The base rate probability of one random inhabitant of the city being a terrorist is thus 0.0001 and the base rate probability of a random inhabitant being a non-terrorist is 0.9999. Commenting on these results, the Infectious Disease Society of America stated that: "A positive test result is more likely a false-positive result than a true positive result." This is particularly dangerous since it could lead to potentially susceptible hosts believing they have been infected with coronavirus, and acting as if they have immunity, when this is not the case. The test is 100% accurate for people who have the disease and is 95% accurate for those who don't (this means that 5% of people who do not have the disease will be wrongly diagnosed as having it). In reality, however, the correct answer was just below 2%. In the U.S., for example, this appears to be between five and 15%. 2) × (. If before collecting your data you believe it is extremely unlikely that your alternative hypothesis is true, then it's ok to still be skeptical of the alternative even after seeing a low p-value. In studies investigating clinicians' use of base rate information, participants typically overestimate PPV and often respond erroneously that the predictive value of a test is equivalent to the test's sensitivity or specificity (e.g., Casscells, Schoenberger, & Graboys, 1978; Heller, Saltzstein, & Caspe, 1992). The base rate fallacy has to do with specialization to different populations, which does not capture a broader misconception that high accuracy implies both low false positive and low false negative rates. Login . The positive predictive value (PPV; the probability that a drug actually working, given that we rejected the null hypothesis that it had no effect—i.e. Is p-value also the false discovery rate? 1. I.e. On the surface, this makes sense – after all, a test accuracy above 90% is fairly high. © 2020 Copyright The Boar. Despite this, antibody tests remain an important tool in the fight against coronavirus and we should therefore encourage greater access to them; healthy people who have antibodies in their blood and have tested positive for the virus in the past (but are now symptom-free) can donate blood plasma, which may be used as a possible treatment for COVID-19. 
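As a worked version of the drunk-driving arithmetic above (1 driver in 1,000 is drunk, every drunk driver tests positive, and 5% of the 999 sober drivers are falsely flagged):

$$P(\text{drunk}\mid \text{positive})=\frac{1\times 0.001}{1\times 0.001+0.05\times 0.999}=\frac{1}{1+49.95}=\frac{1}{50.95}\approx 0.02,$$

i.e., only about a 2% chance that a flagged driver is actually drunk, precisely because sober drivers so heavily outnumber drunk ones.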
Base rate fallacy – making a probability judgment based on conditional probabilities, without taking into account the effect of prior probabilities. When evaluating the probability of an event―for instance, diagnosing a disease, there are two types of information that may be available. The STANDS4 Network ... are used in place of positive predictive value and negative predictive value, which depend on both the test and the baseline prevalence of event. The PPV and NPV describe the performance of a diagnostic test or other statistical measure. Typically specificity, 1- the false positive rate, is reported as 99.9%, not 100%, when there are no false positives. Therefore, the probability that one of the drivers among the 1 + 49.95 = 50.95 positive test results really is drunk is. So, if the null hypothesis is true, and the base rate is low, the $p$ value being small enough to reject, even if it is very small, means that you are probably seeing a false positive. 999 drivers are not drunk, and among those drivers there are 5% false positive test results, so there are 49.95 false positive test results. @redblackbit I believe the intuition you may be missing regarding individual hypothesis tests is to think about your prior probabilities regarding which of the hypotheses is true. BMJ, 308:1552. In a city of 1 million inhabitants there are 100 known terrorists and 999,900 non-terrorists. lowering the prevalence lowers also the number of samples that turn out to be True Positives? What happens when the agent faces a state that never before encountered? Use MathJax to format equations. Another early explanation of the base rate fallacy can be found in Maya Bar-Hillel's 1980 paper, "The base-rate fallacy in probability judgments". Thanks for contributing an answer to Cross Validated! Anor Londo Elevator, Bootstrap Accordion With Arrow, Phuket News Tv, Potato Salad With Spinach And Bacon, Zinnia Meaning Name, Electrical Apprenticeship Washington State, For Rent By Owner Franklin, Tn, Water Restrictions Citrus County Fl, Avapro A Vendre En Ligne Achat Prednisone Pilule En Ligne. collegemajor.la comprimés de Avodart pas cher | Avodart pas cher Le Viagra Oral Jelly Est Il En Vente Libre Glucophage Pharmacie En Ligne Francaise Pas Cher | Service d'assistance en ligne 24h © Copyright 2012 - | www.collegemajor.la
CommonCrawl
In calculus, a method called implicit differentiation makes use of the chain rule to differentiate implicitly defined functions. For an implicit function $y(x)$ defined by an equation $R(x, y) = 0$, it is generally not possible to solve explicitly for $y$ and then differentiate. Instead, one can differentiate $R(x, y)$ with respect to $x$ and $y$ and then solve the resulting linear equation in $dy/dx$ to get the derivative explicitly in terms of $x$ and $y$. Recent examples include $\frac{d}{dy}\left(y=\frac{\sin\left(x\right)}{\cos\left(x\right)}\right)$, $\frac{d}{dx}\left(y=\sqrt{x^3\div\tan\left(x\right)}\right)$, $\frac{d}{dx}\left(x^3-y^2=xy-8\right)$, $\frac{d}{dx}\left(\sqrt{x}+\sqrt{y}=4y^2\right)$, $\frac{d}{dx}\left(6\sqrt{y}=x-y\right)$, $\frac{d}{dx}\left(x^2+y^2=16\right)$, $\frac{d}{dx}\left(y=\left(xy\div\left(x^2+y^2\right)\right)\right)$, $\frac{d}{dx}\left(x^3+y^3=xy\right)$, $\frac{d}{dx}\left(x^2+y=x^3+y^2\right)$, and $\frac{d}{dx}\left(x^2+y=2x\right)$. Related topics: higher-order derivatives and differentiation of inverse trigonometric functions.
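As a sketch of this procedure (using SymPy, and picking the circle equation $x^2+y^2=16$ from the list above; the variable names are arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)                 # treat y as an implicit function of x

R = x**2 + y**2 - 16                    # R(x, y) = 0 for the circle x^2 + y^2 = 16

dR_dx = sp.diff(R, x)                   # the chain rule brings in dy/dx
dydx = sp.solve(dR_dx, sp.Derivative(y, x))[0]
print(dydx)                             # -x/y(x)
```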
CommonCrawl
4.01 Functions and relations 4.02 Domain and range and set notation 4.03 Evaluating functions 4.04 Review: Graphing linear relationships from a table of values 4.05 Review: Slope as rate of change 4.06 The slope formula Investigation: Exploring slopes and equations of lines 4.07 Slope-intercept form 4.08 Standard form Investigation: Relationship between shoulder span and height 4.09 Point-slope form 4.10 Graphical solutions to linear equations or inequalities 4.11 Graphing linear inequalities in two variables 4.12 Modeling linear relationships 4.13 Comparing linear relationships

What's the rate of change? Recall that a rate is a ratio between two measurements with different units. When we graph these rates, the rate of change can be understood as the slope, or steepness, of a line. Further, when we look at equations in slope-intercept form (that is, $y=mx+b$, where $m$ is the slope), the larger the absolute value of $m$, the steeper the line. For example, a line with a slope of $4$ is steeper than a line with a slope of $\frac{2}{3}$. Similarly, a line with a slope of $-2$ is steeper than a line with a slope of $1$, even though one is positive and one is negative.

Increasing or decreasing? The rate of change in a graph can be increasing or decreasing. The lines below have increasing slopes. Notice how as the values on the $x$ axis increase, the values on the $y$ axis also increase. These next graphs have decreasing rates of change. Unlike graphs with a positive slope, as the values on the $x$ axis increase, the values on the $y$ axis decrease.

The rate of change of a line is a measure of how steep it is. In mathematics we also call this the slope. The rate of change is a single value that describes: whether a line is increasing (has positive slope), whether a line is decreasing (has negative slope), and how far up or down the line moves (how the $y$-value changes) with every step to the right (for every $1$ unit increase in the $x$-value). Take a look at this line, where the horizontal and vertical steps are highlighted: we call the horizontal measurement the run and the vertical measurement the rise. For this line, a run of $1$ means a rise of $2$, so the line has slope $2$. Sometimes it is difficult to measure how far the line goes up or down (how much the $y$-value changes) in $1$ horizontal unit, especially if the line doesn't line up with the grid points on the $xy$-plane. In this case we calculate the slope by using a formula: $\text{slope}=\frac{\text{rise}}{\text{run}}$, where you take any two points on the line whose coordinates are known or can be easily found, and look for the rise and run between them.

Slope from a graph. You can find the rise and run of a line by drawing a right triangle created by any two points on the line. The line itself forms the hypotenuse. This line has a slope of $\frac{\text{rise}}{\text{run}}=\frac{4}{3}$. In this case, the slope is positive, because over the $3$ unit increase in the $x$-values, the $y$-value has increased. If the $y$-value decreased as the $x$-value increased, the slope would be negative.

Slope in action. This applet allows you to see the rise and run between two points on a line of your choosing. Identify the vertical and then horizontal distances between the two endpoints. Is there a difference between the vertical distance (rise) in linear functions that are increasing and functions that are decreasing?
Does the order in which you express rise and run matter when you are stating the slope? Discuss with a partner whether there is a connection between equivalent ratios (rates of change) and slope. Does the order in which you express the rise and run matter when you express a rate of change? Created with GeoGebra

Slope of horizontal and vertical lines. Horizontal lines have no rise value: the $\text{rise}=0$. So the slope of a horizontal line is $\text{slope}=\frac{\text{rise}}{\text{run}}=\frac{0}{\text{run}}=0$. Vertical lines have no run value: the $\text{run}=0$. So the slope of a vertical line is $\text{slope}=\frac{\text{rise}}{\text{run}}=\frac{\text{rise}}{0}$. Division by $0$ results in the value being undefined.

Description of rate of change: $\text{slope}=\frac{\text{rise}}{\text{run}}$. Slope of a vertical line: undefined. Slope of a horizontal line: $0$. In $y=mx$, $m$ represents the slope of the line.

Applications of slope. Consider these scenarios: the horizontal speed of a projectile, the reproduction rate of a colony of bacteria, the revenue growth of a business over a year, the flow in and out of a body of water. In each of these cases, we are interested in how one measurement varies as another one does. That is, there is a dependent variable that varies with respect to an independent variable. However, we're not interested in the measurements themselves but in how they vary. We call this the rate of change.

Let's focus on the first scenario listed above. We can't directly measure speed, but we can measure distance and time. Notice that speed is the amount that distance changes per unit of time. That is, speed is the rate of change of distance with respect to time, so we can use our measurements of distance and time to figure out the speed. Suppose that the distance ($x$, in meters) is related to the time ($t$, in seconds) by the relationship $x=3t$. Let's plot this relationship first: notice that this is a linear function. Since speed is the rate of change of distance over time, we want to find out how the distance changes over any amount of time. Let's start by picking two points on the line. We'll use $\left(1,3\right)$ and $\left(3,9\right)$. First we find the change in the independent and dependent variables: change in time $=3-1=2$ s, and change in distance $=9-3=6$ m. Then we divide the change in distance by the change in time to get the rate of change: rate of change $=\frac{6}{2}=3$ m/s. So the speed is $3$ m/s. We will get the same result no matter which two points we choose. Notice that this is the same as the slope of a linear function. In fact, this is always the case when the function is linear. The rate of change of a dependent variable with respect to an independent variable is how much the dependent variable changes as the independent variable changes. In the case of a linear function, the rate of change is the slope. Note that the independent variable is most often time, but can be anything else.

What kind of slope does the following line have?

Gasoline costs a certain amount per gallon. The table shows the cost of various amounts of gasoline.
Number of gallons ($x$): $0$, $10$, $20$, $30$, $40$
Cost of gasoline ($y$): $0$, $12.70$, $25.40$, $38.10$, $50.80$
Write an equation linking the number of gallons of gasoline pumped ($x$) and the cost of the gasoline ($y$).
How much does gasoline cost per gallon? How much would $73$ gallons of gasoline cost at this unit price? In the equation $y=1.27x$, what does $1.27$ represent? The number of gallons of gasoline pumped. The total cost of gasoline pumped. The unit rate of cost of gasoline per gallon. A1.2.2.1.1 Identify, describe, and/or use constant rates of change.
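A small Python sketch of the unit-rate calculation for this exercise (the table values come from the lesson; 73 gallons is the quantity asked about above):

```python
gallons = [0, 10, 20, 30, 40]
cost = [0, 12.70, 25.40, 38.10, 50.80]

# Rate of change (slope) from any two points in the table: rise over run
unit_price = (cost[1] - cost[0]) / (gallons[1] - gallons[0])
print(unit_price)        # 1.27 dollars per gallon, so y = 1.27x

# Cost of 73 gallons at this unit price
print(unit_price * 73)   # 92.71
```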
CommonCrawl
A Novel Conductometric Urea Biosensor with Improved Analytical Characteristic Based on Recombinant Urease Adsorbed on Nanoparticle of Silicalite T. P. Velychko1,2, О. О. Soldatkin1, V. G. Melnyk3, S. V. Marchenko1, S. K. Kirdeciler4,5, B. Akata4,5, A. P. Soldatkin1,2, A. V. El'skaya1 & S. V. Dzyadevych1,2 Development of a conductometric biosensor for the urea detection has been reported. It was created using a non-typical method of the recombinant urease immobilization via adsorption on nanoporous particles of silicalite. It should be noted that this biosensor has a number of advantages, such as simple and fast performance, the absence of toxic compounds during biosensor preparation, and high reproducibility (RSD = 5.1 %). The linear range of urea determination by using the biosensor was 0.05–15 mM, and a lower limit of urea detection was 20 μM. The bioselective element was found to be stable for 19 days. The characteristics of recombinant urease-based biomembranes, such as dependence of responses on the protein and ion concentrations, were investigated. It is shown that the developed biosensor can be successfully used for the urea analysis during renal dialysis. Urea [(NH2)2CO] is synthesized in the liver and is the final product of detoxification of endogenous ammonia, which is formed due to the decay of proteins and other nitrogen-containing compounds. The synthesized urea is released from the liver into the blood and transported to the kidneys where it is filtered and excreted with the urine. Normally, the urea concentration in humans ranges from 2.5 to 7.5 mM [1], but the rate of its synthesis, and thus the concentration, increase partially if either the protein-rich food is used, or endogenous catabolism is enhanced under the conditions of starvation, or the tissues are damaged, etc. However, a drastically elevated level of urea (50–150 mM) in the blood plasma indicates a kidney dysfunction. Such abnormal level of urea may be reduced to 10 mM by hemodialysis or peritoneal dialysis [2]. Therefore, determination of the urea concentration is of vital importance in biomedical and clinical assays. To this end, numerous methods are developed including gas chromatography [3], spectrophotometry [4, 5], and fluorometry [6]. The disadvantages of the above methods are dependence of the results on the sample pretreatment, long-time procedure, the need for highly qualified personnel, and impossibility of online measurements. An alternative to the above methods is the use of biosensors—miniature analytical devices without the drawbacks listed. Numerous biosensors have been developed to date for urea analysis in biological samples including potentiometric [7–9], conductometric [10–12], and amperometric [13–15]. However, all of them have two significant disadvantages. First, they have rather a narrow linear range of determination and it is a characteristic trait of urease-based biosensors, which are used in urea assays. To solve this challenge, earlier, we have proposed recombinant urease from E. coli with high Km to shift the linear range to higher urea concentrations [16]. Another drawback of the known urea biosensors is associated with the immobilization of biological material on the surface of transducers. Urease can be immobilized by covalent binding [17], physical adsorption [18], binding with polymers [14, 19, 20], or coupling to the transducer surface [21, 22]. Some problems are intrinsic for these methods. 
They are as follows: the loss of enzyme activity, unstable reproducibility of biosensor signals, and toxicity of the compounds, which induce the binding. The latter is a particular problem in the determination of the enzyme activity in biological samples. To overcome these difficulties, zeolites were proposed as carriers for enzyme adsorption. The zeolites are slightly toxic and highly resistant to mechanical, chemical, and thermal injuries [23]; therefore, the zeolite-based biosensors can be used for multicomponent biological samples. This method of immobilization demonstrated promising results in a number of enzyme biosensors [24–26]. To create the biosensor for urea determination in biological samples, it was necessary to address the described problems simultaneously. This study was aimed at the development of the biosensor for highly accurate and stable determination of urea in a wide range of concentrations. For the purpose, it was proposed to use recombinant urease adsorbed on the surface of zeolite-modified conductometric transducers. The enzyme urease (EC 3.5.1.5) from E. coli was used in the work, activity 150 U/mg, produced from "USBiological" (USA). Bovine serum albumin (BSA, fraction V) and urea were obtained from "Sigma-Aldrich Chemie" (Germany). Working buffer was phosphate buffer (KH2P04-Na0H), pH 7.4, from "Helicon" (Moscow, Russia). Other inorganic compounds used were of analytical reagent grade. Silicalite was synthesized in the Middle-East Technical University (Ankara, Turkey). To synthesize the silicalite crystals, the gel 1TPAOH:4TEOS:350 H2O was prepared. To obtain the formula, tetraethoxysilane (TEOS) and tetrapropylammonium hydroxide (TPAOH) were mixed with distilled water under constant stirring for 6 h at room temperature. The crystallization took place at 125 °C for 18 h. The resulting solid material was washed four times with distilled water under centrifugation. The products were dried at 100 °C overnight. The size of silicalite particles was approximately 250 nm. Conductometric Transducers The conductometric transducers used in the work were produced at V.Ye. Lashkarev Institute of Semiconductor Physics, NASU (Kyiv, Ukraine) in accordance with our recommendations. They were 5 × 30 mm in size and consisted of two pairs of identical gold interdigitated electrodes on the sital substrate. The transducer design, preparation, and application are presented in detail in [27]. Scheme of Experimental Setup for Conductometric Measurements The portable conductometry МХР-3, developed and manufactured at the Institute of Electrodynamics, NASU (Fig. 1) served as a measuring device. The sensor block consisted of differential conductometric transducer (1), holder for conductometry (2), and support (3). Under measurements, the working cell (4) with test solutions is placed on the support, and the whole sensor block is set on the magnetic stirrer (5). Scheme of connection of MXP-3 device in a system for conductometric measurements The portable device MXP-3 (6) is connected to the electrical supply network via adapter (7), to the sensor block—with wires via contact (8), and to the personal computer (9) with a suite of related software—via contact (10). The current of frequency 37 kHz and amplitude 14 mV was used. First, conductometric transducer (1) was connected to the holder (2) and an initial baseline was obtained. Then, the tested substance was added to the working cell. The responses were recorded on a personal computer screen. 
Preparation of Bioselective Elements The procedure of urease adsorption on silicalite was developed earlier [28]. The transducers previously coated with silicalite were used; 0.15 μl of 5 % recombinant urease solution in 20 mM phosphate buffer, рН 6.5, was deposited onto one pair of electrodes and 0.15 μl of 5 % BSA in the analogous buffer—onto the other (reference) pair. Afterwards, the transducers were exposed to complete air-drying (for 17 min). Neither glutaraldehyde nor any other auxiliary compounds were used. Next, the transducers were submerged into the working buffer for 20 to 30 min to wash off the unbound enzyme. After experiments, the transducer surface was cleaned from silicalite and adsorbed urease with ethanol-wetted cotton. Procedure of Measurement The measurements were carried out in 5 mM phosphate buffer, pH 6.5, at room temperature in an open cell with constant stirring. The necessary substrate concentration in the working cell was obtained by addition of aliquots of the substrate stock solutions. All experiments were conducted in four series. The non-specific changes in output signal associated with the fluctuations in temperature, environmental pH, and electrical noise were avoided due to the differential mode of measurements. Characterization of Silicalite The resulting samples were characterized by powder X-ray diffraction (XRD) using Ni-filtered Cu-Kα radiation in a Philips PW 1729. Scanning electron microscopy (SEM) analysis were performed in a 400 Quant FEI. The surface area of the samples was obtained by multipoint BET, whereas the pore size and pore volumes were obtained by Saito-Foley (SF) and t-plot methods. The method of sample preparation included their outgassing under vacuum at 300 K for 4 h before analysis. The morphologies of the produced silicalite can be seen in Fig. 2a. Scanning electron microscopy image of KK46 silicalite (a) and XRD spectrum of KK46 silicalite (b) According to the X-ray diffraction data, presented in Fig. 2b, all samples exhibited the characteristic diffraction lines of their structures. In Table 1, particle sizes, pore sizes, surface area, and pore volume are given. Table 1 Characteristics of KK46 silicalite Analytical Characteristics of Biosensor The biosensor operation is underlain by the enzymatic reaction, which takes place in the membrane containing recombinant urease deposited on the surface of conductometric transducer: $$ \begin{array}{c}\mathrm{Recombinant}\ \mathrm{urease}\\ {}\mathrm{Urea} + 2{\mathrm{H}}_2\mathrm{O} + {\mathrm{H}}^{+}\to\ {{2\mathrm{N}\mathrm{H}}_4}^{+}{{ + \mathrm{H}\mathrm{C}\mathrm{O}}_3}^{\hbox{-}}\end{array} $$ In the course of enzymatic reaction, the local concentration of ions in the enzyme membrane increases. This changes the solution conductivity, which is registered by the conductometric transducer [29]. These changes and, consequently, biosensor responses are proportional to the concentration of urea. Effect of Solution Parameters on Value of Biosensor Response As known, the conductometric method is based on the measuring of changes in the sample solution conductivity. This change in conductivity may depend on both the enzymatic reaction itself and the characteristics of solution in which this reaction occurs. So, first, an influence of the solution parameters (ionic strength, buffer capacity, protein concentration in the solution) on the value of sensor response was studied. The buffer capacity of human blood is relatively high due to the presence of proteins and buffer salts. 
To avoid the effects of blood buffer capacity, the concentration of working buffer in the measurement cell should therefore be not less than 5 mM. The dependence of biosensor responses on the urea concentration at various concentrations of buffer solution (2.5, 5, 10, 25 mM) is shown in Fig. 3. As seen, with increasing buffer capacity, the biosensor responses to urea decrease, but are still good enough for concentration 5 mM. Dependence of value of biosensor response to urea concentration on various concentrations of buffer solution (1 2.5 mM, 2 5 mM, 3 10 mM, 4 25 mM). Measurements were carried out in phosphate buffer, pH 7.35 One of the important characteristics of the buffer solution, which may have a negative effect on the function of conductometric biosensor is ionic strength. The main salt component of blood is sodium chloride, with a concentration of 150 mM. To study this negative effect, the signals to the same substrate concentration were measured adding NaCl of different concentrations (from 1 to 350 mM) to the solution (Fig. 4). As seen, an increase of ionic strength caused exponential decrease of the response to substrate. At NaCl concentration of 350 mM, the signal value was 34 % of the initial response to urea (in the cell without NaCl). One of the main reasons of this effect is an increase in the solution background conductivity, which at the same time enhances the noise. This can be clearly seen from an increase in the standard deviation at higher salt concentrations. Dependence of value of biosensor response to urea concentration on various concentrations of NaCl. Measurements were carried out in 10 mM phosphate buffer, pH 7.35 Considering the possible non-specific binding of the enzyme and proteins in blood, which may influence the specific analysis, we used BSA to check this effect. The dependence of biosensor responses on the urea concentration at various protein concentrations solution is shown in Fig. 5. As seen, an increase of protein concentrations have not affected the response to substrate. It also confirms that the developed biosensor can be successfully used for the urea analysis in real biological samples. Dependence of values of biosensor responses to urea concentration on various protein concentrations in solution (1 0 % BSA, 2 0.1 % BSA, 3 0.25 % BSA, 4 0.5 % BSA, 5 1 % BSA). Measurements were carried out in 10 mM phosphate buffer, pH 7.35 Operational Stability and Response Reproducibility of Biosensor Reproducibility and operational stability are the most important characteristics of biosensors. To determine the reproducibility of biosensor, the responses to the same urea concentration (12 mM) were measured over one working day with 30-min intervals, the biosensor being remained between measurements in the working buffer with constant stirring. The relative standard deviation was 5.06 %, which is quite acceptable; therefore, theoretically, the biosensor can be used to determine urea in biological samples (Fig. 6). Signal reproducibility of biosensor based on recombinant urease over one working day. Measurements were carried out in 10 mM phosphate buffer, pH 7.35 A common shortcoming of biosensors with adsorbed enzymes is the fact that enzymes are gradually washed off with the working solution because of a weak link between the enzyme and adsorbent. Therefore, it was important to check the stability of the biosensor operation for several days. When not used, the biosensors were kept dry at room temperature. 
Over 19 days, the responses of biosensors have not undergone any loss of activity, which is a good indicator of operational stability (Fig. 7). The biosensors based on the recombinant urease adsorption on silicalite had better reproducibility as compared to the biosensors with the urease immobilized in glutaraldehyde vapor [30]. Signal reproducibility of biosensor based on recombinant urease over 19 days. Measurements were carried out in 10 mM phosphate buffer, pH 7.35 At the last stage of this work, a sensitivity of the biosensor based on silicalite-adsorbed recombinant urease to different urea concentrations was studied. The response dependence on the urea concentration in the analyzed sample was determined, and a calibration curve was plotted (Fig. 8). The biosensor based on recombinant urease had a wider linear range of urea determination (0.5–15 mM) and shifted toward higher concentrations (15 mM) as compared with the biosensor based on natural non-modified urease (0.025–0.75 mM) [26]. To determine the detection limit, the standard deviation of the baseline noise signal was multiplied by 3. The detection limit was 20 μM. This linear range is sufficient for the urea analysis in real biological samples. Dependence of biosensor responses to urea concentration in solution. Measurements were carried out in 10 mM phosphate buffer, pH 7.35 The biosensor based on recombinant urease adsorbed on silicalite was developed to determine the concentration of urea in biological samples. To adapt the biosensor to the work with real samples, its sensitivity to urea was tested depending on the concentration of the working buffer and the salt and protein concentration in the samples. The biosensor developed using the proposed method of immobilization was characterized by high operational stability over 19 days. A significant extension of the linear range of urea determination was demonstrated. This enables an analysis of the samples with high urea concentrations without significant dilution. The developed biosensor with improved analytical characteristics may be used in biomedical and clinical diagnostics. Dhawan G, Sumana G, Malhotra BD (2009) Recent developments in urea biosensors. Biochem Eng J 44:42–52 Koncki R (2007) Recent developments in potentiometric biosensors for biomedical analysis. Anal Chim Acta 599:7–15 Tserng KY, Kalhan SC (1982) Gas chromatography/mass spectrometric determination of [15 N]urea in plasma and application to urea metabolism study. Anal Chem 54:489–491 Patton CJ, Crouch SR (1977) Spectrophotometric and kinetics investigation of the Berthelot reaction for the determination of ammonia. Anal Chem 49:464–469 Ramsing A (1980) A new approach to enzymatic assay based on flow-injection spectrophotometry with acid-base indicators. Anal Chim Acta 114:165–181 Abdel-Latif MS, Guilbault GG (1990) Fluorometric determination of urea by flow injection analysis. J Biotechnol 14:53–61 Liu D, Meyerhoff ME, Goldberg HD, Brown RB (1993) Potentiometric ion- and bioselective electrodes based on asymmetric polyurethane membranes. Anal Chim Acta 274:37–46 Adeloju SB, Shaw SJ, Wallace GG (1993) Polypyrrole-based potentiometric biosensor for urea. Part 2. Analytical optimisation. Anal Chim Acta 281:621–627 Lakard B, Herlem G, Lakard S et al (2004) Urea potentiometric biosensor based on modified electrodes with urease immobilized on polyethylenimine films. 
Biosens Bioelectron 19:1641–1647 Chen K, Liu D, Nie L, Yao S (1994) Determination of urea in urine using a conductivity cell with surface acoustic wave resonator-based measurement circuit. Talanta 41:2195–2200 Sangodkar H, Sukeerthi S, Srinivasa RS et al (1996) A biosensor array based on polyaniline. Anal Chem 68:779–783 Jdanova AS, Poyard S, Soldatkin AP et al (1996) Conductometric urea sensor. Use of additional membranes for the improvement of its analytical characteristics. Anal Chim Acta 321:35–40 Adeloju SB, Shaw SJ, Wallace GG (1996) Polypyrrole-based amperometric flow injection biosensor for urea. Anal Chim Acta 323:107–113 Adeloju SB, Shaw SJ, Wallace GG (1997) Pulsed-amperometric detection of urea in blood samples on a conducting polypyrrole-urease biosensor. Anal Chim Acta 341:155–160 Rajesh, Bisht V, Takashima W, Kaneto K (2005) An amperometric urea biosensor based on covalent immobilization of urease onto an electrochemically prepared copolymer poly (N-3-aminopropyl pyrrole-co-pyrrole) film. Biomaterials 26:3683–3690 Soldatkin AP, Montoriol J, Sant W et al (2003) A novel urea sensitive biosensor with extended dynamic range based on recombinant urease and ISFETs. Biosens Bioelectron 19:131–135 Yoneyama K, Fujino Y, Osaka T, Satoh I (2001) Amperometric sensing system for the detection of urea by a combination of the pH-stat method and flow injection analysis. Sensor Actuat B-Chem 76:152–157 Kanungo M, Kumar A, Contractor AQ (2002) Studies on electropolymerization of aniline in the presence of sodium dodecyl sulfate and its application in sensing urea. J Electroanal Chem 528:46–56 Jiménez C, Bartrol J, De Rooij NF, Koudelka-Hep M (1997) Use of photopolymerizable membranes based on polyacrylamide hydrogels for enzymatic microsensor construction. Anal Chim Acta 351:169–176 Komaba S, Seyama M, Momma T, Osaka T (1997) Potentiometric biosensor for urea based on electropolymerized electroinactive polypyrrole. Electrochim Acta 42:383–388 Tinkilic N, Cubuk O, Isildak I (2002) Glucose and urea biosensors based on all solid-state PVC–NH 2 membrane electrodes. Anal Chim Acta 452:29–34 Hamlaoui ML, Reybier K, Marrakchi M et al (2002) Development of a urea biosensor based on a polymeric membrane including zeolite. Anal Chim Acta 466:39–45 Tavolaro A, Tavolaro P, Drioli E (2007) Zeolite inorganic supports for BSA immobilization: comparative study of several zeolite crystals and composite membranes. Colloids Surfaces B Biointerfaces 55:67–76 Liu B, Hu R, Deng J (1997) Characterization of immobilization of an enzyme in a modified Y zeolite matrix and its application to an amperometric glucose biosensor. Anal Chem 69:2343–2348 Liu B, Liu Z, Chen D et al (2000) An amperometric biosensor based on the coimmobilization of horseradish peroxidase and methylene blue on a beta-type zeolite modified electrode. Fresenius J Anal Chem 367:539–544 Soldatkin OO, Kucherenko IS, Marchenko SV et al (2014) Application of enzyme/zeolite sensor for urea analysis in serum. Mater Sci Eng C 42:155–160 Soldatkin OO, Kucherenko IS, Pyeshkova VM et al (2012) Novel conductometric biosensor based on three-enzyme system for selective determination of heavy metal ions. Bioelectrochemistry 83:25–30 Kucherenko IS, Soldatkin OO, Kasap BO et al (2012) Elaboration of urease adsorption on silicalite for biosensor creation. Electroanalysis 24:1380–1385 Jaffrezic-Renault N, Dzyadevych SV (2008) Conductometric microbiosensors for environmental monitoring. 
Sensors 8:2569–2588 Boubriak OA, Soldatkin AP, Starodub NF et al (1995) Determination of urea in blood serum by a urease biosensor based on an ion-sensitive field-effect transistor. Sensor Actuat B-Chem 26–27:429–431 The authors gratefully acknowledge the financial support of this study by the STCU Project 6052 "Enzyme multibiosensor system for renal dysfunction diagnosis and hemodialysis control." Furthermore, this study was partly supported by the National Academy of Sciences of Ukraine in the frame of Scientific and Technical Government Program "Sensor systems for medico-ecological and industrial-technological requirement: metrological support and experimental operation." Institute of Molecular Biology and Genetics of NAS of Ukraine, Zabolotnogo Street 150, 03143, Kyiv, Ukraine: T. P. Velychko, О. О. Soldatkin, S. V. Marchenko, A. P. Soldatkin, A. V. El'skaya & S. V. Dzyadevych. Taras Shevchenko National University of Kyiv, Volodymyrska Street 64, 01003, Kyiv, Ukraine. Department of Electrical and Magnetic Measurements, Institute of Electrodynamics of National Academy of Sciences of Ukraine, 56, Peremohy Ave., Kyiv-57, 03680, Ukraine: V. G. Melnyk. Micro and Nanotechnology Department, Middle East Technical University, Ankara, 06531, Turkey: S. K. Kirdeciler & B. Akata. Central Laboratory, Middle East Technical University, Ankara, 06531, Turkey. Correspondence to T. P. Velychko. TPV, OOS, and SVM performed the experiments to study the effect of recombinant urease adsorption on silicalite on the biosensors' operation. TPV and APS wrote and arranged the manuscript. VGM monitored the performance of measuring devices. APS, SVD, and AVE planned and supervised the experiments performed by TPV, OOS, and SVM. SKK and BA were involved in the synthesis of silicalites and took part in the deposition of particles of silicalite onto the conductometric transducer. BA and SVD proposed the idea of using silicalite for urea biosensor creation and controlled the silicalite synthesis by electron microscopy and XRD spectrum analysis. AVE and SVD are the supervisors of the whole work, the results of which are presented in this manuscript. SVD proposed the idea of the development of conductometric biosensors based on enzyme adsorbed on silicalite. All authors read and approved the final manuscript. Velychko, T.P., Soldatkin, О.О., Melnyk, V.G. et al. A Novel Conductometric Urea Biosensor with Improved Analytical Characteristic Based on Recombinant Urease Adsorbed on Nanoparticle of Silicalite. Nanoscale Res Lett 11, 106 (2016). https://doi.org/10.1186/s11671-016-1310-3 Keywords: Silicalite; Recombinant urease; Conductometry
CommonCrawl
Research | Open | Published: 16 August 2017 Iterative robust adaptive beamforming Yang Li1,2, Hong Ma2 & Li Cheng1,2 EURASIP Journal on Advances in Signal Processing, volume 2017, Article number: 58 (2017) The minimum power distortionless response beamformer has a good interference rejection capability, but the desired signal will be suppressed if the signal steering vector or the data covariance matrix is not precise. The worst-case performance optimization-based robust adaptive beamformer (WCB) has been developed to solve this problem. However, the solution of WCB cannot be expressed in a closed form, and its performance is affected by a prior parameter, which is the steering vector error norm bound of the desired signal. In this paper, we derive an approximate diagonal loading expression of WCB. This expression reveals a feedback loop relationship between the steering vector and the weight vector. Then, a novel robust adaptive beamformer is developed based on the iterative implementation of this feedback loop. Theoretical analysis indicates that as the iterative step increases, the performance of the proposed beamformer gets better and the iteration converges. Furthermore, the proposed beamformer is not subject to the steering vector error norm bound constraint. Simulation examples show that the proposed beamformer has better performance than some classical and similar beamformers. The minimum variance distortionless response (MVDR) beamformer is capable of maximizing the output signal to interference-plus-noise ratio (SINR). The MVDR requires using the interference-plus-noise covariance matrix; however, in many applications, it is impossible to obtain it. When the training data contains the desired signal component, the MVDR beamformer becomes the minimum power distortionless response (MPDR) beamformer [1, 2]. The MVDR beamformer maximizes the output SINR by minimizing the total beamformer output power, subject to a distortionless constraint for the desired signal. However, due to the desired signal component, even a small error in the steering vector or covariance matrix can lead to severe performance degradation [3]; this phenomenon is often called desired signal cancellation. In practice, many factors can lead to steering vector estimation errors, such as an inaccurate signal model [4], direction of arrival (DOA) estimation error [5], array perturbations [6], and calibration errors [7]. Finite sample snapshots [8] lead to an inaccurate data covariance matrix. Therefore, a robust technology is required to overcome these problems. We refer to an adaptive beamformer that attempts to preserve good performance in the presence of steering vector or covariance matrix errors as a robust adaptive beamformer (RAB). In the past two decades, many technologies have been developed to improve the robustness of the MPDR beamformer against the steering vector error.
For example, the class of diagonal loading technology [9, 10] augment the data covariance with a constant improves the robustness; the worst-case performance optimization-based beamformer (WCB) [11, 12] restrains the gain in signal uncertainty range that is larger than one; the covariance fitting-based beamformer [13, 14] solves a new steering vector which is fitting for the sample covariance matrix to avoid desired signal cancellation; the magnitude response constraints method [15, 16] improves the robustness by restraining the main beam pattern; the covariance matrix reconstruction approach [17, 18] eliminates the signal component from the data covariance matrix to prevent desired signal cancellation. The classical WCB [11] minimizing the total beamformer output power, subject to the gain in desired signal steering vector's uncertainty set, is larger than one. The WCB has a good robustness performance, but it has two inherent drawbacks. On the one hand, the constrained optimization equation of WCB is a nonconvex NP-hard problem; although there exists many methods [12, 19] to solve it, there is no closed-form solution until now. On the other hand, the performance of WCB is highly affected by the prior value of steering vector error norm bound. Unfortunately, the optimum bound value [20, 21] cannot be obtained in practice, and if the prior value is not big enough, the performance of WCB will decrease significantly. To solve these two problems of WCB, we propose a novel beamformer; its idea and way are as follows. Firstly, we propose an approximate diagonal loading expression of WCB under certain conditions. Then, we build a feedback loop relationship between steering vector and weight vector based on this expression. At last, a novel RAB is developed based on the iterative implementation of this feedback loop. The outline of this paper is as follows. The data model and background on adaptive beamforming are provided in Section 2. The proposed beamformer and its implementation are developed in Section 3. The simulation results are presented in Section 4. Finally, a brief conclusion appears in Section 5. In the paper, E[▪], (▪)H, (▪)−1, ∥▪∥, and ⊥ denote the expectation, Hermitian transpose, inverse, the two-norm, and orthogonal, respectively; superscript $\hat {{}}$ denotes the estimated value. Problem formulation The MPDR beamformer Considering a uniform linear array (ULA) with M omni-directional sensors, one desired signal and L interference signals impinging upon the array from different directions, and the source is in the far-field of the array. The received array signal can be expressed as $$ \mathbf{x}(k)={{\mathbf{a}}_{S}}{{s}_{S}}(k)+\sum\limits_{i=1}^{L}{{{\mathbf{a}}_{i}}{{s}_{i}}(k)+n(k)} $$ where x(k), a, and n(k) are M×1 complex vector, k denotes the snapshot number, and a i , i=S,1,…,L is the actual steering vector of the i-th signal. s i (k) is zero-mean stationary i-th signal; n(k) denotes the noise. Assuming that each signal and noise are statistically independent, the data covariance matrix of the array output is given by $$ \mathbf{R}=E[\mathbf{x}(k){{\mathbf{x}}^{H}}(k)]={{P}_{S}}{{\mathbf{a}}_{S}}\mathbf{a}_{S}^{H}+\sum\limits_{i=1}^{L}{{{P}_{i}}{{\mathbf{a}}_{i}}\mathbf{a}_{i}^{H}}+{{P}_{N}}\mathbf{I} $$ where P S , P i , and P N denote the power of desired signal, i-th interference, and noise, respectively. The beamformer output signal can be written as $$ \mathbf{y}(k)={{\mathbf{w}}^{H}}\mathbf{x}(k) $$ where w is the weighing vector of the beamformer. 
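To make the data model of (1)-(2) concrete, here is a rough numpy sketch (the array size, directions, powers, and snapshot count are arbitrary illustrative choices, not values taken from the paper) that builds ULA steering vectors and a sample covariance matrix:

```python
import numpy as np

def ula_steering(theta_deg, M, d=0.5):
    # Steering vector of an M-sensor uniform linear array with element spacing d (in wavelengths)
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * 2 * np.pi * d * np.arange(M) * np.sin(theta))

M, K = 10, 500                                  # sensors, snapshots (arbitrary)
a_S = ula_steering(5.0, M)                      # desired signal direction (assumed 5 degrees)
a_1 = ula_steering(40.0, M)                     # one interference direction (assumed 40 degrees)
P_S, P_1, P_N = 1.0, 100.0, 1.0                 # signal, interference, and noise powers (assumed)

s = np.sqrt(P_S / 2) * (np.random.randn(K) + 1j * np.random.randn(K))
i1 = np.sqrt(P_1 / 2) * (np.random.randn(K) + 1j * np.random.randn(K))
n = np.sqrt(P_N / 2) * (np.random.randn(M, K) + 1j * np.random.randn(M, K))

X = np.outer(a_S, s) + np.outer(a_1, i1) + n    # x(k) stacked column-wise, cf. Eq. (1)
R_hat = X @ X.conj().T / K                      # sample covariance matrix
```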
The MPDR beamformer is mathematically equivalent to the problem $$ \underset{\mathbf{w}}{\mathop{\min }}\,{{\mathbf{w}}^{H}}\mathbf{Rw} \ s.t.\ {{\mathbf{w}}^{H}}{{\mathbf{a}}_{S}}=1 $$ The solution of MPDR is often called optimum weight $$ {{\mathbf{w}}_{MPDR}}=\frac{{{\mathbf{R}}^{-1}}{{\mathbf{a}}_{S}}}{\mathbf{a}_{S}^{H}{{\mathbf{R}}^{-1}}{{\mathbf{a}}_{S}}} $$ However, we cannot achieve the optimum weight in practice due to two inaccurate parameters. On the one hand, since data covariance matrix R is unknown in practice, it is replaced by K snapshots sample covariance matrix $\hat {\mathbf {R}}=\frac {1}{K}\sum \limits _{k=1}^{K}{\mathbf {x}(k){{\mathbf {x}}^{H}}(k)}$. On the other hand, steering vector a S relates to signal frequency, direction of arrival, sensors locations, coupling effect, as well as other factors, any inaccurate of these factors can lead to steering vector error. If the signal-to-noise ratio (SNR) of desired signal is high, even slight error of R or a S will cause the MPDR beamformer suppresses the desired signal as an interference, which leads to a severe degradation of the performance [3]. This effect is often called desired signal cancellation. This paper only concerns about the error of steering vector, so we use actual covariance matrix R in all of the following formulas. The worst-case performance optimization-based beamformer The WCB [11] minimizing the total beamformer output power, subject to the gain in desired signal steering vector's uncertainty set, is larger than one. In rank-one signal and spherical uncertainty set case [22], the WCB can be expressed as $$ \left\{ \begin{aligned} & \underset{\mathbf{w}}{\mathop{\min }}\,{{\mathbf{w}}^{H}}\mathbf{Rw} \\ & s.t.\,\left| {{\mathbf{w}}^{H}}({{{\hat{\mathbf{a}}}}_{S}}+\Delta \mathbf{a}) \right|\ge 1,\ for\ all\ \left\| \Delta \mathbf{a} \right\|\le \mathsf{\varepsilon} \\ \end{aligned} \right. $$ where ${{\hat {\mathbf {a}}}_{S}}$ is the assumed steering vector of desired signal (obtained from estimated DOA and nominal array manifold). The prior known positive constant ε [20] can be explained as a norm bound of the unknown error between a S and ${{\hat {\mathbf {a}}}_{S}}$. Problem (6) is a nonconvex NP-hard problem. After some release and approximation [11], it can be converted to the following convex second-order cone programming problem $$ \left\{ \begin{aligned} & \underset{\mathbf{w}}{\mathop{\min }}\,{{\mathbf{w}}^{H}}\mathbf{Rw} \\ & s.t.\,{{\mathbf{w}}^{H}}{{{\hat{\mathbf{a}}}}_{S}}-\varepsilon \left\| \mathbf{w} \right\|=1 \\ \end{aligned} \right. $$ There are many methods to solve the problem above, such as, the convex optimization tools solve method [11], the eigen-decomposition root-searching method [19], the diagonal loading method [23], the recursive implementation [12], etc. Approximate diagonal loading solution of the WCB Using the Lagrange multiplier method, problem (7) can be written as $$ F(\mathbf{w},\lambda)={{\mathbf{w}}^{H}}\mathbf{Rw}-\lambda ({{\mathbf{w}}^{H}}{{\hat{\mathbf{a}}}_{S}}-\varepsilon \left\| \mathbf{w} \right\|-1) $$ where λ is the Lagrange multiplier. 
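As a point of reference for the discussion above, a minimal numpy sketch of the MPDR weight (5) and of a simple fixed diagonal-loading weight (continuing the hypothetical setup sketched earlier; the loading level below is an arbitrary placeholder, not the level derived in the next section):

```python
import numpy as np

def mpdr_weights(R, a):
    # Eq. (5): w = R^{-1} a / (a^H R^{-1} a)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def diagonally_loaded_weights(R, a, rho):
    # Fixed diagonal loading: w = (R + rho I)^{-1} a (unnormalized; scaling does not change the SINR)
    return np.linalg.solve(R + rho * np.eye(R.shape[0]), a)

# Example usage with R_hat and a_S from the earlier sketch:
# w_mpdr = mpdr_weights(R_hat, a_S)
# w_dl = diagonally_loaded_weights(R_hat, a_S, rho=10.0)
```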
Differentiating (8) with w and equating the result to zero, we obtain the following equation: $$ \mathbf{Rw}+\lambda \varepsilon \frac{\mathbf{w}}{\left\| \mathbf{w} \right\|}=\lambda {{\hat{\mathbf{a}}}_{S}} $$ Using the fact that multiplying the weight vector by any arbitrary constant does not change the output SINR, we can transform (9) to $$ \mathbf{Rw}+\varepsilon \frac{\mathbf{w}}{\left\| \mathbf{w} \right\|}={{\hat{\mathbf{a}}}_{S}} $$ So that (10) does not contain the Lagrange multiplier anymore. Then, (10) can be written as $$ \mathbf{w}={{\left(\mathbf{R}+\frac{\varepsilon }{\left\| \mathbf{w} \right\|}\mathbf{I}\right)}^{-1}}{{\hat{\mathbf{a}}}_{S}} $$ It can be seen from (11) that the WCB belongs to the class of diagonal loading. Taking the norm squared of the both sides of (11), and defining the diagonal loading level ρ=ε/∥w∥, we obtain $$ {{\left(\frac{\varepsilon }{\rho} \right)}^{2}}={{\left\| {{(\mathbf{R}+\rho \mathbf{I})}^{-1}}{{{\hat{\mathbf{a}}}}_{S}} \right\|}^{2}} $$ In the following, we will solve (12). The solve idea takes reference to [9, 23], and [24]. Using Woodbury formula of matrix inverse, we have $$ \begin{aligned} & {{\left(\mathbf{R}+\rho \mathbf{I}\right)}^{-1}}\\ &={{\left[{{P}_{S}}{{\mathbf{a}}_{S}}\mathbf{a}_{S}^{H}+\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)\right]}^{-1}} \\ &={{\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)}^{-1}}-\frac{{{P}_{S}}{{\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)}^{-1}}{{\mathbf{a}}_{S}}\mathbf{a}_{S}^{H}{{\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)}^{-1}}}{1+{{P}_{S}}\mathbf{a}_{S}^{H}{{\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)}^{-1}}{{\mathbf{a}}_{S}}} \\ \end{aligned} $$ where ${{\mathbf {R}}_{IN}}\text {=}{\sum \nolimits }_{i=1}^{L}{{{P}_{i}}{{\mathbf {a}}_{i}}\mathbf {a}_{i}^{H}}+{{P}_{N}}\mathbf {I}$ is interference-plus-noise covariance matrix. R IN can be expressed in eigen decomposition form as $$ {{} \begin{aligned} {{\mathbf{R}}_{IN}}\,=\,{{\mathbf{U}}_{I}}{{\mathbf{\Gamma}}_{I}}\mathbf{U}_{I}^{H}\,+\,{{\mathbf{U}}_{N}}{{\mathbf{\Gamma}}_{N}} \mathbf{U}_{N}^{H}\,=\,\sum\limits_{i=1}^{L}{{{\mathsf{\gamma}}_{i}}{{\mathbf{u}}_{i}}\mathbf{u}_{i}^{H}} +{{P}_{N}}\sum\limits_{i=L+1}^{M}{{{\mathbf{u}}_{i}}\mathbf{u}_{i}^{H}} \end{aligned}} $$ where γ i and u i are the eigenvalues and corresponding eigenvectors of R IN , eigenvalues are sorted in descending order, γ 1≥…≥γ L ≫γ L+1=…=γ M =P N ,U I =[u 1,…,u L ] spans the interference subspace, U N =[u L+1,…,u M ] spans the noise subspace, and ${{\mathbf {U}}_{I}}\mathbf {U}_{I}^{H}\text {+}{{\mathbf {U}}_{N}}\mathbf {U}_{N}^{H}\text {=}\mathbf {I}$, span{a i }=span{u i },i=1,2,…L [7]. When DOA separation between signal and interference is larger than a beam width, $\left | \mathbf {a}_{S}^{H}{{\mathbf {a}}_{i}} \right |/M\ll 1$, i=1,…,L [25] (Fig. 1 gives an example). Assuming this condition always holds, we can make the approximation ${{\left | \mathbf {a}_{S}^{H}{{\mathbf {u}}_{i}} \right |}^{2}}\ll M$ and $\mathbf {a}_{S}^{H}{{\mathbf {U}}_{I}}\mathbf {U}_{I}^{H}{{\mathbf {a}}_{S}}\ll M$, which can be further expanded to $\mathbf {a}_{S}^{H}{{\mathbf {U}}_{N}}\mathbf {U}_{N}^{H}{{\mathbf {a}}_{S}}=\mathbf {a}_{S}^{H}(\mathbf {I}-{{\mathbf {U}}_{I}}\mathbf {U}_{I}^{H}){{\mathbf {a}}_{S}}=M-\mathbf {a}_{S}^{H}{{\mathbf {U}}_{I}}\mathbf {U}_{I}^{H}{{\mathbf {a}}_{S}}\approx M$, and ${{\left \| \mathbf {U}_{I}^{H}{{\mathbf {a}}_{S}} \right \|}^{2}}\ll {{\left \| \mathbf {U}_{N}^{H}{{\mathbf {a}}_{S}} \right \|}^{2}}$. 
The value of $\left | {{\hat {\mathbf {a}}}^{H}}({{\theta }_{i}})\hat {\mathbf {a}}({{\hat {\theta }}_{S}}) \right |/M$ It is well known that the desired signal's steering vector a S is orthogonal to the noise subspace of data covariance matrix R. The result $\mathbf {a}_{S}^{H}{{\mathbf {U}}_{N}}\mathbf {U}_{N}^{H}{{\mathbf {a}}_{S}}\approx M$ reveals a new property: a S approximately belongs to the noise subspace of interference-plus-noise covariance matrix R IN . The precondition is that the DOA separation between desired signal and interference is larger than a beam width; this condition holds under normal conditions. The following Lemma 1 is used in this paper: Lemma 1 $$ {\begin{aligned} \mathbf{a}_{S}^{H}\!\sum\limits_{i=1}^{L}{f({{\mathsf{\gamma}}_{i}}){{\mathbf{u}}_{i}}\mathbf{u}_{i}^{H}}{{\mathbf{a}}_{S}} =\sum\limits_{i=1}^{L}{f\left({{\mathsf{\gamma}}_{i}}\right){{\left| \mathbf{a}_{S}^{H}{{\mathbf{u}}_{i}} \right|}^{2}}}\\=f(\tilde{\mathsf{\gamma}})\sum\limits_{i=1}^{L}{{{\left|\mathbf{a}_{S}^{H}{{\mathbf{u}}_{i}}\right|}^{2}}} \end{aligned}} $$ where f(·)is a monotonic function in this paper, and ${{\mathsf {\gamma }}_{1}}>\tilde {\mathsf {\gamma }}>{{\mathsf {\gamma }}_{L}}\gg {{P}_{N}}$ always holds. Lemma 1 is obvious, and it is easy to be proved, so we use this lemma directly. Using Lemma 1, and defining f(γ)=1/(γ+ρ), we have $$ \begin{aligned} \mathbf{a}_{S}^{H}{{({{\mathbf{R}}_{IN}}+\rho \mathbf{I})}^{-1}}{{\mathbf{a}}_{S}}&=\mathbf{a}_{S}^{H}\sum\limits_{i=1}^{M}{\frac{{{\mathbf{u}}_{i}}\mathbf{u}_{i}^{H}}{{{\mathsf{\gamma }}_{i}}\text{+}\rho }}{{\mathbf{a}}_{S}} \\ & =\sum\limits_{i=1}^{L}{\frac{{{\left| \mathbf{a}_{S}^{H}{{\mathbf{u}}_{i}} \right|}^{2}}}{{{\mathsf{\gamma }}_{i}}\text{+}\rho }}+\sum\limits_{i=L+1}^{M}{\frac{{{\left| \mathbf{a}_{S}^{H}{{\mathbf{u}}_{i}} \right|}^{2}}}{{{P}_{N}}\text{+}\rho }} \\ & =\frac{{{\left\| \mathbf{U}_{I}^{H}{{\mathbf{a}}_{S}} \right\|}^{2}}}{\tilde{\mathsf{\gamma}}\text{+}\rho }+\frac{{{\left\| \mathbf{U}_{N}^{H}{{\mathbf{a}}_{S}} \right\|}^{2}}}{{{P}_{N}}\text{+}\rho} \\ \end{aligned} $$ Since $\tilde {\mathsf {\gamma }}>{{\mathsf {\gamma }}_{L}}\gg {{P}_{N}}$, and ${{\left \| \mathbf {U}_{I}^{H}{{\mathbf {a}}_{S}} \right \|}^{2}}\ll {{\left \| \mathbf {U}_{N}^{H}{{\mathbf {a}}_{S}} \right \|}^{2}}$, (16) can be simplified as $$ \mathbf{a}_{S}^{H}{{\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)}^{-1}}{{\mathbf{a}}_{S}}\approx \frac{{{\left\| \mathbf{U}_{N}^{H}{{\mathbf{a}}_{S}} \right\|}^{2}}}{{{P}_{N}}\text{+}\rho }\approx \frac{M}{{{P}_{N}}+\rho}\triangleq \mathsf{\kappa } $$ Usually, ${{\hat {\mathbf {a}}}_{S}}$ is very close to a S in practice, so we can make two approximations: $\mathbf {a}_{S}^{H}{{\mathbf {U}}_{N}}\mathbf {U}_{N}^{H}{{\hat {\mathbf {a}}}_{S}}\approx M$ and $\hat {\mathbf {a}}_{S}^{H}{{\mathbf {U}}_{N}}\mathbf {U}_{N}^{H}{{\hat {\mathbf {a}}}_{S}}\approx M$, which can be further extended to $$ \mathbf{a}_{S}^{H}{{\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)}^{-1}}{{\hat{\mathbf{a}}}_{S}}\approx \hat{\mathbf{a}}_{S}^{H}{{\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)}^{-1}}{{\hat{\mathbf{a}}}_{S}}\approx \frac{M}{{{P}_{N}}+\rho}=\mathsf{\kappa} $$ Similar to (16) and (17), we can obtain the following approximations $$ \begin{aligned} \hat{\mathbf{a}}_{S}^{H}{{\left({{\mathbf{R}}_{IN}}+\rho \mathbf{I}\right)}^{-2}}{{{\hat{\mathbf{a}}}}_{S}}&=\hat{\mathbf{a}}_{S}^{H}\sum\limits_{i=1}^{M}{\frac{{{\mathbf{u}}_{i}} \mathbf{u}_{i}^{H}}{{{\left({{\mathsf{\gamma 
}}_{i}}+\rho\right)}^{2}}}}{{{\hat{\mathbf{a}}}}_{S}} \\ & \approx \frac{1}{{{({{P}_{N}}+\rho)}^{2}}}\hat{\mathbf{a}}_{S}^{H}\sum\limits_{i=L+1}^{M}{{{\mathbf{u}}_{i}}\mathbf{u}_{i}^{H}}{{{\hat{\mathbf{a}}}}_{S}} \\ & =\frac{\hat{\mathbf{a}}_{S}^{H}{{\mathbf{U}}_{N}}\mathbf{U}_{N}^{H}{{{\hat{\mathbf{a}}}}_{S}}}{{{({{P}_{N}}+\rho)}^{2}}}\approx \frac{{{\mathsf{\kappa }}^{2}}}{M} \end{aligned} $$ $$ {\begin{aligned} \mathbf{a}_{S}^{H}{{({{\mathbf{R}}_{IN}}+\rho \mathbf{I})}^{-2}}{{\mathbf{a}}_{S}}&\approx \mathbf{a}_{S}^{H}{{({{\mathbf{R}}_{IN}}+\rho \mathbf{I})}^{-2}}{{\hat{\mathbf{a}}}_{S}}\\ &\approx \hat{\mathbf{a}}_{S}^{H}{{({{\mathbf{R}}_{IN}}+\rho \mathbf{I})}^{-2}}{{\mathbf{a}}_{S}}\approx \frac{{{\mathsf{\kappa}}^{2}}}{M} \end{aligned}} $$ According to (13), (18), (19), and (20), the following approximation holds $$ {{} \begin{aligned} &{{\left\| {{(\mathbf{R}+\rho \mathbf{I})}^{-1}}{{{\hat{\mathbf{a}}}}_{S}} \right\|}^{2}}\\ &\,=\,\!{{\left\| {{({{\mathbf{R}}_{IN}}\,+\,\rho \mathbf{I})}^{-1}}{{{\hat{\mathbf{a}}}}_{S}}\,-\,\frac{{{P}_{S}}{{({{\mathbf{R}}_{IN}}\,+\,\rho \mathbf{I})}^{-1}}{{\mathbf{a}}_{S}}\mathbf{a}_{S}^{H}{{({{\mathbf{R}}_{IN}}\,+\,\rho \mathbf{I})}^{-1}}{{{\hat{\mathbf{a}}}}_{S}}}{1\!+{{P}_{S}}\mathbf{a}_{S}^{H}{{({{\mathbf{R}}_{IN}}+\rho \mathbf{I})}^{-1}}{{\mathbf{a}}_{S}}} \right\|}^{2}} \\ &\approx {{\left\| {{({{\mathbf{R}}_{IN}}+\rho \mathbf{I})}^{-1}}{{{\hat{\mathbf{a}}}}_{S}}-\frac{{{P}_{S}}\mathsf{\kappa }}{1+{{P}_{S}}\mathsf{\kappa }}{{({{\mathbf{R}}_{IN}}+\rho \mathbf{I})}^{-1}}{{\mathbf{a}}_{S}} \right\|}^{2}} \\ & ={{\left({{{\hat{\mathbf{a}}}}_{S}}-\frac{{{P}_{S}}\mathsf{\kappa }}{1+{{P}_{S}}\mathsf{\kappa }}{{\mathbf{a}}_{S}} \right)}^{H}}{{({{\mathbf{R}}_{IN}}+\rho \mathbf{I})}^{-2}}\left({{{\hat{\mathbf{a}}}}_{S}}-\frac{{{P}_{S}}\mathsf{\kappa }}{1+{{P}_{S}}\mathsf{\kappa }}{{\mathbf{a}}_{S}}\right) \\ &\approx \frac{{{\mathsf{\kappa }}^{2}}}{M{{(1+{{P}_{S}}\mathsf{\kappa })}^{2}}} \\ \end{aligned}} $$ The approximate diagonal loading level can be solved by using (12) and (21) as $$ \rho \approx \frac{\varepsilon \left(M{{P}_{S}}+{{P}_{N}}\right)}{\sqrt{M}-\varepsilon } $$ Finally, the weight vector of WCB is $$ {{\mathbf{w}}_{WCB}}={{\left(\mathbf{R}+\rho \mathbf{I}\right)}^{-1}}{{\hat{\mathbf{a}}}_{S}} $$ Equation (22) indicates that the diagonal loading level relates to the desired signal's power P S , noise power P N , and steering vector error norm bound ε. The premise behind (22) is the Eqs. (18), (19), and (20). If ${{\hat {\mathbf {a}}}_{S}}={{\mathbf {a}}_{S}}$, (18), (19), and (20) are strictly true and (22) is reliable. If there exists error between ${{\hat {\mathbf {a}}}_{S}}$ and a S , the following iterative method will reduce this error step by step so as to make (22) reliable. The proposed beamformer The key problem of (22) is how to obtain the accurate value of MP S +P N , or reliable approximate value, and how to set a suitable ε. The idea of the proposed beamformer is to use iterative implementation. Firstly, we estimate an approximate value of MP S +P N , and a prior value of ε, to obtain the weight vector. Then, we estimate a more accurate value of MP S +P N by using this weight vector. Repeating this process, it is maybe possible that the updated MP S +P N approaches to its actual value. 
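Assuming estimates of $P_S$ and $P_N$ are at hand, the closed-form loading level (22) and the weight (23) can be sketched as follows (a rough numpy sketch; the function and variable names are placeholders):

```python
import numpy as np

def approx_wcb_weights(R, a_hat, eps, P_S, P_N):
    # Eq. (22): rho = eps * (M * P_S + P_N) / (sqrt(M) - eps), valid for 0 < eps < sqrt(M)
    M = a_hat.shape[0]
    rho = eps * (M * P_S + P_N) / (np.sqrt(M) - eps)
    # Eq. (23): w = (R + rho I)^{-1} a_hat
    return np.linalg.solve(R + rho * np.eye(M), a_hat)
```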
In this section, we propose a method to estimate the value of MP S +P N , establish a feedback loop relationship between desired signal's steering vector and weight vector, and propose a novel beamformer based on iterative implementation of this feedback loop. Feedback loop relationship between steering vector and weight vector Under the condition that the interferences are absent, the data covariance matrix becomes $\mathbf {R}={{P}_{S}}{{\mathbf {a}}_{S}}\mathbf {a}_{S}^{H}+{{P}_{N}}\mathbf {I}$. Its inverse is calculated by $$ {{\mathbf{R}}^{-1}}=\frac{1}{{{P}_{N}}}\mathbf{I}-\frac{{{P}_{S}}{{\mathbf{a}}_{S}}\mathbf{a}_{S}^{H}}{{{P}_{N}}\left({{P}_{N}}+M{{P}_{S}}\right)} $$ Then, we have $$ \frac{1}{\mathbf{a}_{S}^{H}{{\mathbf{R}}^{-1}}{{\mathbf{a}}_{S}}}={{P}_{S}}+\frac{{{P}_{N}}}{M} $$ Equation (25) reveals the relationship between steering vector a S and MP S +P N . In practice, a S is replaced by ${{\hat {\mathbf {a}}}_{S}}$ for MPDR beamformer in (5). For a weight vector obtained by WCB or other beamformers, we can define an "equivalent steering vector" for MPDR beamformer. For example, by combining (5) and (23), we can establish the following relationship $$ \alpha {{\left(\mathbf{R}+\rho \mathbf{I}\right)}^{-1}}{{\hat{\mathbf{a}}}_{S}}=\alpha {{\mathbf{w}}_{WCB}}={{\mathbf{w}}_{WCB-MPDR}}=\frac{{{\mathbf{R}}^{-1}}{{{\tilde{\mathbf{a}}}}_{S}}} {\tilde{\mathbf{a}}_{S}^{H}{{\mathbf{R}}^{-1}}{{{\tilde{\mathbf{a}}}}_{S}}} $$ where α is a constant, ${{\tilde {\mathbf {a}}}_{S}}$ is an equivalent steering vector for MPDR beamformer, and we guess that ${{\tilde {\mathbf {a}}}_{S}}$ is more accurate than ${{\hat {\mathbf {a}}}_{S}}$ if w WCB (obtained by (23)) is better than w MPDR (obtained by (5) with ${{\hat {\mathbf {a}}}_{S}}$). It is easy to express ${{\tilde {\mathbf {a}}}_{S}}$ by R and w from (26) [26] $$ {{\tilde{\mathbf{a}}}_{S}}=\alpha '\mathbf{R}{{\mathbf{w}}_{WCB}} $$ where $\alpha '=\alpha \tilde {\mathbf {a}}_{S}^{H}{{\mathbf {R}}^{-1}}{{\tilde {\mathbf {a}}}_{S}}$. The equivalent steering vector ${{\tilde {\mathbf {a}}}_{S}}$ should be scaled by the fact that the norm of ${{\tilde {\mathbf {a}}}_{S}}$ equals $\sqrt {M}$, that is $$ {{\tilde{\mathbf{a}}}_{S}}=\sqrt{M}\mathbf{R}{{\mathbf{w}}_{WCB}}/\left\| \mathbf{R}{{\mathbf{w}}_{WCB}} \right\| $$ Henceforth, we obtain the feedback loop relationship between steering vector a S , diagonal loading level ρ, and weight vector w through (22), (23), (25), and (28). Iterative implementation If the ${{\tilde {\mathbf {a}}}_{S}}$ obtained by (28) is closer to actual value than ${{\hat {\mathbf {a}}}_{S}}$, we preliminary think that the following iteration implementation will obtain a better steering vector and simultaneously obtain a better weight vector, step by step. Initialization: ${{\mathbf {a}}^{(0)}}={{\hat {\mathbf {a}}}_{S}}$ for k=1,2,… $\mathsf {\tau }=M\mathsf {\varepsilon }/\left (\sqrt {M}-\mathsf {\varepsilon }\right)$, p (k)=1/(a (k−1)H R −1 a (k−1)) w (k)=(R+τ p (k) I)−1 a (k−1) ${{\mathbf {a}}^{(k)}}=\mathbf {R}{{\mathbf {w}}^{(k)}},{{\mathbf {a}}^{(k)}}=\sqrt {M}{{\mathbf {a}}^{(k)}}/\left \| {{\mathbf {a}}^{(k)}} \right \|$ We call this iterative implementation as iterative robust adaptive beamformer (IRAB). The performance proof of IRAB The following two properties hold for the proposed IRAB: ∥a (k)−a S ∥2<∥a (k−1)−a S ∥2 for each iteration. 
The data covariance matrix R can be written in eigen-decomposition form as $$ \mathbf{R}={{\mathbf{Q}}_{S}}{{\mathbf{\Sigma }}_{S}}\mathbf{Q}_{S}^{H}+{{\mathbf{Q}}_{N}}{{\mathbf{\Sigma}}_{N}}\mathbf{Q}_{N}^{H}\,=\,\sum\limits_{i=1}^{L+1}{{{r}_{i}} {{\mathbf{q}}_{i}}\mathbf{q}_{i}^{H}}+\sum\limits_{i=L+2}^{M}{{{r}_{i}}{{\mathbf{q}}_{i}}\mathbf{q}_{i}^{H}} $$ where r i and q i are the eigenvalues and corresponding eigenvectors of R, eigenvalues are sorted in descending order r 1≥…≥r L+1≫r L+2=…=r M =P N , Q S =[q 1,…,q L+1] spans the signal-plus-interference subspace, Q N =[q L+2,…,q M ] spans the noise subspace. Defining ${{\mathbf {a}}_{\parallel }}={{\mathbf {Q}}_{S}}\mathbf {Q}_{S}^{H}{{\mathbf {a}}^{(0)}}$ and a ⊥=Q N Q N a (0), we have the following formulas $$ \left\{\begin{array}{l} {{\mathbf{a}}^{(0)}}={{\mathbf{a}}_{\parallel}}+{{\mathbf{a}}_{\bot}}\\ {{\mathbf{a}}_{\parallel}}\bot{{\mathbf{a}}_{\bot}}\\ {{\left\| {{\mathbf{a}}^{(0)}} \right\|}^{2}}={{\left\| {{\mathbf{a}}_{\parallel }} \right\|}^{2}}+{{\left\| {{\mathbf{a}}_{\bot}} \right\|}^{2}}\\ {{\mathbf{a}}_{S}}=\sqrt{M}{{\mathbf{a}}_{\parallel}}/\left\|{{\mathbf{a}}_{\parallel}}\right\| \end{array}\right. $$ The weight vector of k-th iterative step can be expressed by a ∥ and a ⊥ as $$ \begin{aligned} {{\mathbf{a}}^{(k)}}& ={{\mu }_{1}}\mathbf{R}{{\mathbf{w}}^{(k)}} \\ & ={{\mu}_{1}}\mathbf{R}{{\left(\mathbf{R}+\tau {{p}^{(k)}}\mathbf{I}\right)}^{-1}}{{\mathbf{a}}^{(k-1)}} \\ & ={{\mu}_{1}}{{\left(\tau {{p}^{(k)}}{{\mathbf{R}}^{-1}}+\mathbf{I}\right)}^{-1}}{{\mathbf{a}}^{(k-1)}} \\ & ={{\mu}_{1}}\prod\limits_{i=2}^{k}{{{\left(\tau {{p}^{(i)}}{{\mathbf{R}}^{-1}}\,+\,\mathbf{I}\right)}^{-1}}\cdot {{\left(\tau {{p}^{(1)}}{{\mathbf{R}}^{-1}}+\mathbf{I}\right)}^{-1}}{{\mathbf{a}}^{(0)}}} \\ \end{aligned} $$ where μ 1 is a constraint constant which subject to ∥a (k)∥2=M. We can obtain the following result from (29) and (30) $$ \begin{aligned} &{{(\tau {{p}^{(1)}}{{R}^{-1}}+I)}^{-1}}{{a}^{(0)}} \\ =&{{(\tau {{p}^{(1)}}{{R}^{-1}}+I)}^{-1}}({{a}_{\parallel }}+{{a}_{\bot }}) \\ =&\sum\limits_{i=1}^{L+1}{\frac{{{\mathbf{q}}_{i}}\mathbf{q}_{i}^{H}}{\tau {{p}^{(1)}}/{{r}_{i}}+1}{{\mathbf{a}}_{\parallel }}}+\sum\limits_{i=L+2}^{M}{\frac{{{\mathbf{q}}_{i}}\mathbf{q}_{i}^{H}}{\tau {{p}^{(1)}}/{{P}_{N}}+1}{{\mathbf{a}}_{\bot }}} \\ =&\left\| \sum\limits_{i=1}^{L+1}{\frac{{{\mathbf{q}}_{i}}\mathbf{q}_{i}^{H}}{\tau{{p}^{(1)}}/{{r}_{i}}+1}{{\mathbf{a}}_{\parallel }}} \right\|\frac{{{\mathbf{a}}_{\parallel }}}{\left\| {{\mathbf{a}}_{\parallel }} \right\|}+\frac{{{\mathbf{a}}_{\bot }}}{\tau {{p}^{(1)}}/{{P}_{N}}+1} \\ \end{aligned} $$ Similar to Lemma 1, by defining f(r)=1/(τ p (1)/r+1)2, we have $$ {\begin{aligned} {{\left\| \sum\limits_{i=1}^{L+1}{\frac{{{\mathbf{q}}_{i}}\mathbf{q}_{i}^{H}}{\tau {{p}^{(1)}}/{{r}_{i}}+1}{{\mathbf{a}}_{\parallel }}} \right\|}^{2}}=\sum\limits_{i=1}^{L+1}{\frac{\mathbf{a}_{\parallel }^{H}{{\mathbf{q}}_{i}}\mathbf{q}_{i}^{H}{{\mathbf{a}}_{\parallel }}}{{{\left(\tau {{p}^{(1)}}/{{r}_{i}}+1\right)}^{2}}}}\\=\frac{{{\left\| {{\mathbf{a}}_{\parallel }} \right\|}^{2}}}{{{\left(\tau {{p}^{(1)}}/r_{a}^{(1)}+1\right)}^{2}}} \end{aligned}} $$ where ${{r}_{1}}>r_{a}^{(1)}>{{r}_{L+1}}\gg {{P}_{N}}$. 
Substituting (33) into (32), we have $$ {{} \begin{aligned} {{\left(\tau {{p}^{(1)}}{{\mathbf{R}}^{-1}}+\mathbf{I}\right)}^{-1}}{{\mathbf{a}}^{(0)}}&=\frac{{{\mathbf{a}}_{\parallel }}}{\tau {{p}^{(1)}}/r_{a}^{(1)}+1}+\frac{{{\mathbf{a}}_{\bot }}}{\tau {{p}^{(1)}}/{{P}_{N}}+1}\\ &={{\mu }_{2}}\left({{\mathbf{a}}_{\parallel }}+{{\hat{\mathsf{\eta }}}^{(1)}}{{\mathbf{a}}_{\bot }}\right) \end{aligned}} $$ where μ 2 is a constant, and $$ {{\hat{\mathsf{\eta }}}^{(1)}}=\frac{\tau {{p}^{(1)}}/r_{a}^{(1)}+1}{\tau {{p}^{(1)}}/{{P}_{N}}+1} $$ From (31), (34), and (35), we can obtain $$ {{\mathbf{a}}^{(k)}}={{\mathsf{\mu }}_{1}}\left({{\mathbf{a}}_{\parallel }}+{{\mathsf{\eta }}^{(k)}}{{\mathbf{a}}_{\bot }}\right) $$ $$ {{\mathsf{\eta }}^{(k)}}=\prod\limits_{i=0}^{k}{{{{\hat{\mathsf{\eta }}}}^{(i)}}},{{\hat{\mathsf{\eta }}}^{(i)}}= \left\{ \begin{aligned} & \frac{\tau {{p}^{(i)}}/r_{a}^{(i)}+1}{\tau {{p}^{(i)}}/{{P}_{N}}+1},i>0 \\ & 1,i=0 \\ \end{aligned} \right. $$ Since $r_{a}^{(i)}>{{P}_{N}}$, we have $0<{{\hat {\mathsf {\eta }}}^{(i)}}<1$, i>0; therefore, 0<η (k)<η (k−1)≤1, k≥1. Then, the norm error of steering vector per iterative step is $$ {{} \begin{aligned} {{\left\| {{\mathbf{a}}^{(k)}}-{{\mathbf{a}}_{S}} \right\|}^{2}}&=2M-2\operatorname{Re}\left\{{{\mathbf{a}}^{(k)H}}{{\mathbf{a}}_{S}}\right\} \\ & =2M-2\operatorname{Re}\left\{\frac{\sqrt{M}{{\left({{\mathbf{a}}_{\parallel }}+{{\mathsf{\eta }}^{(k)}}{{\mathbf{a}}_{\bot }}\right)}^{H}}}{\left\| {{\mathbf{a}}_{\parallel }}+{{\mathsf{\eta }}^{(k)}}{{\mathbf{a}}_{\bot }} \right\|}\frac{\sqrt{M}{{\mathbf{a}}_{\parallel }}}{\left\| {{\mathbf{a}}_{\parallel }} \right\|}\right\} \\ & =2M-\frac{2M\left\| {{\mathbf{a}}_{\parallel }} \right\|}{\sqrt{{{\left\| {{\mathbf{a}}_{\parallel }} \right\|}^{2}}+{{\mathsf{\eta }}^{(k)2}}{{\left\| {{\mathbf{a}}_{\bot }} \right\|}^{2}}}} \\ \end{aligned}} $$ Therefore, η (k)<η (k−1)⇒∥a (k)−a S ∥2<∥a (k−1)−a S ∥2. Equation (37) shows that η (k) is a product of k number of variables that are less than one, so η (k) will approach to zero. If η (k)=0, ∥a (k)−a S ∥2=0, the actual steering vector is obtained. □ SINR (k)>SINR (k−1) for each iterative step, and SINR has an upper bound. $$ \begin{aligned} &{{\mathbf{w}}^{(k)H}}{{\mathbf{a}}_{S}} \\ =&{{({{\mathbf{R}}^{-1}}{{\mathbf{a}}^{(k)}})}^{H}}{{\mathbf{a}}_{S}} \\ =&{{({{\mathbf{a}}_{\parallel }}+{{\mathsf{\eta }}^{(k)}}{{\mathbf{a}}_{\bot }})}^{H}}({{\mathbf{Q}}_{S}}\mathbf{\Sigma }_{S}^{-1}\mathbf{Q}_{S}^{H}+{{\mathbf{Q}}_{N}}\mathbf{\Sigma }_{N}^{-1}\mathbf{Q}_{N}^{H})\frac{\sqrt{M}{{\mathbf{a}}_{\parallel }}}{\left\| {{\mathbf{a}}_{\parallel }} \right\|} \\ =&\mathbf{a}_{\parallel }^{H}{{\mathbf{Q}}_{S}}\mathbf{\Sigma }_{S}^{-1}\mathbf{Q}_{S}^{H}{{\mathbf{a}}_{\parallel }}\frac{\sqrt{M}}{\left\| {{\mathbf{a}}_{\parallel }} \right\|} \\ =&\sum\limits_{i=1}^{L+1}{\frac{1}{{{r}_{i}}}{{\left| \mathbf{a}_{\parallel }^{H}{{\mathbf{q}}_{i}} \right|}^{2}}\frac{\sqrt{M}}{\left\| {{\mathbf{a}}_{\parallel }} \right\|}} \\ \end{aligned} $$ Similar to Lemma 1, we have $$ \sum\limits_{i=1}^{L+1}{\frac{1}{{{r}_{i}}}{{\left| \mathbf{a}_{\parallel }^{H}{{\mathbf{q}}_{i}} \right|}^{2}}}=\frac{1}{{{r}_{b}}}\sum\limits_{i=1}^{L+1}{{{\left| \mathbf{a}_{\parallel }^{H}{{\mathbf{q}}_{i}} \right|}^{2}}}=\frac{1}{{{r}_{b}}}{{\left\| {{\mathbf{a}}_{\parallel }} \right\|}^{2}} $$ where r 1>r b >r L+1≫P N . Therefore, ${{\mathbf {w}}^{(k)H}}{{\mathbf {a}}_{S}}=\sqrt {M}\left \| {{\mathbf {a}}_{\parallel }} \right \|/{{r}_{b}}$. 
Then, we have $$ \begin{aligned} &{{\mathbf{w}}^{(k)H}}\mathbf{R}{{\mathbf{w}}^{(k)}}\\ &={{\mathbf{a}}^{(k)H}}{{\mathbf{R}}^{-1}}\mathbf{R}{{\mathbf{R}}^{-1}}{{\mathbf{a}}^{(k)}} \\ &={{\left({{\mathbf{a}}_{\parallel }}+{{\mathsf{\eta }}^{(k)}}{{\mathbf{a}}_{\bot }}\right)}^{H}}\left({{\mathbf{Q}}_{S}}\mathbf{\Sigma}_{S}^{-1}\mathbf{Q}_{S}^{H}+{{\mathbf{Q}}_{N}}\mathbf{\Sigma}_{N}^{-1}\mathbf{Q}_{N}^{H}\right) \\&\qquad\left({{\mathbf{a}}_{\parallel }}+{{\mathsf{\eta }}^{(k)}}{{\mathbf{a}}_{\bot }}\right) \\ =&\frac{1}{{{r}_{b}}}{{\left\| {{\mathbf{a}}_{\parallel }} \right\|}^{2}}+\frac{1}{{{P}_{N}}}{{\mathsf{\eta }}^{(k)2}}{{\left\| {{\mathbf{a}}_{\bot }} \right\|}^{2}} \\ \end{aligned} $$ $$ {{} \begin{aligned} \frac{SIN{{R}^{(k)}}}{1+SIN{{R}^{(k)}}}=\frac{{{P}_{S}}{{\left| {{\mathbf{w}}^{(k)H}}{{\mathbf{a}}_{S}} \right|}^{2}}}{{{\mathbf{w}}^{(k)H}}\mathbf{R}{{\mathbf{w}}^{(k)}}}=\frac{\frac{1}{r_{b}^{2}}M{{P}_{S}}{{\left\| {{\mathbf{a}}_{\parallel }} \right\|}^{2}}}{\frac{1}{{{r}_{b}}}{{\left\| {{\mathbf{a}}_{\parallel }} \right\|}^{2}}+\frac{1}{{{P}_{N}}}{{\mathsf{\eta }}^{(k)2}}{{\left\| {{\mathbf{a}}_{\bot }} \right\|}^{2}}} \end{aligned}} $$ Therefore, ${{\mathsf {\eta }}^{(k)}}<{{\mathsf {\eta }}^{(k-1)}}\Rightarrow \frac {SIN{{R}^{(k)}}}{1+SIN{{R}^{(k)}}}>\frac {SIN{{R}^{(k-1)}}}{1+SIN{{R}^{(k-1)}}}\Rightarrow SIN{{R}^{(k)}}>SIN{{R}^{(k-1)}}$. The upper bound of SINR (k) is achieved if η (k)=0. □ Some remarks on the IRAB The setting of prior parameter Equation (22) indicates that not only MP S +P N but also the prior parameter ε can affect the diagonal loading level. Many existed RABs use the constraint condition ∥Δ a∥≤ε, so they face the same problem: how to set a suitable ε? Jian Li suggest that ε should be chosen as small as possible but $\mathsf {\varepsilon }\ge {{\varepsilon }_{0}}\text {=}\underset {\phi }{\mathop {\min }}\,\left \| {{{\hat {\mathbf {a}}}}_{S}}{{e}^{j\phi }}-{{\mathbf {a}}_{S}} \right \|$ [20]. If ε<ε 0, the desired signal will be suppressed as interference. If ε is chosen much larger than ε 0, the ability of beamformer to suppress interferences that are close to the desired signal will degrade. The parameter τ is defined in the implementation of IRAB in Section 3.2. It can be seen from (37) that, if τ>0, the two properties of IRAB will always hold, which indicates that the iteration will always converge if $\mathsf {\varepsilon }<\sqrt {M}$. Therefore, the ε does no longer subject to the constraint ε≥ε 0; any $0<\mathsf {\varepsilon }<\sqrt {M}$ is suitable for IRAB. Notice that the value of $\varepsilon _{0}^{(k)}\text {=}\underset {\phi }{\mathop {\min }}\,\left \| {{\mathbf {a}}^{(k)}}{{e}^{j\phi }}-{{\mathbf {a}}_{S}} \right \|$ will decrease and approach to zero as the iterative step increases and the ε is better to be decreased as the iteration time increases. An experiential way is to reduce ε by half per iterative step. 
Thus, the proposed IRAB can be modified as follows

Initialization: ${{\varepsilon}^{(0)}}=\varepsilon,\ {{\mathbf{a}}^{(0)}}={{\hat{\mathbf{a}}}_{S}},\ \beta =0.5$

for $k=1,2,\ldots$

${{\varepsilon}^{(k)}}=\beta {{\varepsilon}^{(k-1)}},\quad {{\tau}^{(k)}}=\frac{M{{\varepsilon}^{(k)}}}{\sqrt{M}-{{\varepsilon}^{(k)}}},\quad {{p}^{(k)}}=\frac{1}{{{\mathbf{a}}^{(k-1)H}}{{\mathbf{R}}^{-1}}{{\mathbf{a}}^{(k-1)}}}$

${{\mathbf{w}}^{(k)}}={{\left(\mathbf{R}+{{\tau}^{(k)}}{{p}^{(k)}}\mathbf{I}\right)}^{-1}}{{\mathbf{a}}^{(k-1)}}$

with the steering-vector update ${{\mathbf{a}}^{(k)}}=\sqrt{M}\mathbf{R}{{\mathbf{w}}^{(k)}}/\left\|\mathbf{R}{{\mathbf{w}}^{(k)}}\right\|$ unchanged from Section 3.2.

The stopping criterion

The iteration should be stopped under certain criteria, and the performance should not deteriorate in special situations. On the one hand, three parameters are updated as the iterative step increases: ${{p}^{(k)}}$, ${{\mathbf{w}}^{(k)}}$, and ${{\mathbf{a}}^{(k)}}$. ${{p}^{(k)}}$ depends on ${{\mathbf{a}}^{(k-1)}}$, ${{\mathbf{w}}^{(k)}}$ depends on ${{p}^{(k)}}$ and ${{\mathbf{a}}^{(k-1)}}$, and ${{\mathbf{a}}^{(k)}}$ depends on ${{\mathbf{w}}^{(k)}}$. Therefore, we can build the stopping criterion on the parameter $\mathbf{a}$ alone. If ${{\mathbf{a}}^{(k)}}$ changes by less than a threshold, such as a very small $\delta$, with respect to ${{\mathbf{a}}^{(k-1)}}$, we consider that the iteration has converged. Wei Jin uses $\left\|{{\mathbf{a}}^{(k)}}-{{\mathbf{a}}^{(k-1)}}\right\|\le\delta$ as the stopping criterion [26]. However, because a phase rotation of $\mathbf{a}$ does not affect the performance of the beamformer, using $\underset{\phi}{\mathop{\min}}\,\left\| {{\mathbf{a}}^{(k)}}-{{\mathbf{a}}^{(k-1)}}{{e}^{j\phi}} \right\|\le \delta$ is better, but it increases the amount of computation. Since the norms of ${{\mathbf{a}}^{(k)}}$ and ${{\mathbf{a}}^{(k-1)}}$ are both equal to $\sqrt{M}$, stopping criterion 1 of IRAB is as follows
$$ \left| {{\mathbf{a}}^{(k)H}}{{\mathbf{a}}^{(k-1)}} \right|/M\ge 0.999999 $$
It is easy to see that the smaller the difference between ${{\mathbf{a}}^{(k)}}$ and ${{\mathbf{a}}^{(k-1)}}$ is, the larger the value of $\left|{{\mathbf{a}}^{(k)H}}{{\mathbf{a}}^{(k-1)}}\right|$ is; the maximum value of $\left|{{\mathbf{a}}^{(k)H}}{{\mathbf{a}}^{(k-1)}}\right|$ equals $M$ if ${{\mathbf{a}}^{(k)}}={{\mathbf{a}}^{(k-1)}}{{e}^{j\phi}}$; and a phase rotation of $\mathbf{a}$ does not affect the value of $\left|{{\mathbf{a}}^{(k)H}}{{\mathbf{a}}^{(k-1)}}\right|$.

On the other hand, two scenarios should be considered. First, when the SNR of the desired signal is very low, the updated steering vector may fail to converge to its actual value and may instead converge to interference or noise peaks. To prevent the desired signal's steering vector from deviating too far from its actual value, we add the following stopping criterion 2 [27]
$$ \left| {{\mathbf{a}}^{(k)H}}\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right|<\min \left\{\left| {{\hat{\mathbf{a}}}^{H}}\left({{\hat{\theta}}_{S}}+\frac{{{\theta}_{W}}}{2}\right)\hat{\mathbf{a}}\left({{\hat{\theta}}_{S}}\right) \right|,\left| {{\hat{\mathbf{a}}}^{H}}\left({{\hat{\theta}}_{S}}-\frac{{{\theta}_{W}}}{2}\right)\hat{\mathbf{a}}\left({{\hat{\theta}}_{S}}\right) \right|\right\} $$
where ${{\hat{\theta}}_{S}}$ is the prior DOA of the desired signal and ${{\theta}_{W}}$ is the uncertainty range. Stopping criterion 2 is based on the fact that, as $\left|{{\theta}_{i}}-{{\theta}_{S}}\right|$ grows, $\left| {{\hat{\mathbf{a}}}^{H}}({{\theta}_{i}})\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right|$ becomes smaller (Fig. 1 shows an example; ignore the ripple). Therefore, the iteration stops when the DOA corresponding to ${{\mathbf{a}}^{(k)}}$ falls outside the uncertainty range. However, these two stopping criteria have a defect. When the angular separation between ${{\theta}_{i}}$ and ${{\hat{\theta}}_{S}}$ is larger than a beam width, $\left| {{\hat{\mathbf{a}}}^{H}}({{\theta}_{i}})\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right|\ll M$, $i=1,\ldots,L$. At some specific angles, $\left| {{\hat{\mathbf{a}}}^{H}}({{\theta}_{i}})\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right|$ approaches zero.
Using a ULA with $M=16$ as an example, we plot the value of $\left| {{\hat{\mathbf{a}}}^{H}}({{\theta}_{i}})\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right|/M$ for ${{\hat{\theta}}_{S}}={{82}^{\circ}}$ and ${{\theta}_{i}}=1:{{180}^{\circ}}$. As Fig. 1 shows, $\left| {{\hat{\mathbf{a}}}^{H}}({{110}^{\circ}})\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right|/M\approx 0$. If an interference's DOA happens to equal 110°, then $\left| {{[{{\mathbf{a}}^{(k)}}+\hat{\mathbf{a}}({{110}^{\circ}})]}^{H}}\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right|\approx \left| {{\mathbf{a}}^{(k)H}}\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right|$, so stopping criterion 2 may not work; this case does not trigger stopping criterion 1 either, which means the updated steering vector may converge to the sum of the desired signal and the interference. To deal with this special case, we add the following stopping criterion 3
$$ \left\| {{\mathbf{a}}^{(k)}}-\hat{\mathbf{a}}\left({{\hat{\theta}}_{S}}\right) \right\|>\max \left\{\left\| {{\mathbf{a}}^{(k)}}-\hat{\mathbf{a}}\left({{\hat{\theta}}_{S}}+\frac{{{\theta}_{W}}}{2}\right) \right\|,\left\| {{\mathbf{a}}^{(k)}}-\hat{\mathbf{a}}\left({{\hat{\theta}}_{S}}-\frac{{{\theta}_{W}}}{2}\right) \right\|\right\} $$
Stopping criterion 3 is based on the fact that if $\left| {{\theta}_{i}}-{{\hat{\theta}}_{S}} \right|$ is larger than a threshold, $\left\| \hat{\mathbf{a}}({{\theta}_{i}})-\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right\|$ is large. Figure 2 shows the value of $\left\| \hat{\mathbf{a}}({{\theta}_{i}})-\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right\|/\sqrt{M}$ for ${{\hat{\theta}}_{S}}={{82}^{\circ}}$ and ${{\theta}_{i}}=1:{{180}^{\circ}}$ with a ULA of $M=16$. Once the interference component in ${{\mathbf{a}}^{(k)}}$ exceeds the threshold, the iteration stops.

Figure 2: The value of $\left\| \hat{\mathbf{a}}({{\theta}_{i}})-\hat{\mathbf{a}}({{\hat{\theta}}_{S}}) \right\|/\sqrt{M}$

The IRAB does not belong to the class of diagonal loading

When the iteration stops at the k-th step, the weight vector of IRAB is
$$ \begin{aligned} {{\mathbf{w}}^{(k)}}& ={{\left(\mathbf{R}+{{\tau}^{(k)}}{{p}^{(k)}}\mathbf{I}\right)}^{-1}}{{\mathbf{a}}^{(k-1)}} \\ & ={{\left({{\tau}^{(k)}}{{p}^{(k)}}{{\mathbf{R}}^{-1}}+\mathbf{I}\right)}^{-1}}{{\mathbf{w}}^{(k-1)}} \\ & =\prod\limits_{i=1}^{k}{{{\left({{\tau}^{(i)}}{{p}^{(i)}}{{\mathbf{R}}^{-1}}+\mathbf{I}\right)}^{-1}}}{{\hat{\mathbf{a}}}_{S}} \\ & =\mathbf{Q}\,\mathrm{diag}\left\{\prod\limits_{i=1}^{k}{\frac{1}{{{\tau}^{(i)}}{{p}^{(i)}}/{{r}_{1}}+1}},\ldots,\prod\limits_{i=1}^{k}{\frac{1}{{{\tau}^{(i)}}{{p}^{(i)}}/{{r}_{M}}+1}}\right\}{{\mathbf{Q}}^{H}}{{\hat{\mathbf{a}}}_{S}} \end{aligned} $$
where the columns of $\mathbf{Q}$ contain the eigenvectors of $\mathbf{R}$. Therefore, the IRAB does not belong to the class of diagonal loading.

The computational complexity

The computational complexity of IRAB is determined by the inversion of an $M\times M$ matrix per iterative step, which is $O({{M}^{3}})$.

Relationship between the IRAB and some similar beamformers

Notice that the proposed IRAB relates to the following three beamformers: the DLWCB in [23], the IWCB in [26], and the IRCB2 in [24].
Their similarities and differences are as follows: (1) the equivalent diagonal loading levels of IRAB, DLWCB, and IRCB2 are the same, which is derived from the method in [9]; (2) DLWCB cannot be implemented in practice while IRAB is easy to be implemented; (3) although the equivalent steering vectors of IRAB and IWCB have the same form, which are deriving from Appendix B in [13], their solving methods per iterative step are quite different; (4) although the equivalent diagonal loading levels of IRAB and IRCB2 are the same, and their proof methods are similar, they are based on two different methods ([11] and [13]); and (5) the stopping criterion of IRAB is different to others. In the following simulation examples, a ULA with M=16 antennas and half-wavelength antenna spacing is considered. Assume each antenna is omni-directional, the array has been calibrated and omit the coupling effect. The desired signal and interferences are stationary Gaussian random process, and the additive noise is a spatially white Gaussian process. There are two interferences with DOAs and interference-to-noise ratios (INR) of [ 55°, 20 dB] and [ 115°, 30 dB], respectively. One desired signal is impinging on the array from 80°, but its prior DOA is 82°, except example 5. The DOA uncertainty range of desired signal is θ W =8∘. The actual norm bound of the error between a S and ${{\hat {\mathbf {a}}}_{S}}$ is calculated by ${{\varepsilon }_{0}}=\underset {\phi }{\mathop {\min }}\,\left \| {{\mathbf {a}}_{S}}-{{{\hat {\mathbf {a}}}}_{S}}{{e}^{j\phi }} \right \|$. 1000 runs are performed except for example 2. The number of snapshots is 200 except for example 6. The proposed IRAB is contrasted with some classical and similar RABs; they are as follows: OPT: The MPDR beamformer of (5) with actual R and a S . WCB: The WCB of [11]. The method proposed in [19] is used to solve WCB. The norm bound of steering vector error is set as ε WCB =1.1×ε 0, which is suggested by Jian Li [20], excepted for examples 1 and 5. DLWCB: The diagonal loading approach of WCB, which is proposed in [23]. Its diagonal loading level has the same form with (22). Notice that the DLWCB cannot be implemented in practice, actual P S and P N are used to calculate the diagonal loading level in the simulations. The norm bound of steering vector error is set as ε DLWCB =1.1×ε 0, except for example 5. IWCB: The iterative implementation of worst-case performance optimization-based beamformer [26]. The norm bound of steering vector error is set as ε IWCB =0.1. SINR versus steering vector error bound The first example simulates the SINR performance affected by norm bound of steering vector error. We set a group of different ε for WCB and IRAB. The actual norm bound of steering vector error corresponding to 2° pointing error is about ε 0=1.96. The input SNR of desired signal changes from −20 to 40 dB. Figures 3 and 4 show the results of WCB and IRAB, respectively. Results indicate that, when ε is set smaller than ε 0, the SINR performance of WCB decreases rapidly, while the performance of IRAB always keep stable. The theoretical analysis in Section 3.4 that the ε does no longer subject to the constraint ε≥ε 0 is verified. Figure 4 also shows that, the ε IRAB should be set appropriately, not too small or too large. An experience value is set ${{\varepsilon }_{IRAB}}=\sqrt {M}/2$; we use this setting in all the following examples. 
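As a companion to the simulation setup above, the scenario can be constructed along the following lines. This sketch is not taken from the paper: the steering-vector convention (half-wavelength ULA with the DOA measured from the array axis, so angles run over 1° to 180°), the use of independent complex Gaussian source waveforms, and all function and variable names are my own assumptions.

```python
import numpy as np

def ula_steering(theta_deg, M):
    # Half-wavelength ULA; DOA measured from the array axis (1..180 degrees).
    return np.exp(1j * np.pi * np.arange(M) * np.cos(np.deg2rad(theta_deg)))

def sample_covariance(M=16, snr_db=25, snapshots=200, seed=0):
    """Sample covariance for the stated scenario: desired signal at 80 degrees,
    interferers at 55 degrees (INR 20 dB) and 115 degrees (INR 30 dB),
    unit-power spatially white noise."""
    rng = np.random.default_rng(seed)
    doas = [80.0, 55.0, 115.0]
    powers_db = np.array([snr_db, 20.0, 30.0])
    A = np.stack([ula_steering(d, M) for d in doas], axis=1)          # M x 3
    amps = np.sqrt(10.0 ** (powers_db / 10.0))
    S = amps[:, None] * (rng.standard_normal((3, snapshots))
                         + 1j * rng.standard_normal((3, snapshots))) / np.sqrt(2)
    N = (rng.standard_normal((M, snapshots))
         + 1j * rng.standard_normal((M, snapshots))) / np.sqrt(2)
    X = A @ S + N
    return X @ X.conj().T / snapshots

R = sample_covariance()
a_prior = ula_steering(82.0, 16)   # presumed steering vector with the 2 degree pointing error
```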
SINR versus SNR with difference ε for WCB SINR versus SNR with difference ε for IRAB Iterative convergence property The second example evaluates the convergence properties of IRAB and IWCB, SNR=25 dB. Figure 5 shows the norm bound error between the updated steering vector and actual steering vector per iterative steps. Figure 6 shows the updated SINR per iterative steps. Results show that as the iterative step increases, the error of updated steering vector grows smaller, and the updated SINR increases to a stable value. Additionally, the proposed IRAB has a faster convergence speed than IWCB. The convergence properties of IRAB and IWCB: steering vector error versus iterative steps The convergence properties of IRAB and IWCB: SINR versus iterative steps Output SINR performance The third example evaluates the output SINR performance versus input SNR. The result in Fig. 7 shows that the proposed IRAB outperforms other RABs almost at any input SNR. Output SINR versus input SNR Array beam pattern gain The fourth example presents the array beam pattern gain of four RABs, SNR=25 dB. The results of Fig. 8 show that the main beam peak of IRAB and IWCB nearly points to the actual DOA of desired signal, while the WCB and DLWCB do not. The theoretical result of (37) indicates that, as the iterative step increases, the η (k) approaches to zero, the a (k) gets closer to actual value, and therefore the main beam peak points to the actual DOA. For this reason, the IRAB and IWCB have better SINR performance than WCB and DLWCB, especially at SNR <20 dB, as shown in Fig. 7. SINR versus pointing error The fifth example evaluates the SINR versus pointing error, SNR=25 dB. Setting ε WCB =ε DLWCB =2, which corresponds to about 2° pointing error. The results in Fig. 9 show that the SINR performance of WCB decreases greatly when pointing error exceeds 2°; the IRAB and IWCB exhibit stable performance in the DOA uncertainty range θ W =8∘; the DLWCB has a wilder pointing error range and does not subject to the 2° pointing error constraint. Output SINR versus pointing error SINR versus snapshots We use actual data covariance matrix R in the theoretical analysis; the affect of finite sample effect with different snapshots is simulated in the sixth example, SNR=25 dB. The results in Fig. 10 show that as the snapshots increase from 16 to 400, the output SINR of WCB, DLWCB, IWCB, and IRAB increases about 7.5, 5.5, 8.0, and 7.0 dB respectively. The SINR performance of IRAB outperforms other RABs and goes to stable when snapshots number surplus 200. Output SINR versus snapshots SINR versus DOA separation and array size As declared in Section 2.3, the effectiveness of the proposed algorithm requires "DOA separation between desired signal and interference is larger than a beam width". In this section, we simulate the SINR performance versus different DOA separations between desired signal and interference and versus different array size. In the simulations, three arrays with element number M=10, 15, 20 are used; their mainbeam width are about 24°, 16°, and 12°, respectively (calculated by conventional beamformer with weight vector equals to the steering vector of desired signal). The SNR of desired signal is 20 dB. There is only one interference with INR=15 dB. The DOA separation between desired signal and interference varies from 0.4 to 5 times of main beam width. Other parameters are the same with the parameters declared in the beginning of Section 4. The results in Figs. 
11, 12, and 13 show that there are some ripples; they are caused by the nulls of beam pattern; the performance of IRAB is better than others in most cases; the proposed IRAB can work even when DOA separation between desired signal and interference is smaller than a mainbeam width (The IRAB can work with DOA separation larger than half a mainbeam width); and as the DOA separation increases, the performance of IRAB gets better. SINR versus DOA separation: M=10, mainbeam width= 24° We have derived an approximate diagonal loading solution of the WCB in this paper. A novel beamformer named IRAB have been proposed based on this solution. Theoretical analysis indicates that the proposed IRAB has three properties: the iteration will converge; the performance gets better as the iterative step increases; the IRAB does not subject to the steering vector error norm bound constraint and exhibits stable performance through a wide steering vector error bound range. Simulation results not only verify these properties but also show that the proposed IRAB outperforms other contrasted RABs under the set parameters. L Harry, V Trees, Optimum array processing: part IV of detection, estimation, and modulation theory (2002). http://onlinelibrary.wiley.com/book/10.1002/0471221104. doi:10.1002/0471221104. L Ehrenberg, S Gannot, A Leshem, E Zehavi, in Electrical and Electronics Engineers in Israel (IEEEI), 2010 IEEE 26th Convention Of. Sensitivity analysis of MVDR and MPDR beamformers (IEEE, 2010), pp. 000416–000420. http://ieeexplore.ieee.org/document/5662190/. doi:10.1109/EEEI.2010.5662190. M Wax, Y Anu, Performance analysis of the minimum variance beamformer in the presence of steering vector errors. Signal Processing, IEEE Transactions on. 44(4), 938–947 (1996). doi:10.1109/78.492546. A Pezeshki, BD Van Veen, LL Scharf, H Cox, ML Nordenvaad, Eigenvalue beamforming using a multirank mvdr beamformer and subspace selection. Sig. Process. IEEE Trans.56(5), 1954–1967 (2008). doi:10.1109/TSP.2007.912248. W Zhang, J Wang, S Wu, Robust capon beamforming against large doa mismatch. Signal Process. 93(4), 804–810 (2013). doi:10.1016/j.sigpro.2012.10.002. JH Lee, CC Wang, Adaptive array beamforming with robust capabilities under random sensor position errors. Radar, Sonar and Navigation, IEE Proc.152(6), 383–390 (2005). doi:10.1049/ip-rsn:20045018. C-Y Tseng, DD Feldman, LJ Griffiths, Steering vector estimation in uncalibrated arrays. Signal Proc. IEEE Trans.43(6), 1397–1412 (1995). doi:10.1109/78.388853. X Mestre, MA Lagunas, Finite sample size effect on minimum variance beamformers: Optimum diagonal loading factor for large arrays. Signal Proc. IEEE Trans.54(1), 69–82 (2006). doi:10.1109/TSP.2005.861052. F Vincent, O Besson, in Radar, Sonar and Navigation, IEE Proceedings, 151. Steering vector errors and diagonal loading (IETIET, 2004), pp. 337–343. doi:10.1049/ip-rsn:20041069. Y Selén, R Abrahamsson, P Stoica, Automatic robust adaptive beamforming via ridge regression. Signal Process. 88(1), 33–49 (2008). doi:10.1016/j.sigpro.2007.07.003. SA Vorobyov, AB Gershman, Z-Q Luo, Robust adaptive beamforming using worst-case performance optimization: a solution to the signal mismatch problem. Signal Process. IEEE Trans.51(2), 313–324 (2003). doi:10.1109/TSP.2002.806865. A Elnashar, Efficient implementation of robust adaptive beamforming based on worst-case performance optimisation. IET Signal Process.2(4), 381–393 (2008). doi:10.1049/iet-spr:20070162. J Li, P Stoica, Z Wang, On robust Capon beamforming and diagonal loading. 
Signal Process. IEEE Trans.51(7), 1702–1715 (2003). doi:10.1109/TSP.2003.812831. A Hassanien, SA Vorobyov, KM Wong, Robust adaptive beamforming using sequential quadratic programming: an iterative solution to the mismatch problem. Signal Process. Letters IEEE.15:, 733–736 (2008). doi:10.1109/LSP.2008.2001115. ZL Yu, MH Er, W Ser, A novel adaptive beamformer based on semidefinite programming (sdp) with magnitude response constraints. Antennas Propag. IEEE Trans.56(5), 1297–1307 (2008). doi:10.1109/TAP.2008.922644. D Xu, R He, F Shen, Robust beamforming with magnitude response constraints and conjugate symmetric constraint. IEEE Commun. letters. 17(3), 561–564 (2013). doi:10.1109/LCOMM.2013.011513.122688. Y Gu, A Leshem, Robust adaptive beamforming based on interference covariance matrix reconstruction and steering vector estimation. Signal Process. IEEE Trans.60(7), 3881–3885 (2012). doi:10.1109/TSP.2012.2194289. Z Zhang, W Liu, W Leng, A Wang, H Shi, Interference-plus-noise covariance matrix reconstruction via spatial power spectrum sampling for robust adaptive beamforming. IEEE Signal Process. Letters. 23(1), 121–125 (2016). doi:10.1109/LSP.2015.2504954. K Zarifi, S Shahbazpanahi, AB Gershman, Z-Q Luo, Robust blind multiuser detection based on the worst-case performance optimization of the mmse receiver. Signal Process. IEEE Trans.53(1), 295–305 (2005). doi:10.1109/TSP.2004.838932. P Stoica, Z Wang, J Li, Robust capon beamforming. Signal Process. Letters IEEE. 10(6), 172–175 (2003). doi:10.1109/LSP.2003.811637. JP Lie, W Ser, CMS See, Adaptive uncertainty based iterative robust capon beamformer using steering vector mismatch estimation. Signal Process. IEEE Trans.59(9), 4483–4488 (2011). doi:10.1109/TSP.2011.2157500. RG Lorenz, SP Boyd, Robust minimum variance beamforming. Signal Process. IEEE Trans.53(5), 1684–1696 (2005). doi:10.1109/TSP.2005.845436. J-r Lin, Q-c Peng, H-z Shao, On diagonal loading for robust adaptive beamforming based on worst-case performance optimization. ETRI journal. 29(1), 50–58 (2007). doi:10.4218/etrij.07.0105.0186. Y Li, H Ma, D Yu, L Cheng, Iterative robust capon beamforming. Signal Process.118:, 211–220 (2016). doi:10.1016/j.sigpro.2015.07.004. L Chang, C-C Yeh, Performance of dmi and eigenspace-based beamformers. Antennas Propag. IEEE Trans.40(11), 1336–1347 (1992). doi:10.1109/8.202711. W Jin, W Jia, M Yao, S Zhou, Robust adaptive beamforming based on iterative implementation of worst-case performance optimisation. Electron. letters. 48(22), 1389–1391 (2012). doi:10.1049/el.2012.1718. SE Nai, W Ser, ZL Yu, H Chen, Iterative robust minimum variance beamforming. Signal Process. IEEE Trans.59(4), 1601–1611 (2011). doi:10.1109/TSP.2010.2096222. The authors wish to thank the Handling Editor and Reviewers for their detailed review, which helped improve this manuscript. This work is supported by the scientific research foundation of Wuhan Institute of Technology (No. K201768). School of Electrical and Information Engineering, Wuhan Institute of Technology, 693 Xiongchu Avenue, Wuhan, 430073, China & Li Cheng School of Electric Information and Communications, Huazhong University of Science and Technology, 1037 Luoyu Road, Wuhan, 430074, China , Hong Ma Search for Yang Li in: Search for Hong Ma in: Search for Li Cheng in: YL provided the idea and wrote the manuscript. HM guided this paper. LC gave some improvement suggestions. All authors read and approved the final manuscript. Correspondence to Yang Li. Yang Li received the B.S. and M.S. 
degrees from Wuhan University of Technology, Wuhan, China, in 2006 and 2010, respectively, and received the Ph.D. degree in Electromagnetic Field and Microwave Technology from Huazhong University of Science and Technology, Wuhan, China, in 2016. Now, he is a lecturer of School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan, China. His current research interests include array signal processing, adaptive filtering, and radio wave propagation. Hong Ma received the B.Eng., M. Eng., and Ph.D. degrees in Electromagnetic Field and Microwave Technology from Huazhong University of Science and Technology in 1988, 1992, and 1998, respectively. He is currently a Professor of School of Electronic Information and Communications, Huazhong University of Science and Technology. His research interests include radar system, electromagnetic and microwave technology, and nonlinear system theory. Li Cheng received the B.Eng. degree in Electronic Information Engineering from Hubei University in 2002 and the M. Eng. degree in Communication and Information System from Wuhan University of Technology in 2005. Now, she is presently working on her Ph.D. degree in Electromagnetic Field and Microwave Technology in Huazhong University of Science and Technology. She is currently an associate professor of School of Electrical and Information Engineering, Wuhan Institute of Technology. Her research interests include wireless communication, radio wave propagation model, and radar signal processing. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Array signal processing Robust adaptive beamforming Steering vector error Diagonal loading
CommonCrawl
Spatial Interpolation 101: Introduction to Inverse Distance Weighting Interpolation Technique

Do you think that points on a map are not enough to tell your story? Me too. A year ago I prepared a presentation for a GIS conference and I had a point map of measurements. "It won't be visible," I thought, "I must do something with it." Then I spent half the night before the presentation on the problem of how to interpolate values at locations where measurements were not taken. I used kriging for it, but nowadays I'd rather focus on a simpler approach: the Inverse Distance Weighting interpolation technique.

The First Law of Geography

Do you know the first law of geography, proposed by Tobler? If not, here it is:

Everything is related to everything else, but near things are more related than distant things. (W. Tobler)

What does it mean in practice?

The temperature in your city is similar to the temperature in a neighboring city, but it is probably very different from the temperature in a city 1000 km to the south (spatial distance);

Mean humidity tomorrow will be similar to mean humidity today, but it will be different from the mean humidity a month ago (temporal distance);

Your closest friends have interests and a worldview similar to yours, but friends of your friends are devoted to things other than geostatistics and geography (social distance).

We stick to the spatial distance case and skip the other distances, but it is nice to see the common ground between different dimensions of our reality. Ok, time to think physics and build our inverse distance weighted interpolation (IDW) model. We have one BIG assumption: the value at every point is a weighted sum of all other points' values. What does it mean? We're going through a specific story to uncover IDW.

The story of shared mood and crocs

The IDW concept may be described in two sentences: To get the value at an unseen location, you weight the values of known locations and sum them. Weighting is a function of distance to the unseen location. That's it! But it is still hard to grasp the underlying concept, so we will start from the IDW equation (1) and then move to a specific application of IDW to understand its power.

$$f(m) = \frac{\sum_{i}\lambda_{i}*f(m_{i})}{\sum_{i}\lambda_{i}}; (1)$$

$f(m)$ is the value at the unknown location, $i$ is the i-th known location, $f(m_{i})$ is the value at a known location, and $\lambda_{i}$ is the weight assigned to that known location. We must assign a specific weight to get proper results, and the weight depends on the distance between the known point and the unknown point (2).

$$\lambda_{i} = \frac{1}{d_{i}^{p}}; (2)$$

$d_{i}$ is the distance from known point $i$ to the unknown point, and $p$ is a hyperparameter which controls how strong the relationship between a known point and the unknown point is. You may set a large $p$ if you want a strong influence from the closest points and a very weak influence from distant points. On the other hand, you may set a small $p$ to emphasize that points influence each other with nearly the same power irrespective of their distance.

A specific example helps to understand how the weighting parameter affects our analysis. Let's imagine that you are living in flatland. This land is very strange because your mood is proportional to the mood of your neighbors. The proportion may be controlled by some super-three-dimensional entity which changes the parameter $p$ of your universe.
A larger $p$ increases the effective distance between you and your neighbors; their mood is transferred through a valley full of mood-crocs which eat it on the way. A larger $p$ literally increases the number of mood-crocs per mood route, and if a route is longer, then more mood-crocs fit into it (look at Figures 1-3 if you feel lost here). What happens when $p$ is set to zero (0), one (1), and two (2)? We analyze each case, and with this example you will hopefully understand how IDW works.

Set p to 0

Figure 1: Flatland mood transfer. Hyperparameter p is set to 0; what is the mood value of flatman m? Credits: S. Moliński.

Figure 1 shows three known flatmen with their moods. We also have the distance between each of them and the unknown flatman. In the first step we calculate how many mood-crocs sit between the missing value $m$ and the known values $m_{1}$, $m_{2}$ and $m_{3}$ on each mood-transfer route. Those crocs represent the weights assigned to the known values; we place more crocs (a smaller weight, because we lose more information over a larger distance) when the distance is long. The distance of every route is different, but if the weighting exponent is equal to 0, then the weight (number of crocs) is always one. You should remember from school algebra that any non-zero value raised to the power 0 gives 1. The weight is always 1: we set one mood-croc (the denominator in $\lambda_{i}$) for every distance. The mood of flatman m is the simple average of the other moods, and it is equal to 2.

Figure 2: Flatland mood transfer. Hyperparameter p is set to 1; what is the mood value of flatman m? Credits: S. Moliński.

The scenario in Figure 2 is more complicated. We have multiple mood-crocs here because we have set the $p$ parameter. As you remember from school algebra, if we raise a number to the power of 1, we get the same number. That's why the number of mood-crocs (the weight assigned to each distance) is inversely proportional to the distance itself. This time we get a weighted average of the moods in the form:

$$\frac{(1*1+\frac{1}{2}*3+\frac{1}{3}*2)}{(1+\frac{1}{2}+\frac{1}{3})}=\frac{(1+1.5+0.67)}{1.83}=1.73; (3)$$

Now the interesting part begins. The number of mood-crocs grows with the power applied to the distance. What is the mood of the unknown flatman?

$$\frac{(1*1+\frac{1}{4}*3+\frac{1}{9}*2)}{(1+\frac{1}{4}+\frac{1}{9})}=\frac{(1+0.75+0.22)}{1.36}=1.45; (4)$$

Do you see how the power affects the effective mood transfer? The closest point has the lowest mood value and the greatest impact on the final value, and the impact of the other moods decreases quickly with rising power. This is especially visible if you compare the middle parts of equation (3) and equation (4). The conclusion is simple: if you want to emphasize close-neighbor interactions, set the power to a larger number. But if points in your research area probably interact over large distances, set the power to a small value. A short Python sketch of this worked example follows the questions below.

Think of one example of measurements where observations are spatially correlated. Why should we care about spatial interpolation?

What will happen if the power is set to a value between 0 and 1? What will happen if the power is negative?

Further reading: Prediction of mercury concentrations in Mediterranean Sea with IDW in Python.
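Returning to the flatland example, here is a minimal Python sketch of equations (1) and (2). The code is an addition to this post; the distances 1, 2 and 3 are the ones implied by the weights used in the worked example.

```python
import numpy as np

def idw(distances, values, power):
    """Equations (1) and (2): the weight of each known point is 1 / distance**power."""
    distances = np.asarray(distances, dtype=float)
    values = np.asarray(values, dtype=float)
    weights = 1.0 / distances ** power
    return np.sum(weights * values) / np.sum(weights)

# Moods 1, 3 and 2 at distances 1, 2 and 3 from the unknown flatman.
for p in (0, 1, 2):
    print(p, round(idw([1, 2, 3], [1, 3, 2], p), 2))
# prints 2.0 for p=0, 1.73 for p=1 and 1.45 for p=2, matching equations (3) and (4)
```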
CommonCrawl
3. Elementary Transcendental Functions> Trigonometric functions are not one-to-one because their values repeat periodically and that the horizontal lines $y=c$ intersect the graphs in an infinite number of points, if at all, as we at once see from Figure 1 (recall the horizontal line test in Section One-to-One Functions). Therefore, they cannot have inverses unless we restrict their domains to intervals on which they are one-to-one. (a) Graph of $y=\sin x$ (b) Graph of $y=\tan x$ Figure 1: Trigonometric functions are not one-to-one as they do not pass the horizontal line test. Inverse of sine Inverse of cosine Inverse of Tangent Inverse of the Secondary Trigonometric Functions If we look at the graph of $y=\sin x$ or if we consider the unit circle, we realize that the sine function on the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ is increasing from $-1$ to $1$. So by restricting its domain to this interval, we make it a one-to-one function whose domain is $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ and its range is $[-1,1]$. The inverse of the sine function, denoted by "$\sin^{-1}x$" or "$\arcsin x$" , is a one-to-one function whose domain is $[-1,1]$ and its range is $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. The graph of $y=\arcsin x$ is obtained by reflecting the graph of $y=\sin x$ (restricted to the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$) in the line $y=x$ (see Figure 2). The two symbols "$\sin^{-1}x$" and "$\arcsin x$" are equivalent and can be used interchangeably. The first one is read "the inverse sine of $x$" and the second "the arc sine of $x$." Again note that $\sin^{-1}x\neq\frac{1}{\sin x}$. $y=\arcsin x$ means $y$ is a number in the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ for which $\sin y=x$ The graph of $y=\arcsin x$ is symmetric about the origin, which shows $y=\arcsin x$ is an odd function. To prove it algebraically, we need to show $\arcsin(-x)=-\arcsin x$. Let y=\arcsin(-x). \] We know it means \sin y & =-x\\ \Rightarrow-\sin y & =x\\ \Rightarrow\sin(-y) & =x & \text{the sine is an odd function}\\ \Rightarrow-y & =\arcsin x\\ \Rightarrow y & =-\arcsin x\\ \Rightarrow\arcsin(-x) & =-\arcsin x Figure 2: The graph of $y=\arcsin x$ is obtained by reflecting the graph of $y=\sin x$ restricted to the interval $[-\pi/2,\pi/2]$ in the line $y=x$. The cosine and tangent functions can be inverted in a similar fashion. By considering the unit circle or looking at the graph of $y=\cos x$, we realize that $y=\cos x$ is not one-to-one on $[-\pi/2,\pi/2]$. So we had to choose a different interval for the cosine function. If we restrict the domain of the cosine function to the interval $[0,\pi]$, we can make it one-to-one, so that it has an inverse function denoted by $\cos^{-1}x$ or $\arccos x$. The graph of $y=\arccos x$ is shown in Figure 3. The domain of $y=\arccos x$ is $[-1,1]$ and its range is $[0,\pi]$. $y=\arccos x$ means $y$ is a number in the interval $[0,\pi]$ for which $\cos y=x$ The graph of $y=\arccos x$ is neither symmetric about the $y$-axis nor is symmetric about the origin, which means that $y=\arccos x$ is neither odd nor even. Figure 3: The graph of $y=\arccos x$ is obtained by reflecting the graph of $y=\cos x$ restricted to the interval $[0,\pi]$ in the line $y=x$. For the tangent function, we choose the open interval $(-\frac{\pi}{2},\frac{\pi}{2})$ to perform the inversion. The resulting function is denoted by "$\tan^{-1}x$" or "$\arctan x$." 
The domain of $y=\arctan x$ is $(-\infty,\infty)$ and its range is $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$.

$y=\arctan x$ means $y$ is a number in the interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ for which $\tan y=x$.

The graph of $y=\arctan x$ is shown in Figure 4. This figure shows that the inverse tangent function is an odd function.

Figure 4: The graph of $y=\arctan x$ is obtained by reflecting the graph of $y=\tan x$ restricted to the open interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ in the line $y=x$.

Recall that if $f$ and $g$ are inverse functions of each other then \[ f(g(x))=x,\qquad g(f(x))=x \] for every $x$ in the domain of the inside function, which are $g$ and $f$, respectively (Theorem 1 in the Section on Inverse Functions). The following table summarizes some properties of the inverse trigonometric functions. Note that here we deal with the restricted domains of the trigonometric functions; otherwise, their inverses do not exist.

Table 1: Properties of the inverse trigonometric functions

$y=\arcsin x$: domain $[-1,1]$, range $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. Cancellation equations: $\sin(\arcsin x)=x$ if $-1\leq x\leq1$; $\arcsin(\sin x)=x$ if $-\frac{\pi}{2}\leq x\leq\frac{\pi}{2}$.

$y=\arccos x$: domain $[-1,1]$, range $[0,\pi]$. Cancellation equations: $\cos(\arccos x)=x$ if $-1\leq x\leq1$; $\arccos(\cos x)=x$ if $0\leq x\leq\pi$.

$y=\arctan x$: domain $(-\infty,\infty)$, range $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. Cancellation equations: $\tan(\arctan x)=x$ if $-\infty<x<\infty$; $\arctan(\tan x)=x$ if $-\frac{\pi}{2}<x<\frac{\pi}{2}$.

The inverse of the secondary trigonometric functions (optional)

The inverses of the cotangent, secant, and cosecant can be defined in a similar fashion, but they are of lesser importance. Most calculators do not have special keys for arccot $x$, arcsec $x$, or arccsc $x$ (equivalent to $\cot^{-1}x$, $\sec^{-1}x$, or $\csc^{-1}x$), but we can say
\begin{align}
\text{arccot }x & =\arctan\left(\frac{1}{x}\right)\\
\text{arcsec }x & =\arccos\left(\frac{1}{x}\right)\tag{1}\\
\text{arccsc }x & =\arcsin\left(\frac{1}{x}\right)
\end{align}
The graphs of these inverses of cotangent, secant, and cosecant are depicted in Figure 5. There is no universal agreement on how to restrict the domains of the secondary trigonometric functions. For example, in many books (especially older ones) you may see the inverse cotangent function defined by restricting the domain of the cotangent function to the interval $(0,\pi)$, but nowadays most computer packages such as MATLAB, Mathematica, SymPy, and Maple define it by restricting the domain of cotangent to the interval $(-\frac{\pi}{2},\frac{\pi}{2})$. In Figure 5, we have used the definitions that conform with the conventions used by these computer packages, as they are consistent with Equations 1.

Figure 5: (a) Graph of $y=\text{arccot }x$ (b) Graph of $y=\text{arcsec }x$ (c) Graph of $y=\text{arccsc }x$

When evaluating the inverse trigonometric functions, do not forget that their outputs are angles in radian measure.

Find $\arcsin\frac{1}{2}$. We know $\sin\frac{\pi}{6}=\frac{1}{2}$. Because $\frac{\pi}{6}$ belongs to the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, we conclude $\arcsin\frac{1}{2}=\frac{\pi}{6}$.

Evaluate $\arcsin\left(\sin\frac{4\pi}{3}\right)$. First we note that $\frac{4\pi}{3}$ is in the third quadrant.
Because $\frac{4\pi}{3}$ is not in the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ \arcsin\left(\sin\frac{4\pi}{3}\right)\neq\frac{4\pi}{3} \] But we can write \sin\frac{4\pi}{3} & =\sin\left(\pi+\frac{\pi}{3}\right)\\ & =-\sin\frac{\pi}{3}\\ & =\sin\left(-\frac{\pi}{3}\right). As $-\frac{\pi}{3}$ is in the restricted domain of the sine function, which is $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, by the second cancellation equation in Table 1, we have \arcsin\left(\sin\left(-\frac{\pi}{3}\right)\right)=-\frac{\pi}{3}. Find $\cos\left(\arcsin\frac{3}{5}\right)$. Let $\alpha=\arcsin\frac{3}{5}$. Then we know: (1) By the definition of the arc sine function, $\alpha$ is in the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ (2) $\sin\alpha=\frac{3}{5}$. The question asks us to calculate $\cos\alpha$. By the identity $\sin^{2}\alpha+\cos^{2}\alpha=1$, we have \cos^{2}\alpha=1-\sin^{2}\alpha=1-\frac{9}{25}=\frac{16}{25}. \] Because $\alpha$ is in the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ and the cosine function is positive in the first and fourth quadrant: \cos\alpha=+\sqrt{\frac{16}{25}}=\frac{4}{5}. \] That is, $\cos\left(\arcsin\frac{3}{5}\right)=\frac{4}{5}$. Find $\arccos\left(-\frac{1}{2}\right)$. Let $\alpha=\arccos\left(-\frac{1}{2}\right)$. To find $\alpha$, we use the unit circle and draw a vertical line passing through $-\frac{1}{2}$ (recall the $x$-axis is also called the cosine axis). As we at once see from Figure 6, $\alpha$ lies in the second quadrant (there is another angle in the third quadrant for which the cosine is $-1/2$, but we are not interested in that angle as we have restricted the domain of the cosine function to the first and second quadrants). Considering the right triangle $\triangle OHP$ in Figure 6, it is obvious that \cos(\angle HOP)=\frac{1}{2}. \] Because $\cos\frac{\pi}{3}=\frac{1}{2},$ we get \angle HOP=\frac{\pi}{3}. \] From Figure 6, \alpha & =\pi-\angle HOP\\ & =\pi-\frac{\pi}{3}\\ & =\frac{2\pi}{3}. That is, \arccos\left(-\frac{1}{2}\right)=\frac{2\pi}{3}. Evaluate $\arccos\left(\cos\left(-\frac{4\pi}{3}\right)\right)$. Because $-\frac{3\pi}{2}<-\frac{4\pi}{3}<-\pi$, the angle whose radian measure is $-\frac{4\pi}{3}$ is in the second quadrant. Because $-\frac{4\pi}{3}$ does not lie between $0$ and $\pi$, we cannot use the cancellation equation (see Equation 4 in Table 1). So the first step is to find an angle $\alpha$ such that $\alpha$ lies between 0 and $\pi$ and \cos\alpha=\cos\left(-\frac{4\pi}{3}\right), \] then we can use the cancellation equation. We at once see from the following figure that \alpha=2\pi-\frac{4\pi}{3}=\frac{2\pi}{3}. \] Therefore, \arccos\left(\cos\left(-\frac{4\pi}{3}\right)\right)=\arccos\left(\cos\frac{2\pi}{3}\right). \] and now we can use the cancellation equation \arccos\left(\cos\frac{2\pi}{3}\right)=\frac{2\pi}{3}. Find the domain of the function $f$, given $f(x)=\arcsin(x^{2}-3)$. Because the domain the arc sine function $y=\arcsin x$ is $[-1,1]$; Dom(f) & =\{x|\ -1\leq x^{2}-3\leq1\}\\ & =\{x|\ 2\leq x^{2}\leq4\}& \text{(adding 3 to each side of the inequalities)} Here we have two inequalities $x^{2}\leq4$ and $2\leq x^{2}$, and we need to find all $x$ for which the both inequalities hold: x^{2}\leq4\Rightarrow|x|\leq4\Rightarrow-2\leq x\leq2 \] In other words, $x\in[-2,2]$. 
Now the second inequality: 2\leq x^{2}\Rightarrow\sqrt{2}\leq|x|\Rightarrow x\geq\sqrt{2}\text{ or }x\leq-\sqrt{2} \] or $x\in(-\infty,-\sqrt{2}]\cup[\sqrt{2},\infty).$ Therefore Dom(f) & =\{x|\ -2\leq x\leq-\sqrt{2}\text{ or }\sqrt{2}\leq x\leq2\}\\ & =[-2,-\sqrt{2}]\cup[\sqrt{2},2]. To show that this is the correct domain of $f$, we can graph $f$ using a graphing calculator or a computer package (see the following figure). Figure 7: Graph $y=\arcsin(x^2-3)$
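The worked examples above can be double-checked numerically. The short Python snippet below is an addition to these notes, not part of the original text; it uses only the standard math module.

```python
import math

print(math.isclose(math.acos(-0.5), 2 * math.pi / 3))                    # arccos(-1/2) = 2*pi/3
print(math.isclose(math.cos(math.asin(3 / 5)), 4 / 5))                   # cos(arcsin(3/5)) = 4/5
print(math.isclose(math.asin(math.sin(4 * math.pi / 3)), -math.pi / 3))  # arcsin(sin(4*pi/3)) = -pi/3

# Domain of f(x) = arcsin(x**2 - 3): x = 1.5 is inside, x = 1 is outside.
print(math.asin(1.5 ** 2 - 3))        # fine: asin(-0.75)
try:
    math.asin(1 ** 2 - 3)             # asin(-2) is undefined
except ValueError:
    print("x = 1 is not in the domain of f")
```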
CommonCrawl
Experimental study of the influence of wind on Benjamin–Feir sideband instability
Larry F. Bliven, Norden E. Huang, Steven R. Long
Journal: Journal of Fluid Mechanics / Volume 162 / January 1986
Published online by Cambridge University Press: 21 April 2006, pp. 237-260
A laboratory investigation of the influence of wind on the evolution of mechanically generated regular (m.g.r.) waves is reported. Surface elevation measurements were made at four fetches for steep (0.1 < $\overline{ak}$ < 0.2, 2 Hz) m.g.r. waves, moderate (15 < u* < 25 cm s−1, 3–6 Hz) wind waves, and combinations of the m.g.r. and wind waves. The m.g.r. wave spectra exhibit Benjamin–Feir sidebands that grow exponentially with fetch and whose growth rate increases as the initial wave steepness increases. As fetch increases for the wind cases, total energy increases and the frequency of the spectral maximum downshifts, but no spectral lines representing Benjamin–Feir sidebands were detected even though the wave steepness and fetch were similar to the m.g.r. waves whose spectra displayed sidebands. As wind speed increased over the m.g.r. waves, sideband magnitude, sideband growth rate and low-frequency perturbation components associated with the instability mechanism were reduced.

A unified two-parameter wave spectral model for a general sea state
Norden E. Huang, Steven R. Long, Chi-Chao Tung, Yeli Yuen, Larry F. Bliven
Journal: Journal of Fluid Mechanics / Volume 112 / November 1981
Based on theoretical analysis and laboratory data, we proposed a unified two-parameter wave spectral model as $\phi(n) = \frac{\beta g^2}{n^m n_0^{5-m}} {\rm exp} \left\{-\frac{m}{4}\left(\frac{n_0}{n}\right)^4\right\}$ with β and m as functions of the internal parameter, the significant slope § of the wave field, which is defined as $\S = \frac{(\overline{\zeta^2})^{\frac{1}{2}}}{\lambda_0},$ where $\overline{\zeta^2}$ is the mean squared surface elevation, and λ0, n0 are the wavelength and frequency of the waves at the spectral peak. This spectral model is independent of local wind. Because the spectral model depends only on internal parameters, it contains information about fluid-dynamical processes. For example, it maintains a variable bandwidth as a function of the significant slope which measures the nonlinearity of the wave field. And it also contains the exact total energy of the true spectrum. Comparisons of this spectral model with the JONSWAP model and field data show excellent agreements. Thus we established an alternative approach for spectral models. Future research efforts should concentrate on relating the internal parameters to the external environmental variables.

An experimental study of the surface elevation probability distribution and statistics of wind-generated waves
Norden E. Huang, Steven R. Long
Journal: Journal of Fluid Mechanics / Volume 101 / Issue 1 / 13 November 1980
Laboratory experiments were conducted to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data together with some limited field data were compared. It is found that the skewness of the surface elevation distribution is proportional to the significant slope of the wave field, §, and all the laboratory and field data are best fitted by \[ K_3 = 8\pi\S, \] with § defined as $(\overline{\zeta^2})^{\frac{1}{2}}/\lambda_0$, where ζ is the surface elevation, and λ0 is the wavelength of the energy-containing waves.
The value of K3 under strong wind could reach unity. Even under these highly non-Gaussian conditions, the distribution can be approximated by a four-term Gram-Charlier expansion. The approximation does not converge uniformly, however. More terms will make the approximation worse. On surface drift currents in the ocean Norden E. Huang Journal: Journal of Fluid Mechanics / Volume 91 / Issue 1 / 9 March 1979 A new model of surface drift currents is constructed using the full nonlinear equations of motion. This model includes the balance between Coriolis forces due to the mean and wave-induced motions and the surface wind stresses. The approach used in the analysis is similar to the work by Craik & Leibovich (1976) and Leibovich (1977), but the emphasis is on the mean motion rather than the small-scale time-dependent part of the Langmuir circulation. The final result indicates that surface currents can be generated by both the direct wind stresses, as in the classical Ekman model, and the Stokes drift, derived from the surface wave motion, in an interrelated fashion depending on a wave Ekman number E defined as \[ E = \Omega/\nu_ek^2_0, \] where Ω is the angular velocity of the earth's rotation, νe, the eddy viscosity and k0, the wavenumber of the surface wave at the spectral peak. When E [Lt ] 1, the Langmuir mode dominates. When E [Gt ] 1, inertial motion results. The classical Ekman drift current is a special case even under the restriction E ≃ 1. On the basis of these results, a new model of the surface-layer movements for future large-scale ocean circulation studies is presented. For this new model both the wind stresses and the sea-state information are crucial inputs. On the variation and growth of wave-slope spectra in the capillary-gravity range with increasing wind Steven R. Long, Norden E. Huang Journal: Journal of Fluid Mechanics / Volume 77 / Issue 2 / 24 September 1976 A new laser device has been used to make direct wave-slope measurements in the capillary-gravity range. Owing to the design principles, the digital nature of the system and the use of a laser beam as a probe, the earlier problems of intensity variations and meniscus effects were avoided. Using this new technique, wave-slope spectra both down and across the channel were obtained for different wind conditions, along with corresponding mean-square slope values. Comparisons are made with existing data. The results indicate that a quasi-equilibrium state may exist for each wind speed and that it increases in intensity with increasing wind, which may imply an asymptotic nature for the equilibrium-range coefficient Caa. From the data, two significant frictional velocities, 17.5 and 31 cm/s respectively, are identified as critical values for different ranges of wave development. The dispersion relation for a nonlinear random gravity wave field Norden E. Huang, Chi-Chao Tung Journal: Journal of Fluid Mechanics / Volume 75 / Issue 2 / 27 May 1976 Published online by Cambridge University Press: 29 March 2006, pp. 337-345 The dispersion relation for a random gravity wave field is derived using the complete system of nonlinear equations. It is found that the generally accepted dispersion relation is only a first-order approximation to the mean value. The correction to this approximation is expressed in terms of the energy spectral function of the wave field. The non-zero mean deviation is proportional to the ratio of the mean Eulerian velocity at the surface and the local phase velocity. 
In addition to the mean deviation, there is a random scatter. The root-mean-square value of this scatter is proportional to the ratio of the root-mean-square surface velocity and the local phase velocity. As for the phase velocity, the nonzero mean deviation is equal to the mean Eulerian velocity while the root-mean-square scatter is equal to the root-mean-square surface velocity. Special cases are considered and a comparison with experimental data is also discussed.
CommonCrawl
Banach algebras, by Richard D. Mosak. Published 1975 by University of Chicago Press in Chicago. Series: Chicago Lectures in Mathematics. LC Classification: QA326 .M67. Pagination: viii, 172 p.

It will certainly be quite useful for new graduate students as well as for non-specialists in the areas covered who want to get a quick overview before delving into dauntingly thick treatises such as the one by H. Dales ['Banach algebras and automatic continuity', Lond. Math. Soc.].

This is the first volume of a two volume set that provides a modern account of basic Banach algebra theory including all known results on general Banach *-algebras. This account emphasises the role of *-algebra structure and explores the algebraic results which underlie the theory of Banach algebras and *-algebras.

The study of Banach algebras began in the twentieth century and originated from the observation that some Banach spaces show interesting properties when they can be supplied with an extra multiplication operation. A standard example was the space of bounded linear operators on a Banach space, but another.
Harmonic analysis and Banach algebras are rather old areas of mathematics, but they are deeply rooted and still active; we shall also introduce the BSE norm of a Banach function algebra. Part 1: Function and operator algebras on locally compact groups. The remaining chapters are devoted to Banach algebras of operators on Banach spaces: Professor Eschmeier gives all the background for the exciting topic of invariant subspaces of operators, and discusses some key open problems; Dr Laursen and Professor Aiena discuss local spectral theory for operators, leading into Fredholm theory. The uniqueness of the complete norm topology in Banach algebras and Banach–Jordan algebras, Journal of Functional Analysis 47, 1–6. Banach Algebra Techniques in Operator Theory (Pure and Applied Mathematics 49), Ronald G. Douglas: a discussion of certain advanced topics in operator theory, providing the necessary background while assuming only standard senior/first-year graduate courses in general topology, measure theory, and algebra. The remainder of the book addresses the structure of various Banach spaces and Banach algebras of analytic functions in the unit disc. Enhanced with challenging exercises, a bibliography, and an index, this text belongs in the libraries of students and professional mathematicians, as well as of anyone interested in a rigorous, high-level treatment. In the book, I considered differential equations of order 1 over a Banach D-algebra: differential equations solved with respect to the derivative, exact differential equations, and linear homogeneous equations. In a noncommutative Banach algebra, the initial value problem for a linear homogeneous equation has infinitely many solutions. This is the first volume of a two-volume set that provides a modern account of basic Banach algebra theory, including all known results on general Banach *-algebras. This account emphasises the role of *-algebra structure and explores the algebraic results which underlie the theory of Banach algebras and *-algebras. This first volume is an independent, self-contained reference on Banach algebra theory. Segal proves the real analogue to the commutative Gelfand–Naimark representation theorem. Naimark's book "Normed Rings" is the first presentation of the whole new theory of Banach algebras, which was important to its development. Rickart's book "General Theory of Banach Algebras" is the reference book for all later studies of the subject. The aim of this book is to give an account of the principal methods and results in the theory of Banach algebras, both commutative and non-commutative. It has been necessary to apply certain exclusion principles in order to keep our task within bounds. This book examines some aspects of homogeneous Banach algebras and related topics to illustrate various methods used in several classes of group algebras. It guides the reader toward some of the problems in harmonic analysis. Definition: a C*-algebra is a Banach *-algebra over ℂ that satisfies ‖a*a‖ = ‖a‖². The next theorem classifies the kind of Banach *-algebras given in the above example. Theorem (Little Gelfand–Naimark Theorem): let A be a commutative Banach *-algebra satisfying ‖a*a‖ = ‖a‖². Then A ≅ C₀(M) (as a Banach *-algebra) for some locally compact space M. Banach Algebras, G. Ramesh. Contents: 1. Banach algebras (examples; new Banach algebras from old); 2. The spectrum (Gelfand–Mazur theorem; the spectral radius formula); 3. Multiplicative functionals (multiplicative functionals and ideals; the G–K–Z theorem); 4. The Gelfand map.
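As a quick illustration of the Little Gelfand–Naimark theorem just quoted, one can check directly that the algebra $C_0(M)$ appearing in its conclusion does satisfy the defining identity. This is a standard computation (with the involution $f^*(x) = \overline{f(x)}$ and the supremum norm), sketched here for completeness:
\begin{align*}
\|f^* f\| = \sup_{x \in M} \bigl|\overline{f(x)}\,f(x)\bigr| = \sup_{x \in M} |f(x)|^{2} = \Bigl(\sup_{x \in M} |f(x)|\Bigr)^{2} = \|f\|^{2},
\end{align*}
so $C_0(M)$ is a commutative Banach $*$-algebra satisfying $\|a^* a\| = \|a\|^{2}$, which is exactly the class of algebras the theorem characterises.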
Spectral theories", Addison-Wesley () (Translated from French) MR Zbl [DuSc] N. Dunford, J.T. Schwartz, "Linear operators", 1–3, Interscience (–) MR Zbl [Ga]. Page - Maximal ideals in an algebra of bounded analytic functions, J. Math. Mech. 10 (), Appears in 13 books from Page xiii - Let X be a compact Hausdorff space and let C(X) denote the Banach space of all continuous functions on X with the supremum norm. In this paper we give a very simple subharmonic proof of an extension of the famous theorem of B. Johnson on the equivalence of complete norms in semi-simple Banach algebras. This proof avoids irreducible representations so that it can be adapted to the situation of Banach Jordan algebras in order to give a similar result. Gilles Pisier, in Handbook of the Geometry of Banach Spaces, 7 Characterizations of operator algebras and modules. In the Banach algebra literature, an operator algebra is just a closed subalgebra (not necessarily self-adjoint) of B(H).A uniform algebra is a subalgebra of the space C(T) of all continuous functions on a compact set T. (One sometimes assumes that A is. This well-crafted and scholarly book, intended as an (extremely) advanced undergraduate or early graduate text, scores on several fronts. For the well-prepared mathematics student it provides a solid introduction to functional analysis in the form of the theory of Banach spaces and algebras. 2 BANACH ALGEBRAS Example (Finite dimensional). (1) Let A= C. Then with respect to the usual multiplication of complex numbers and the modulus, A is a Banach algebra. (2) Let A= M n(C), the set of n nmatrices with matrix addition, matrix multiplication and with Frobenius norm de ned by kAk F = 0 @ Xn i;j=1 ja ijj2 1 A 1 2 is a non. C ∗-algebras (pronounced "C-star") are subjects of research in functional analysis, a branch of mathematics.A C*-algebra is a Banach algebra together with an involution satisfying the properties of the adjoint.A particular case is that of a complex algebra A of continuous linear operators on a complex Hilbert space with two additional properties. A is a topologically. Buy General Theory of Banach Algebras by Charles E Rickart online at Alibris. We have new and used copies available, in 1 editions - starting at $ Shop now. The book provides researchers and graduate students with a unified survey of the fundamental principles of fixed point theory in Banach spaces and algebras. The authors present several extensions of Schauder's and Krasnosel'skii's fixed point theorems to the class of weakly compact operators acting on Banach spaces and algebras. Let A be a Banach algebra with identity. Then, by moving to an equivalent norm, we may suppose that A is unital. It is easy to check that, for each normed algebra A, the map (a,b) → ab, A × A → A, is continuous. Dales, P. Aiena, J. Eschmeier, K. Laursen, and G. Willis, Introduction to Banach Algebras, Operators, and Harmonic. Banach algebras. [Richard D Mosak] Banach algebras. Banach-Algebra. Banach algebras; More like this: Similar Items Book: All Authors / Contributors: Richard D Mosak. Find more information about: ISBN: OCLC Number. simple result that the set of invertible elements in a unital Banach algebra must be open. While it is fairly easy, it is interesting to observe that this is an important connection between the algebraic and topological structures. Lemma 1. If ais an element of a unital Banach algebra Aand ka 1kFile Size: KB. 
Sub-Banach spaces (Banach algebras) of the disc algebra which are invariant under the differentiation operator: in this question the disc algebra $\mathcal{A}(\mathbb{D})$ is the Banach algebra of all holomorphic functions on the open unit disc $\mathbb{D} \subset \mathbb{C}$ which have a continuous extension to the closed disc. The only Banach division algebra over $\mathbb{C}$ is $\mathbb{C}$ itself, by the Gel'fand–Mazur theorem. A JB-algebra (or more generally a Jordan–Banach algebra) is a nonassociative (but commutative) kind of Banach algebra. (The commutative associative Banach algebras also count as Jordan–Banach algebras.) Additional physical format: online version: Żelazko, Wiesław, Banach algebras, Amsterdam/New York, Elsevier Pub. Co. Banach Algebras and Compact Operators, Volume 2, 1st edition. The author emphasizes the roles of *-algebra structure and explores the algebraic results which underlie the theory of Banach algebras and *-algebras. Proofs are presented in complete detail at a level accessible to graduate students. The books will become the standard reference for the general theory of *-algebras. Exercise books in Banach algebra: I'm studying Banach algebra and I was wondering if there are some exercise books (that is, books with solved problems and exercises). The books I'm searching for should be rich in complete, step-by-step solutions and full of hard exercises. In part 1, the by now classical spectral theory of Banach ∗-algebras is developed, including the Shirali–Ford theorem. This part cannot serve as an introduction to Banach algebras in general, as the scope is limited to the prerequisites for the sequel, that is, basic representation theory of ∗-algebras. Banach algebras are Banach spaces equipped with a continuous multiplication. In rough terms, there are three types of them: algebras of bounded linear operators on Banach spaces with composition and the operator norm, algebras consisting of bounded continuous functions on topological spaces with pointwise product and the uniform norm, and algebras of integrable functions on groups with convolution as the product. Comments: the theory of Banach algebras is a very elegant blend of algebra and topology which provides unifying principles for a number of different parts of mathematics and its applications, notably operator theory, commutative and non-commutative harmonic analysis and the theory of group representations, and the theory of functions of one and several complex variables.
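As a small illustration of that 'blend of algebra and topology', here is the standard short argument behind the Gel'fand–Mazur theorem quoted above; it takes for granted the nonemptiness of the spectrum, which is where complex analysis (Liouville's theorem) enters:
\begin{align*}
&\text{Let } A \text{ be a complex Banach division algebra and } a \in A. \text{ Since } \sigma(a) \ne \emptyset, \text{ choose } \lambda \in \sigma(a).\\
&\text{Then } a - \lambda 1 \text{ is not invertible; in a division algebra the only non-invertible element is } 0, \text{ so } a = \lambda 1.\\
&\text{Hence } A = \mathbb{C}1 \cong \mathbb{C}.
\end{align*}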
The hierarchical age–period–cohort model: Why does it find the results that it finds? Andrew Bell ORCID: orcid.org/0000-0002-8268-58531 & Kelvyn Jones2 Quality & Quantity volume 52, pages 783–799 (2018)Cite this article It is claimed the hierarchical-age–period–cohort (HAPC) model solves the age–period–cohort (APC) identification problem. However, this is debateable; simulations show situations where the model produces incorrect results, countered by proponents of the model arguing those simulations are not relevant to real-life scenarios. This paper moves beyond questioning whether the HAPC model works, to why it produces the results it does. We argue HAPC estimates are the result not of the distinctive substantive APC processes occurring in the dataset, but are primarily an artefact of the data structure—that is, the way the data has been collected. Were the data collected differently, the results produced would be different. This is illustrated both with simulations and real data, the latter by taking a variety of samples from the National Health Interview Survey (NHIS) data used by Reither et al. (Soc Sci Med 69(10):1439–1448, 2009) in their HAPC study of obesity. When a sample based on a small range of cohorts is taken, such that the period range is much greater than the cohort range, the results produced are very different to those produced when cohort groups span a much wider range than periods, as is structurally the case with repeated cross-sectional data. The paper also addresses the latest defence of the HAPC model by its proponents (Reither et al. in Soc Sci Med 145:125–128, 2015a). The results lend further support to the view that the HAPC model is not able to accurately discern APC effects, and should be used with caution when there appear to be period or cohort near-linear trends. The hierarchical age period cohort (HAPC) model has, like every age–period–cohort (APC) model that has been proposed in the last 50 years, received a mixed reception since it was first outlined in 2006 (Yang and Land 2006). Whilst it has been taken up enthusiastically in parts of the social and medical sciences, the ability of the model to produce meaningful statistics has also been disputed. This is because it is, apparently, attempting to do the impossible (Bell and Jones 2013): separating age, period and birth cohort effects, including linear effects if they are present. Many, including us, have argued that it doesn't work, and used simulations to demonstrate the situations in which this is the case (Luo and Hodges 2016; Bell and Jones 2014a). The inventors of the model and others have responded that simulations are an inappropriate method for assessing the importance of APC methods (Reither et al. 2015a). This paper can be considered the next entry in this continuing debate. This debate is an important one. Many applied researchers now see the HAPC model as the "standard way of analysing generational effects" (Linek and Petrúšek 2016, p. 82), even whilst acknowledging the critics of the method. Whilst the methodological questions remain open, such judgements will continue to be made. This is a problem if, as we believe, the model does not function as its proponents suggest it does, and can produce highly misleading results. The debate also mirrors and complements that taking place elsewhere regarding another APC model called the Intrinsic Estimator (see Pelzer et al. 2015; Te Grotenhuis et al. 2016; Yang and Land 2013b; Luo 2013a, b; Luo et al. 2016). 
In the latest rejoinder on this subject to our earlier critique, Reither et al. (2015a) left a number of unanswered questions, and we hope to be able to give our answers to those questions here. However, the key focus of this paper lies in making an argument not just that the HAPC model sometimes doesn't work, but also in giving a reason why the model produces the results that it does. Moreover, this paper moves the debate around the HAPC model beyond simulations, towards the analysis of real data. This is not to say that we consider previous simulation studies worthless; rather that we believe that the case presented by simulations is already rather convincing, and in showing that similar results occur in real-life data, it lends credence to the argument that those simulations did indeed produce results that are indicative of real-world scenarios, despite Reither et al.'s (2015a) claims to the contrary. We will show, using both real and simulated data, that the results produced are the result not of substantive processes at hand, but an artefact of the structure of the data being analysed. When taking different samples from the same given real-life dataset, you can get different results depending on how you select your sample. This gives insight into why the HAPC model produces the results that it does—simulations have already shown that often the results they produce are incorrect, but have not thus far given any insight as to why. Readers might feel that, in furthering the critique of the HAPC model, this paper is simply 'flogging a dead horse', given the existing critiques by many separate researchers (see Table 1). We disagree and contend that this paper makes three important contributions. First, practitioners are still using the HAPC model, and this paper we hope encourages readers to be critical of the model, and not take the latest rejoinder (Reither et al. 2015a) as the final word on the subject. Second, it offers useful insight to methodologists in understanding how statistical models generally, and multilevel models in particular, behave in the presence of exact collinearity in the random effects. Third, in comparing simulated results to real data, it shows the value that simulation can offer, in contrast to Reither et al. (2015a) who seem to argue its use is problematic because it is, in a sense, synthetic and therefore unrealistic. Table 1 Key papers (and arguments made) in the debate around the HAPC model This paper begins with a brief discussion of the APC identification problem, before outlining our explanation for why the HAPC model produces the results that it does under different data scenarios. We argue that it is the range of the periods and cohorts set by the data structure, rather than any substantive processes, that drives the results that are found. This is illustrated first with simulations and second by attempting to replicate Reither and colleagues's (2009) study of APC effects on obesity in the United States. Regarding the latter, whilst we were able to replicate the study using the full data (including additional data up to and including 2014) we show that we get different results when we take particular samples of the data, with Reither et al.'s results not replicated when data is sampled based on a narrow range of cohorts. By way of a coda to the article, we rebut the key points made by Reither et al. 
(2015a) in their most recent rejoinder—particularly regarding the use of model fit statistics, and the use of both descriptive and modelled APC graphical trends to test whether the use of the HAPC model is appropriate. The paper finishes with a summary of the arguments in favour of the HAPC model so far, and suggestions for what substantive researchers interested in APC processes should do in the light of these criticisms. The key critique of the HAPC model The debate around the HAPC model has been extensive, and the key contributions to it are summarised in Table 1 for readers to consider themselves. The problem that the model is trying to address is that age, period (year) and cohort (year of birth) are linearly related such that age = period − cohort. This is a problem if any of age, period or cohort are linearly related to a given outcome, since different linear combinations of APC can produce identical outcomes. For us, the key critique of the HAPC model lies in its inability to accurately represent data generating processes (DGPs) in simulation. In particular, we have shown (Bell and Jones 2014c) that results that have been found in previous work in fact could have resulted from an entirely different DGP. This has been shown, both with linear and non-linear relations (Bell and Jones 2015b); a non-linear relationship with an outcome does not mean there isn't also a linear relation that could cause a problems in attempting to uncover true APC trends, even when no linear effect is included in the DGP. What drives the HAPC model to period trends? All of the above are in our view good reasons why the arguments in favour of the HAPC model should be viewed with scepticism. However, there remain a number of questions that critics of the HAPC model have not yet answered. In particular, why is it that that the HAPC model finds the results that it finds? Simulations have shown that the HAPC model tends to favour period effects over cohort effects, but that this is not consistent when cohorts are grouped (Bell and Jones 2014a). Yet there has been no discussion in the literature to our knowledge as to why that pattern occurs. Here we present an argument that lies in the imposed structure of the data being analysed. The HAPC model is designed for repeated cross sectional data, where a sample is taken across a number of years, and so this data can be represented in a rectangular age-by-period table. Similarly, panel data (for which the HAPC model has been adapted—see Suzuki 2012) can also be represented in such a format. The result of such data is that cohorts—represented by the diagonals in an age-by-period table and measured by year of birth—span a wider range of years than periods. Taking the data used by Reither et al. (2009) in their analysis of obesity, periods span the time period 1976–2002—a range of 26 years, whilst birth cohorts (measured by the year of birth) span the years 1890–1985—a range of 95 years. In the HAPC model, the estimation method, whether frequentist (e.g. maximum likelihood) or Bayesian (e.g. MCMC), aims to minimise the amount of unexplained variation in the model (O'Brien 2016). In the HAPC model, the period and cohort random effects are considered, at least in part, unexplained, since they are in the random part of the model, whilst the age trend is considered explained since it is in the fixed part. 
The model will thus apportion variation to the trends in such a way that makes those unexplained components as small as possible, regardless of its effect on the explained part (the age parameter estimates). Imagine, for example, that the true DGP of a model consists only of a linear cohort trend with a slope of 1. In this case, the HAPC model could assign the linear trend (correctly) to the cohort residuals, or it could apply it to the period trend, with an additional age trend in the opposite direction estimated in the fixed part of the model, cancelling it out (since cohort = period − age). Adding a slope to the age trend does not in any way increase or decrease the unexplained variance, so the question is which of the periods or cohorts increases the unexplained variance the most. The answer is the cohorts, because they span a wider range. The random effects attached to the very new and very old cohorts (U_c in Fig. 1) will be much bigger than the equivalent random effects for periods (U_p in Fig. 1), because a trend with a slope of 1 that spans 95 years (the range of cohorts) will reach much higher and lower values than a slope with the same gradient that spans 26 years (the range of periods), as shown clearly in Fig. 1. In Bayesian estimation, the larger variance of the cohorts would also make the effective number of parameters greater (a wider spread of cohorts results in those cohort residuals counting as more effective parameters—Spiegelhalter et al. 2002). Hypothetical period and cohort trends with a slope of one, for data with the structure of that used in Reither et al. (2009). As can be seen, the cohort trend of necessity produces much more extreme residual values than the period trend, despite both having the same slope value. Of course, a true DGP is unlikely to be as simple as a single linear effect. But if there is a single linear or near-linear effect as part of the DGP, the model will assign that trend in such a way that reduces the unexplained variance, and so, all other things being equal, will place it with period effects. As stated previously, grouping has an effect on this, making the direction of the effect assignment more unpredictable. Whilst grouping does not affect the range spanned by the cohorts, it would affect the number of cohort groups. On the one hand, grouping cohorts makes the measurement of cohorts less precise and so makes the fit of the cohorts to the data worse, which might lead the model, during estimation, to 'favour' the more finely grouped periods for a trend. On the other hand, if there are fewer groups, there are fewer random effects and so either fewer degrees of freedom consumed (in a Bayesian model) or smaller penalties to the log-likelihood (O'Brien 2016). Whilst on average the results seem to fit the period solution, there is more variation around this in possible results from the same DGP (as shown by the simulations in Bell and Jones 2014a), and differently grouping the same dataset will produce fundamentally different results (Luo and Hodges 2016). The key point is that the data structure, and thus the tendencies towards periods described here, are not the result of any real-world substantive process, and thus their influence on the results is a statistical artefact. One could, instead, collect data by cohorts; that is, follow a large number of birth cohorts through their lives. The result would be a rectangular age-by-cohort table, with periods along the diagonals.
In this situation, there would be a much wider range of periods than cohorts, and the model would tend to assign trends to cohorts instead of periods. This change would not be substantive—it would merely be a result of the data structure. In order to test this, we simulated some data that was collected (1) as if selected by periods, and (2) as if selected by cohorts. The DGP for both datasets is as follows: $$\begin{aligned} {\text{Y}} & = 1 + \left( {0.1*{\text{Age}}} \right) + \left( { - 0.005*{\text{Age}}^{2} } \right) + \left( { - 0.01*{\text{Year}}} \right) + \left( { - 0.002*{\text{Year}}^{2} } \right) \\ & \quad + {\text{u}}_{\text{C}} + {\text{u}}_{\text{p}} + {\text{e}}_{\text{i}} \quad {\text{e}}_{\text{i}} \sim\,{\text{N}}\left( {0,4} \right),{\text{u}}_{\text{c}} \sim\,{\text{N}}\left( {0,1} \right), {\text{u}}_{\text{p}} \sim\,{\text{N}}\left( {0,1} \right) \\ \end{aligned}$$ where ei is the level 1 residuals, Normally distributed with a variance of 4, and uc and up are the cohort group and period residuals, each randomly Normally distributed with a variance of 1. Age and Year are centered on 40 and 1990 respectively, and cohorts grouped into 3 year intervals. This data was generated (1) for samples of individuals aged 20–60 taken in years 1990–2010, and (2) for individuals born between 1930 and 1965, and sampled between age 20 and 60. Thus, in the former cohorts spanned a wider range than periods, and in the latter, the situation is reversed, but in both cases the underlying data generating process is exactly the same; the same age and period linear and quadratic effects and the cohort and the period differences are generated to have the same variance. The datasets, each with 20,000 observations, were fitted to the HAPC model: $$\begin{aligned} y_{{i\left( {j_{1} j_{2} } \right)}} & = \beta_{{0j_{1} j_{2} }} + \beta_{1} Age_{{i\left( {j_{1} j_{2} } \right)}} + \beta_{2} Age_{{i\left( {j_{1} j_{2} } \right)}}^{2} + e_{{i\left( {j_{1} j_{2} } \right)}} \\ \beta_{{0j_{1} j_{2} }} & = \beta_{0} + u_{{1j_{1} }}+ \,u_{{2j_{2} }} \\ & \quad e_{{i\left( {j_{1} j_{2} } \right)}} \sim\,N\left( {0,\sigma_{e}^{2} } \right), \,u_{{1j_{1} }} \sim\,N\left( {0,\sigma_{u1}^{2} } \right), \,u_{{2j_{2} }} \sim\,N\left( {0,\sigma_{u2}^{2} } \right) \\ \end{aligned}$$ Here, i represents individual observations, j1 represents cohort groups, and j2 represents years. This model is run using the same 3-year cohort intervals, using MLwiN 2.36 (Rasbash et al. 2011) with the runmlwin command (Leckie and Charlton 2013) in Stata, with a 100,000 iteration chain length, a 5000 iteration burn-in, and true starting values (in other words, we are being as kind to the model as possible, by actually giving it the true answers as a starting point). The results are shown in Fig. 2. When the data is in the form of an age-by-cohort table (row 1), the model incorrectly assigns a linear trend to cohorts, and consequently misestimates the age and period trends. When the data is in the form of an age-by-period table (row 2) the model assigns the trend to periods, and so accurately estimates all three trends, but of course it would have found this period trend were the true linear trend a cohort effect, as shown previously (Bell and Jones 2014a). This is the case even though there is significant non-linear variation in the DGPs—the linear component is still reassigned in the way suggested above. This provides compelling evidence that it is the data structure that is driving the results that are found. 
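For concreteness, the following is a minimal sketch (not the authors' code) of this simulation and of a crossed random-effects fit. The published analysis used MCMC estimation in MLwiN via runmlwin; here statsmodels' maximum-likelihood variance-components interface stands in for it, so the estimates will not exactly match the Bayesian results, and the variable names and random seed are our own assumptions.

# Sketch only: simulate the DGP in Eq. (1) under the two sampling designs and fit
# a crossed random-effects "HAPC-style" model (rough frequentist stand-in for the
# MCMC models estimated in MLwiN via runmlwin in the paper).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def simulate(design, n=20000):
    if design == "age-by-period":            # ages 20-60 observed in years 1990-2010
        age = rng.integers(20, 61, n)
        year = rng.integers(1990, 2011, n)
        birth = year - age
    else:                                    # cohorts born 1930-1965, observed at ages 20-60
        birth = rng.integers(1930, 1966, n)
        age = rng.integers(20, 61, n)
        year = birth + age
    a, p = age - 40, year - 1990             # centre age and year as in Eq. (1)
    cohort = (birth // 3) * 3                # 3-year birth-cohort groups
    u_c = pd.Series(rng.normal(0, 1, np.unique(cohort).size), index=np.unique(cohort))
    u_p = pd.Series(rng.normal(0, 1, np.unique(year).size), index=np.unique(year))
    y = (1 + 0.1 * a - 0.005 * a**2 - 0.01 * p - 0.002 * p**2
         + u_c[cohort].to_numpy() + u_p[year].to_numpy()
         + rng.normal(0, 2, n))              # level-1 residual: N(0, 4), i.e. sd = 2
    return pd.DataFrame({"y": y, "age": a, "age2": a**2, "cohort": cohort, "period": year})

for design in ("age-by-period", "age-by-cohort"):
    df = simulate(design)
    # Crossed random intercepts for cohort and period: a single overall "group"
    # with two variance components is the usual statsmodels idiom for this.
    fit = smf.mixedlm("y ~ age + age2", df, groups=np.ones(len(df)),
                      vc_formula={"cohort": "0 + C(cohort)",
                                  "period": "0 + C(period)"}).fit()
    print(design, fit.fe_params.round(4).to_dict())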
It is only the data structure that changes between these two scenarios; the data generating process has not been changed. Simulation results from the DGP in Eq. (1), with results (thin grey lines) compared to the truth (large black lines/points). Row 1: age-by-cohort data; row 2: age-by-period data. Reither et al. (2015a) argue that using simulated data "is not a productive way to advance the discussion" (p. 125). Whilst we do not agree with this, we understand that readers may not be convinced by evidence based on data that is in some sense not real. Because of this, we have additionally tested our explanation with real data. This is not easy to do. Whilst one could collect data by cohorts, this would be extremely costly in time and money (to get a full range of age groups, you would have to measure each cohort of people every year for their entire lives). Instead, we take a real-life dataset and take a number of samples that mimic the properties of the data collected by periods and cohorts as described above. We use the National Health Interview Survey dataset (National Center for Health Statistics 2004) used by Reither et al. (2009) in their study of obesity. We were able to extend Reither et al.'s analysis by including data up to and including 2014. The only other difference between our study's analyses and that of Reither et al. was that we were unable to replicate exactly the adjustment they performed on their outcome variable, the body mass index (BMI) that measures obesity (whilst we contacted the authors in the hope of replicating this adjustment exactly, no reply was forthcoming). This adjustment is necessary due to (1) a change in the way obesity was measured in 1997 to exclude proxy-reporting, and (2) a generally observed increase in downward bias that appears to have been present over the study period (Reither et al. 2009, p. 1441). Instead, we used Fig. 1 in Reither et al. (2009) to attempt an approximate adjustment to the measure of obesity. Specifically, measured BMI was adjusted by adding on 0.5 + 0.03 * (year − 1976) for years before 1997, and by adding on 0.75 + 0.03 * (year − 1976) for years 1997 and onwards. Obesity was then defined as those with an adjusted BMI of 30 or more. The results do not appear to be affected by this adjustment, and, when using the full dataset (both up to 2002 and including data up to 2014), we were able to replicate Reither et al.'s results (see Fig. 4), suggesting the adjustment is good enough to make the methodological point at hand in this paper. In order to evaluate whether the results are different when data has different structures, we take a number of different samples from the data; these are represented in Fig. 3. First, we take a number of samples, each of which is based on cohorts (the black boxes in Fig. 3). For each such sample, the entire range of years (39) is included, but the range of cohorts is limited to a 10-year birth cohort span (as shown in black in Fig. 3). The reason for choosing 10 years as the range for cohorts is that it makes it approximately one quarter of the range of periods, which is similar to the ratio of the range of periods to the range of cohorts in standard repeated cross-sectional data (including the ranges of periods and cohorts in Reither et al.'s original study). Age-by-period representation of the full NHIS dataset, with the 12 samples taken shown. Samples defined by cohorts (10 years) are in black; samples defined by age (30 years) are coloured.
(Color figure online) Second, we took a number of samples that were based on age (with an arbitrarily chosen range of 30). In this case, the range of cohorts is still greater than the range of periods, but to a lesser extent than in the overall sample. The purpose of these models was to check whether the results found by HAPC differed across the age range, which might explain any differences found in the cohort-sampled models. These samples are represented by the coloured boxes in Fig. 3. The HAPC model was applied to both the full dataset and the samples outlined here. This was done both without any grouping, and by including grouping for year groups (note, with only a 10 year span of birth cohorts in some models, grouping of cohorts was not possible in the cohort-selected samples). All models were run using MCMC estimation (Browne 2009), with 500,000 iterations and a 50,000 iteration burn-in, with hierarchical centering to speed up convergence and models checked for convergence using visual diagnostics. Models were again run in MLwiN 2.36 (Rasbash et al. 2009) using the runmlwin command in Stata (Leckie and Charlton 2013). Real-life data results When using the full NHIS data, we were able to replicate the results found by Reither et al. (2009) as shown in Fig. 4: an inverse U-shape in the age trend, an approximately linear period trend, and an approximately flat cohort trend (with a slightly higher obesity on cohorts born after around 1970). This result occurs regardless of whether periods and cohorts are ungrouped, periods are grouped, or cohorts are grouped, and whether the data used includes years after 2002 or not. Replication of Reither et al. (2009), with data included up to 2014. Model uses 5-year birth cohort groups (the results are the same with no grouping, and with 3-year period groups) When data is selected by age, the results are less consistent, as shown in Figs. 5 and 6. When data are ungrouped (Fig. 5, columns 1–3), the results also match those of Reither et al. consistently, with a near-linear period trend and a flat cohort trend. However when the model is run with years grouped into 3-year intervals (Fig. 6, columns 1–3), the result is different: in general the temporal trend is split between periods and cohorts, except in one case when the entire trend is in cohorts. Note when the model is run for these data samples with cohorts grouped into 5-year intervals (not shown), the results mostly match those of (Reither et al. 2009), except in one instance where a near-linear cohort trend (and a flat period trend) is found. Results for models without grouping in the random effects Results with periods grouped into 3-year time intervals When data is selected by period, the results are consistent: a near-linear positive cohort trend and a flat period trend are found, regardless of whether years or cohorts are grouped or not. That is, Reither et al.'s results are not replicated in these scenarios. These results are entirely consistent with previous simulations (Bell and Jones 2014a, c, 2015b), the simulations presented earlier in this paper, and with the logic we have spelled out in this paper. When cohorts span a wider range than periods, as in standard repeated cross-sectional data, the trend tends towards periods, although this is complex when periods or cohorts are grouped. In contrast, when there are more periods than cohorts (i.e data is selected by cohorts), the result consistently finds the opposite result, with the trend being found in cohorts. 
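To make the adjustment and the sampling scheme concrete, the snippet below sketches, in Python, the approximate BMI correction and the two kinds of subsamples described above. It is illustrative only: the column names are hypothetical, the window start values are placeholders for the 12 windows actually shown in Fig. 3, and the published models were fitted in MLwiN rather than Python.

# Illustrative sketch only; hypothetical column names ('bmi', 'age', 'year').
import numpy as np
import pandas as pd

def add_outcome(df):
    # Approximate BMI correction read off Fig. 1 of Reither et al. (2009):
    # a larger shift from 1997 onwards (when proxy reports were excluded),
    # plus a drift of 0.03 per survey year since 1976.
    base = np.where(df["year"] >= 1997, 0.75, 0.50)
    out = df.assign(bmi_adj=df["bmi"] + base + 0.03 * (df["year"] - 1976))
    out["obese"] = (out["bmi_adj"] >= 30).astype(int)   # obesity = adjusted BMI of 30+
    out["cohort"] = out["year"] - out["age"]            # year of birth
    return out

def cohort_window(df, first_birth_year, width=10):
    # All survey years, but only a `width`-year band of birth cohorts.
    return df[(df["cohort"] >= first_birth_year) & (df["cohort"] < first_birth_year + width)]

def age_window(df, first_age, width=30):
    # All cohorts, but only a `width`-year band of ages.
    return df[(df["age"] >= first_age) & (df["age"] < first_age + width)]

# e.g. cohort_window(add_outcome(nhis), 1950) or age_window(add_outcome(nhis), 25),
# where `nhis` is the person-level NHIS extract; the actual window starts are those in Fig. 3.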
Finally, as expected, the results are less consistent when grouping is used in one of the sets of random effects. To be clear, this is not to say any particular result found here is right or wrong with regard to obesity; rather that the data and the HAPC model alone give us no indication as to which pattern is correct. Responses to Reither et al. (2015a) By way of a coda to this article, we now respond to the points made by Reither et al. (2015a). In their article, they list a number of criteria for the use of the HAPC model, which the simulated data in Bell and Jones (2015b) do not fulfil. We address each of these criteria below. However, it should be noted that the criteria are completely met by the simulations run in Bell and Jones (2014a), and in the simulations in this article, above (see Table 2 for model fit statistics, Fig. 7 for descriptive APC plots, and Fig. 2 for modelled APC plots). In each situation, the HAPC model again failed to recover the true parameters. In any case, we consider each of their recommended criteria below. Table 2 Model fit statistics for an example dataset used in the simulations here (based on the models used in Reither et al. 2015a) Descriptive APC plots for an example dataset used in the simulations here. Row 1: age-by-cohort data; row 2: age-by-period data The use of model fit statistics In their previous commentary, Reither et al. (2015b) argued that fit statistics should be used to assess whether the full APC model is appropriate. This point is reiterated in their latest rejoinder (Reither et al. 2015a) where they make a number of points that for us are contentious. First they present the results of model fit statistics applied to simulated datasets. These show that "In no instance do these model selection statistics point to a fully three-dimensional data structure" (Reither et al. 2015a, p. 127, emphasis their own). This is true; however the test that they are applying is not really relevant to whether the HAPC model should be used. They are testing a model with linear and quadratic age, and period and cohort dummy fixed effects (the 'age + period + cohort' model), against a model with linear and quadratic age and cohort trends (the 'age + cohort (both quadratic)' model). The latter has just 5 degrees of freedom, whilst the former has 46, so given the true data generating process (DGP) is primarily formed of age and cohort, it is unsurprising that the model selection criteria choose the more parsimonious option. By arbitrarily adding the quadratic cohort effect, and removing the cohort and period dummy variables from the model, they are testing apples against oranges—it is this change in the degrees of freedom that makes the difference, not the two dimensions per se. No explanation is offered for this test choice, given that in real substantive research, one would not know the true DGP that produced the data. Reither et al. (2015a) go on to question our judgement that the existence of period and cohort random effects in the DGP meant that model selection that does not find APC effects were incorrect. They argue that, because these random effects are small, it is reasonable for the model to not pick them up as statistically significant. For this reason, the AIC/BIC statistic would not choose the full APC model. This is again true; however it follows from this reasoning that, were these random effects' variances bigger, or the sample size bigger, these non-linear variations would be picked up by the model as significant. 
In these instances, the HAPC model would be selected on the basis of the fit statistics. The linear dependency of APC in the model does not disappear because there is more data, or because the noise around those linear trends is greater. As such, the results that would be found may be incorrect, because, whilst non-linear APC effects do exist in the DGP (suggesting the HAPC model), linear effects also exist that could still be radically mis-apportioned. indeed, this is the case in the simulations in Bell and Jones (2014a) in which the HAPC model also performed poorly, yet model fit statistics of the sort used by Reither et al. (2015a) suggest the full APC model is preferable. The point, here, is that Reither et al. (2015a) do not appreciate that the presence of non-linear effects in a data generating process does not mean that there aren't also near-linear effects, and it does not mean that those near-linear effects could not be themselves confounded, producing results that are incorrect and highly misleading. Model selection criteria can only tell us about non-linear effects (because the linear components can be apportioned in an infinite number of different ways), and as such, they should not be used to judge whether the HAPC model should be used. Visual tests of whether the HAPC model should be used In their latest rejoinder, Reither et al. (2015a) introduce new tests for whether the HAPC model should be used: a visual inspection of (a) the raw APC descriptive trends, and (b) the modelled APC trends produced by the HAPC model (unfortunately, these are rather confusingly conflated in the article). It is unclear exactly why these would be relevant in deciding whether the HAPC is appropriate. Regarding raw descriptive trends, it will often be the case that trends appear similar. For example the mirror image between period and cohort trends will often be present when there has been change over time. The similarities between age and cohort trends are also unsurprising since older people in the sample are generally born earlier. Thus, we agree that the HAPC model should not be used in these circumstances; however we do not agree that differences in the descriptive APC plots (such as those in Fig. 3 of Reither et al. 2009) are in any way a sufficient test for whether the HAPC model produces valid inference. The reason for this matches our concerns about using model fit statistics—non-matching APC descriptive plots imply the existence of non-linear APC effects in the DGP, but do not imply that there aren't also linear or near-linear effects in the DGP as well. The argument presented by Reither et al. (2015a) implies that the existence of non-linear effects in a DGP mean that near-linear APC effects are no longer problematic; this is simply incorrect. Such non-linearities simply obscure the near-linear effects in the descriptive plots—they may still be there, and they can still be apportioned in an infinite number of ways between age, period and cohort. Visual plots do not provide a way forward for they are indeterminate in their diagnosis. Reither et al. (2015a) similarly argue that similarities in modelled outcomes make the HAPC model unsuitable. But the logic behind this is not clear to us. Each trend is, supposedly, controlled for the other trends, so any similarities in trends are not to do with the collinearity between the variables. It is also unclear what counts as 'similar'. If age and period both show a general upward trend, are they too similar? 
How much curvature or random variation is needed in each trend before the HAPC model should be allowed? Such questions quickly reveal the flaws in the logic of these criteria, revealing, once again, that these arguments do not consider the possibility that linear and non-linear effects might be present in the same DGP, and that the presence of the latter does not imply that the former is benign. Reither et al. (2015a) also correctly point out a typo in the code that produced our graph—a zero where a nine should be. However, as we argue above, we do not see the relevance of this to their argument. In sum: the new criteria proposed by Reither et al. (2015a) are arbitrary, and readers only need to look to previous work (Bell and Jones 2014a) and this paper for simulated data where these criteria pass and yet the HAPC model fails to produce sensible and 'correct' results. If it were the case that there was genuine social process that was driving a certain age, period and cohort combination, we would expect to find it regardless of the data structure at hand. However, the results here show the data structure has a substantial and sometimes determining influence on the results that are produced. To justify how these results could have occurred whilst still seeing the HAPC model as an appropriate one, one would have to do some fairly dramatic mental gymnastics. First, you would need to argue that the consistency that the real-life results present with simulations has occurred by chance. Second, you would need to come up with a reason why the HAPC model is inappropriate in the data scenarios used here. Third, you would need to argue that there are somehow real differences within these samples that are driving these results to match the simulations (even though the samples heavily overlap with each other). We do not consider any of these arguments to be plausible or defensible. The only sensible explanation in our view is that a statistical artefact of the data structure is driving the results that are found. This explanation also explains those results found by the HAPC models that go against the prevailing academic wisdom (for example Dassonneville 2013, who finds no cohort effects in electoral volatility, despite the literature suggesting such effects should be important). It is worth concluding by considering some of the arguments that have been made in favour of the HAPC model by its proponents in the literature to date: It works because of the inclusion of the age squared term (Yang and Land 2006, p. 84), It works because age is treated on a different hierarchical level to periods and cohorts (Yang and Land 2013a, p. 191), It works because linear effects do not exist in the real world (Reither et al. 2015b), It only works when a model fit statistic says there is non-linear variation in all three of APC (Reither et al. 2015a), It only works when raw descriptive plots of APC look dissimilar (Reither et al. 2015a), It only works when model predicted plots of APC look similar (Reither et al. 2015a), It only works on non-simulated data (Reither et al. 2015a). In this article, and in our previous contributions, we have provided evidence that every one of these suggestions is flawed, and that the model is never able to reliably apportion near linear APC trends, regardless of what other non-linear processes are present in the DGP. 
It seems to us that proponents of the HAPC model do not really know why their model works (because it often does not), and are searching under stones to find reasons to justify the continued use of the model. A final note on replication: it can be argued that one solution to these problems is replication, and it is true that many papers using the HAPC and IE might have been rejected had reviewers been able to replicate the results with different methods to test their robustness. We agree (indeed, replication files for this paper are available as online supplementary files for this reason). However, successful replication is a necessary but not sufficient condition for HAPC analyses to be robust. Often the implicit identification strategies of different models will be similar or have similar results, and so a second model may often be wrong in the same direction and magnitude as the first. So, what should (likely disheartened) researchers wanting to find independent APC effects do in the absence of a magical solution to the identification problem? For us there are two options. First, authors could choose to remove any linear effects from analyses and focus on patterns of non-linear effects. Such an approach does not solve the identification problem; linear effects cannot be assumed on the basis of these non-linear effects. However, often the non-linear effects are in themselves interesting and worthy of publication, so long as the limits of this are made clear (Chauvel and Schroder 2014; Chauvel et al. 2016). Second, and as we have suggested before, researchers should use theory to justify constraints to APC models such as the HAPC model, and those constraints and the reasoning behind them should be stated transparently and explicitly. This might involve a belief that a particular trend will take a certain value, or a view that a certain combination of APC is the most plausible of those possible given the data (Bell 2014; Bell and Jones 2014b, 2015a; Fosse and Winship 2016). It is worth noting once again that none of this means the HAPC model should be entirely abandoned. The model structure is intuitive and, when constraints are applied that are appropriate and theoretically driven, rather than arbitrary, hidden and statistically driven, the model can produce interesting and important results. However, when the model is applied as a mechanical, routinized solution that 'completely avoids' the identification problem, dangerously misleading results can be found. Applied researchers should take note. Bell, A.: Life course and cohort trajectories of mental health in the UK, 1991–2008: a multilevel age–period–cohort analysis. Soc. Sci. Med. 120, 21–30 (2014) Bell, A., Jones, K.: The impossibility of separating age, period and cohort effects. Soc. Sci. Med. 93, 163–165 (2013) Bell, A., Jones, K.: Another "futile quest"? A simulation study of Yang and Land's hierarchical age–period–cohort model. Demogr. Res. 30, 333–360 (2014a) Bell, A., Jones, K.: Current practice in the modelling of age, period and cohort effects with panel data: a commentary on Tawfik et al. (2012), Clarke et al. (2009), and McCulloch (2012). Qual. Quant. 48, 2089–2095 (2014b) Bell, A., Jones, K.: Don't birth cohorts matter? A commentary and simulation exercise on Reither, Hauser and Yang's (2009) age–period–cohort study of obesity. Soc. Sci. Med. 101, 176–180 (2014c) Bell, A., Jones, K.: Bayesian informative priors with Yang and Land's hierarchical age–period–cohort model. Qual. Quant.
49(1), 255–266 (2015a) Bell, A., Jones, K.: Should age–period–cohort analysts accept innovation without scrutiny? A response to Reither, Masters, Yang, Powers, Zheng, and Land. Soc. Sci. Med. 128, 331–333 (2015b) Browne, W.J.: MCMC Estimation in MLwiN, Version 2.25. Centre for Multilevel Modelling, University of Bristol, Bristol (2009) Chauvel, L., Schroder, M.: Generational inequalities and welfare regimes. Soc. Forces 92(4), 1259–1283 (2014) Chauvel, L., Leist, A.K., Ponomarenko, V.: Testing persistence of cohort effects in the epidemiology of suicide: an age–period–cohort hysteresis model U. S. Tran, ed. PLoS ONE 11(7), 1–20 (2016) Dassonneville, R.: Questioning generational replacement. An age, period and cohort analysis of electoral volatility in The Netherlands, 1971–2010. Elect. Stud. 32(1), 37–47 (2013) Fienberg, S.E., Hodges, J.S., Luo, L.: Letter to the editor. J. Am. Stat. Assoc. 110(509), 457 (2015) Fosse, E., Winship, C.: Nonparametric bounds of age-period-cohort effects. Working paper, Princeton University. http://q-aps.princeton.edu/sites/default/files/q-aps/files/apcbounds_draft.pdf (2016). Accessed 23 Feb 2017 Leckie, G., Charlton, C.: runmlwin: a program to run the MLwiN multilevel modelling software from within stata. J. Stat. Softw. 52(11), (2013) Linek, L., Petrúšek, I.: What's past is prologue, or is it? Generational effects on voter turnout in post-communist countries, 1990–2013. Elect. Stud. (2016). doi:10.18637/jss.v052.i11 Luo, L.: Assessing validity and application scope of the intrinsic estimator approach to the age–period–cohort problem. Demography 50(6), 1945–1967 (2013a) Luo, L.: Paradigm shift in age–period–cohort analysis: a response to Yang and Land, O'Brien, Held and Riebler, and Fienberg. Demography 50(6), 1985–1988 (2013b) Luo, L., Hodges, J.S.: Block constraints in age–period–cohortmodels with unequal-width intervals. Sociol. Methods Res. 45(4), 700–726 (2016) Luo, L., et al.: The sensitivity of the intrinsic estimator to coding schemes: comment on Yang, Schulhofer-Wohl, Fu, and Land. Am. J. Sociol. 122(3), 930–961 (2016) National Center for Health Statistics.: The National Health Interview Survey (NHIS). http://www.cdc.gov/nchs/nhis/ (2004). Accessed 17 June 16 O'Brien, R.: Mixed models, linear dependency, and identification in age–period–cohort models.(2016) In progress Pelzer, B., et al.: The non-uniqueness property of the intrinsic estimator in APC models. Demography 52(1), 315–327 (2015) Rasbash, J., et al.: A User's Guide to MLwiN, Version 2.10. Centre for Multilevel Modelling, University of Bristol, Bristol (2009) Rasbash, J., et al.: MLwiN Version 2.24. Centre for Multilevel Modelling, University of Bristol, Bristol (2011) Reither, E.N., Hauser, R.M., Yang, Y.: Do birth cohorts matter? Age–period–cohort analyses of the obesity epidemic in the United States. Soc. Sci. Med. 69(10), 1439–1448 (2009) Reither, E.N., Land, K.C., et al.: Clarifying hierarchical age–period–cohort models: a rejoinder to Bell and Jones. Soc. Sci. Med. 145, 125–128 (2015a) Reither, E.N., Masters, R.K., et al.: Should age–period–cohort studies return to the methodologies of the 1970s? Soc. Sci. Med. 128, 356–365 (2015b) Spiegelhalter, D.J., et al.: Bayesian measures of model complexity and fit. J. R. Stat. Soc. Ser. B-Stat. Methodol. 64, 583–616 (2002) Suzuki, E.: Time changes, so do people. Soc. Sci. Med. 
75(3), 452–456 (2012) Te Grotenhuis, M., et al.: The intrinsic estimator, alternative estimates, and predictions of mortality trends: a comment on Masters, Hummer, Powers, Beck, Lin, and Finch. Demography 53(4), 1245–1252 (2016) Yang, Y.: Bayesian inference for hierarchical age–period–cohort models of repeated cross-section survey data. Sociol. Methodol. 36, 39–74 (2006) Yang, Y., Land, K.C.: A mixed models approach to the age–period–cohort analysis of repeated cross-section surveys, with an application to data on trends in verbal test scores. Sociol. Methodol. 36, 75–97 (2006) Yang, Y., Land, K.C.: Age–period–cohort analysis of repeated cross-section surveys—fixed or random effects? Sociol. Methods Res. 36(3), 297–326 (2008) Yang, Y., Land, K.C.: Age–period–cohort Analysis: New Models, Methods, and Empirical Applications. CRC Press, Boca Raton (2013a) Yang, Y., Land, K.C.: Misunderstandings, mischaracterizations, and the problematic choice of a specific instance in which the IE should never be applied. Demography 50(6), 1969–1971 (2013b) Thanks to Phil Jones for research assistance funded by the British Academy's Skills Innovator Award, and attendees at the Research Methods Festival 2016 and Royal Statistical Society conference 2016 for their helpful suggestions and ideas. Sheffield Methods Institute, University of Sheffield, ICOSS Building, 219 Portobello, Sheffield, S1 4DP, UK Andrew Bell School of Geographical Sciences, University of Bristol, University Road, Bristol, BS8 1SS, UK Kelvyn Jones Correspondence to Andrew Bell. Supplementary material 1 (DO 7 kb) Supplementary material 2 (DO 18 kb) Bell, A., Jones, K. The hierarchical age–period–cohort model: Why does it find the results that it finds?. Qual Quant 52, 783–799 (2018). https://doi.org/10.1007/s11135-017-0488-5 Age–period–cohort Hierarchical age period cohort model Multilevel modelling
Option price derivation with these dynamics. If my underlying follows dynamics of the form \begin{align*} dF(t,T)/F(t,T)=\sigma_1(t,T)dW_1(t)+\sigma_2(t,T)dW_2(t), \end{align*} where $\sigma_1(t,T)=h_1e^{-\lambda(T-t)}+h_0$ and $\sigma_2(t,T)=h_2e^{-\lambda(T-t)}$, how can one derive an option price? Tags: stochastic-calculus. Asked by snowave. Comment (Gordon, Jun 23 '16 at 15:36): Are $W_1$ and $W_2$ correlated or independent? Comment (snowave, Jun 23 '16 at 15:41): they are independent. Answer: You can proceed similarly to this question. For $0 < T_0\le T$, consider the option with payoff, at the option maturity $T_0$, of the form \begin{align*} \max(F_{T_0, T}-K, \, 0).\tag{1} \end{align*} Note that \begin{align*} F_{T_0, T} &= F_{0, T}\exp\Bigg(-\frac{1}{2}\int_0^{T_0} \left[\left(h_1e^{-\lambda (T-t)}+h_0\right)^2 + h_2^2e^{-2\lambda (T-t)} \right] dt\\ &\qquad\qquad\qquad +\int_0^{T_0}\left[\left(h_1e^{-\lambda (T-t)}+h_0\right)dW_t^1 + h_2e^{-\lambda (T-t)}dW_t^2\right]\Bigg). \end{align*} Let \begin{align*} \hat{\sigma}^2 &= \frac{1}{T_0}\int_0^{T_0} \left[\left(h_1e^{-\lambda (T-t)}+h_0\right)^2 + h_2^2e^{-2\lambda (T-t)} \right] dt\\ &=\frac{e^{-2\lambda T}(h_1^2+h_2^2)}{2\lambda T_0}\left(e^{2\lambda T_0} -1\right)+\frac{2e^{-\lambda T}h_0h_1}{\lambda T_0}\left(e^{\lambda T_0} -1\right) + h_0^2. \end{align*} Then, in distribution, \begin{align*} F_{T_0, T} = F_{0, T}\exp\left(-\frac{\hat{\sigma}^2}{2} T_0 + \hat{\sigma} \sqrt{T_0} Z\right), \end{align*} where $Z$ is a standard normal random variable. The value of Payoff $(1)$ is now given by \begin{align*} e^{-r T_0}\Big[F_{0, T}\Phi(d_1) - K\Phi(d_2) \Big], \end{align*} where \begin{align*} d_1 &= \frac{\ln \frac{F_{0, T}}{K} + \frac{\hat{\sigma}^2}{2} T_0}{\hat{\sigma} \sqrt{T_0}},\\ d_2 &= d_1 - \hat{\sigma} \sqrt{T_0}, \end{align*} and $\Phi$ is the cumulative distribution function of a standard normal random variable. (Answered by Gordon.)
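For readers who want to check the closed form numerically, here is a small Python sketch of the formula above; the parameter values at the end are made up purely for illustration.

import numpy as np
from scipy.stats import norm

def call_price(F0T, K, T0, T, r, h0, h1, h2, lam):
    """Price of the call max(F(T0,T) - K, 0) under the two-factor lognormal dynamics above."""
    # Time-averaged variance sigma_hat^2 from the integral in the answer
    var_hat = (np.exp(-2 * lam * T) * (h1**2 + h2**2) / (2 * lam * T0) * (np.exp(2 * lam * T0) - 1)
               + 2 * np.exp(-lam * T) * h0 * h1 / (lam * T0) * (np.exp(lam * T0) - 1)
               + h0**2)
    sig_rt = np.sqrt(var_hat * T0)                      # sigma_hat * sqrt(T0)
    d1 = (np.log(F0T / K) + 0.5 * var_hat * T0) / sig_rt
    d2 = d1 - sig_rt
    return np.exp(-r * T0) * (F0T * norm.cdf(d1) - K * norm.cdf(d2))

# Illustrative (made-up) parameters: forward with delivery T = 1, option maturity T0 = 0.5.
print(call_price(F0T=100.0, K=95.0, T0=0.5, T=1.0, r=0.02,
                 h0=0.1, h1=0.2, h2=0.15, lam=1.5))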
Stakeholders' opinions and questions regarding the anticipated malaria vaccine in Tanzania. Sally Mtenga, Angela Kimweri, Idda Romore, Ali Ali, Amon Exavery, Elisa Sicuri, Marcel Tanner, Salim Abdulla, John Lusingu and Shubi Kafuruki. © Mtenga et al. 2016. Accepted: 3 March 2016. Within the context of combined interventions, a malaria vaccine may provide additional value in malaria prevention. Stakeholders' perspectives are thus critical for an informed recommendation of the vaccine in Tanzania. This paper presents the views of stakeholders with regard to the malaria vaccine in 12 Tanzanian districts. Quantitative and qualitative methods were employed. A structured questionnaire was administered to 2123 mothers of under-five children. Forty-six in-depth interviews and 12 focus group discussions were conducted with teachers, religious leaders, community health workers, health care professionals, and scientists. Quantitative data analysis involved frequency distributions and cross tabulations, using Chi-square tests to determine the association between malaria vaccine acceptability and independent variables. Qualitative data were analysed thematically. Overall, 84.2 % of the mothers had perfect acceptance of the malaria vaccine. Acceptance varied significantly according to religion, occupation, tribe and region (p < 0.001). Ninety-two percent reported that they would accept the malaria vaccine despite the need to continue using insecticide-treated nets (ITNs), while 88.4 % reported that they would accept the malaria vaccine even if vaccinated children could still get malaria, albeit less often than non-vaccinated children. Qualitative results revealed that the positive opinions towards the malaria vaccine were due to a need for additional malaria prevention strategies and expectations that the vaccine would reduce visits to the health facility, deaths, malaria episodes and treatment-related expenses. Vaccine-related questions included its side effects, efficacy, protective duration, composition, interaction with other medications, provision schedule, availability to pregnant women, mode of administration (oral or injection?) and whether a child born with HIV or with a chronic illness would be eligible for the vaccine. Stakeholders had high acceptance of, and positive opinions towards, the combined use of the anticipated malaria vaccine and ITNs, and their acceptance remained high even where the vaccine may not provide full protection; this is a crucial finding for malaria vaccine policy decisions in Tanzania. An inclusive communication strategy should be designed to address the stakeholders' questions through a process that engages, and is implemented by, communities and health care professionals. Socio-cultural aspects associated with vaccine acceptance should be integrated into the communication strategy. Malaria remains a major public health concern in sub-Saharan Africa (SSA). Tanzania is one of the countries in which malaria continues to be a significant cause of morbidity and mortality, and it is considered an impediment to socio-economic growth and welfare [1]. According to the National Malaria Control Programme (NMCP), 90 % of the Tanzanian population are at risk of malaria, resulting in 11 million clinical cases per year. The most vulnerable to malaria are children and pregnant mothers [2]. On the Tanzanian mainland, the number of microscopically confirmed malaria cases is 1,550,250 and the number of reported deaths is 8525 [3].
Despite a declining trend in the number of admissions and deaths over the last few years, the country experiences a marked variation across regions having some with high malaria prevalence and others with low prevalence. For instance, there are regions with one percent or less and others with more than 30 % [4]. The current malaria interventions in Tanzania include malaria testing by microscopy and/or rapid diagnostic tests, treatment with affordable and effective malaria treatment, such as artemisinin-based combination therapy (ACT), protection using long-lasting insecticide-treated nets and indoor residual spraying with insecticides, intermittent preventive therapy with sulfadoxine-pyrimethamine for pregnant women [2]. However, challenges are also reported on existing malaria interventions with regards to resistance of malaria parasites to ACT, as well as non-use of mosquito nets [5, 6]. Considering variations in malaria prevalence and challenges related to the existing malaria interventions, more innovative response including the vaccines to prevent malaria is likely to improve the impact of available interventions. Vaccines are considered effective interventions in protecting individuals from infectious diseases and the best tool to achieving disease eradication in various contexts [7]. The currently most advanced candidate vaccine RTS, S/AS01 against Plasmodium falciparum malaria, has been tested across several sub-Saharan African countries including Tanzania. Phase 3 trials showed that during 12 months of follow-up, half malaria episodes were protected in 5–17 months. One third malaria episodes were protected in 6–12 weeks cohort [8]. In infants 6–12 weeks of age, vaccine efficacy was about 30 % against both for clinical and severe malaria [9]. Recent study indicates that during 18 months of follow up, vaccination of children and young infants with RTS, S/AS01 prevented many cases of clinical and severe malaria and that the vaccination showed the highest impact in regions with the highest incidence of malaria [10]. Tanzania with other countries in Africa is underway to launch a malaria vaccine which is hoped to cut episodes of clinical malaria in young children by about half [11]. In the context of the current efficacy results a policy recommendation is likely to occur paving a way for the implementation of the vaccine in countries through their expanded programmes on immunization. Although stakeholders' (community and professionals) voice is imperative before policy endorsement [12]; to date, there is limited information regarding their acceptance and questions related to the vaccine. Where information on the acceptance of the malaria vaccine exists [13, 14], it is not incorporated within the context of the ongoing malaria interventions and does not highlight on whether people may be willing to undertake the vaccine even when it is unlikely to provide full protection. Moreover, the accounts of contextual aspects that influence vaccine acceptance are not fully presented. Such information is crucial for policy decisions and future implementation if recommendation on the vaccine is made in the near future. However such information is missing in Tanzania despite being one of the country in which the RTS, S/AS01 vaccine trial was implemented. 
Experience indicates that it takes time for the interventions to gain public acceptance even after it has been licensed due to various factors including community acceptance and inadequate prior information that could inform the policy makers on what need to be considered before the implementation of the intervention [14]. Also, the absence of critical data could slow down the decision process that policymakers must undertake to determine whether or not to introduce a particular intervention into their health systems [15]. In addition, lack of community support due to poor knowledge and perceptions made community delay the uptake while others reject vaccines. For instance, it existed when Polio vaccination programme was delayed in northern Nigeria [16]. Therefore, it is crucial that community perceptions are understood and used to highlight any community-based issues that need to be considered during policy deliberation and intervention planning [17]. Within the context of planning for a vaccine to be used alongside existing malaria control methods, mothers of children under five and other stakeholders (teachers, religious leaders, community health workers, health care professionals and scientists) were interviewed to assess their perceptions on malaria and acceptance of the malaria vaccine. The following were the specific objectives: To determine stakeholders' acceptance of the anticipated malaria vaccine and the associated factors. To assess stakeholders' perception and attitude towards the vaccines. To explore stakeholders' expectations from the anticipated malaria vaccine. To explore stakeholders' questions with regards to the anticipated malaria vaccine. We hope that the study findings may assist the policy makers in Tanzania to make informed decisions on the introduction of malaria vaccine in line with other existing malaria intervention strategies [15]. The data may also inform the design of the communication strategy and guide the country's programmers on the issues to be considered before the actual implementation of the vaccine. Overall study design A cross sectional study that involved quantitative and qualitative methods was conducted. The study was implemented between May and June 2013 in twelve districts of Tanzanian mainland (Table 1). The mixed method approach aimed at triangulating the methods and findings for completeness of the data [18]. The participating districts were from areas where phase II and III RTS, S malaria vaccine trial had not been implemented. This was done purposely to minimize bias of opinion with regards to acceptance of malaria vaccine. The districts were from the northern, eastern, western, central and southern parts of the country for enhancing the representativeness of voices from communities of diverse background since the future malaria vaccine may not only be introduced in trial sites. Malaria in the study regions ranges from 1 % in Arusha to more than 20 % in Lindi and Mtwara [1]. In the country, EPI coverage is high but varies across regions having the highest coverage in Arusha (100 %) and the lowest coverage in Kagera (57 %) [19]. The health care system in Tanzania is composed of the public hospitals and the private hospitals. Hospitals are the highest level of access to care and the dispensary being the lowest level. However, at the dispensaries is where the majority of people in urban and rural communities access their health care. 
Study regions and the selected districts Handeni Ngara Kisarawe Newala Ileje Lindi rural Ilemela Morogoro Municipal/Kilombero Dodoma Municipal Design and setting Using qualitative approach, the study employed parallel individual interviews (IDIs) and focus groups discussions (FGDs). Qualitative participants were from some of the study sites where the quantitative study was conducted including participants from Mwanza (Ilemela district), Mbeya (Ileje district) and Arusha (Ngorongoro district). Addition participants were from Morogoro and Dar es Salaam regions. Preference to conduct qualitative study in these sites was based on the convenience and cost. Study population and recruitment Forty six IDIs and 12 FGDs were conducted. The IDI participants comprised of primary school teachers, religious leaders, community health workers, health care professionals, and scientists. The scientists who participated in the discussion were from various professional backgrounds (sociologists, medical doctors, public health, epidemiologists) excluding those who were participating in the clinical trials. FGDs were carried on with men and women in the respective study sites. The FGDs allowed insights into general group norms on the vaccines and capturing varied views and questions with regards to the anticipated malaria vaccine. The IDIs involved individuals who were believed to be capable of providing personal opinions about malaria and the anticipated malaria vaccine. The participants in the local communities were purposively [20] selected by the assistance of the community leaders believed to be influential in decisions about health-seeking practice in their families and community at large. Scientists were recruited from various institutions both public and private. Selection of the scientists was mostly based on the convenience and availability of the individuals in their institutions. The health care professionals were recruited based on their assimilation with child care services i.e. working in the reproductive health unit and paediatric care. Focus group guide with open ended questions was used to collect FGD data, while a semi structured topic list was used to collect IDI data. Both tools addressed similar topics directly designed to address specific study objectives. The tools were adjusted according to the best fit of the study audience. Main topics included: perception about malaria status, perception about malaria vaccine, expectations from malaria vaccines, preferred modality of providing malaria vaccine and questions with regards to malaria vaccine. IDI and FGD tools were piloted for practicability. Experienced research assistants conducted the IDIs and FGDs. Prior to data collection experienced social scientists (SM, AK) provided training to the research assistants. The training familiarized them with the study objectives, status of malaria vaccine research, interview procedures and ethical aspects. The FGDs were made of up to 7–9 participants and composed of a moderator who facilitated the discussion and a note taker who assisted in taking notes. Both discussions and interviews were conducted in a place convenient to the study participants. To enhance freedom of discussion, women and men had separate discussion groups. The discussion sessions were conducted in Kiswahili, a Tanzania national language which is well understood and used commonly in the study area. 
Subject to participants' consent, some data from the interviews with communities were audio-taped, while other data were captured in expanded notes [21]. The interviews and discussions lasted 1 h and 30 min on average. Audio-recorded data were transcribed verbatim for analysis. The transcripts and expanded notes were checked for completeness and accuracy. Two social scientists (SA and AK) experienced in qualitative studies independently reviewed the transcripts to identify the relevant patterns, and the patterns were later grouped into main themes. The NVivo program [22] assisted in the display of participants' expressions and in the coding process. Considering the views of various stakeholders, a constant comparison approach was employed to compare the themes that emerged from these analytic procedures [23]. To consolidate the results, the identified themes and categories were shared with malaria vaccine working groups and during the national stakeholders' meeting. Stakeholders from the immunization department in Tanzania and others discussed the findings, and consensus was reached on the interpretation of the results. All data were analysed in Kiswahili, but relevant quotes were translated into English for the purpose of this paper.
The study involved face-to-face household interviews with women aged 18 years and above who had at least one child under 5 years. The quantitative study was employed to estimate the level of acceptance of the malaria vaccine and the preferred modality of providing it. A structured questionnaire was used to interview the eligible mothers. Before the interviews, the tools were piloted and the necessary changes were then adopted. Multi-stage random sampling was used to select the households in the specific study areas. First, the country was stratified by region; one region was randomly selected, from which a list of districts was sought, and then one district was randomly selected from that list (excluding the two districts that had been involved in the malaria vaccine trial, namely Bagamoyo and Korogwe), followed by the village and lastly the household. The sample size was calculated based on simple random sampling, given by the following formula and the values attributed to its parameters:
$$ n = \frac{Z^{2} p(1 - p)N}{d^{2}(N - 1) + Z^{2} p(1 - p)} $$
where n = sample size; Z = z-score, the number of standard deviations from the mean (at the 95 % confidence level, Z = 1.96); p = prevalence of malaria (assumed to be 50 %); d = absolute precision required (1 %); N = population size (different according to the region, ranging from 188 to 332). Thereafter, the sample size from simple random sampling was multiplied by the design effect to take the clustering effect into account. The design effect is given by:
$$Def = 1 + \left({Average\;population\;per\;cluster} - 1 \right) \times \rho$$
where ρ is the intra-cluster correlation. The sample size was required to detect a 50 % proportion on perception of the malaria vaccine. The sample size calculation gave an indication of how many individuals were to be interviewed in each study district. Prior to data collection, the research team received an orientation on the purpose of the study, was reminded of the ethical aspects, and reviewed the questions. Electronic devices (tablets) were used to collect information during the interviews, and the information was later synchronized onto the field computers.
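To make the sampling arithmetic concrete, the following sketch (Python; the intra-cluster correlation and average cluster size are illustrative assumptions, since the paper reports only $Z$, $p$, $d$ and the range of $N$) applies the finite-population formula above and then the design-effect adjustment:

import math

def srs_sample_size(N, p=0.5, d=0.01, Z=1.96):
    """Simple-random-sampling size with finite-population correction."""
    return (Z**2 * p * (1 - p) * N) / (d**2 * (N - 1) + Z**2 * p * (1 - p))

def design_effect(avg_cluster_size, rho):
    """Def = 1 + (average population per cluster - 1) * intra-cluster correlation."""
    return 1 + (avg_cluster_size - 1) * rho

# Population sizes per region ranged from 188 to 332 in the paper.
for N in (188, 332):
    n_srs = srs_sample_size(N)
    # rho = 0.05 and 20 respondents per cluster are purely illustrative values.
    n_adj = n_srs * design_effect(avg_cluster_size=20, rho=0.05)
    print(N, round(n_srs, 1), round(n_adj, 1))

With the very tight precision of 1 %, the corrected sample size approaches the regional population size itself, which is consistent with the near-complete coverage implied by the per-district targets.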
Quality check and errors were carried out in the field and prompt feedback was provided to the field supervisors and later to the research assistants for corrective measures. All the quantitative data were managed and analysed using STATA 11 Software (Stata corp, USA). Data analysis was performed as both one-way frequency distributions and cross tabulations of various outcomes against selected independent variables. In the latter case, Chi Square (χ2) was used to test the degree of association between each pair of categorical variables involved in a cross-tabulation. The significance was determined at p ≤ 0.05. Acceptability of the anticipated malaria vaccine was the main variable and was defined by three questions; Researchers are working to develop a malaria vaccine, which will have the possibility of reducing the recurrence of malaria among children. If the malaria vaccine becomes available will you be willing for your child to receive that malaria vaccine? Malaria vaccine may cause discomfort similar to other childhood vaccines will you agree or disagree that your child still get vaccinated? Even though a child is vaccinated, s/he will still have to use ITNs and seek treatment if s/he has fever. Will you agree that your child get vaccinated? Frequency distribution of responses in each of these questions (A, B and C) was performed. Then bivariate analysis of each question, independent of the other, was conducted to assess how each of these outcomes was related to background and non-background characteristics of the participants, such as age, education, religion and region. Finally, these variables were combined to form a single powerful indicator of acceptability, such that: $${\rm Acceptance} \,\,{\rm of} \,\,{\rm malaria} \,\,{\rm vaccine} = \left\{ \begin{array}{ll} {\text{PERFECT}} &\quad {\text{if A }} = {\text{ YES and B }} = {\text{ YES and C }} = {\text{ YES}} \\ {\text{PARTIAL}} &\quad {\text{if YES to any one or two but not all of the A}},{\text{ B and C}} \\ {\text{NO}}&\quad {\text{if A }} \ne {\text{ YES}} \end{array} \right. $$ Ethical aspects The study was approved by the ethical review board of the Ifakara Health Institute (IHI-IRB). Prior to interviews, local authorities were contacted and asked for permission. Written consent to approach the study communities was obtained. Verbal and written informed consents were obtained from all study participants through which participants were assured of anonymity and confidentiality of information. Since the study was a mixed method design, results are triangulated for the purpose of elaboration and completeness. Results are presented in general themes emanated from the study objectives. Qualitative findings are not presented according to study groups due to observed convergence of views and opinions with regards study phenomenon. Characteristics of study respondents Out of a sample size of 2124, a total of 2123 mothers with children under five from nine districts participated in the study. Of the 2123 mothers, 70 % were in the age range of 20 and 34 years. A majority of mothers (84 %) were in marital relations. Slightly more than one third (34.7 %) of the participants had more than three children. The study population was relatively literate with only 19.4 % of respondents who had never attended school, whilst about 69.4 % had attained primary school and 11 % had secondary education. A majority (70.4 %) of respondents were farmers. About 56.2 % of respondents were Christians and 40.9 % were Muslims (Table 2). 
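As a rough illustration of the composite acceptability indicator defined above and of the cross-tabulation analysis (the study used Stata 11; the Python sketch below, with made-up example records and one consistent reading of the PERFECT/PARTIAL/NO rule, is only meant to show the logic), the classification and a chi-square test of association could be computed as follows:

import pandas as pd
from scipy.stats import chi2_contingency

def acceptance(a, b, c):
    """One consistent reading of the rule: PERFECT if yes to A, B and C;
    NO whenever A is not yes; PARTIAL otherwise (yes to A but not to all three)."""
    if a == b == c == "yes":
        return "PERFECT"
    if a == "yes":
        return "PARTIAL"
    return "NO"

# Made-up example records: willingness answers A, B, C plus one background variable.
df = pd.DataFrame({
    "A": ["yes", "yes", "yes", "no", "yes"],
    "B": ["yes", "no", "yes", "no", "yes"],
    "C": ["yes", "yes", "yes", "no", "no"],
    "religion": ["Christian", "Muslim", "Christian", "Muslim", "Christian"],
})
df["acceptance"] = [acceptance(a, b, c) for a, b, c in zip(df["A"], df["B"], df["C"])]

# Chi-square test of association between acceptance category and the background variable.
table = pd.crosstab(df["religion"], df["acceptance"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table, p_value)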
Characteristics of the respondents in 2013 household survey (n = 2123) Number of respondents (n) Percent (%) Mean = 28.9 ± 7.6, min = 15.0, max = 65.0 Currently married Ever married (currently divorced/widowed) Never been to school Secondary+ Parity (number of children) Farmer/other Business (petty vender, tailoring etc) Housekeeper/no job Kurya Hangaza Zigua Zaramo Ndali Othersa Region (district) Arusha (Ngorongoro) Kagera (Ngara) Lindi (Lindi rural) Mara (Serengeti) Mbeya (Ileje) Mtwara (Newala) Mwanza (Ilemela) Pwani (Kisarawe) Tanga (Handeni) a Sonjo, Mwela, Yao, Iraki, Chagga, Ha, Masai, Haya, Zigua, Sambaa etc Qualitative participants composed of 21 health care professionals (18 health care providers and four paediatricians), six teachers, four religious leaders and six community health workers. Twelve more IDIs were conducted with scientists from various institutions in Dar es Salaam (Table 3). FGDs were carried on with six groups of women and six groups of men. Most of the FGD and IDI participants were of age 25 and 50. Non-professional participants were mostly farmers and petty trade dealers. Most of them had attained primary school level. The majority of the professionals, such as nurses and teachers, were of secondary school and high school levels. Summary of the IDIs and FGDs participants Total number of participants From public health facilities and private health facilities From public health facilities From private health facilities From rural community From Muslim community From Christian community From sociological background From medical background From government and private institutions FGDs Perception and attitude towards vaccines Qualitative participants (mostly women) possessed a positive opinion towards vaccines. They were in the opinion that the vaccines are important for the reduction of disease severity, reduced cost of treatment and disease prevention. One female participant expressed her opinion that vaccine would reduce the severity of disease: "I know that when a child gets vaccinated he will be protected from diseases. Even if the disease comes, it will not be very much severe as compared to if the child has not completely received a vaccine" (FGD, Female_04). Another female participant was in the opinion that vaccine is important for prevention of diseases: "Just like what the experts says "it is better to prevent than to cure" then I think vaccination is important as it helps to prevent a child from diseases and reduces treatment costs because during treatment you use much cost to treat the child unlike when the child is protected (with the vaccine)" (FGD, Female_05). Similarly in the quantitative study, the majority (90.1 %) of mothers reported that there is a benefit associated with vaccination (Fig. 1). Also, about 97.6 % agreed with the statement that 'I prefer my child to receive all the vaccines' (Table 4). Percent distribution of respondents that believe that there are benefits related to under-five child vaccination (n = 2123) Perception towards vaccine: percent distribution of respondents that AGREE with the listed statements by region (n = 2123) % Stated that … I prefer my child to have all vaccinations I prefer my child to have certain vaccinations I prefer my child not to be vaccinated at all Acceptance of malaria vaccine and the associated reasons Most of the opinions of the qualitative participants reflected a positive acceptance towards the anticipated malaria vaccine. 
The main consensus was that malaria vaccine is important since malaria is still a common disease among children under five. One of the paediatricians provided his view that malaria vaccine need to be provided since more strategies are needed to fight malaria: "I think malaria problem is still there and more weapons are needed in making sure that it is prevented, vaccine is one of the weapon, but if it's safe for the users" (IDI, Paediatrician _05). Another participant was in the opinion that malaria still affects children and hence a need to introduce malaria vaccine: "Malaria vaccine should be introduced due to the burden of malaria especially for young children" (IDI, Nurse, RCH_06). A male participant thought that malaria vaccine is needed because the mosquito nets cannot provide full protection from mosquitoes: "I think we need malaria vaccine since we are not always covered by the mosquito nets. Look at where we are now, we have stayed for almost one hour and the mosquito nets are inside our houses on the beds. Probably the mosquitoes might have already bitten the child. Therefore, we cannot totally depend on the mosquito nets …" (FGD, Male_ 05). The quantitative results revealed that the majority (84.2 %) of the participants indicated a perfect acceptance of malaria vaccine, 11.9 % had partial acceptance while 3.9 % had no acceptance of the vaccine (Table 5). Occupation, tribe, religion, and regions attained a statistical significance with the perfect acceptance of the malaria vaccine (p < 0.001), with farmers, Christians, members of the tribe Hangaza and households in the Kagera region presenting higher acceptance levels. Degree of malaria vaccine acceptability by various characteristics (n = 2123) Total number of respondents Degree of acceptability (%) Perfect acceptance Partial acceptance No acceptance a Sonjo, Mwela, Yao, Iraki, Chagga, Ha, Masai, Haya, Sambaa etc Expectations from malaria vaccine The common expectations from the malaria vaccine by most participants comprised a view that malaria vaccine will lessen the malaria episodes, frequent visits to the hospital due to malaria, the number of deaths and that the overall burden of malaria among children will be reduced. "My expectations is that if malaria vaccine will work, it will help reduce the hassle we get of having frequent malaria, you will find a child going back to hospital even four times in a month" (FGD, Male _03). "The expectations of most people will be that the malaria vaccine will completely eradicate malaria, because the children will have protection…and so malaria will finish …" (FGD, Female_05). "The health care providers will feel very proud to have this additional vaccine on top of the existing ones since we hope it will succeed in reducing the mortality rate especially for children under 5 years" (Nurse, RCH_05). "Most mothers will definitely take their kids for vaccination since the costs of treatment nowadays is very high" (Teacher_02). Acceptance of malaria vaccine in the context of ITN use The majority (92.5 %) reported that they will be ready to take their children for malaria vaccine despite their obligation to use ITNs and seek treatment when the child has fever. There were differences in the level of acceptance across regions, religion and tribe. The Mbeya region was more likely to indicate acceptance comparatively to other study regions (p = 0001) (Table 6). 
Percent distribution of respondents ready to get their children vaccinated with malaria vaccine despite the fact that even though a child is vaccinated, s/he will still have to use ITNs and seek treatment if s/he has fever; by various characteristics (n = 2123) % Ready Acceptance of malaria vaccine in the context of partial protection Participants were asked to provide their views on how they think about accepting the forthcoming malaria vaccine despite that their children will get malaria less often than those who don't get the vaccine. Most participants views consistently indicated a willingness to uptake malaria vaccine in the context of its partial protection due to their concern with the burden of malaria and the view that the less the episodes the less the cost of treatment. "This malaria vaccine need to be introduced because it will reduce the magnitude of malaria, even if it reduces to some extent, it is still important, because if a child gets malaria less frequently different from now, the costs of treatment will reduce…" (IDI, Religious leader_02). "If efficacy is 50 % is fine as long as you have helped the person by reducing the episodes. This will help to enhance immunity" (IDI, Health professional lecturer_04). "Vaccine would be the best solution, even if it is partially protected then should be introduced. It should go parallel with other strategies to be more effective." (IDI_Paediatrician_3). "…. even with existing vaccines, still the children get sick but not as much as those who did not get vaccine at all" (Teacher_03). The views of the qualitative participants converged with the quantitative data which indicated that the majority of mothers (88.4 %) reported that they will be comfortable that their children receive malaria vaccine despite that they will still get malaria less often than those who don't get the vaccine. Age of mothers, religion, region and tribes were statistically significantly associated with the acceptance of partial protection of malaria vaccine (Table 7). Percent distribution of respondents that answered "YES" to the question "If your child receives malaria vaccine, and still gets malaria but less often than those who don't get vaccine, will you be comfortable with that?"; by various characteristics (n = 2123) Total number_of respondents % That would be comfortable Questions regarding the anticipated malaria vaccines Despite a positive attitude towards the anticipated malaria vaccine, most participants had various questions with regards to the vaccine. However, there appeared to be similarities with regards to the questions asked by the communities and those asked by other professions. Most questions were mostly related to the side effects of the vaccine and the government response to them, efficacy, protective duration, composition, interaction with other medications, relation of vaccine schedule with existing EPI schedule, availability of the vaccine to the pregnant women, mode of administration (oral or injection?) and whether child born of HIV virus or with a chronic illness will be eligible for the vaccine? (Fig. 2). The common questions with regards to malaria vaccines by communities and professions The study findings suggest that stakeholders have a positive attitude towards the anticipated malaria vaccine and that their acceptance of the vaccine remains high despite the fact that it would be used parallel with other existing intervention strategies. 
Interestingly, the acceptance level also remains significant though the malaria vaccine is less likely to provide full protection. This outcome could be a reflection of how malaria is seriously perceived in the communities being studied. Furthermore, they may be willing to accept the new malaria interventions as long as they will (to some extent) contribute to the reduction of malaria, especially among children. Similarly, a study in Kenya also found that participants understood that malaria is a serious problem that no single tool can be used to combat it, which influences their acceptance of malaria vaccine [14]. Acceptance of malaria vaccine was also observed in studies conducted in Ghana [13] where the views of various professions and communities also reflected a positive opinion towards the introduction of malaria vaccine as a preventive tool. The study finding that stakeholders would still maintain the acceptance of malaria vaccine in the context of existing malaria intervention strategies is in line with the overall idea of introducing the vaccine which is not meant to replace the existing malaria interventions but rather to compliment it [11]. In this study, social cultural aspects emerged as factors associated with the acceptance of malaria vaccine. These factors include religion (Christians were more willing than Muslims to accept the vaccine), religion (Ndali tribe was less willing to accept the vaccine than the other tribes), and civil servants were more willing to accept the vaccine than the farmers. This finding corroborates with evidence from other countries in Africa [24] and elsewhere [25] where religion and ethnicity were found to influence health care utilization. Specific evidence also indicates that religion and ethnicity are associated with vaccine awareness and acceptance i.e measles vaccine and Human Papillomavirus (HPV) [25, 26]. The differences in vaccine acceptance based on religion, ethnicity and occupation as observed in this study could also reflect that people's values, preferences and expectations would sometimes constrain their acceptance of a particular health care programme. These could originate from the culture in which the social interaction is taking place, which in turn govern their decisions about how they should pursue a recommended health intervention [27]. Although other studies have found that the quality of care i.e. congestion, delays, and the perceived attitudes of the health care providers [14], access to services, reliability of services fear of side effects, and parental beliefs and conflicting priorities [28] constraints immunization services, this study shows that in addition to individual and health system factors, the social cultural aspects may play a significant role in influencing the differential acceptance of vaccination programmes. This is central in this paper, and it lends support to the views of other researchers that people may not automatically use a health intervention once introduced [14], and in the context of a vaccine, if the known barriers are not addressed may lead to under-utilization of immunization coverage [16, 29]. Currently there is a strong recognition globally that health is socially determined and that social-structural aspects are responsible for health inequity. As found in this study, religion and ethnicity may play a significant role towards inequity in immunization coverage. 
Health inequity is known to be a set back to the wider health development, and this could be addressed by examining the wider social and structural aspects that increase vulnerability to diseases [30–33]. Evidence in Nigeria indicates that the community tailored interventions have proven to be effective in increasing the utilization of polio vaccination [34]. As such, the public health communication strategy that seek to promote the available immunization services as well as the anticipated malaria vaccine could be made effective if tailored within the broader social aspirations and cultural differences existing in the locally contextualized environment. This study also found that the community and other professionals have multiple expectations and questions that relate to the anticipated malaria vaccine. It is important that the Tanzanian Immunization Department, malaria vaccine initiative, and other malaria stakeholders clarifies the questions and expectations prior to or parallel with the introduction of the malaria vaccine and provide the correct knowledge about the added value of malaria vaccine in lay man's language to avoid any misconceptions about the anticipated malaria vaccine. The voices of communities and that of the health care professionals are important and should be considered for better informed decisions, policy recommendation, planning and designing of a communication strategy. Failure to account for community's prior information that could enlighten policy makers on what is needed to be considered before the implementation of the intervention was found as one of the factors that could delay the public acceptance of the proposed intervention [14]. Understanding stakeholders' acceptability and perspectives regarding the anticipated malaria vaccine in the context of other ongoing malaria interventions is crucial for appropriate policy decisions in Tanzania. Stakeholders' high acceptability of the anticipated malaria vaccine, even when it is less likely to provide full protection may reflect the extent to which malaria interventions, are needed in the study areas. However the questions raised by the communities reflect the need to clarify some misconceptions and provision of correct knowledge regarding the vaccine. The optimal acceptance and utilization of the anticipated malaria vaccine may require addressing of the social cultural aspects that could impede the utilization of the vaccine. The fact that the views of various community groups reflect a willingness to undertake malaria vaccine parallel with the existing malaria intervention strategies (such as ITNs) could be one of the strengths within the National Malaria Control Strategy in Tanzania. This aspect may need to be emphasized during the implementation phase of malaria vaccine, since the vaccine may not provide full protection. Communities are also ready to accept a partial efficacy malaria vaccine, which may be useful in guiding policy recommendations toward the vaccine in the country. Although stakeholders possess a positive opinion towards the anticipated malaria vaccine there is much on which the Tanzanian Immunization Department, malaria vaccine initiative, and other malaria stakeholders, need to undertake for optimal acceptance and utilization of the vaccine. 
Based on the findings the following recommendations can be made: The communication strategy should clarify the questions and expectations raised by stakeholders prior to or parallel with the introduction of the malaria vaccine in lay man's language to avoid any misconceptions about the anticipated malaria vaccine. Issues on religion, ethnicity, occupation and region should be considered for the designing of culturally based interventions to increase the acceptability and effectiveness of vaccine programmes. SM made a substantial contribution in the designing of the study, conception of the paper, supervision of the field work, analysis of the qualitative data, interpretation of the findings and drafting of the manuscript. AK assisted in the supervision of the field work and in the analysis of the qualitative data. SM and AE analyzed the quantitative data. JL, SK, SA, IR, and AA provided inputs in the conception and design of the study. JL, ES, IR, SA, SK, MT provided substantial inputs to the paper. All authors, read, revised and approved the final manuscript. All authors read and approved the final manuscript. The researchers extend a sincere appreciation to the study participants for their time and valuable information which has been useful in the accomplishment of this study. The work of the field team which was supervised by the scientists from the Ifakara Health Institute (IHI) is highly appreciated. We are grateful to the local leaders and the District Medical Officers (DMOs) at the respective study districts for their support and permissions to conduct studies in their premises. Our sincere appreciation goes to Dr. Mwele Malechela the director of Tanzania National Medical Research Institute, Dr. Antonneite Ba-Nguz; previously worked for PATH-MVI and Dr. Dafrossa Lyimo the director of Immunization department in Tanzania for their inputs which they provided from the conception of this study. We also appreciate the inputs of Dr. Mary Mwangome from IHI. We are thankful to Ms. Hadija Mlege from IHI for coordinating the field logistics. We would also like to extend our appreciation to the PATH Malaria Vaccine Initiative for funding the study. Ifakara Health Institute (IHI), P.O. Box 78373, Dar es Salaam, Tanzania ISGlobal, Barcelona Centre International Health Research (CRESIB), Hospital Clínic, Universitat de Barcelona, Barcelona, Spain Health Economics Group, Department of Infectious Disease Epidemiology, School of Public Health, Imperial College London, London, UK National Institute for Medical Research Institute (NIMR), Tanga, Tanzania Swiss Tropical and Public Health Institute, Basel, Switzerland Tanzania Commission for AIDS, Zanzibar AIDS Commission, National Bureau of Statistics, Office of the Chief Government Statistician, and ICF International. Tanzania HIV/AIDS and Malaria Indicator Survey 2011–12. Dar es Salaam; 2012.Google Scholar Ministry of Health and Social Welfare. National Malaria guideline for Malaria diagnosis and treatment. Dar es Salaam. National Malaria Control Programme; 2008.Google Scholar WHO. World Malaria Report 2008. Geneva: World Health Organization; 2008.Google Scholar National Bureau of Statistics (NBS) and ICF Macro. Tanzania Demographic and Health Survey 2010. Dar es Salaam; 2010.Google Scholar Wongsrichanalai C, Meshnick SR. Declining artesunate-mefloquine efficacy against falciparum malaria on the Cambodia–Thailand border. Emerg Infect Dis. 2008;14:716–9.View ArticlePubMedPubMed CentralGoogle Scholar Baume CA, Reithinger R, Woldehanna S. 
Factors associated with use and non-use of mosquito nets owned in Oromia and Amhara regional states, Ethiopia. Malar J. 2009;8:26.View ArticleGoogle Scholar de Quadros CA. History and prospects for viral disease eradication. Med Microbiol Immunol. 2002;191:75–81.View ArticlePubMedGoogle Scholar Agnandji ST, Lell B, Soulanoudjingar SS, Fernandes JF, Abossolo BP, Conzelmann C, et al. First results of phase 3 trial of RTS, S/AS01 malaria vaccine in African children. NEJM. 2011;365:1863–75.View ArticlePubMedGoogle Scholar Agnandji ST, Lell B, Fernandes JF, Abossolo BP, Methogo B, Kabwende AL, et al. A phase 3 trial of RTS, S/AS01 malaria vaccine in African infants. NEJM. 2012;367:2284–95.View ArticlePubMedGoogle Scholar The RTS,S Clinical, Trials Partnership. Efficacy and safety of the RTS, S/AS01 malaria vaccine during 18 months after vaccination: a phase 3 randomized, controlled trial in children and young infants at 11 African sites. PLoS Med. 2014;11:e1001685.View ArticleGoogle Scholar Asante KP, Abokyi L, Zandoh C, Owusu R, Awini E, Sulemana A, et al. Community perceptions of malaria and malaria treatment behaviour in a rural district of Ghana: implications for artemisinin combination therapy. BMC Public Health. 2010;10:409.View ArticlePubMedPubMed CentralGoogle Scholar WHO. Department of Immunization, Vaccines, Biologicals: vaccine introduction guidelines: adding a vaccine to a National Immunization Programme: decision and implementation. Geneva: World Health Organization; 2005.Google Scholar Febir LG, Asante KP, Dzorgbo DB, Senah KA, Letsa TS, Owusu-Agyei S. Community perceptions of a malaria vaccine in the Kintampo districts of Ghana. Malar J. 2013;12:156.View ArticlePubMedPubMed CentralGoogle Scholar Ojakaa DI, Ofware P, Machira YW, Yamo E, Collymore Y, Ba-Nguz A, et al. Community perceptions of malaria and vaccines in the South Coast and Busia regions of Kenya. Malar J. 2011;10:14.View ArticleGoogle Scholar www.malariavaccine.org/files/MVIfactsheet. Accessed 12 May 2012. Kabir M. Knowledge, perception and beliefs about childhood immunization and attitude towards uptake of poliomyelitis immunization in a northern Nigerian village. Ann Niger Med. 2005;1:21–6.Google Scholar Romore I, Ali A, Semali I, Mshinda H, Tanner M, Abdulla S. Assessment of parental perception of malaria vaccine in Tanzania. Malar J. 2015;14:355.View ArticlePubMedPubMed CentralGoogle Scholar Strauss A. Qualitative analysis for social scientists. Cambridge: Cambridge University Press; 1987.View ArticleGoogle Scholar United Republic of Tanzania. Immunization and Vaccine Development program (2014–18). Communication strategy for routine Immunization.Google Scholar Vogt WP. Dictionary of statistics and methodology: A non technical guide for the social sciences. London: Sage publications; 1999.Google Scholar Halcomb E, Davidson P. Is verbatim transcription of interview data always necessary? Appl Nurs Res. 2006;19:38–42.View ArticlePubMedGoogle Scholar www.qsrinternational.com/products_nvivo-mac.aspx. Accessed 26 Apr 2014. Spencer L, Ritchie J, O'Connor W. Analysis: practices, principles, processes. In: Ritchie J, Lewis J, editors. Qualitative research practice: a guide for social science students and researchers. Thousand Oaks: Sage Publications; 2007.Google Scholar Crommett M. Confronting religion: perceptions and health-seeking behaviors of devout adolescents when faced with a sexually transmitted infection in Ghana. GUJHS. 2008;5:1.Google Scholar Shaikh B, Hatcher J. 
Health seeking behaviour and health service utilization in Pakistan: challenging the policy makers. J Public Health. 2004;27:49–55.View ArticleGoogle Scholar Marlow LAV, Wardle J, Forster AS, Waller J. Ethnic differences in human papillomavirus awareness and vaccine acceptability. J Epidemiol Community Health. 2009;63:1010–5.View ArticlePubMedPubMed CentralGoogle Scholar Wombwell E, Fangman MT, Yoder AK, Spero DL. Religious barriers to measles vaccination. J Community Health. 2015;40:597–604.View ArticlePubMedGoogle Scholar Jackson JD. Structural characteristics of norms. In: Steiner ID, Fishbein M, eds. Current studies in social psychology; 1965. p. 301–9.Google Scholar Favin M, Steinglass R, Fields R, Banerjee K, Sawhney M. Why children are not vaccinated: a review of the grey literature. Int Health. 2012;4:229–38.View ArticlePubMedGoogle Scholar Stanton BF. Assessment of relevant cultural considerations is essential for the success of a vaccine. J Health Popul Nutr. 2004;22:286–92.PubMedGoogle Scholar Mtenga S, Masanja I, Mamdani M. Strengthening national capacities for researching on social determinants of health (SDH) towards informing and addressing health inequities in Tanzania. Int J Equity Health. 2016;15:23.View ArticlePubMedPubMed CentralGoogle Scholar WHO. A conceptual framework for action on the social determinants of health 2010. Geneva: World Health Organization; 2008.Google Scholar Mtenga S, Pfeiffer C, Merten S, Mamdani M et al (2015) Prevalence and social drivers of HIV among married and cohabitating heterosexual adults in south-eastern Tanzania: analysis of adult health community cohort data. Glob Health Action 8:2894Google Scholar Nasiru S-G, Aliyu GG, Gasasira A, Aliyu MH, Zubair M, Mandawari SU, et al. Breaking community barriers to polio vaccination in northern Nigeria: the impact of a grass roots mobilization campaign (Majigi). Pathog Glob Health. 2012;106:166–71.View ArticlePubMedPubMed CentralGoogle Scholar
CommonCrawl
Optimal subspace codes in $ {{\rm{PG}}}(4,q) $ AMC Home New non-binary quantum codes from constacyclic codes over $ \mathbb{F}_q[u,v]/\langle u^{2}-1, v^{2}-v, uv-vu\rangle $ August 2019, 13(3): 405-420. doi: 10.3934/amc.2019026 Exponential generalised network descriptors Suzana Antunović 1,, , Tonči Kokan 2, , Tanja Vojković 3, and Damir Vukičević 3, Faculty of Civil Engineering, Architecture and Geodesy, Matice hrvatske 15, University of Split, Croatia Faculty of Science, Bijenička cesta 30, University of Zagreb, Croatia Faculty of Science, Rudera Boškovića 33, University of Split, Croatia * Corresponding author Received May 2018 Revised February 2019 Published April 2019 Figure(1) / Table(1) In communication networks theory the concepts of networkness and network surplus have recently been defined. Together with transmission and betweenness centrality, they were based on the assumption of equal communication between vertices. Generalised versions of these four descriptors were presented, taking into account that communication between vertices $ u $ and $ v $ is decreasing as the distance between them is increasing. Therefore, we weight the quantity of communication by $ \lambda^{d(u,v)} $ where $ \lambda \in \left\langle0,1 \right\rangle $. Extremal values of these descriptors are analysed. Keywords: Centrality, transmission, networkness, network surplus. Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35. Citation: Suzana Antunović, Tonči Kokan, Tanja Vojković, Damir Vukičević. Exponential generalised network descriptors. Advances in Mathematics of Communications, 2019, 13 (3) : 405-420. doi: 10.3934/amc.2019026 S. Antunovic, T. Kokan, T. Vojkovic and D. Vukicevic, Generalised network descriptors, Glasnik Matematicki, 48 (2013), 211-230. doi: 10.3336/gm.48.2.01. Google Scholar A. L. Barabasi, Linked: How Everything is Connected to Everything Else and What It Means, Persus Publishing, Cambridge, 2002.Google Scholar B. Bollobas, Modern Graph Theory, Springer, New York, 1998. doi: 10.1007/978-1-4612-0619-4. Google Scholar S. P. Borgatti and M. G. Everett, A graph-theoretic perspective on centrality, Social Networks, 28 (2006), 466-484. doi: 10.1016/j.socnet.2005.11.005. Google Scholar U. Brandes, A faster algorithm for betweenness centrality, J. Math. Sociol., 25 (2001), 163-177. doi: 10.1080/0022250X.2001.9990249. Google Scholar G. Caporossi, M. Paiva, D. Vukicevic and M. Segatto, Centrality and betweenness: Vertex and edge decomposition of the Wiener index, MATCH Commun. Math. Comput. Chem, 68 (2012), 293-302. Google Scholar C. Dangalchev, Residual closeness and generalized closeness, International Journal of Foundations of Computer Science, 22 (2011), 1939-1948. doi: 10.1142/S0129054111009136. Google Scholar L. Freeman, A set of measures of centrality based on betweenness, Sociometry, 40 (1977), 35-41. doi: 10.2307/3033543. Google Scholar L. Freeman, Centrality in social networks: Conceptual clarification, Social Networks, 1 (1978), 215-239. doi: 10.1016/0378-8733(78)90021-7. Google Scholar S. Gago, J. Coroničová Hurajová and T. Mandaras, On decay centrality in graphs, Mathematica Scandinavica, 123 (2018), 39-50. doi: 10.7146/math.scand.a-106210. Google Scholar M. O. Jackson and A. Wolinsky, A strategic model of social and economic networks, Journal of Economic Theory, 71 (1996), 44-74. doi: 10.1006/jeth.1996.0108. Google Scholar [12] M. E. J. Newman, Networks: An Introduction, Oxford University Press, Oxford, 2010. 
doi: 10.1093/acprof:oso/9780199206650.001.0001.
D. Vukicevic and G. Caporossi, Network descriptors based on betweenness centrality and transmission and their extremal values, Discrete Applied Mathematics, 161 (2013), 2678-2686. doi: 10.1016/j.dam.2013.04.005. Google Scholar
Figure 1. A broom that minimizes $mt_{\lambda }^{e}(G)$
Table 1. Extremal values of exponential generalised network descriptors
Descriptor $\lambda \in \left\langle 0,1\right\rangle $
$mt_{\lambda }^{e}$ broom (starting vertex) complete graph * $A_n$ $ (n-1)\lambda $
$Mt_{\lambda }^{e}$ open problem broom (starting vertex) $B_n$
$mc_{\lambda }^{e}$ path (end vertices) complete graph * $\frac{\lambda^D-\lambda}{\lambda -1}$ $(n-1)\lambda $
$Mc_{\lambda }^{e}$ open problem star (center) $(n-1)\left[ \lambda +\frac{1}{2}(n-2)\lambda ^{2}\right] $
$mN_{\lambda }^{e}$ broom (starting vertex) vertex-transitive graph $C_n$ $1$
$MN_{\lambda }^{e}$ vertex-transitive graph star (center) $1$ $\frac{1}{2}(n-2)\lambda +1$
$m\nu _{\lambda }^{e}$ broom (starting vertex) vertex-transitive graph $D_n$ $0$
$M\nu _{\lambda }^{e}$ vertex-transitive graph star (center) $0$ $\frac{1}{2}(n-1)(n-2)\lambda ^{2}$
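The descriptors above weight the communication between vertices $u$ and $v$ by $\lambda^{d(u,v)}$. As a small illustration, the sketch below (Python with NetworkX; it is not the authors' code, and the plain distance-weighted sum $\sum_{u\neq v}\lambda^{d(u,v)}$ is used as the exponential transmission of a vertex, consistent with the value $(n-1)\lambda$ listed for the complete graph in Table 1) computes these weights from breadth-first-search distances:

import networkx as nx

def exponential_transmission(G, lam):
    """For each vertex v, sum lam**d(u, v) over all other vertices u,
    i.e. communication weighted by an exponentially decaying function of distance."""
    scores = {}
    for v in G.nodes:
        dist = nx.single_source_shortest_path_length(G, v)  # BFS distances from v
        scores[v] = sum(lam ** d for u, d in dist.items() if u != v)
    return scores

# Example: compare a path graph and a star on the same number of vertices.
lam = 0.5
for G in (nx.path_graph(6), nx.star_graph(5)):   # star_graph(5) has 6 vertices
    print(sorted(exponential_transmission(G, lam).values()))

On the path the end vertices communicate least, while every vertex of the star reaches all others within distance at most 2; this is the kind of contrast that the extremal results in Table 1 formalise.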
CommonCrawl
FYKOS – problem search by field
electric field (10 points) 4. Series 33. Year - S. We are sorry. This type of task is not translated to English. electric current, electric field
(8 points) 5. Series 32. Year - 4. splash Consider a free water droplet of radius $R$. We start to charge the droplet slowly. Find the magnitude of the charge $Q$ the droplet needs to splash. Solution in Czech hydromechanics, electric field
(6 points) 4. Series 32. Year - 3. levitating Matěj likes levitating things and therefore he bought an infinite non-conductive charged horizontal plane with charge surface density $\sigma$. Then he placed a small ball with given mass $m$ and charge $q$ above the plane. For which values of $\sigma$ will the ball levitate above the plane? What is the corresponding height $h$? Assume that the gravitational acceleration $g$ is constant. (A sketch of the balance condition is given after the problem list below.) Matěj would love to have a levitation superpower.
(7 points) 1. Series 32. Year - 3. unstable We have 8 point charges (each of magnitude $q$) located at the vertices of a cube. Find the value of a point charge $q_0$ that must be placed at the centre of the cube so that all charges remain in equilibrium. Is this equilibrium stable? Matej wanted to pose a problem that even a professor couldn't work out.
(5 points) 6. Series 29. Year - 5. Particle race Two particles, an electron with mass $m_{e}=9.1\cdot 10^{-31}\;\mathrm{kg}$ and charge $-e=-1.6\cdot 10^{-19}\;\mathrm{C}$ and an alpha particle with mass $m_{He}=6.6\cdot 10^{-27}\;\mathrm{kg}$ and charge $2e$, are following a circular trajectory in the $xy$ plane in a homogeneous magnetic field $\textbf{B}=(0,0,B_{0})$, $B_{0}=5\cdot 10^{-5}\;\mathrm{T}$. The radius of the orbit of the electron is $r_{e}=2\;\mathrm{cm}$ and the radius of the orbit of the alpha particle is $r_{He}=200\;\mathrm{m}$. Suddenly, a small homogeneous electric field $\textbf{E}=(0,0,E_{0})$, $E_{0}=5\cdot 10^{-5}\;\mathrm{V}\cdot \mathrm{m}^{-1}$, is introduced. Determine the length of the trajectories of these particles during the time $t=1\;\mathrm{s}$ after the electric field comes into action. Assume that the particles are far enough from each other and that they don't emit any radiation. nuclear physics, magnetic field, electric field
(2 points) 4. Series 29. Year - 2. Brain in a microwave How far from a base transceiver station (BTS) does a person have to be for the emission to be fully comparable with that of a mobile phone held right next to the head? Assume the BTS broadcasts uniformly into a half-space with an emission power of 400 W. The emission power of a mobile phone is 1 W. biophysics, magnetic field, electric field
(2 points) 2. Series 27. Year - 2. Flying wood We have a wooden sphere at a height of $h=1\;\mathrm{m}$ above the surface of the Earth, which has a radius of $R_{Z}=6378\;\mathrm{km}$ and a mass of $M_{Z}=5.97\cdot 10^{24}\;\mathrm{kg}$. The sphere has a radius of $r=1\;\mathrm{cm}$ and is made of wood with a density of $ρ=550\;\mathrm{kg}\cdot \mathrm{m}^{-3}$. Assume that the Earth has an electric charge of $Q=5\;\mathrm{C}$. What is the charge $q$ that the sphere has to carry to float above the surface of the Earth? How does this result depend on the height $h$? gravitational field, electric field Karel was wondering what simple problem to pose.
(5 points) 1. Series 27. Year - 5. a bead A small bead of mass $m$ and charge $q$ is free to move in a horizontal tube. The tube is placed between two spheres with charges $Q=-q$. The spheres are separated by a distance $2a$. What is the frequency of small oscillations around the equilibrium point of the bead? You can neglect any friction in the tube. Hint: When the bead is only slightly displaced, the force acting on it changes negligibly. Radomír was rolling in a pipe.
(5 points) 1. Series 27. Year - P. speed of light What would the world be like if the speed of light were only $c=1000\;\mathrm{km}\cdot \mathrm{h}^{-1}$, while all the other fundamental constants stayed unchanged? What would be the impact on life on Earth? Would it even be possible for people to exist in such a world? relativistic physics, magnetic field, electric field, quantum physics Karel came up with an unsolvable problem.
(8 points) 4. Series 26. Year - E. Fun with straws You can charge a regular plastic straw by rubbing it with a piece of fabric. This charge can be so large that the straw might even attach itself to a wall or a whiteboard. Explain this phenomenon and estimate the charge you can put on a single straw. Hint: You might need to use two straws. Karel ran out of drawing pins.
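For the "levitating" problem above, a minimal sketch of the standard balance argument (assuming the textbook result that an infinite, uniformly charged non-conducting plane produces a field $E=\sigma/(2\varepsilon_0)$ on either side, independent of the distance from the plane) is the following outline, not an official FYKOS solution:
$$E=\frac{\sigma}{2\varepsilon_0}, \qquad qE=mg \quad\Longrightarrow\quad \sigma=\frac{2\varepsilon_0 m g}{q}.$$
Because $E$ does not depend on the distance from the plane, the balance fixes only $\sigma$ (with $\sigma$ and $q$ of the same sign, so that the force points away from the plane); when $\sigma$ takes this value, the ball is in equilibrium at any height $h$.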
CommonCrawl
SciPy Tutorial Integration (scipy.integrate)¶ The scipy.integrate sub-package provides several integration techniques including an ordinary differential equation integrator. An overview of the module is provided by the help command: >>> help(integrate) Methods for Integrating Functions given function object. quad -- General purpose integration. dblquad -- General purpose double integration. tplquad -- General purpose triple integration. fixed_quad -- Integrate func(x) using Gaussian quadrature of order n. quadrature -- Integrate with given tolerance using Gaussian quadrature. romberg -- Integrate func using Romberg integration. Methods for Integrating Functions given fixed samples. trapz -- Use trapezoidal rule to compute integral from samples. cumtrapz -- Use trapezoidal rule to cumulatively compute integral. simps -- Use Simpson's rule to compute integral from samples. romb -- Use Romberg Integration to compute integral from (2**k + 1) evenly-spaced samples. See the special module's orthogonal polynomials (special) for Gaussian quadrature roots and weights for other weighting factors and regions. Interface to numerical integrators of ODE systems. odeint -- General integration of ordinary differential equations. ode -- Integrate ODE using VODE and ZVODE routines. General integration (quad)¶ The function quad is provided to integrate a function of one variable between two points. The points can be \(\pm\infty\) (\(\pm\) inf) to indicate infinite limits. For example, suppose you wish to integrate a bessel function jv(2.5,x) along the interval \([0,4.5].\) \[I=\int_{0}^{4.5}J_{2.5}\left(x\right)\, dx.\] This could be computed using quad: >>> result = integrate.quad(lambda x: special.jv(2.5,x), 0, 4.5) >>> print result (1.1178179380783249, 7.8663172481899801e-09) >>> I = sqrt(2/pi)*(18.0/27*sqrt(2)*cos(4.5)-4.0/27*sqrt(2)*sin(4.5)+ sqrt(2*pi)*special.fresnel(3/sqrt(pi))[0]) >>> print I >>> print abs(result[0]-I) 1.03761443881e-11 The first argument to quad is a "callable" Python object (i.e a function, method, or class instance). Notice the use of a lambda- function in this case as the argument. The next two arguments are the limits of integration. The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error. Notice, that in this case, the true value of this integral is \[I=\sqrt{\frac{2}{\pi}}\left(\frac{18}{27}\sqrt{2}\cos\left(4.5\right)-\frac{4}{27}\sqrt{2}\sin\left(4.5\right)+\sqrt{2\pi}\textrm{Si}\left(\frac{3}{\sqrt{\pi}}\right)\right),\] \[\textrm{Si}\left(x\right)=\int_{0}^{x}\sin\left(\frac{\pi}{2}t^{2}\right)\, dt.\] is the Fresnel sine integral. Note that the numerically-computed integral is within \(1.04\times10^{-11}\) of the exact result — well below the reported error bound. If the function to integrate takes additional parameters, the can be provided in the args argument. Suppose that the following integral shall be calculated: \[I(a,b)=\int_{0}^{1} ax^2+b \, dx.\] This integral can be evaluated by using the following code: >>> from scipy.integrate import quad >>> def integrand(x, a, b): ... return a * x + b >>> a = 2 >>> b = 1 >>> I = quad(integrand, 0, 1, args=(a,b)) >>> I = (2.0, 2.220446049250313e-14) Infinite inputs are also allowed in quad by using \(\pm\) inf as one of the arguments. 
For example, suppose that a numerical value for the exponential integral: \[E_{n}\left(x\right)=\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt.\] is desired (and the fact that this integral can be computed as special.expn(n,x) is forgotten). The functionality of the function special.expn can be replicated by defining a new function vec_expint based on the routine quad: >>> def integrand(t, n, x): ... return exp(-x*t) / t**n >>> def expint(n, x): ... return quad(integrand, 1, Inf, args=(n, x))[0] >>> vec_expint = vectorize(expint) >>> vec_expint(3,arange(1.0,4.0,0.5)) array([ 0.1097, 0.0567, 0.0301, 0.0163, 0.0089, 0.0049]) >>> special.expn(3,arange(1.0,4.0,0.5)) The function which is integrated can even use the quad argument (though the error bound may underestimate the error due to possible numerical error in the integrand from the use of quad ). The integral in this case is \[I_{n}=\int_{0}^{\infty}\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt\, dx=\frac{1}{n}.\] >>> result = quad(lambda x: expint(3, x), 0, inf) (0.33333333324560266, 2.8548934485373678e-09) >>> I3 = 1.0/3.0 >>> print I3 >>> print I3 - result[0] This last example shows that multiple integration can be handled using repeated calls to quad. General multiple integration (dblquad, tplquad, nquad)¶ The mechanics for double and triple integration have been wrapped up into the functions dblquad and tplquad. These functions take the function to integrate and four, or six arguments, respecively. The limits of all inner integrals need to be defined as functions. An example of using double integration to compute several values of \(I_{n}\) is shown below: >>> from scipy.integrate import quad, dblquad >>> def I(n): ... return dblquad(lambda t, x: exp(-x*t)/t**n, 0, Inf, lambda x: 1, lambda x: Inf) >>> print I(4) As example for non-constant limits consider the integral \[I=\int_{y=0}^{1/2}\int_{x=0}^{1-2y} x y \, dx\, dy=\frac{1}{96}.\] This integral can be evaluated using the expression below (Note the use of the non-constant lambda functions for the upper limit of the inner integral): >>> from scipy.integrate import dblquad >>> area = dblquad(lambda x, y: x*y, 0, 0.5, lambda x: 0, lambda x: 1-2*x) >>> area (0.010416666666666668, 1.1564823173178715e-16) For n-fold integration, scipy provides the function nquad. The integration bounds are an iterable object: either a list of constant bounds, or a list of functions for the non-constant integration bounds. The order of integration (and therefore the bounds) is from the innermost integral to the outermost one. The integral from above \[I_{n}=\int_{0}^{\infty}\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt\, dx=\frac{1}{n}\] can be calculated as >>> from scipy import integrate >>> N = 5 >>> def f(t, x): >>> return np.exp(-x*t) / t**N >>> integrate.nquad(f, [[1, np.inf],[0, np.inf]]) Note that the order of arguments for f must match the order of the integration bounds; i.e. the inner integral with respect to \(t\) is on the interval \([1, \infty]\) and the outer integral with respect to \(x\) is on the interval \([0, \infty]\). Non-constant integration bounds can be treated in a similar manner; the example from above can be evaluated by means of >>> def f(x, y): >>> return x*y >>> def bounds_y(): >>> return [0, 0.5] >>> def bounds_x(y): >>> return [0, 1-2*y] >>> integrate.nquad(f, [bounds_x, bounds_y]) (0.010416666666666668, 4.101620128472366e-16) which is the same result as before. 
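tplquad is mentioned above but not demonstrated, so here is a minimal sketch (not from the original tutorial) that integrates \(xyz\) over the unit cube, where the exact answer is \(1/8\). Note that the integrand passed to tplquad takes its arguments in the order (z, y, x), and the inner limits are supplied as functions of the outer variables:
>>> from scipy.integrate import tplquad
>>> tplquad(lambda z, y, x: x*y*z, 0, 1,
...         lambda x: 0, lambda x: 1,
...         lambda x, y: 0, lambda x, y: 1)
The first element of the returned tuple should equal 0.125 up to rounding error.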
Gaussian quadrature¶
A few functions are also provided in order to perform simple Gaussian quadrature over a fixed interval. The first is fixed_quad which performs fixed-order Gaussian quadrature. The second function is quadrature which performs Gaussian quadrature of multiple orders until the difference in the integral estimate is beneath some tolerance supplied by the user. These functions both use the module special.orthogonal which can calculate the roots and quadrature weights of a large variety of orthogonal polynomials (the polynomials themselves are available as special functions returning instances of the polynomial class, e.g., special.legendre).
Romberg Integration¶
Romberg's method [WPR] is another method for numerically evaluating an integral. See the help function for romberg for further details.
Integrating using Samples¶
If the samples are equally-spaced and the number of samples available is \(2^{k}+1\) for some integer \(k\), then Romberg integration (romb) can be used to obtain high-precision estimates of the integral using the available samples. Romberg integration uses the trapezoid rule at step-sizes related by a power of two and then performs Richardson extrapolation on these estimates to approximate the integral with a higher degree of accuracy.
In the case of arbitrarily spaced samples, the two functions trapz (defined in numpy [NPT]) and simps are available. They use Newton-Cotes formulas of order 1 and 2, respectively, to perform integration. The trapezoidal rule approximates the function as a straight line between adjacent points, while Simpson's rule approximates the function between three adjacent points as a parabola. For an odd number of samples that are equally spaced, Simpson's rule is exact if the function is a polynomial of order 3 or less. If the samples are not equally spaced, then the result is exact only if the function is a polynomial of order 2 or less.
>>> from scipy.integrate import simps
>>> import numpy as np
>>> def f1(x):
...     return x**2
>>> def f2(x):
...     return x**3
>>> x = np.array([1,3,4])
>>> y1 = f1(x)
>>> I1 = integrate.simps(y1, x)
>>> print(I1)
21.0
>>> y2 = f2(x)
>>> I2 = integrate.simps(y2, x)
>>> print(I2)
61.5
This corresponds exactly to
\[\int_{1}^{4} x^2 \, dx = 21,\]
whereas integrating the second function does not correspond to
\[\int_{1}^{4} x^3 \, dx = 63.75\]
because the order of the polynomial in f2 is larger than two.
Faster integration using Ctypes¶
A user desiring reduced integration times may pass a C function pointer through ctypes to quad, dblquad, tplquad or nquad and it will be integrated and return a result in Python. The performance increase here arises from two factors. The primary improvement is faster function evaluation, which is provided by compilation. This can also be achieved using a library like Cython or F2Py that compiles Python. Additionally we have a speedup provided by the removal of function calls between C and Python in quad; this cannot be achieved through Cython or F2Py. This method will provide a speed increase of ~2x for trivial functions such as sine but can produce a much more noticeable increase (10x+) for more complex functions. This feature, then, is geared towards a user with numerically intensive integrations willing to write a little C to reduce computation time significantly.
ctypes integration can be done in a few simple steps:
1.) Write an integrand function in C with the function signature double f(int n, double args[n]), where args is an array containing the arguments of the function f.
//testlib.c
double f(int n, double args[n]){
    return args[0] - args[1] * args[2]; //corresponds to x0 - x1 * x2
}
2.) Now compile this file to a shared/dynamic library (a quick search will help with this as it is OS-dependent). The user must link any math libraries, etc. used. On Linux this looks like:
$ gcc -shared -o testlib.so -fPIC testlib.c
The output library will be referred to as testlib.so, but it may have a different file extension. A library has now been created that can be loaded into Python with ctypes.
3.) Load the shared library into Python using ctypes and set restype and argtypes; this allows SciPy to interpret the function correctly:
>>> import ctypes
>>> lib = ctypes.CDLL('/**/testlib.so')   # Use absolute path to testlib
>>> func = lib.f   # Assign specific function to name func (for simplicity)
>>> func.restype = ctypes.c_double
>>> func.argtypes = (ctypes.c_int, ctypes.c_double)
Note that the argtypes will always be (ctypes.c_int, ctypes.c_double) regardless of the number of parameters, and restype will always be ctypes.c_double.
4.) Now integrate the library function as usual, here using nquad:
>>> integrate.nquad(func, [[0,10],[-10,0],[-1,1]])
(1000.0, 1.1102230246251565e-11)
And the Python tuple is returned as expected in a reduced amount of time. All optional parameters can be used with this method including specifying singularities, infinite bounds, etc.
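For comparison, the same integral can also be evaluated with a plain Python integrand; this short sketch is not part of the original tutorial, and it should produce the same (1000.0, ...) tuple as above, since the ctypes route only speeds up each integrand evaluation rather than changing the quadrature itself:
>>> from scipy import integrate
>>> integrate.nquad(lambda x0, x1, x2: x0 - x1*x2,
...                 [[0, 10], [-10, 0], [-1, 1]])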
Thus, the differential equation becomes \[\begin{split}\frac{d\mathbf{y}}{dt}=\left[\begin{array}{c} ty_{1}\\ y_{0}\end{array}\right]=\left[\begin{array}{cc} 0 & t\\ 1 & 0\end{array}\right]\left[\begin{array}{c} y_{0}\\ y_{1}\end{array}\right]=\left[\begin{array}{cc} 0 & t\\ 1 & 0\end{array}\right]\mathbf{y}.\end{split}\] In other words, \[\mathbf{f}\left(\mathbf{y},t\right)=\mathbf{A}\left(t\right)\mathbf{y}.\] As an interesting reminder, if \(\mathbf{A}\left(t\right)\) commutes with \(\int_{0}^{t}\mathbf{A}\left(\tau\right)\, d\tau\) under matrix multiplication, then this linear differential equation has an exact solution using the matrix exponential: \[\mathbf{y}\left(t\right)=\exp\left(\int_{0}^{t}\mathbf{A}\left(\tau\right)d\tau\right)\mathbf{y}\left(0\right),\] However, in this case, \(\mathbf{A}\left(t\right)\) and its integral do not commute. There are many optional inputs and outputs available when using odeint which can help tune the solver. These additional inputs and outputs are not needed much of the time, however, and the three required input arguments and the output solution suffice. The required inputs are the function defining the derivative, fprime, the initial conditions vector, y0, and the time points to obtain a solution, t, (with the initial value point as the first element of this sequence). The output to odeint is a matrix where each row contains the solution vector at each requested time point (thus, the initial conditions are given in the first output row). The following example illustrates the use of odeint including the usage of the Dfun option which allows the user to specify a gradient (with respect to \(\mathbf{y}\) ) of the function, \(\mathbf{f}\left(\mathbf{y},t\right)\). >>> from scipy.integrate import odeint >>> from scipy.special import gamma, airy >>> y1_0 = 1.0 / 3**(2.0/3.0) / gamma(2.0/3.0) >>> y0_0 = -1.0 / 3**(1.0/3.0) / gamma(1.0/3.0) >>> y0 = [y0_0, y1_0] >>> def func(y, t): ... return [t*y[1],y[0]] >>> def gradient(y, t): ... return [[0,t], [1,0]] >>> x = arange(0, 4.0, 0.01) >>> t = x >>> ychk = airy(x)[0] >>> y = odeint(func, y0, t) >>> y2 = odeint(func, y0, t, Dfun=gradient) >>> print ychk[:36:6] [ 0.355028 0.339511 0.324068 0.308763 0.293658 0.278806] >>> print y[:36:6,1] >>> print y2[:36:6,1] References¶ [WPR] http://en.wikipedia.org/wiki/Romberg's_method [NPT] http://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html General integration (quad) General multiple integration (dblquad, tplquad, nquad) Gaussian quadrature Romberg Integration Integrating using Samples Faster integration using Ctypes Ordinary differential equations (odeint) Special functions (scipy.special) Last updated on Jan 11, 2015.
An Introduction to Political and Social Data Analysis Using R
Chapter 5 Measures of Central Tendency
In this chapter, we move on to examining measures of central tendency. You will find that the techniques learned here complement and expand upon the material from Chapter 2, providing a few more options for describing how variables are distributed. In order to follow along in this chapter, you should load the anes20 and states20 data sets and attach libraries for the following packages: DescTools, descr, and Hmisc. While frequencies tell us a lot about the distribution of variables, we are also interested in more precise measures of the general tendency of the data. Do the data tend toward a specific outcome? Is there a "typical" value? What is the "expected" value? Measures of central tendency provide this information.
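A minimal setup sketch for the packages and data sets mentioned above; the load() file names are placeholders for wherever you saved the anes20 and states20 data files:
#Attach the packages used in this chapter
library(DescTools)
library(descr)
library(Hmisc)
#Load the data sets (hypothetical file names)
load("anes20.rda")
load("states20.rda")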
There are a number of different measures of central tendency, and their role in data analysis depends in part on the level measurement for the variables you are studying. The mode is the category or value that occurs most often, and it is most appropriate for nominal data because it does not require that the underlying variable be quantitative in nature. That said, the mode can sometimes provide useful information for ordinal and numeric data, especially if those variables have a limited number of categories. Let's look at an example of the mode by using Religious affiliation, a nominal-level variable that is often tied to important social and political outcomes. The frequency below shows the outcomes from anes20$denom, an eight-category recoded version of the original twelve-category variable measuring religious affiliation (anes20$V201435). #Create new variable for religious denomination anes20$denom<-anes20$V201435 #Reduce number of categories by combine some of them levels(anes20$denom)<-c("Protestant", "Catholic", "OtherChristian", "OtherChristian", "Jewish", "OtherRel","OtherRel", "OtherRel","Ath/Agn", "Ath/Agn", "SomethingElse", "Nothing") #Checkout the new variable freq(anes20$denom, plot=F) PRE: What is present religion of R Frequency Percent Valid Percent Protestant 2113 25.519 25.778 Catholic 1640 19.807 20.007 OtherChristian 267 3.225 3.257 Jewish 188 2.271 2.294 OtherRel 163 1.969 1.989 Ath/Agn 796 9.614 9.711 SomethingElse 1555 18.780 18.970 Nothing 1475 17.814 17.994 NA's 83 1.002 Total 8280 100.000 100.000 From this table, we can see in both the raw frequency and percent columns that "Protestant" is the modal category (recognizing that this category includes a number of different religions). Let's think about how this represents the central tendency or expected outcome of this variable. With nominal variables such as this, the concept of a "center" doesn't hold much meaning, strictly speaking. However, if we are a bit less literal and take "central tendency" to mean something more like the typical or expected outcome, it makes more sense. Think about this in terms of guessing the outcome, and you need information that will minimize your error in guessing (this idea will also be important later in the book). Suppose you have all 8197 valid responses on separate strips of paper in a great big hat, and you need to guess the religious affiliation of each respondent using the coding scheme from this variable. What's your best guess as pull each piece of paper out of the hat? It turns out that your best strategy is to guess the modal category, "Protestant," for all 8197 valid respondents. You will be correct 2113 times and wrong 6084 times. That's a lot of error, but no other guess will give you less error because "Protestant" is the most likely outcome. In this sense, the mode is a good measure of the "typical" outcome for nominal data. Besides using the frequency table, you can also get the mode for this variable using the Mode command in R: #Get the modal outcome, Note the upper-case M Mode(anes20$denom) attr(,"freq") Oops, this result doesn't look quite right. That's because many R functions don't know what to do with missing data and will report [1] NA instead of the information of interest. Since there are 83 missing cases for this variable (see the frequency), we get the error message. This is fixed in most cases by adding na.rm=T, which tells R to remove the NAs from the analysis, to the command line. 
#Add "na.rm=T" to account for missing data Mode(anes20$denom, na.rm=T) [1] Protestant [1] 2113 8 Levels: Protestant Catholic OtherChristian Jewish OtherRel ... Nothing This confirms that "Protestant" is the modal category, with 2113 respondents, and also lists all of the levels. In many cases, such as this, I prefer looking at the frequency table for the mode because it provides a more complete picture of the variable, showing, for instance, that while Protestant is the modal category, "Catholic" is a very close second. While the mode is the most suitable measure of central tendency for nominal-level data, it can be used with ordinal and interval-level data. Let's look at two variables we've used in earlier chapters, spending preferences on programs for the poor (anes20$V201320x), and state abortion restrictions (states20$abortion_laws): #Mode for spending on aid to the poor Mode(anes20$V201320x, na.rm=T) [1] 3. Kept the same 5 Levels: 1. Increased a lot 2. Increased a little ... 5. Decreasaed a lot #Mode for # Abortion restrictions Mode(states20$abortion_laws) Here, we see that the modal outcome for spending preferences is "Kept the same", with 3213 respondents, and the mode for abortion regulations is 10, which occurred twelve times. While the mode does provide another piece of information for these variables, there are better measures of central tendency when working with ordinal or interval/ratio data. However, the mode is the preferred measure of central tendency for nominal data. The median cuts the sample in half: it is the value of the outcome associated with the observation at the middle of the distribution when cases are listed in order of magnitude. Because the median is found by ordering observations from the lowest to the highest value, the median is not an appropriate measure of central tendency for nominal variables. If the cases for an ordinal or numeric variable are listed in order of magnitude, we can look for the point that cuts the sample exactly in half. The value associated with that observation is the median. For instance, if we had 5 observations ranked from lowest to highest, the middle observation would be the third one (two above it and two below it), and the median would be the value of the outcome associated with that observation. Just to be clear, the median in this example would not be 3, but the value of the outcome associated with the third observation. The median is well-suited for ordinal variables but can also provide useful information regarding numeric variables. Here is a useful formula for finding the middle observation: \[\text{Middle Observation}={\frac{n+1}{2}}\] where n=number of cases If n is an odd number, then the middle is a single data point. If n is an even number, then the middle is between two data points and we use the mid-point between those two values as the median. Figure 5.1 illustrates how to find the median, using hypothetical data for a small sample of cases. Figure 5.1: Finding the Median with Odd and Even Numbers of Cases In the first row of data, the sixth observation perfectly splits the sample, with five observations above it and five below. The value of the sixth observation, 15, is the median. In the second row of data, there are an even number of cases (10), so the middle of the distribution is between the fifth and sixth observations, with values of 14 and 15, respectively. The median is the mid-point between these values, 14.5. 
Now, let's look at this with some real-world data, using the abortion laws variable from the states20 data set. There are 50 observations, so the mid-point is between the 25th and 26th observations ((50+1)/2). Obviously, we can get R to just show us the median, but it is instructive at this point to find it by "eyeballing" the data. All fifty observations for states20$abortion_laws are listed below in order of magnitude. The value associated with the 25th observation is 9, as is the value associated with the 26th observation. Since these are both the same, the median outcome is 9. #Use "sort" to view outcomes in order of magnitude sort(states20$abortion_laws) [1] 1 2 3 3 4 4 4 5 5 5 5 5 6 6 6 6 6 6 7 7 7 8 8 8 9 [26] 9 9 9 9 9 9 10 10 10 10 10 10 10 10 10 10 10 10 11 11 12 12 12 13 13 We could also get this information more easily: #Get the median value median(states20$abortion_laws) To illustrate the importance of listing the outcomes in order of magnitude, check the list of outcomes for states20$abortion_law when the data are listed alphabetically: #list abortion_laws without sorting from lowest to highest states20$abortion_laws [1] 8 8 11 10 5 2 4 6 10 9 5 11 5 12 9 13 10 10 4 6 5 10 9 9 12 [26] 7 10 7 3 7 6 4 9 9 10 13 3 9 6 10 10 10 12 10 1 8 5 6 10 6 If you took the mid-point between the 25th (12) and 26th (7) observations, you would report a median of 9.5. While this is close to 9, by coincidence, it is incorrect. Let's turn now to finding the median value for spending preferences on programs for the poor (anes20$V201320x). Since there are over 8000 observations for this variable, it is not practical to list them all in order, but we can do the same sort of thing using a frequency table. Here, I use the Freq command to get a frequency table because I am particularly interested in the cumulative percentages. # Use 'Freq' to get cumulative % Freq(anes20$V201320x) level freq perc cumfreq cumperc 1 1. Increased a lot 2'560 31.1% 2'560 31.1% 2 2. Increased a little 1'617 19.7% 4'177 50.8% 3 3. Kept the same 3'213 39.1% 7'390 89.8% 4 4. Decreased a little 446 5.4% 7'836 95.3% 5 5. Decreasaed a lot 389 4.7% 8'225 100.0% What we want to do now is use the cumulative frequency to identify the category associated with the 50th percentile. In this case, you can see that the cumulative frequency for the category "Increased a little" is 50.8%, meaning the middle observation (50th percentile) is in this category, so this is the median outcome. We can check this with the median command in R. One little quirk here is that R requires numeric data to calculate the median. That's fine. All we have to do is tell R to treat the values as if they were numeric, replacing the five levels with values 1,2,3,4 and 5. #Get the median, treating the variable as numeric median(as.numeric(anes20$V201320x), na.rm=T) We get confirmation here that the median outcome is the second category, "Increased a little". The Mean is usually represented as \(\bar{x}\) and is also referred to as the arithmetic average, or the expected value of the variable, and is a good measure of the typical outcome for numeric data. The term "typical" outcome is interesting in this context because the mean value of a variable may not exist as an actual outcome. The best way to think about the mean as a measure of typicality is somewhat similar to the discussion of the mode as your best guess if you want to minimize error in predicting the outcomes in a nominal variable. 
The difference here is that since the mean is used with numeric data, we don't judge accuracy in a dichotomous right/wrong fashion; instead, we can judge accuracy in terms of how close the mean is to each outcome. For numeric data, the mean is closer overall to the actual values than any other guess you could make. In this sense, the mean represents the typical outcome better than any other statistic. The formula for the mean is \[\bar{x}=\frac{\sum_{i=1}^n x_i}{n}\] This reads: the sum of the values of all observations (the numerical outcomes) of x, divided by the total number of valid observations (observations with real values). This formula illustrates an important way in which the mean is different from both the median and the mode: it is based on information from all of the values of x, not just the middle value (the median) or the value that occurs most often (the mode). This makes the mean a more encompassing statistic than either the median or the mode. Using this formula, in the case of the number of state abortion restrictions, we could calculate the mean something like this: \[\frac{1+2+3+3+4+4+\cdot\cdot\cdot\cdot+11+12+12+12+13+13}{50}\] We don't actually have to add up all fifty outcomes manually to get the numerator. Instead, we can tell R to sum up all of the values of x: #Sum all of the values of 'abortion_laws' sum(states20$abortion_laws) [1] 394 So the numerator is 394. We divide through by the number of cases (50) to get the mean: #Divide the sum or all outcomes by the number of cases [1] 7.88 \[\bar{x}=\frac{394}{50} = 7.88\] Of course, it is simpler, though not quite as instructive, to just have R tell us the mean: #Tell R to get the mean value mean(states20$abortion_laws) A very important characteristic of the mean is that it is the point at which the weight of the values is perfectly balanced on each side. It is helpful to think of the outcomes numeric variables distributed according to their weight (value) on a plank resting on a fulcrum, and that fulcrum is placed at the mean of the distribution, the point at which both sides are perfectly balanced. If the fulcrum is placed at the mean, the plank will not tip to either side, but if it is placed at some other point, then the weight will not be evenly distributed and the plank will tip to one side or the other. Figure 5.2: The Mean Perfectly Balances a Distribution *Source: https://stats.stackexchange.com/questions/200282/explaining-mean-median-mode-in-laymans-terms A really important concept here is the deviation of observations of x from the mean of x. Mathematically, this is represented as \(x_i-\bar{x}\). So, for instance, the deviation from the mean number of abortion restrictions (7.88) for a state like Alabama, with 8 abortion restrictions on the books is .12 units (8-7.88), and for a state like Colorado, with 2 restrictions on the books, the deviation is -5.88 (2-7.88). For any numeric variable, the distance between the mean and all values greater than the mean is perfectly offset by the distances between the mean and all values lower than the mean. The distribution is perfectly balanced around the mean. This means that summing up all of the deviations from the mean (\(\sum_{i=1}^n{x_i-\bar{x}}\)) is always equal to zero. This point is important for understanding what the mean represents, but it is also important for many other statistics that are based on the mean. We can check this balance property in R. First, we express each observation as a deviation from the means. 
##Subtract the mean of x from each value of x dev_mean=(states20$abortion_laws- mean(states20$abortion_laws)) #Print the mean deviations dev_mean [1] 0.12 0.12 3.12 2.12 -2.88 -5.88 -3.88 -1.88 2.12 1.12 -2.88 3.12 [13] -2.88 4.12 1.12 5.12 2.12 2.12 -3.88 -1.88 -2.88 2.12 1.12 1.12 [25] 4.12 -0.88 2.12 -0.88 -4.88 -0.88 -1.88 -3.88 1.12 1.12 2.12 5.12 [37] -4.88 1.12 -1.88 2.12 2.12 2.12 4.12 2.12 -6.88 0.12 -2.88 -1.88 [49] 2.12 -1.88 As expected, some deviations from the mean are positive, and some are negative. Now, if we take the sum of all deviations, we get: #Sum the deviations of x from the mean of x sum(dev_mean) [1] 5.329071e-15 That's a funny looking number. Because of the length of the resulting number, the sum of the deviations from the mean is reported using scientific notation. In this case, the scientific notation is telling us we need to move the decimal point 15 places to the left, resulting in .00000000000000532907. Not quite exactly 0, due to rounding, but essentially 0.14 Note, that the median does not share this "balance" property; if we placed a fulcrum at the median, the distribution would tip over because the positive deviations do not balance perfectly by the negative deviations, as shown here: #Sum the deviations of x from the median of x sum(states20$abortion_laws- median(states20$abortion_laws)) [1] -56 The negative deviations outweigh the positive by a value of 56. Here's an interesting bit of information that will be useful in the future: for dichotomous variables with all values are either 0 or 1, the mean of the variable is the proportion of cases in category 1. Let's take an example from the anes20 survey. Suppose you are interested in the political behavior of people who have served in the military compared to those who have not. The 2020 ANES survey asks a question about military service, and the responses are in anes20$V201516. Below, I shorten the label of the second category so the frequency contents can be printed together. # Change levels to fit better levels(anes20$V201516)<-c("1. Now serving on active duty", "2. Previously served,not now on active duty", "3. Have never served on active duty") freq(anes20$V201516, plot=F) PRE: Armed forces active duty Frequency Percent Valid Percent 1. Now serving on active duty 74 0.8937 0.8966 2. Previously served,not now on active duty 868 10.4831 10.5174 3. Have never served on active duty 7311 88.2971 88.5860 NA's 27 0.3261 Total 8280 100.0000 100.0000 Here we see that just less than 1% are currently on active duty, about 10.5% have previously served, and the overwhelming majority (88.6%) have never served. If we are interested in a comparison between those who have and have not served, we need to combine the first two categories into a single category. You already know how to do this from the material covered in Chapter 4, but let's have another go at it here. #Create new variable anes20$service<-anes20$V201516 #Change category labels to reflect military service levels(anes20$service)<-c("Yes, Served","Yes, Served", "No Service") #Check the changes freq(anes20$service, plot=F) Yes, Served 942 11.3768 11.41 No Service 7311 88.2971 88.59 NA's 27 0.3261 Total 8280 100.0000 100.00 If we try to get the mean of this variable, we would get an error because anes20$service is a factor variable (try it if you want to). So, we need to convert this into a numeric variable, scored 0 for those with no military service and 1 for those who have served in the military. 
Recall that we did the same thing when creating dichotomous variables for the LGBTQ index in Chapter 4. Typically, when creating numeric dichotomous variables like this, you should use 0 to signal that the corresponding observations do not have the characteristic identified in the variable name and 1 to signal that they have that characteristic. #Creating a numeric (0,1) version of a categorical variable. anes20$service.n<-as.numeric(anes20$service=='Yes, Served') Here, we are telling R to create a new object, anes20$service.n, as a numeric variable using the original factor variable anes20$service and assigning a 1 for all "Yes, Served" outcomes and (by default) a 0 for all other valid outcomes ("No Service" answers, in this case). We are also telling R to treat this new variable as a numeric variable, which is why we use the ".n" extension. This is not required, but these types of extensions are helpful when trying to remember which similarly named variable is which. Now, let's get a frequency for the new variable: #check the new indicator variable freq(anes20$service.n, plot=F) anes20$service.n 0 7311 88.2971 88.59 1 942 11.3768 11.41 NA's 27 0.3261 Total 8280 100.0000 100.00 Now we have a dichotomous numeric variable that distinguishes between those who have (11.4%) and have not (88.6%) served in the military. We can use the formula presented earlier to calculate the mean of this variable. Since the value 0 occurred 7311 times, and the value 1 occurred 942 times, the mean is equal to \[\bar{x}=\frac{(0*7311)+(1*942)}{8253}=\frac{942}{8253}=.1141\] We can verify this with R: #Get the mean of the indicator variable mean(anes20$service.n, na.rm=T) The mean of this dichotomous variable, scored 0 and 1, is the proportion in category 1. This should always be the case with similar dichotomous variables. This may not seem very intuitive to you at this point, but it will be very useful to understand it later on. One thing you might be questioning at this point is how we can treat what seems like a nominal variable—whether people have or have not served in the military–as a numeric variable. The way I like to think about this is that the variable measures the presence (1) or absence (0) of a characteristic. In cases like this, you can think of the 0 value as a genuine zero point, an important characteristic of most numeric variables. In other words, 0 means none of the characteristic being measured. Since the value 1 indicates having a unit of the characteristic, we can treat this variable as numeric. However, it is important to always bear in mind what the variable represents and that there are only two outcomes, 0 and 1. This becomes especially important in later chapters when we discuss using these types of variables in regression analysis. As a general rule, the mean is most appropriate for interval/ratio level variables, and the median is most appropriate for ordinal variables. However, there are instances when the median might be a better measure of central tendency for numeric data, almost always because of skewness in the data. Skewness occurs in part as a consequence of one of the virtues of the mean. Because the mean takes into account the values of all observations, whereas the median does not actually emphasize "values" but uses the ranking of observations, the mean can be influenced by extreme values that "pull" it away from the middle of the distribution. 
Consider the following simple data example, using five observations from a hypothetical variable:
0, 1, 2, 2, 5 \(\rightarrow\) median=2, mean=2.0
In this case, the mean and the median are the same. Now, watch what happens if we change the value 5 to 18, a value that is substantially higher than the rest of the values:
0, 1, 2, 2, 18 \(\rightarrow\) median=2, mean=4.6
The median is completely unaffected by the extreme value, but the mean is. The median and some other statistics are what we call robust statistics because their value is not affected by the weight of extreme outcomes. In some cases, the impact of extreme values is so great that it is better to use the median as a measure of central tendency when discussing numeric variables. The data are still balanced around the mean, and the mean is still your "best guess" (smallest error in prediction), but in terms of looking for an outcome that represents the general tendency of the data, it may not be the best option in some situations.
Having said this, it has been my experience that once students hear that the median might be preferred over the mean when there is a significant difference between them, they tend to default to the median whenever it is at all different from the mean. There will almost always be some difference between the mean and the median and, hence, some skewness to the data, so you should not automatically default to the median. There are no official cutoff points, so it is a judgment call. My best advice is that the mean should be your first choice. If, however, there is evidence that the mean is heavily influenced by extreme observations (see below), then you should also use the median. Of course, it doesn't hurt to present both statistics, as more information is usually good information.
In general, when the mean is significantly higher than the median, this is a sign that the distribution of values is right (or positively) skewed, due to the influence of extreme values at the high end of the scale. This means that the extreme values are pulling the mean away from the middle of the observations. When the mean is significantly lower than the median, this indicates a left (or negatively) skewed distribution, due to some extreme values at the low end of the scale. When the mean and median are the same, or nearly the same, this could indicate a bell-shaped distribution, but there are other possibilities as well. When the mean and median are the same, there is no skew. Figure 5.3 provides caricatures of what these patterns might look like. The first graph shows that most data are concentrated at the low (left) end of the x-axis, with a few very extreme observations at the high (right) end of the axis pulling the mean out from the middle of the distribution. This is a positive skew. The second graph shows just the opposite pattern: most data at the high end of the x-axis, with a few extreme values at the low (left) end dragging the mean to the left. This is a negatively skewed distribution.
Figure 5.3: Illustrations of Different Levels of Skewness
Finally, the third graph shows a perfectly balanced, bell-shaped distribution, with the mean and the median equal to each other and situated right in the middle of the distribution. There is no skewness in this graph. This bell-shaped distribution is not the only type of distribution with no skewness,15 but it is an important type of distribution that we will take up later. Let's see what this looks like when using real-world data, starting with abortion laws in the states.
First, we take another look at the mean and median, which we produced in an earlier section of this chapter, and then we can see a density plot for the number of restrictive abortion laws in the states.
#get mean and median for 'abortion_laws'
mean(states20$abortion_laws)
[1] 7.88
median(states20$abortion_laws)
[1] 9
The mean is a bit less than the median, so we might expect to see signs of negative (left) skewness in the density plot.
plot(density(states20$abortion_laws),
     xlab="# Abortion Restrictions in State Law",
     main="")
#Insert vertical lines for mean and median
abline(v=mean(states20$abortion_laws))
abline(v=median(states20$abortion_laws), lty=2) #use dashed line
Here we see the difference between the mean (solid line) and the median (dashed line) in the context of the full distribution of the variable, and the graph shows a distribution with a bit of negative skew. The skewness is not severe, but it is visible to the naked eye.
An R code digression. The density plot above includes the addition of two vertical lines, one for the mean and one for the median. To add these lines, I used the abline command. This command allows you to add lines to existing graphs. In this case, I want to add two vertical lines, so I use v= to designate where to put the lines. I could have put in the numeric values of the mean and median (v=7.88 and v=9), but I chose to have R calculate the mean and median and insert the results in the graph (v=mean(states20$abortion_laws) and v=median(states20$abortion_laws)). Either way would get the same result. Also, note that for the median, I added lty=2 to get R to use "line type 2", which is a dashed line. The default line type is a solid line, which is used for the mean.
Now, let's take a look at another distribution, this time for a variable we have not looked at before, the percent of the state population who are foreign-born (states20$fb). This is an increasingly important population characteristic, with implications for a number of political and social outcomes.
mean(states20$fb)
[1] 7.042
median(states20$fb)
Here, we see a bit more evidence of a skewed distribution. In absolute terms, the difference between the mean and the median (2.09) is not much greater than in the first example (1.12), but the density plot (below) looks somewhat more like a skewed distribution than in the first example. In this case, the distribution is positively skewed, with a few relatively high values pulling the mean out from the middle of the distribution.
#Density plot for % foreign-born
plot(density(states20$fb), xlab="Percent foreign-born",
     main="")
#Add lines for mean and median
abline(v=mean(states20$fb))
abline(v=median(states20$fb), lty=2) #Use dashed line
Finally, as a counterpoint to these skewed distributions, let's take a look at the distribution of percent of the two-party vote for Joe Biden in the 2020 election:
mean(states20$d2pty20)
[1] 48.8044
median(states20$d2pty20)
[1] 49.725
Here, there is very little difference between the mean and the median, and there appears to be almost no skewness in the shape of the density plot (below).
plot(density(states20$d2pty20),
     xlab="% of Two-Party Vote for Biden",
     main="")
abline(v=mean(states20$d2pty20))
abline(v=median(states20$d2pty20), lty=2)
What's interesting here is that the distance between the mean and median for Biden's vote share (.92) is not terribly different than the distance between the mean and the median for the number of abortion laws (1.12). Yet there is not a hint of skewness in the distribution of votes, while there is clearly some negative skew to abortion laws.
This, of course, is due to the difference in scale between the two variables. Abortion laws range from 1 to 13, while Biden's vote share ranges from 27.5 to 68.3, so relative to the scale of the variable, the distance between the mean and median is much greater for abortion laws in the states than it is for Biden's share of the two-party vote in the states. This illustrates the real value of examining a numeric variable's distribution with a histogram or density plot alongside statistics like the mean and median. Graphs like these provide context for those statistics.
Fortunately, there is a skewness statistic that summarizes the extent of positive or negative skew in the data. There are several different methods for calculating skewness. The one used in R is based on the direction and magnitude of deviations from the mean, relative to the amount of variation in the data.16 The nice thing about the skewness statistic is that there are some general guidelines for evaluating the direction and seriousness of skewness: a value of 0 indicates no skew to the data; any negative value of skewness indicates a negative skew, and positive values indicate positive skew; values lower than -2 and higher than 2 indicate extreme skewness that could pose problems for operations that assume a relatively normal distribution. Let's look at the skewness statistics for the three variables discussed above.
#Get skewness for three variables (upper-case S in the "Skew" command)
Skew(states20$abortion_laws)
[1] -0.3401058
Skew(states20$fb)
[1] 1.202639
Skew(states20$d2pty20)
[1] 0.006792623
These skewness statistics make a lot of sense, given the earlier discussion of these three variables. There is a little bit of negative skew (-.34) to the distribution of abortion restrictions in the states, a more pronounced level of (positive) skewness to the distribution of the foreign-born population, and no real skewness in the distribution of Biden support in the states (.007). None of these results are anywhere near the -2,+2 cut-off points discussed earlier, so these distributions should not pose any problems for any analysis that uses these variables.
Quite often, you will be using variables whose distributions look a lot like the three shown above, maybe a bit skewed, but nothing too outrageous. Occasionally, though, you come across variables with severe skewness. One such "problem" variable from the 2020 ANES is anes20$V201628, which measures how many guns respondents reported owning. We get the mean and median for this variable below, and we can see that the mean (1.47) is greater than the median (0). We can glean a couple of things from this. First, since the median is 0, we know that at least half the respondents own no guns (if you look at a frequency table for this variable, you'll see that 64.5% do not own guns). Second, because the mean is greater than the median, we can tell that the mean is being pulled from the middle of the distribution by some extreme observations at the high end of the x-axis.
#Mean and median for gun ownership variable
mean(anes20$V201628, na.rm=T)
median(anes20$V201628, na.rm=T)
We can get a better sense of the distribution by looking at a density plot for this variable.
#Density plot for gun ownership variable
plot(density(anes20$V201628, na.rm=T),
     xlab="Number of Guns Owned by Respondent",
     main="")
This plot shows an extreme level of positive skewness. In fact, this plot is a good illustration of how extreme skewness can get.
The skewness statistic reported below (8.33) confirms what is apparent from the density plot, an extremely high level of skewness. In fact, this variable is so skewed that it would not make sense to use it in its current state. Instead, you might want to recode it into an ordinal variable (0 guns, 1 gun, 2-5, guns, more than five guns), or perhaps take the truncate the distribution at 20+ guns. Either way, a variable with a skewness problem like this is going to cause problems with a number of statistics. Skew(anes20$V201628, na.rm=T) In reality, most of the variables you look at will not be this severely skewed. However, had you only looked at the mean and median of gun ownership, you might not have had any idea how skewed this variable was because the difference between the mean and median does not seem that great. This brings home a very important point: use as much information as you can to learn about the data. Even if you do not end up using the information when you present results, you should use it to make sure you understand what's going on with the data. Note that in the examples provided above, I used a solid vertical line for the mean and a dashed vertical line for the median. This was explained in the text, but there was no identifying information provided in the graphs themselves. When presenting information like this in a more formal setting (e.g., on the job, for an assignment, or for a term paper), it is usually expected that you provide a legend with your graphs that identifies what the lines represent (see figure 5.3 as an example). This can be a bit complex, especially if there are multiple lines in your graph. For our current purposes, however, it is not terribly difficult to add a legend. We're going to add the following line of code to the commands used to generate the plot for states20$fb to add a legend to the plot: #Create a legend to be put in the top-right corner, identifying #mean and median outcomes with solid and dashed lines, respectively legend("topright", legend=c("Mean", "Median"), lty=1:2) The first bit ("topright") is telling R where to place the legend in the graph. In this case, I specified topright because I know that is where there is empty space in the graph. You can specify top, bottom, left, right, topright, topleft, bottomright, or bottomleft, depending on where the legend fits the best. The second piece of information, legend=c("Mean", "Median"), provides names for the two objects we are identifying, and the last part, lty=1:2, tells R which line types to use in the legend (the same as in the graph). Let's add this to the command lines for the density plot for states20$fb and see what we get. abline(v=median(states20$fb), lty=2) #Add the legend legend("topright", legend=c("Mean", "Median"), lty=1:2) This extra bit of information fits nicely in the graph and aids in interpreting the pattern in the data. Sometimes, it is hard to get the legend to fit well in a graph space. When this happens, you need to tinker a bit to get the legend to fit. There are some fairly complicated ways to achieve this, but I favor trying a couple of simple things first: try different locations for the legend, reduce the number of words you use to name the lines, or add the cex command to reduce the overall size of the legend. When using cex, you might start with cex=.8, which will reduce the legend to 80% of its original size, and then change the value as needed to make the legend fit. 
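For instance, assuming the density plot of states20$fb is the active graph, a smaller version of the legend used above could be added with something like this:
#A more compact legend, scaled to 80% of the default size
legend("topright", legend=c("Mean", "Median"), lty=1:2, cex=.8)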
The next chapter builds on the discussion by focusing on something we've dealt with tangentially in this chapter: measuring variability in the data. Measures of central tendency are important descriptive tools, but they also form the building blocks for many other statistics, including measures of dispersion, which we take up in Chapter 6. Following that, we spend a bit of time on the more abstract topics of probability and statistical inference. These things might sound a bit intimidating to you, but you will be ready for them when we get there.
As usual, when making calculations, show the process you used.
Let's return to the list of variables used for exercises in Chapters 1 & 3. Identify what you think is the most appropriate measure of central tendency for each of these variables. Choose just one measure for each variable.
Course letter grade
Voter turnout rate (votes cast/eligible voters)
Marital status (Married, divorced, single, etc.)
Occupation (Professor, cook, mechanic, etc.)
Total number of votes cast in an election
Years of education
Subjective social class (Poor, working class, middle class, etc.)
Racial or ethnic group identification
Below is a list of voter turnout rates in twelve Wisconsin Counties during the 2020 presidential election. Calculate the mean, median, and mode for the level of voter turnout in these counties. Which of these is the most appropriate measure of central tendency for this variable? Why? Based on the information you have here, is this variable skewed in either direction?
Wisconsin County   % Turnout
Dane               87
Forrest            71
Grant              63
Iron               82
Kenosha            71
Marinette          71
Portage            74
Taylor             70
As usual, show the R commands and output you used to answer these questions. Use R to report all measures of central tendency that are appropriate for each of the following variables: the feeling thermometer rating for the National Rifle Association (anes20$V202178), Latinos as a percent of state populations (states20$latino), party identification (anes20$V201231x), and region of the country where ANES survey respondents live (anes20$V203003). Where appropriate, also discuss skewness.
Using one of the numeric variables listed in the previous question, create a density plot that includes vertical lines showing the mean and median outcomes, and add a legend. Describe what you see in the plot.
If you are interested in how the mean came to be used as a measure of "representativeness", here is a short bit of intellectual history: https://priceonomics.com/how-the-average-triumphed-over-the-median/ ↩︎
For instance, a U-shaped distribution, or any number of multi-modal distributions, could have no skewness.↩︎
\(\text{Skewness}=\frac{\sum_{i=1}^{n}(x_{i}-\bar{x})^3}{nS^3}\), where n = number of cases and S = standard deviation↩︎
An intelligent solar energy-harvesting system for wireless sensor networks
Yin Li1 & Ronghua Shi1
EURASIP Journal on Wireless Communications and Networking, volume 2015, Article number: 179 (2015)
An intelligent solar energy-harvesting system for supplying long-term and stable power is proposed. The system is comprised of a solar panel, a lithium battery, and a control circuit. Hardware, instead of software, is used for charge management of the lithium battery, which improves the reliability and stability of the system. The system prefers to use solar energy whenever the sunshine is sufficient, and the lithium battery serves as a complementary power supply under conditions such as overcast skies, rain, and night. The system adopts a maximum power point tracking (MPPT) circuit to take full advantage of solar energy, and it ensures an extremely long battery life with an appropriate charging method, which reduces the frequency of the battery charge-discharge cycle. This system can be implemented in low-power equipment and is especially suitable for outdoor wireless sensor nodes in the Internet of Things (IOT).
The wireless sensor network (WSN) is the second largest network in the world after the Internet, and it ranks first among the next ten emerging technologies. Currently, it is used widely in the Internet of Things (IOT), mainly for environmental parameter monitoring in various production settings, such as greenhouses [1, 2], water quality monitoring [3, 4], and so on. Conventionally, disposable batteries are used to power WSN nodes, and researchers have made efforts to conserve the finite battery capacity through power control, routing algorithms, and topology optimization [5–7]. On the other hand, reducing the power consumption of the nodes always sacrifices performance, such as computing capability. The power density of the most up-to-date battery technology cannot match the needs of most WSN applications for long lifetime and small form factor, which limits the use of WSN due to the need for large batteries. There is also little prospect that much better batteries for small devices will become available in the next few years. Energy harvesting and management may be the most convenient way to make WSN autonomous and enable widespread use of these systems in many applications [8].
The state-of-the-art energy-storage techniques for energy-harvesting systems in sustainable wireless sensor nodes can be classified into two technologies, i.e., supercapacitors and rechargeable batteries [9]. These two categories have their own advantages and disadvantages, involving energy-storage density, lifetime, discharging, leakage, size, and so on [10]. Since supercapacitors have significantly lower power density and higher leakage overhead than rechargeable batteries [11], which makes them impractical for small-package WSN nodes, we employ an energy-harvesting system using a lithium battery as the storage. Electrochemical theory tells us that the aging of a lithium battery is greatly influenced by its state of charge (SOC) [12–14]. Data from [15] show that lithium batteries at high SOC are more vulnerable to aging, and that cycling the SOC of the batteries extends their lifetimes. Therefore, to extend battery lifetime, the battery must not be kept at high SOC all the time, and approaches such as recharging the battery only after its voltage drops below a specific level should be taken [16].
The charging managements are usually employed by microcontrollers for the flexibility of software designing and implementation [9, 17], but researchers [18] have proven that battery-charging controlled by software may have some problems in which charging logic could not work, and that the battery could not be charged under sufficient sunlight. In our system, the charging management is implemented by hardware instead of the codes running within the microcontroller for consideration of reliability. This paper focuses on an intelligent solar energy-harvesting (ISEH) system based on maximum power point tracking (MPPT) for wireless sensor nodes used in IOT, which prefers to use the solar power and takes the lithium battery as a supplementary under the condition of inadequate illumination. To prolong the lithium battery life, an intelligent circuit using RS triggers is proposed, which makes the lithium battery charge only when the battery voltage is lower than a specific value. The circuit can be divided into two main functional parts, i.e., the charging sub-circuit and the control sub-circuit. The sub-circuits are merged into one printed circuit board (PCB), and the whole system has been designed, built, and tested. Experimental results show that the system can work stably and quite fit the requirements of pre-designing. The contributions of this paper for the ISEH systems are as follows: Charging control of the lithium battery is implemented dexterously by RS triggers, which supports a reliable and stable operational status for the system contrasting the approach executed by software. We combine the advantages of other solar-harvesting systems, which are dominated by solar power using a lithium battery as an energy storage and only when the battery voltage drops below a specific level before charging it. This architecture will help extend the life of the battery and the system, and avoid wasting the solar energy. The construction of this paper is organized as follows. In Sec. 1, we give an introduction of our study. A brief review on the related work is outlined in Sec. 2. Then, we present the problems in solar energy-harvesting systems and the proposed system construction in Sec. 3. In Sec. 4, the corresponding calculations of some important parameters are shown. In Sec. 5, we provide the functional module implementation of the system; simulation and experimental results are also included in this section. In Sec. 6, a conclusion of this paper is made. Self-sustainable WSN systems are on the verge of being a broad requirement in many fields [19, 20], since most of WSN applications are difficult to maintain after their deployments. Researchers make great efforts to find out renewable energy resources from the environment for WSN usage, such as solar power, wind, vibration, heat, and RF [9]. How to collect and store energy effectively from the environment has been taken more and more seriously by researchers. Sharma et al. [21] studied a sensor node with an energy-harvesting source, and a buffer was used to store the generated energy. The sensor node periodically sensed a random field and generated packets. Only when the energy was available, the packets would be transmitted; otherwise, they were stored and waited upon. They also exploited throughput optimization, i.e., to obtain energy management policies for the largest possible data rate and the minimal mean delay in the queue. In order to increase the lifespan of WSN with powering-up methods, Ramasur et al. 
[22] took some efforts in a wind energy harvester (WEH) model. Their WEH consisted of a wind generator and a power management unit to store and condition the generated energy. The results showed that their aero-elastic flutter generator could produce more power compared with that of the other small-scale wind generators; however, the circuit was not equipped with maximum power point tracking (MPPT) resulting in poor efficiencies of the WEH. Besides harvesting the wind power, taking full advantage of the solar power may be more convenient in WSN usage. Although solar power is time- and season-dependent, it remains as one of the best choices by adapting a power management mechanism [23]. Yi et al. [24] put forward a wireless sensor node design based on a solar energy-power supply. They gave full consideration to energy-saving principles by adapting low-power consumption devices in every module and collected the solar energy to provide lasting power for the system. Another low-power solar energy-harvesting system for WSN was put forward in [25], where Naveen et al. employed the system in an intelligent building. They adapted a solar energy harvester instead of an alkaline battery for the sensor nodes. They used a number of solar cells connected in series and parallel to each other to scavenge energy, and they applied a set of ultracapacitors to store up the energy. As a backup energy source, alkaline batteries were connected along with the capacitors. A solar-powered sensor module using low-cost capacitors as storage buffers was investigated in [26], and only capacitors were adapted. The advantage is that the energy-charging time is shortened within a second, although the module cannot work without light illumination. A battery-supercapacitor hybrid energy-storage module was proposed in [17], and an embedded processor was used to control the charging of the battery. Supercapacitors charge the battery only when their saved energy exceeds the peak requirements of processors running at full speed and ignores estimating the SOC of the battery. Taneja [27] et al. brought forward a kind of micro photovoltaic energy system. Alberola et al. [28] put forward another solar power system consisting of a supercapacitor and a lithium battery. To provide an uninterrupted power supply, the literature [29] exploited two battery groups for energy storage. In charging lithium batteries, some literatures have been presented [30–32]. The common method is that whenever the voltage of the solar panel is high enough, charging starts. According to the research of Ecker et al. [15], Takahashi et al. [33], and Liu et al. [34], the lithium battery capacity could fade rapidly when it is in high SOC for a long term; thereby, Jiang et al. [16] and Li et al. [35] have proposed a solution that the system charges the lithium battery only when the voltage is lower than a specific value. This strategy can solve the problem of charging too often and the lithium battery being in high SOC all the time; nevertheless, it may waste most of the solar power in that this system applied the lithium battery as the primary power source no matter how capable the solar panel was to supply the system or not. In this section, to begin with, we analyze some shortages of the existing solar energy-harvesting systems and then present the methodology and the hardware components of our system. Problems to be considered Energy harvesting is one of the most promising technologies toward the goal of perpetual operation of WSN. 
Recent developments have allowed renewable energy sources such as solar or wind power to be used for wireless sensor nodes. Concerning the usage of solar energy, a lot of scholars have conducted considerable research works, while there are still some aspects which could take more optimization. Making full use of the solar energy is very important. The system employs the solar power as the preferential power source as long as the sunshine is available [27–29], rather than that which takes the rechargeable battery as the primary one and applies the solar energy only for charging. To extend the rechargeable battery life as far as possible and keep the high performance of the battery, the charging process of the battery should be taken into control, avoiding too many charge-discharge cycles or the battery will always be in a high-charge state. Designing a simple and ingenious control circuit can reduce the complexity of system development, decrease the power consumption, and increase the stability and reliability of the system. According to the survey of the related work, many researchers provide better solutions to the first two points mentioned above; however, the last point on how to improve the stability and reliability of the circuit needs to be researched further. Common energy management and charge control are always achieved by software, while in cases of extreme discharges, the microcontroller itself may be powered off and cannot be restarted even if there is sufficient illumination, leading to the question, how could the battery be charged? In this paper, we focus on application of hardware realization of charge management, which can greatly improve the robustness of the solar-harvesting system. The ISEH system is physically composed of a solar panel, a lithium battery, and a control circuit. The control circuit comprises a solar MPPT module, a charging sub-circuit, an over-discharged protection sub-circuit, and a boost DC/DC module for the lithium battery. The system schematic diagram is shown in Fig. 1. Scheme for the ISEH System As shown in Fig. 1, the system has three input branches, i.e., a solar panel, a lithium battery, and a mini-USB interface. It also has an output branch offered by an ordinary USB interface. In this paper, we use "bimodule" to emphasize that the output of the system may originate from the solar panel or the lithium battery, respectively. The standard mini-USB interface is a reservation to charge the lithium battery by an external power adapter if necessary. The functional components of the system are demonstrated as follows: Solar power supply We propose a new solution for supplying the power to the sensor nodes, contrasting with that in [28] and [31]. In those two papers, the task of the solar power is only for charging the lithium battery and the super capacitor by a DC/DC converter, which may cause part of the solar energy to be wasted. While in our work, the solar branch has the priority to provide electrical power to make full use of its energy unless overcast, rainy days, or night etc. comes. The power generated via this branch flows to the MPPT module first. The MPPT guarantees that the system can utilize the almost peak power produced by the solar panel, and the principle of MPPT are shown in Fig. 2. Power, voltage and current of the solar panel When the solar panel meets a light load or comes very close to opening a circuit, the output voltage may approach the highest level. The power and the current are simultaneously very small. 
As the load becomes heavier, the output voltage of the solar panel decreases, the current increases more rapidly, and the resulting output power first rises gradually, reaches its peak value, and then gradually declines. The MPPT circuit helps the output voltage of the solar panel stay around the peak power area; therefore, the system can use the panel with maximum efficiency [36, 37]. Lithium battery power supply As shown in Fig. 1, a mechanical single-pole single-throw (SPST) switch (3) is located after the lithium battery, which prevents the lithium battery from wasting energy in an open circuit and keeps it safe during transportation or storage. After system deployment, the switch should be switched on manually. If the power provided by the solar panel cannot satisfy the load, the voltage comparator (2) drives the switch (2) to connect to port B, which disconnects the solar branch and connects the lithium branch to the system output. A protection circuit is placed right after the battery to avoid overdischarge, followed by a DC/DC boost circuit to raise the voltage from the battery's nominal voltage to the system output voltage. Most of the values used in this paper are listed in Table 1. Table 1 Summary of values Charging circuit In Fig. 1, the switch (1) is an electronic SPST which controls the charging status of the lithium battery. It stays switched off by default, which means that the battery is not charged. If and only if the battery voltage is below the predefined level, the voltage comparator (1) triggers the switch (1) on, and the battery is charged at once if the solar panel can afford it. As soon as charging has run for a few seconds, the battery voltage rises rapidly and changes the state of the comparator (1). That is why an RS trigger is required to hold the charging status. The control chip of the charging circuit is CN3063 [38]. When charging terminates, the DONE pin of the chip goes to a low level, which turns the switch (1) off and disconnects the charging circuit. The switch status is again maintained by the RS trigger. System parameters analysis Power consumption of node In applications of IOT, WSN nodes are generally divided into the following three categories: the sensor nodes, the routing nodes, and the sink node. The power consumption of the sink node is usually the largest, but it can be deployed indoors or somewhere easy to reach outdoors, so it is unnecessary to consider its power supply. In terms of hardware construction, the sensor nodes usually have more sensors than the routing nodes, for example, the dissolved oxygen sensor, pH sensor, etc. The power consumed by this group of sensors exceeds what the node itself consumes for data computing and radio transmission, so the power consumption of the sensor nodes is larger than that of the routing nodes. In the system design, the power consumption of the sensor nodes should therefore be given the overriding consideration. In our research, the sensor nodes usually include the following components: a ZigBee module and sensors (dissolved oxygen, pH, temperature, etc.). The power consumption of all the components is calculated as shown in Table 2. V in represents the input voltage, which is the ISEH system output voltage. The notation I stands for the average current. Note that the data given in the table are maximum values or close to them; each sensor's current consumption is set to its average operating current.
The notation T stands for the working hours in one day; 2 h in a day corresponds to a duty cycle of 8.33 %, that is to say, a data monitoring period lasts for 5 min every hour. As a statistical result, the power consumption of one sensor node is about 0.92 Wh every day. Table 2 Power consumption of one sensor node Battery selection A comparison of common rechargeable batteries is shown in Table 3. Numbers 1–5 represent lead-acid battery, nickel-cadmium battery, nickel-hydrogen battery, lithium-ion battery, and lithium-polymer battery, respectively. Table 3 Comparison of rechargeable batteries As demonstrated in Table 3, the lithium-polymer battery is relatively satisfactory for the system usage; its required capacity depends on the load and on the longest run of rainy days in the deployment region. Calculation of battery capacity The calculation formula of the battery capacity (BC) is given by
$$ \mathrm{BC} = \frac{A \times Q_{L} \times N_{L} \times T_{O}}{cc \times V} $$
In this expression,
A - safety factor, between 1.1 and 1.4
QL - the average daily power consumption of the load, in Wh
NL - the longest run of continuous rainy days, set to 7 from experience
TO - temperature correction factor; in general, TO = 1 when the temperature is above 0 °C, TO = 1.1 above −10 °C, and TO = 1.2 below −10 °C
cc - the depth of battery discharge; generally speaking, the value is 0.75 for a lead-acid battery, 0.85 for a nickel-cadmium battery, and 0.80 for a lithium battery
V - the nominal battery voltage
In our research, the lithium battery is not charged until its voltage drops below a specific level, and it should still be able to supply the system with its remaining energy in case there is no sunshine for charging; in other words, even when the battery voltage has fallen to the charging boundary, an appropriate energy reserve must remain as a backup, so the value of cc is set to 0.6. Combined with the data from above, the battery capacity is calculated to be about 3190–4061 mAh (for the nominal voltage of a lithium-polymer battery). Since the calculation is made with a safety allowance, the battery capacity can be set to 4000 mAh. Simulation and implementation In this section, the performance of the proposed ISEH system is simulated and compared with two types of existing solar energy-harvesting systems [16, 30–32, 35]; then we give the design details of each module in the ISEH system. Finally, we have implemented an experimental circuit, as shown in Fig. 9, which was tested to obtain the output characteristics of the system. Performance simulation Based on the analysis mentioned above, simulations are performed to demonstrate the differences between the ISEH system and the other systems. The parameters used in the simulations are shown in Table 4. Table 4 Simulation parameters It is worth noticing that the duty cycle is set to 1, which means that the node works all the time with full power consumption, and that the sunshine follows a regular square-wave-like pattern, which is unrealistic for actual weather conditions. All these settings are adopted only to make the contrast between the systems easier to see. Results given in Fig. 3 are a comparison between the ISEH system and the previous type of system which charges whenever the sunlight is sufficient (CWSS), i.e., which charges the battery as long as there is sufficient illumination. Simulation result between the ISEH and CWSS system In Fig.
3, the green line represents the lithium battery-voltage curve of the ISEH system and the red line represents that of the CWSS system; the dashed red line parallel to the horizontal axis is the charging threshold, and the dashed black square waves represent the intensity of the sunshine. The blue curve is a reference, one that represents the continuous discharge of the lithium battery voltage without charging. As shown in the figure, the lithium battery of the ISEH system discharges only when the sunshine is insufficient, and it is charged once during the whole 40 h. Relatively, the charge-discharge cycle of the CWSS system is six times, which is far more than that of the ISEH system. At the same time, it is SOC also in a high level most of the period. As a matter of fact, the real weather condition cannot be regular like that, so the number of the cycle is usually more than that shown in this simulation, which has a great influence on the life of lithium batteries. Figure 4 shows the comparison result between the ISEH system and the other system which uses the lithium battery as the main power supply (BMPS) and charges it only when its voltage is beneath a specific level. The parameters used in the simulations are also shown in Table 4. Except for the red curve with "o" which represents the battery voltage of the BMPS system, the rest of the curves are the same as that shown in Fig. 3. Simulation result between the ISEH and BMPS system As shown in Fig. 4, the BMPS system makes a little improvement, because the charge-discharge cycle is less than that of the CWSS system shown in Fig. 3. But whatever the weather is, the lithium battery is the main power source, which causes a waste of the solar energy and makes the discharge rate larger than that of the ISEH system, thus the number of charge-discharge cycles is also more than that of the ISEH system. According to the principle of the system composition, each function module of the system is taken into detailed circuit design. The main functional module realization is described as follows. MPPT module The MPPT module is based on chip MP2307 which uses an autonomous tracking strategy to provide an upmost power supply for the system. The input of the module is the output voltage of the solar panel, and the output of it is the normal voltage. Charging module This module is based on the charging management chip CN3063 [38] which is produced by the Consonance Electronic Co. Ltd. The function of this module provides an intelligent method to charge the lithium battery. The input of this module is from the MPPT circuit, and the output is connected to the lithium battery. CN3063 is a single lithium battery-charging management chip, which can be easily powered by a solar panel with a wide range of input voltage. An on-chip 8-bit ADC can adjust the charging current automatically. Its regulation voltage can be adjusted by an external resistor, and the charging current can also be programmed externally with a single resistor. The charging process of CN3063 is drawn in Fig. 5 [38]. Charging profile of CN3063 The lithium battery-charging process is generally divided into the following three phases: Pre-charge phase Constant current phase Constant voltage phase The first phase is carried out only when the lithium battery voltage is very low. Usually, the second and the third phase occupy most of the charging process. The constant current of charging can be regulated externally, and the continuous programmable charge current can reach 600 mA. 
In the second step, the calculation formula of the charging current is given by
$$ I_{\mathrm{CH}} = 1800\ \mathrm{V} / R_{\mathrm{ISET}} $$
where
$I_{\mathrm{CH}}$ - the charging current, in A
$R_{\mathrm{ISET}}$ - the resistance from the ISET pin of the CN3063 to ground, in Ω
For example, if the system requires a charging current of 500 mA, $R_{\mathrm{ISET}}$ can be calculated as follows:
$$ R_{\mathrm{ISET}} = 1800\ \mathrm{V} / 0.5\ \mathrm{A} = 3.6\ \mathrm{k}\Omega $$
To ensure good stability and temperature characteristics, it is recommended to use a metal-film resistor with an accuracy of 1 % for $R_{\mathrm{ISET}}$. While charging is in progress, the \( \mathrm{CHRG}- \) pin is pulled low by an internal switch, which indicates that charging is under way; otherwise, the pin is in a high-impedance state. This pin can therefore drive an external LED to indicate whether the battery is being charged or not. When the charging is terminated, the \( \mathrm{DONE}- \) pin is pulled low by an internal switch; otherwise, the pin is in a high-impedance state. The pin's low level can be used as the input signal to the RS trigger: it flips the trigger, which then holds its state, ensuring that the lithium battery cannot be charged again until its voltage drops below the predefined level. Boost module There is a difference between the battery nominal voltage and the system output voltage; therefore, it is necessary to use a DC/DC boost circuit to make the conversion. GS1661 is a current-mode boost DC/DC converter; it is available in a SOT23-6L package, which allows a space-saving PCB layout, and it can be implemented with few external components. The proposed module raises the voltage from 3.3–4.2 V to the system output voltage, and the output voltage can be set by two external resistors
$$ V_{\mathrm{OUT}} = 0.6\ \mathrm{V} \times \left( R_{e1}/R_{e2} + 1 \right) $$
The peak current can also be set by an external resistor $R_{e3}$; the expression is
$$ I_{\mathrm{OCP}} = 48,000\ \mathrm{V} / R_{e3} $$
Control module of branch choosing Since the ISEH system has two branches for power supply, a switching control circuit as shown in Fig. 6 is needed. Power supply circuit switching module LM339 is a voltage comparator (U4A), and it has two inputs: the inverting input serves as a reference, which is provided by a three-terminal regulator (OUTPUT), and the non-inverting input is provided by the MPPT output voltage. When the non-inverting input voltage is higher than the reference, LM339 provides a high-level output, which switches Q1 (9013) on and pulls the gate pin G of the MOSFET (U3) down; thus, U3 conducts, and the system selects the solar panel for power support. At the same time, pin 1 of U4A is high, which turns both ST2301 (U13) and the MOSFET (U6) off, and the output of the lithium battery is then cut off. On the other hand, when the non-inverting input voltage is below the reference, the output of U4A is low, and ST2301 (U13) switches on, making U6 conduct. In that case, the lithium battery provides the power instead of the solar panel. Charge control and over-discharge protection To prolong the life of the lithium battery, lengthening the charge-discharge cycle as much as possible is undoubtedly a good approach. Li et al.
used a microcontroller to realize the charging process, which is convenient and makes it easy to change strategies, though it is slightly more complicated in terms of programming [35]. In this paper, we adopt a simpler, ingenious method using RS triggers to implement this function at a lower cost. Consequently, it is more stable, since there is no microcontroller that could crash. The charge control and over-discharge protection circuit is shown in Fig. 7. The lifetime of the lithium battery determines the life of the entire system. On the premise of not affecting the system's operation, reducing the charge-discharge cycles of the battery is very helpful. Experimental results show that the lithium battery can work for a long time when its voltage is between 3.7–3.9 V, but once the voltage is lower than this threshold, it drops rapidly as discharge continues. So the lithium battery should be charged at once when its voltage is under the threshold. Critically, if the battery voltage falls below the danger boundary, the battery may be permanently damaged, so discharge should be stopped immediately, and the system has to prevent this from happening [35]. According to the circuit in Fig. 7, when the battery voltage is lower than the threshold (whose value can be adjusted by the resistors R14 and R18), the output of U4B in LM339 and the pin \( 2\mathrm{R}- \) of the RS trigger (U10) are both low. Since the battery is not fully charged, the pin \( \mathrm{DONE}- \) of CN3063 is in a high-impedance state according to its data sheet. Therefore, the pin \( 2\mathrm{S}- \) of U10 is high. Based on the function table of the RS trigger as shown in Table 5, the output Q of the RS trigger should be low, thus the MOSFET U9 conducts, and the system starts to charge the battery. Table 5 Function table of RS Trigger Once the battery voltage rises above the threshold under continuous charging, the level of \( 2\mathrm{R}- \) changes from low to high, while \( 2\mathrm{S}- \) remains at a high level because charging has not finished yet. According to Table 5, the output of the RS trigger maintains its former state, and the system still continues to charge the battery. When the battery is fully charged, the pin \( \mathrm{DONE}- \) of CN3063 goes low, and the pin \( 2\mathrm{S}- \) is therefore also low. Simultaneously, the pin \( 2\mathrm{R}- \) remains high, so the RS trigger outputs a high level, and the charging circuit is disconnected. The pin \( \mathrm{DONE}- \) then returns to a high-impedance state, which makes the level of \( 2\mathrm{S}- \) high again; \( 2\mathrm{R}- \) also stays high, thus the RS trigger remains in its previous state Q0 at a high level, which means the circuit stays out of the charging process. The RS trigger thus realizes an intelligent charging strategy. When the lithium battery voltage falls below the threshold, the charging circuit switches on. Subsequently, when the battery is fully charged, the circuit disconnects at once. It will not be turned on again until the battery voltage drops below the threshold. All these steps are shown in time sequence in Fig. 8. The RS trigger sequence diagram In addition, the system also has an over-discharged protection module.
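Before describing that protection module, here is a minimal Python sketch of the charge-control behaviour just described; the 3.7 V threshold and the signal names are illustrative stand-ins, not values read off the actual board:

```python
class ChargeController:
    """Software model of the RS-trigger charge-control logic described above."""

    def __init__(self, threshold_v=3.7):    # illustrative threshold, not the board's exact value
        self.threshold_v = threshold_v
        self.charging = False                # latch state: True while the charge path is enabled

    def step(self, battery_v, done):
        """battery_v: measured battery voltage; done: True when the charger signals 'fully charged'."""
        if not self.charging and battery_v < self.threshold_v:
            self.charging = True             # voltage dipped below the threshold -> start charging
        elif self.charging and done:
            self.charging = False            # charger reports full -> stop, and stay off
        # otherwise the latch simply holds its previous state (the RS trigger's "memory")
        return self.charging


# A tiny trace: the controller ignores the voltage rising back above the
# threshold and only releases the latch once charging has completed.
ctrl = ChargeController()
trace = [(3.8, False), (3.65, False), (3.8, False), (4.2, True), (4.1, False)]
for v, done in trace:
    print(v, done, "->", "charging" if ctrl.step(v, done) else "idle")
```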
When the lithium battery voltage is lower than the protection (the value can be adjusted by two resistors), U12 (ST2301) is disconnected, which cuts off the lithium battery power output to protect the battery from overdischarge. A circuit board for testing is made as shown in Fig. 9. The ISEH system is constructed based on the board, which could provide uninterrupted power supply for the WSN nodes. The system is tested under daylight illumination, and the experimental results show that the system can switch the power supply branch automatically and work stably. When the lithium battery voltage drops underneath the predefined level, the lithium battery can be charged as soon as possible if the illumination is sufficient. As shown in Table 2, the power consumption of one sensor node is 460 mW. We can get two different operating points in the conditions of solar power or lithium battery-branch energy supply, as shown in Fig. 10a. The reason causing the operating points to be different is that the output voltage of the two branches have a slight difference in the circuit. Whether the power is provided by the solar or the lithium branch, the power efficiency (as shown in Fig. 10b) is higher than 60 %. It is worth mentioning that the power efficiency is up to 80 % with the lithium battery as a power supply. From Fig. 10b we can find that the efficiency of the solar power branch is lower than that of the lithium branch, and at the same time, the former is more sensitive with the load resistance. The reasons for this phenomenon can mainly be concluded in two parts. The first one is that the voltage difference between input and output of the solar branch is larger than that of the battery branch, and the second is that the former components cause more power loss than that of the latter. Output characteristics of the ISEH system. a Power conversion efficiency versus load resistance. b Output power versus load resistance In this paper, a novel intelligent solar energy-harvesting system is designed by using an MPPT circuit. Hardware, instead of software, is used for charging management of the lithium battery, which can enhance the robustness of the system greatly. Analyses based on power supply requirements are made for WSN nodes in IOT. The system can afford a stable power supply with 5-V output voltage through a standard USB interface. Lithium battery-charging strategy can also ingeniously avoid the charge-discharge cycle a lot, and thus the lifetime of the lithium battery can be greatly extended. Experimental results demonstrate that the system can switch the power supply branch automatically. When the voltage of lithium battery drops below the predefined level, it can be charged properly. The system performs stably and safely with high reliability, high efficiency, low-power loss, and simple composition. ZhouY, Yang X, Guo X, et al. A design of greenhouse monitoring & control system based on ZigBee wireless sensor network. IEEE. Int. Conf. WiCOM, (IEEE, Shanghai, 2007), pp. 2563–2567 Ahonen T, Virrankoski R, Elmusrati M. Greenhouse monitoring with wireless sensor network. IEEE/ASME. Int. Conf. on MESA, (IEEE/ASME, Beijing, 2008), pp. 403–408 Nasser N, Zaman A, Karim L, et al. CPWS: An efficient routing protocol for RGB sensor-based fish pond monitoring system. IEEE. 8th. Int. Conf. WiMob, (IEEE, Barcelona, 2012), pp. 7–11 O'Flyrm B, Martinez R, Cleary J, et al. Smart Coast: a wireless sensor network for water quality monitoring. IEEE. 32nd. Conf. LCN. 2007;815–816. 
GS Arumugam, T Ponnuchamy, EE-LEACH: development of energy-efficient LEACH Protocol for data gathering in WSN. EURASIP. J. Wireless. Commun. Netw. 1, 76 (2015). doi:10.1186/s13638-015-0306-5 I Butun, I Ra, R Sankar, PCAC: power-and connectivity-aware clustering for wireless sensor Networks. EURASIP. J. Wireless. Commun. Net. 1, 83 (2015). doi:10.1186/s13638-015-0321-6 G Anastasi, M Conti, M Di Francesco et al., Energy conservation in wireless sensor networks: a survey. Ad. Hoc. Netw. 7(3), 537–568 (2009) RJM Vullers, RV Schaijk, HJ Visser et al., Energy harvesting for autonomous wireless sensor networks. IEEE. Solid State Circuits. Mag. 2(2), 29–38 (2010) F Akhtar, MH Rehmani, Energy replenishment using renewable and traditional energy resources for sustainable wireless sensor networks: a review. Renew. Sustain. Energy Rev. 45, 769–784 (2015) VA Boicea, Energy storage technologies: the past and the present. Proc. IEEE. 102(11), 1777–1794 (2014) Z Xu et al., Electrochemical supercapacitor electrodes from sponge-like graphenenano architectures with ultrahigh power density. J. Phys. Chem. Lett. 3(20), 2928–2933 (2012) W Waag, S Käbitz, DU Sauer, Experimental investigation of the lithium-ion battery impedance characteristic at various conditions and aging states and its influence on the application. Appl. Energy. 102, 885–897 (2013) M Broussely, P Biensan, F Bonhomme et al., Main aging mechanisms in Li ion batteries. J. Power. Sources. 146(1), 90–96 (2005) J Vetter, P Novák, MR Wagner et al., Ageing mechanisms in lithium-ion batteries. J. Power. Sources. 147(1), 269–281 (2005) M Ecker, JB Gerschler, J Vogel et al., Development of a lifetime prediction model for lithium-ion batteries based on extended accelerated aging test data. J. Power. Sources. 215, 248–257 (2012) Jiang X, Polastre J, Culler D. Perpetual environmentally powered sensor networks. IEEE. 4th. Int Symp. IPSN, (IEEE, Los Angeles, 2005), pp. 463–468 Xiang Y, Pasricha S. Run-time management for multicore embedded systems with energy harvesting. 2015. doi:10.1109/TVLSI.2014.2381658 Dutta P, Hui J, Jeong J, et al. Trio: enabling sustainable and scalable outdoor wireless sensor network deployments. Proc. 5th. Int. Conf. Info. Process. Sensor. Netw, (ACM, Nashville, TN, 2006), pp. 407–415 A Berger, LB Hörmann, C Leitner et al., Sustainable energy harvesting for robust wireless sensor networks in industrial applications (Sensor Applications Symposium SAS, Zadar, 2015) Dall'Ora R, Raza U, Brunelli D, et al. SensEH: From simulation to deployment of energy harvesting wireless sensor networks. IEEE. 39th. Conf. Local. Comput. Netw. Workshops, (IEEE, Edmonton, AB, 2014), pp. 566–573 V Sharma, U Mukherji, V Joseph et al., Optimal energy management policies for energy harvesting sensor nodes. IEEE. Trans. Wireless. Commun. 9(4), 1326–1336 (2010) Ramasur D, Hancke G P. A wind energy harvester for low power wireless sensor networks. IEEE. Int. Conf. I2MTC, (IEEE, Graz, 2012), pp. 2623–2627 JA Khan, HK Qureshi, A Iqbal, Energy management in wireless sensor networks: a survey. Comput. Electr. Eng. 41, 159–176 (2015) Y Gao, G Sun, W Li et al., Wireless sensor node design based on solar energy supply. IEEE. 2nd. Int. Conf. PEITS 3, 203–207 (2009) Naveen K V, Manjunath S S. A reliable ultra-capacitor based solar energy harvesting system for wireless sensor network enabled intelligent buildings. IEEE. 2nd. Int. Conf. IAMA, (IEEE, Chennai, 2011), pp. 20–25 Chuang WY, Lee CH, Lin CT, et al. Self-sustain wireless sensor module. 
IEEE Int Conf Internet Things Green Computing Commun Cyber Physical Soc Computing 2014;288–291 Taneja J, Jeong J, Culler D. Design, modeling, and capacity planning for micro-solar power sensor networks. Proc. 7th. IEEE. Int. Conf. IPSN, (IEEE Computer Society, St. Louis, Missouri, 2008), pp. 407–418 Alberola J, Pelegri J, Lajara R, et al. Solar inexhaustible power source for wireless sensor node. IEEE. Proc. IMTC, (IEEE, Victoria, BC, 2008), pp. 657–662 Bhuvaneswari P T V, Balakumar R, Vaidehi V, et al. Solar energy harvesting for wireless sensor networks. IEEE. 1st. Int. Conf. CICSYN, (IEEE, Indore, 2009), pp. 57–61 Isaacson M J, Holl. Sworth R P, Giampaoli P J, et al. Advanced lithium ion battery charger. IEEE. 15th. BCAA, (IEEE, Long Beach, CA, 2000), pp. 193–198 F Ongaro, S Saggini, P Mattavelli, Li-ion battery-supercapacitor hybrid storage system for a long lifetime photovoltaic-based wireless sensor network. IEEE. Trans. Power. Electronics. 27(9), 3944–3952 (2012) H He, R Xiong, X Zhang et al., State-of-charge estimation of the lithium-ion battery using an adaptive extended Kalman filter based on an improved Thevenin model. IEEE. Trans. Veh. Technol. 60(4), 1461–1469 (2011) K Takahashi, M Saitoh, N Asakura et al., Electrochemical properties of lithium manganese oxides with different surface areas for lithium ion batteries. J. Power. Sources. 136(1), 115–121 (2004) YJ Liu, XH Li, HJ Guo et al., Electrochemical performance and capacity fading reason of LiMn2O4/graphite batteries stored at room temperature. J. Power. Sources. 189(2), 721–725 (2009) X Li, Y Wu, G Li et al., Development of wireless soil moisture sensor base on solar energy. Trans. CSAE. 26(11), 13–18 (2010) D Brunelli, C Moser, L Thiele et al., Design of a solar-harvesting circuit for batteryless embedded systems. IEEE. Trans. Circuits. Syst I: Regul. Pap. 56(11), 2519–2528 (2009) T Esram, PL Chapman, Comparison of photovoltaic array maximum power point tracking techniques. IEEE. Trans. Energy. Conversion. EC. 22(2), 439 (2007) CN3063 datasheet, Shanghai Consonance Electronics Co. Ltd., http://www.consonance-elec.com/pdf/datasheet/DSE-CN3063.pdf. This work was supported by the Hunan Provincial Natural Science Foundation of China under Grant 14JJ5009, the Post Doctoral Foundation of Central South University, China, and the National Natural Science Foundation of China (61272495, 61379153, and 61401519). School of Information Science and Engineering, Central South University, Changsha, 410083, China & Ronghua Shi Search for Yin Li in: Search for Ronghua Shi in: Correspondence to Ronghua Shi. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Li, Y., Shi, R. An intelligent solar energy-harvesting system for wireless sensor networks. J Wireless Com Network 2015, 179 (2015) doi:10.1186/s13638-015-0414-2 Received: 07 January 2015 Solar energy harvesting Bimodule power supply
The Limitless Vertigo of Cantor's Infinite

No one believed him. Not even fellow mathematicians. They thought he was wrong. They thought he was crazy. Even he ended up doubting himself and went crazy. And yet, he had mathematically proved it all. Georg Cantor had figured out how to manipulate the infinite. Even more remarkable, he showed that there were actually several infinities; and some are bigger than others!

January 29, 2015 | Article | Infinity, Mathematics, Set Theory | Lê Nguyên Hoang

Mathematics is often presented as the one human endeavour whose claims are indisputable. Yet, when, in the late 19th century, the giant Georg Cantor would dare to theorize mathematically about the infinite, not only would he initiate everlasting debates with the Church and philosophers, but he'd also face brutal objections from fellow mathematicians. Cantor was particularly maltreated by Kronecker, who would describe him as a "scientific charlatan", a "renegade" and a "corrupter of youth." In fact, in his (sane) lifetime, Cantor would find hardly any supporter. Instead, the greatest mathematicians of his time would look down on him. They wouldn't hesitate to bring him down. The doors of the greatest mathematical institutes would be closed to him. Cantor was unwelcome. Alone against the whole mathematical community, he would feel isolated to the point of doubting his own ideas. In the end, it is not so surprising that the great genius eventually went a bit crazy.

You're talking as though he was right against the mathematical community? Was he? And if he were right, could it be that absolutely no one believed him?

Cantor did not have many fans. But he did have a few. In particular, fortunately for Cantor, one of these fans was the most charismatic mathematician of that time, the great David Hilbert. Hilbert viewed Cantor's achievements as being of the highest order. At last, Cantor's theory, which was being rejected by the whole mathematical community, would be brandished by its leader.

Wow! That sounds so unlikely! But how does the mathematical community now feel about Cantor's theory?

Today, some serious mathematicians like Norman Wildberger still reject Cantor, while others like Andrej Bauer invite us to disregard Cantor's theory (at least every now and then). But overall, the mathematical community celebrates it as one of mankind's greatest achievements. Cantor's ideas are now taught in all the universities in the world as some of the most beautiful truths in mathematics.

This is so intriguing! So what was Cantor's theory?

Hehe… Among other things, Cantor would prove that there were different infinities, with some infinities that are bigger than others…

Hilbert's Hotel

A quick way to get a feeling of the vertigo of Cantor's theory of the infinite is to follow the footsteps of David Hilbert. To give an insight into Cantor's theory, Hilbert proposed a clever thought experiment. Imagine a hotel with an infinite number of rooms. I mean a hotel with a room numbered 1, a room numbered 2, a room numbered 3… and so on to infinity. Now, imagine that the hotel is full, and that a newcomer arrives at the front desk. Could the hotel manager, say Hilbert himself, find a room for the newcomer? Obviously not! The hotel is full! Are you sure? If the hotel were finite, your answer would be perfectly right. But don't forget. Hilbert's hotel is infinite. Hummm… I don't see how to free a room… Take your time. You can figure this one out! Hummm… Okay, here's the trick.
Hilbert could ask every guest to move to the room next to his, hence shifting all the guests by one unit. Every guest would still have a room. But amazingly, they would also have freed the first room! There's room for the newcomer! Wow! That is so counter-intuitive! I know! Think about what has just happened. We've shown that an infinite hotel can always accommodate a newcomer, even when it is full! Mathematically, by adding one to all the guests' room numbers, we've also shown that there are as many whole numbers as there are whole numbers larger than or equal to 2! If you've never heard about that, it should blow your mind! It does! And yet, we're merely at the beginning of our journey in the world of the infinites! Let's go further. Let's now imagine that there is an infinite number of newcomers. Could Hilbert accommodate them all? Hummm… Well, he could repeat this operation indefinitely, right? I guess… But that would take an infinite amount of time. Can Hilbert free an infinite number of rooms at once? Hummm… By moving the first guest to room number 2, we're freeing one room. Yes. That's what we've done. But now, let's also move the second guest to room number 4, so that we free room number 3. You're onto something! Then, the third guest to room number 6 to free room number 5. Do you see the pattern? I do! We'll send the $n$th guest to room number $2 \times n$… We're doubling the guests' room numbers! Awesome! Thereby, we're freeing all the odd-numbered rooms! Amazingly, we've freed an infinite number of rooms at once! Through this doubling rule, we've shown that we can fit the set of whole numbers into the set of even numbers. This is astounding! It's what's brilliantly explained in this video by the Open University Youtube channel: You haven't answered my question though. Can Hilbert fit an infinite number of newcomers in his full hotel? Well, obviously, yes, since we've freed an infinite number of rooms! Obviously? Sure! You take a list of all the newcomers. The first on the list will go to room number 1, the second to room number 3, the third to room number 5… and so on. Don't you see where you got it wrong? Did I get something wrong? You did. You assumed that the newcomers could be listed! Can't all sets be listed? As astounding as it sounds, Cantor would prove that no. Some sets aren't listable. The classical word is rather "countable". But as James Grime brilliantly points out on Numberphile, "countable" is a very poor terminology. Infinite countable sets cannot be counted, precisely because they are infinite. Instead, we should talk about listable and unlistable sets.

Listable Infinity

Let's start with listable infinities. The most obviously listable of these infinities is evidently the whole numbers. Let me write that as a theorem with a proof. That wasn't hard, was it? Nope. I got it! Now, what about positive and negative integers? What could be the first element of this list of integers? What could the second? The third? The one after that? How about the list: 0, 1, -1, 2, -2, 3, -3,… and so on? Very good! Let me present that as a beautiful theorem: Now that we have seen two examples of infinite listable sets, I can get to Cantor's big idea. As I said in the introduction, Cantor would prove that some infinities are bigger than other infinities. But what could he mean by that? I guess it means that some infinities have more elements than others… But what does that mean? By far, Cantor's greatest idea was to give a meaning to the "size" of a (possibly infinite) set.
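As a small aside, the 0, 1, -1, 2, -2, … listing of the integers described above can be written as a tiny generator; this is purely an illustration, not part of the original article:

```python
from itertools import count, islice

def list_integers():
    """Yield the integers in the order 0, 1, -1, 2, -2, 3, -3, ..."""
    yield 0
    for n in count(1):
        yield n
        yield -n

# The first few entries of the list: every integer shows up exactly once,
# which is exactly what "the integers are listable" means.
print(list(islice(list_integers(), 9)))   # [0, 1, -1, 2, -2, 3, -3, 4, -4]
```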
To understand Cantor's insight, let's play a game. In the following picture, are there more apples than bananas? Easy! There are as many apples as bananas? How do you know? Did you count them? How many of each are there? I don't know… I actually don't know how I know what I know! But I know I didn't count them. I know. But then, how do you know that there are as many apples as bananas? Hummm… It's clear, isn't it? That's not a good mathematical answer! Let me help you make it clearer… Do you see it now? I do! I've managed to pair them up! Now this is a brilliant insight. To guarantee the fact that two sets have the same size, you don't need to count their elements. You just need to pair up their elements. In fact, this is how Georg Cantor decided to define what it meant for two sets to have the same size. And that, my friends, is Cantor's greatest idea! Really? His greatest idea? As we've seen, it's obviously a good definition when sets are finite. But interestingly, it can be applied to infinite sets too! Without this insight, there would be no Cantorian theory, no set theory, no basic foundational mathematics… we'd be back to the dark ages! OK, I might be exaggerating a little bit, but, seriously, Cantor's definition is undeniably a major breakthrough that would inspire later mathematicians to take on the extremely fruitful abstraction of mathematical concepts. A pairing of all elements of two sets is called a bijection. Formally, two sets have the same size when there is exists a bijection between them. OK… But what does that have to do with the listability of sets? Suppose you can list an infinite set of apples and an infinite set of bananas. Do you see a way to pair up apples and bananas? Yes! I can pair up the first apple with the first banana, the second apple with the second banana… and so on! Exactly! In fact, we have the following wonderful theorem. There's more. A set $A$ has the same size as an infinite listable set $L$ if and only if $A$ is itself infinite listable. Indeed, if $A$ is in bijection with $L$, then we can list its elements, by determining the $n$th element of $A$ as the one that's paired with the $n$th element of $L$. By combining the three theorems we have seen, we can now assert one troubling fact. There are as many whole numbers as integers! Now, let's move on to a bigger test. You do know about rational numbers, don't you? There are like 2/3, 5/2… and so on, right? Exactly! Can you list the rational numbers? Well, there seems to be a lot of rational numbers… I know! As a matter of fact, between any two rational numbers, you can always find another rational numbers. In fact, you can find infinitely many rational numbers between any two rational numbers. So there seems to be a lot of rationals. So I guess that there are more rationals than whole numbers… Well, no… What's the answer? Can they be listed? Hehe… They can! To do so, let's first represent them in an array accordingly to the numerators and denominators: Now, here's the big idea. We can list the rationals by going through the down-left-to-up-right diagonals, from the inside to the outside, as we jump over the rationals we've already gone through. Thereby, we have a list of all positive rationals. Of course, what I asked for is a list of all rationals. But, based on how we listed integers and how we listed positive rationals, I'll let you guess by yourself how all rationals could be listed. Now, what I've presented here is the classical proof that rationals are listable. 
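If you would like to see that zig-zag through the array in action, here is a small Python sketch that walks the diagonals and jumps over the fractions already produced; the exact order may differ slightly from the picture above, but the idea is the same (this sketch is an illustration, not part of the original article):

```python
from fractions import Fraction
from itertools import count, islice

def positive_rationals():
    """Enumerate the positive rationals diagonal by diagonal, skipping repeats."""
    seen = set()
    for s in count(2):                 # s = numerator + denominator labels a diagonal
        for num in range(1, s):
            den = s - num
            q = Fraction(num, den)     # Fraction reduces 2/4 to 1/2, so duplicates are caught
            if q not in seen:          # jump over rationals we've already gone through
                seen.add(q)
                yield q

print([str(q) for q in islice(positive_rationals(), 10)])
# ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
```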
But I want to tell you about a more elegant proof. In fact, you know what? I'll let the great Matt Parker tell it to you, on Numberphile: As an exercise, try to list the set of $n$-tuples of rationals, and the set of polynomials with rational coefficients (that is, of the form $a_n X^n + a_{n-1} X^{n-1} + \ldots + a_1 X + a_0$). Unlistable Infinity At this point, you might think that all infinities are listable… Well no. I've read the title of this section! Okay. Fair enough. So then, let's talk about unlistable infinities. Can you think of an infinite set so big that it cannot be listed? Let me give it to you! After all, it's a very, very difficult question. I claim that the decimal numbers — mathematically badly named the "real" numbers — cannot be listed. Wait what are these real numbers? They are numbers with an infinite number of decimals. You know? Like $e \approx 2.71828182846…$ or $\tau \approx 6.28318530718…$ Or you might have heard of the impostor $\pi = \tau / 2 \approx 3.14159…$ This number $\pi$ is an impostor, because $\pi$ is wrong! And you're saying that these real numbers can't be listed? That's what I'm saying. Any list of the reals would miss out on many of the reals. The proof of it is one of the most beautiful proof of mathematics! As you've guessed, Cantor is the one who came up with this. His proof is a classic example of a proof by contradiction. Suppose there is a list of reals. Cantor will find something wrong about this list. Namely, he will construct a real number that is not in the list. To determine a real number that's not in the list, Cantor first constructed a real number whose $n^{th}$ decimal digit is the $n^{th}$ decimal digit of the $n^{th}$ number of the list. Then, he made up another real number by changing all the digits. This is the number that Cantor claims not to be in the list. Could Cantor's number be the $n^{th}$ element of the list? Hummm… I don't know. If it were, then all the digits of Cantor's number would be the same as the digits of the $n^{th}$ element of the list. Yet, we know by construction that Cantor's number's $n^{th}$ decimal digit is not the $n^{th}$ decimal digit of the $n^{th}$ number of the list! I see! Therefore, their $n^{th}$ decimal digits differ, and, thus, they cannot be the same! That's brilliant! I know! Let's listen to Henry Reich who summed up this wonderful piece of mathematics on Minute Physics: To be a bit more rigorous, this is not a sufficient argument. Some real numbers can be written in two different ways. For instance, 0.9999999…=1. But if you avoid switching 0s into 9s and vice-versa, and instead switch them into any number between 1 and 8, then you'll be fine with Cantor's proof. I can finally state this as a theorem: In effect, this says that the infinite of real numbers is bigger than the listable infinite! Wow! And are there still bigger infinities? There are! The construction of bigger infinities is such a beautiful piece of mathematics that I can't resist including it in this article. It is a bit more technical than the rest of the article though, so feel free to jump ahead if you're struggling with the abstractness of the following paragraphs. For any infinite set $E$ (like for instance, the set of real numbers), Cantor devised a clever method to construct a bigger infinity $\mathcal P(E)$, called the power set. How did he do that? He defined $\mathcal P(E)$ as the set of subsets of $E$. Wait… What's a subset? A subset of $E$ is a set that contains some but not necessarily all elements of $E$. 
For instance, if $E$ is a set containing an apple and a banana, $E$ has four subsets. One contains both the apple and the banana, another contains only the apple, another contains only the banana, and the last subset contains nothing. More generally, when $E$ has two elements, $\mathcal P(E)$ has four elements. Hummm… That's hard to visualize. Here's an equivalent perhaps simpler way to visualize the power set of $E$. A subset $E$ can be determined by saying for all elements $e$ of $E$ whether $e$ is in or out the subset. So, back to our apple-and-banana example, a subset is determined by whether the apple is in or out, and whether the banana is in or out. Since there are two possibilities for each element of $E$, there are globally $2^E$ possibilities. That's why the power set $\mathcal P(E)$ is sometimes denoted $\mathcal P(E) = 2^E$. More rigorously, we have a canonical bijection $f : \{ in, out \}^E \rightarrow \mathcal P(E)$, which inputs a function $g : E \rightarrow \{ in, out \}$ and outputs the subset $f(g)$ that contains an element $e \in E$ if and only if $g(e) = in$. Finally, I can state Cantor's theorem. Cantor proved that the power set $\mathcal P(E)$ is always bigger than the set $E$. In other words, you cannot pair up elements of $E$ and of $\mathcal P(E)$. Hummm… Why? The idea of the proof is essentially the same as the proof that reals are not listable. To get the intuition of the proof, let me list elements of $E$ anyways, assume that they are paired with subsets of $E$ and write whether the subset they are paired with leave elements of $E$ in or out… Next, like before, we are going to make up a first subset using diagonal terms, and then another subset by switching every in-or-out value. Finally, we obtain a set that could not have been paired up, because of the diagonal argument! Formally, assume that $f : E \rightarrow \mathcal P(E)$ is a bijection. Let $F = \{ e \in E \; | \; e \notin f(e) \}$ the subset of $E$ that switches the values of the diagonal terms (if $e$ was in $f(e)$, then $e$ is not in $F$, and if $e$ was not in $f(e)$, then $e$ is in $F$). Then, $F$ cannot have been some $f(e)$. Indeed, if $e$ is in $f(e)$, then $e$ is not in $F$. But if $e$ is not in $f(e)$, then it is in $F$. Either way, $F$ and $f(e)$ do not agree on whether they include $e$, thus they cannot be the same subsets. Therefore, we have a contradiction, as $F$ has not been paired up by $f$ with any element $e \in E$. Do you realize what this means? It means that the power set $\mathcal P(\mathbb R)$ of the set of real numbers is strictly bigger than the set $\mathbb R$ of real numbers. More generally, there is no biggest infinity. Just take the power set of an infinite set to make up a bigger one! On other note, I'll let you prove as an exercise that the power set of an infinite listable set has the same size as the set of real numbers. The Continuum Hypothesis Many of Cantor's proofs are the kinds of enlightening beautiful insights that every mathematician wishes he could stumble upon by himself. But there's one problem that defied even the great Cantor. What's that problem? Hilbert found that problem so important that he put it at the top of his list of problems to be solved, when he presented a list of 23 open problems in his legendary 1900 talk. These problems would be guiding mathematicians all along the 20th century, and, surely enough, they would yield deep insights into the nature of mathematics. 
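The same diagonal trick can also be watched at work on a small finite set: the sketch below runs through every function $f$ from a two-element set $E$ to its power set, builds the diagonal set $F = \{ e \in E \; | \; e \notin f(e) \}$, and checks that $F$ is never one of the values $f(e)$. It is only a finite illustration of the argument above, not a proof of the infinite case:

```python
from itertools import product

E = ["apple", "banana"]                                   # a tiny two-element set
subsets = [frozenset(s) for s in ([], ["apple"], ["banana"], ["apple", "banana"])]

# A function f : E -> P(E) is just a choice of one subset for each element of E.
for images in product(subsets, repeat=len(E)):
    f = dict(zip(E, images))
    F = frozenset(e for e in E if e not in f[e])          # Cantor's diagonal set
    assert all(F != f[e] for e in E)                      # F is never a value of f

print("Checked every f : E -> P(E); the diagonal set F always escapes, so no f is surjective.")
```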
In fact, the one that allowed for the greatest insights might well be the first of these problems, the problem that Cantor could not solve. Again… What's that problem? It is called the continuum hypothesis. It asserts that the infinite of the reals is the smallest infinite bigger than the listable infinite. Cantor and Hilbert could not prove it. The listable infinite is commonly denoted $\aleph_0$. Its power set has the same size as the real. This size is thus denoted $2^{\aleph_0}$. Meanwhile the smallest infinite bigger than $\aleph_0$ is called $\aleph_1$ (whose existence is guaranteed by the highly questionable axiom of choice). The continuum hypothesis then asks whether $2^{\aleph_0} = \aleph_1$. Has this problem been solved? To this day, no one has been able to prove nor disprove the continuum hypothesis. So, it's still an open problem, isn't it? Actually, the problem is considered solved! How come? What does that mean? It's one of the most dramatic result in the History of mathematics! In 1940, Kurt Gödel would prove that you couldn't prove the existence of an infinite in-between the infinites of the whole numbers and the reals. In other words, according to Gödel, the continuum hypothesis has no disproof. Doesn't that solve the problem? Doesn't it mean that the continuum hypothesis is true? Read carefully. Gödel didn't prove that the continuum hypothesis is true. He proved that you couldn't prove that it was false. Hummm… I'm not sure I understand. So, he didn't solve the problem, did he? He did not. But in 1963, Paul Cohen would complete Gödel's answer in the most stunning possible way. He would prove that there is no proof that the continuum hypothesis is true! Astoundingly, Gödel and Cohen proved that no one will ever figure out the continuum hypothesis. It cannot be proved, nor can it be disproved! Wow! That's… Is that even possible? This sort of unprovability had been predicted decades earlier by Kurt Gödel himself, in 1931. It's called the (first) incompleteness theorem, which asserts that in all mathematical systems, you will always stumble upon assertions that can never be proved nor disproved. But this was a theoretical prediction that didn't seem to come up in practice. To everyone's surprise, it did. And not to any problem. It popped up for Hilbert's most prestigious problem, the continuum hypothesis! I'm butchering Gödel's incompleteness theorem here and I'm sorry for that. But it is roughly what it asserts. So what can be done then? This means that you can assume that the continuum hypothesis is true, and do one kind of mathematics. But you can just as well assume that it is false, and do another kind of mathematics. These two alternatives would yield two different mathematical universes. So there's like… a mathematical multiverse? Exactly! Especially nowadays, many, many different universes of mathematics have been unveiled. As Andrej Bauer argues, too often though, mathematicians are stuck in their own little universe called ZFC (well, it's actually not so small since it contains more infinites than any infinite…). ZFC is nothing less than the universe Cantor was exploring, even though he himself could not formalize it. It is also the universe that is brutally taught to students as the only evident mathematical universe one can (or should) work with. But there are, in fact, other universes… Yes indeed! Granted, ZFC is Historically the universe mathematicians have mostly worked with. But, as Bauer argues, it's definitely not the only interesting universe. 
And the cool thing about the mathematical multiverse is that it is physically possible to explore it — physics never was and never will be a limitation to mathematics! Thus, Bauer invites mathematicians to take a trip towards other universes and discover some of the unusual, but probably useful, properties of the mathematical objects they'll encounter! One alternative universe (or collection of universes?) that I have lately been very impressed by is the one based on univalence. Find out more with my series of articles on: (1) type theory, (2) higher inductive types and (3) univalent foundations. Let's Conclude To conclude, let me recap the listable set of amazing things we've uncovered about the infinities. In fact, you know what? I'm feeling a bit lazy, so I'll just let the awesome Dennis Wildfogel sum up our findings in this wonderful TED-Ed video. Sadly, the space of this article is way too restricted to give you a full story about the infinite — to do so, I'd probably rather need an infinite amount of space! But luckily, Vi Hart gave a wonderful overview of the multitude of infinities that are extremely useful in mathematics. As you'll see, the infinite is not always about how big it is, but rather, about how well it completes our mathematical objects. © 2016 by Lê Nguyên Hoang, with Wordpress.
CommonCrawl
Forthcoming papers Izv. RAN. Ser. Mat.: Izv. Akad. Nauk SSSR Ser. Mat., 1979, Volume 43, Issue 4, Pages 831–859 (Mi izv1733) This article is cited in 108 scientific papers (total in 109 papers) The index of elliptic operators over $C^*$-algebras A. S. Mishchenko, A. T. Fomenko Abstract: In this paper natural generalizations are developed of the theory of elliptic operators invariant under the action of a $C^*$-algebra. The theory of compact and Fredholm operators acting in spaces of the type of a Hilbert space over a $C^*$-algebra is developed. A formula of the Atiyah–Singer type for elliptic operators over a $C^*$-algebra is developed. Bibliography: 16 titles. Full text: PDF file (2556 kB) Mathematics of the USSR-Izvestiya, 1980, 15:1, 87–112 UDC: 513.6 MSC: Primary 58G12, 58G35; Secondary 46L05, 47A53, 47B05, 58G15 Citation: A. S. Mishchenko, A. T. Fomenko, "The index of elliptic operators over $C^*$-algebras", Izv. Akad. Nauk SSSR Ser. Mat., 43:4 (1979), 831–859; Math. USSR-Izv., 15:1 (1980), 87–112 \Bibitem{MisFom79} \by A.~S.~Mishchenko, A.~T.~Fomenko \paper The index of elliptic operators over $C^*$-algebras \jour Izv. Akad. Nauk SSSR Ser. Mat. \mathscinet{http://www.ams.org/mathscinet-getitem?mr=548506} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=A1980LB83500004} http://mi.mathnet.ru/eng/izv1733 http://mi.mathnet.ru/eng/izv/v43/i4/p831 V. A. Kasimov, "Homotopy properties of the general linear group of the Hilbert module $l_2(A)$", Math. USSR-Sb., 47:2 (1984), 365–376 R. A. Biktashev, "Spekrty psevdo-differentsialnykh operatorov nad $C^*$-algebrami", Vestn. Mosk. un-ta. Ser. 1. Matem., mekh., 1982, no. 4, 36–38 J. Rosenberg, "Homological invariants of extensions of $C^*$-algebras", Operator algebras and applications, Part 1 (Kingston, Ont., 1980), Proc. Sympos. Pure Math., 38, Amer. Math. Soc., Providence, RI, 1982, 35–75 M. A. Rieffe, "Morita equivalence for operator algebras", Operator algebras and applications, Part I (Kingston, Ont., 1980), Proc. Sympos. Pure Math., 38, Amer. Math. Soc., Providence, RI, 1982, 285–298 G. G. Kasparov, "Indeks invariantnykh ellipticheskikh operatorov, K-teoriya i predstavleniya grupp Li", Dokl. RAN, 268:3 (1983), 533–537 J. Rosenberg, "$C^*$-algebras, positive scalar curvature, and the Novikov conjecture", Inst. Hautes Études Sci. Publ. Math., 1983, no. 58, 197–212 M. Nur, "Formula sleda dlya ellipticheskikh psevdo-differentsialnykh operatorov nad $C^*$-algebrami", Vestn. Mosk. un-ta. Ser. 1. Matem., mekh., 1984, no. 5, 69–72 J. CUNTZ, "$K$-theory and $C^*$-algebras", Algebraic $K$-theory, number theory, geometry and analysis (Bielefeld, 1982), Lecture Notes in Math., 1046, Springer, Berlin, 1984, 55–79 E. V. Troitskii, "Teorema ob indekse dlya ekvivariantnykh $C^*$-ellipticheskikh operatorov", Dokl. RAN, 282:5 (1985), 1059–1061 F. Sharipov, "Nezavisimost spektra ellipticheskogo operatora nad $C^*$-algebrami", Vestn. Mosk. un-ta. Ser. 1. Matem., mekh., 1985, no. 1, 87–89 A. Mallios, "Hermitian $K$-theory over topological $*$-algebras", J. Math. Anal. Appl., 106:2 (1985), 454–539 E. V. Troitskii, "The equivariant index of $C^*$-elliptic operators", Math. USSR-Izv., 29:1 (1987), 207–224 E. V. Troitskii, "Contractibility of the full general linear group of the $l_2(A)$", Funct. Anal. Appl., 20:4 (1986), 301–307 J. A. Mingo, "$K$-theory and multipliers of stable $C^*$-algebras", Trans. Amer. Math. Soc., 299:1 (1987), 397–411 J. Rosenberg, C. 
Schochet, "The Künneth theorem and the universal coefficient theorem for Kasparov's generalized $K$-functor", Duke Math. J., 55:2 (1987), 431–474 N. Ch. Phillips, Equivariant $K$-theory and freeness of group actions on $C^*$-algebras, Lecture Notes in Math., 1274, Springer-Verlag, Berlin, 1987, viii+371 pp. E. V. Troitskii, "Tochnaya $K$-kogomologicheskaya $C^*$-indeksnaya formula. I. Izomorfizm Toma i topologicheskii indeks", Vestn. Mosk. un-ta. Ser. 1. Matem., mekh., 1988, no. 2, 83–85 J. Rosenberg, S. Weinberger, "Higher $G$-indices and applications", Ann. Sci. École Norm. Sup. (4), 21:4 (1988), 479–495 E. V. Troitskii, "Exact $K$-cohomological $C^*$-index formula. II. The index theorem and its applications", Russian Math. Surveys, 44:1 (1989), 259–260 A. Connes, H. Moscovici, "Cyclic cohomology, the Novikov conjecture and hyperbolic groups", Topology, 29:3 (1990), 345–388 E. V. Troitsky, "Lefschetz numbers of $C^*$-complexes", Lecture Notes in Math., 1474, Springer, Berlin, 1991, 193–206 K. Liu, "On mod 2 and higher elliptic genera", Comm. Math. Phys., 149:1 (1992), 71–95 Zhang Shuang, "Factorizations of invertible operators and $K$-theory of $C^*$-algebras", Bull. Amer. Math. Soc. (N.S.), 28:1 (1993), 75–83 E. V. Troitskii, "Sledy, chisla Lefshetsa $C^*$-kompleksov i starshie tsiklicheskie gomologii", Vestn. Mosk. un-ta. Ser. 1. Matem., mekh., 1993, no. 5, 36–39 G. V. Sandrakov, "Averaging of quasiperiodic functions", Dokl. Math., 47:2 (1993), 289–293 E. V. Troitskii, "An Averaging Theorem in $C^*$-Hilbert Modules and Operators without Adjoint", Funct. Anal. Appl., 28:3 (1994), 220–223 V. M. Manuilov, "Diagonalization of compact operators in Hilbert modules over finite W ∗ -algebras", Ann. Global Anal. Geom., 13:3 (1995), 207–226 Yu Guo Liang, "Cyclic cohomology and higher indices for noncompact complete manifolds", J. Funct. Anal., 133:2 (1995), 442–473 U. Bunke, "A $K$-theoretic relative index theorem and Callias-type Dirac operators", Math. Ann., 303:2 (1995), 241–279 D. Burghelea, T. Kappeler, P. McDonald, L. Friedlander, "Analytic and Reidemeister torsion for representations in finite type Hilbert modules", Geom. Funct. Anal., 6:5 (1996), 751–859 E. V. Troitskii, M. Frank, "Lefschetz Numbers and Geometry of Operators in $W^*$-Modules", Funct. Anal. Appl., 30:4 (1996), 257–266 V. M. Manuilov, "Representability of Functionals on Hilbert $C^*$-Modules", Funct. Anal. Appl., 30:4 (1996), 287–289 T. Kato, "Asymptotic Lipschitz cohomology and higher signatures", Geom. Funct. Anal., 6:2 (1996), 346–369 V. M. Manuilov, "Diagonalization of operators over continuous fields of $C^*$-algebras", Sb. Math., 188:6 (1997), 893–911 V. M. Manuilov, "Diagonalization of compact operators on Hilbert modules over $C^*$-algebras of real rank zero", Math. Notes, 62:6 (1997), 726–730 Jiang Xinhui, "An index theorem on foliated flat bundles", K-Theory, 12:4 (1997), 319–359 Wu Fangbing, "A bivariant Chern-Connes character and the higher $\Gamma$-index theorem", K-Theory, 11:1 (1997), 35–82 Yu Guoliang, "$K$-theoretic indices of Dirac type operators on complete manifolds and the Roe algebra", K-Theory, 11:1 (1997), 1–15 Yu Guoliang, "Zero-in-the-spectrum conjecture, positive scalar curvature and asymptotic dimension", Invent. Math., 127:1 (1997), 99–126 E. Leichtnam, P. Piazza, The $b$-pseudodifferential calculus on Galois coverings and a higher Atiyah-Patodi-Singer index theorem, Mém. Soc. Math. Fr. (N.S.), 68, 1997, iv+121 pp. 
Yu Guoliang, "The Novikov conjecture for groups with finite asymptotic dimension", Ann. of Math. (2), 147:2 (1998), 325–355 J. G. Miller, "Signature operators and surgery groups over $C^*$-algebras", K-Theory, 13:4 (1998), 363–402 E. Leichtnam, P. Piazza, "Spectral sections and higher Atiyah-Patodi-Singer index theory on Galois coverings", Geom. Funct. Anal., 8:1 (1998), 17–58 E. V. Troitskii, "Functionals on $l_2(A)$, Kuiper and Dixmier–Douady Type Theorems for $C^*$-Hilbert Modules", Proc. Steklov Inst. Math., 225 (1999), 344–362 M. Marcolli, V. Mathai, "Twisted index theory on good orbifolds. I. Noncommutative Bloch theory", Commun. Contemp. Math, 1:4 (1999), 553–587 J. Lott, "Delocalized $L^2$-invariants", J. Funct. Anal., 169:1 (1999), 1–31 J. Trout, "Asymptotic morphisms and elliptic operators over $C^*$-algebras", K-Theory, 18:3 (1999), 277–315 M. Frank, "Geometrical aspects of Hilbert $C^*$-modules", Positivity, 3:3 (1999), 215–243 J. Lott, Diffeomorphisms and noncommutative analytic torsion, Mem. Amer. Math. Soc., 141, no. 673, 1999, viii+56 pp. D. Kastler, "Noncommutative geometry and fundamental physical interactions: The Lagrangian level–Historical sketch and description of the present situation", J. Math. Phys., 41:6 (2000), 3867–3891 E. Park, J. Trout, "Representable $E$-theory for $C_0(X)$-algebras", J. Funct. Anal., 177:1 (2000), 178–202 E. Leichtnam, J. Lott, P. Piazza, "On the homotopy invariance of higher signatures for manifolds with boundary", J. Differential Geom., 54:3 (2000), 561–633 R. Lauter, B. Monthubert, V. Nistor, "Pseudodifferential analysis on continuous family groupoids", Doc. Math., 5 (2000), 625–655 J. Trout, "On graded $K$-theory, elliptic operators and the functional calculus", Illinois J. Math., 44:2 (2000), 294–309 P. Baum, A. Connes, "Geometric $K$-theory for Lie groups and foliations", Enseign. Math. (2), 46:1-2 (2000), 3–42 E. V. Troitsky, "Geometry and topology of operators on Hilbert $C^*$-modules", Functional analysis, 6, J. Math. Sci. (New York), 98, no. 2, 2000, 245–290 V. M. Manuilov, E. V. Troitsky, "Hilbert $C^*$- and $W^*$-modules and their morphisms", Functional analysis, 6, J. Math. Sci. (New York), 98, no. 2, 2000, 137–201 E. Leichtnam, P. Piazza, "A higher Atiyah-Patodi-Singer index theorem for the signature operator on Galois coverings", Ann. Global Anal. Geom., 18:2 (2000), 171–189 Yu Guoliang, "The coarse Baum-Connes conjecture for spaces which admit a uniform embedding into Hilbert space", Invent. Math., 139:1 (2000), 201–240 M. J. Gruber, "Noncommutative Bloch theory", J. Math. Phys., 42:6 (2001), 2438–2465 A. A. Bolibrukh, A. A. Irmatov, M. I. Zelikin, O. B. Lupanov, V. M. Maynulov, E. F. Mishchenko, M. M. Postnikov, Yu. P. Solov'ev, E. V. Troitskii, "Aleksandr Sergeevich Mishchenko (on his 60th birthday)", Russian Math. Surveys, 56:6 (2001), 1187–1191 E. V. Troitsky, "Actions of compact groups on algebras, the $C^*$-index theorem, and families", Pontryagin Conference, 8, Topology (Moscow, 1998), J. Math. Sci. (New York), 105, no. 2, 2001, 1884–1923 A. A. Pavlov, "The generalized Chern character and Lefschetz numbers in $W^*$-modules", Noncommutative geometry and operator $K$-theory, Acta Appl. Math., 68, no. 1-3, 2001, 137–157 E. V. Troitsky, ""Twice" equivariant $C^*$-index theorem and the index theorem for families", Noncommutative geometry and operator $K$-theory., Acta Appl. Math., 68, no. 1-3, 2001, 39–70 J. Lott, "Higher-degree analogs of the determinant line bundle", Comm. Math. Phys., 230:1 (2002), 41–69 W. 
Lück, "The relation between the Baum-Connes conjecture and the trace conjecture", Invent. Math., 149:1 (2002), 123–152 E. Vasselli, "Continuous fields of $C^*$-algebras arising from extensions of tensor $C^*$-categories", J. Funct. Anal., 199:1 (2003), 122–152 G. Khimshiashvili, "Global geometric aspects of linear conjugation problems. Complex analysis", J. Math. Sci. (N. Y.), 118:5 (2003), 5400–5466 A. Gorokhovsky, J. Lott, "Local index theory over étale groupoids", J. Reine Angew. Math., 560 (2003), 151–198 J. F. Davis, K. Pearson, "The Gromov-Lawson-Rosenberg conjecture for cocompact Fuchsian groups", Proc. Amer. Math. Soc., 131:11 (2003), 3571–3578 E. V. Troitsky, "Discrete groups actions and corresponding modules", Proc. Amer. Math. Soc., 131:11 (2003), 3411–3422 W. Dwyer, M. Weiss, B. Williams, "A parametrized index theorem for the algebraic $K$-theory Euler class", Acta Math., 190:1 (2003), 1–104 N. Higson, J. Roe, "Mapping surgery to analysis. I. Analytic signatures", K-Theory, 33:4 (2004), 277–299 E. Leichtnam, P. Piazza, "Elliptic operators and higher signatures", Ann. Inst. Fourier (Grenoble), 54:5 (2004), 1197–1277 D. Perrot, "Retraction of the bivariant Chern character", K-Theory, 31:3 (2004), 233–287 Th. Schick, "$L^2$-index theorems, $KK$-theory, and connections", New York J. Math., 11 (2005), 387–443 A. S. Mishchenko, "$K$-theory from the point of view of $C^*$-algebras and Fredholm representations", Cent. Eur. J. Math., 3:4 (2005), 766–793 E. Hawkins, "Quantization of multiply connected manifolds", Comm. Math. Phys., 255:3 (2005), 513–575 E. Vasselli, "Crossed products by endomorphisms, vector bundles and group duality", Internat. J. Math., 16:2 (2005), 137–171 M. J. Dupré, J. F. Glazebrook, E. Previato, "A Banach algebra version of the Sato Grassmannian and commutative rings of differential operators", Acta Appl. Math., 92:3 (2006), 241–267 M. N. Krein, "The mappings of degree 1", Abstr. Appl. Anal., 2006, 90837, 14 pp. D. Kucerovsky, P. W. Ng, "An abstract Pimsner-Popa-Voiculescu theorem", J. Operator Theory, 55:1 (2006), 169–183 A. A. Pavlov, E. V. Troitskii, "Property (T) for topological groups and $C^*$-algebras", J. Math. Sci., 159:6 (2009), 863–878 S. Damaville, "Régularité d'opérateurs non bornés dans les modules de Hilbert", C. R. Math. Acad. Sci. Paris, 344:12 (2007), 769–772 A. A. Pavlov, E. V. Troitsky, "A $C^*$-analogue of Kazhdan's property (T)", Adv. Math., 216:1 (2007), 75–88 Ch. Wahl, Noncommutative Maslov index and eta-forms, Mem. Amer. Math. Soc., 189, no. 887, 2007, vi+118 pp. P. Piazza, Th. Schick, "Bordism, rho-invariants and the Baum-Connes conjecture", J. Noncommut. Geom., 1:1 (2007), 27–111 A. A. Irmatov, A. S. Mishchenko, "On compact and Fredholm operators over $C^*$-algebras and a new topology in the space of compact operators", J. K-Theory, 2:2, Special issue in memory of Yurii Petrovich Solovyev, Part 1 (2008), 329–351 M. Marcolli, "Solvmanifolds and noncommutative tori with real multiplication", Commun. Number Theory Phys., 2:2 (2008), 421–476 Ch. Wahl, "Spectral flow and winding number in von Neumann algebras", J. Inst. Math. Jussieu, 7:3 (2008), 589–619 S. Bayramov, "On stability of index of Fredholm complexes on the $C^*$-algebra", Appl. Anal., 87:4 (2008), 409–419 "J. Rosenberg", Trans. Amer. Math. Soc., 360:1 (2008), 383–394 V. E. Nazaikinskii, A. Yu. Savin, B. Yu. Sternin, "On the index of nonlocal elliptic operators", Dokl. Math., 77:3 (2008), 441–445 Yu. A. 
Kordyukov, "Index theory and non-commutative geometry on foliated manifolds", Russian Math. Surveys, 64:2 (2009), 273–391 Savin A.Yu., Sternin B.Yu., "Index of nonlocal elliptic operators over $C^*$-algebras", Dokl. Math., 79:3 (2009), 369–372 E. Vasselli, "Gauge-equivariant Hilbert bimodules and crossed products by endomorphisms", Internat. J. Math., 20:11 (2009), 1363–1396 C. Wahl, "Homological index formulas for elliptic operators over $C^*$-algebras", New York J. Math., 15 (2009), 319–351 D. Perrot, "The equivariant index theorem in entire cyclic cohomology", J. K-Theory, 3:2 (2009), 261–307 U. Bunke, Index theory, eta forms, and Deligne cohomology, Mem. Amer. Math. Soc., 198, no. 928, 2009, vi+120 pp. A. Yu. Savin, B. Yu. Sternin, "On the index of noncommutative elliptic operators over $C^*$-algebras", Sb. Math., 201:3 (2010), 377–417 M. E. Zadeh, "Index theory and partitioning by enlargeable hypersurfaces", J. Noncommut. Geom., 4:3 (2010), 459–473 C. Wahl, "Index theory for actions of compact Lie groups on $C^*$-algebras", J. Operator Theory, 63:1 (2010), 217–242 D. S. Freed, J. Lott, "An index theorem in differential K-theory", Geom. Topol., 14:2 (2010), 903–966 R. Ji, C. Ogle, B. Ramsey, "Relatively hyperbolic groups, rapid decay algebras and a generalization of the Bass conjecture", J. Noncommut. Geom., 4:1 (2010), 83–124 A. A. Pavlov, E. V. Troitskii, "Quantization of branched coverings", Russ. J. Math. Phys., 18:3 (2011), 338–352 M. Dadarlat, "Group quasi-representations and index theory", J. Topol. Anal., 4:3 (2012), 297–319 P. Albin É. Leichtnam, R. Mazzeo, P. Piazza, "The signature package on Witt spaces", Ann. Sci. Éc. Norm. Supér. (4), 45:2 (2012), 241–310 H. Sati, "Geometry of Spin and Spin$^c$ structures in the M-theory partition function", Rev. Math. Phys., 24:3 (2012), 1250005, 112 pp. Antonini P., "Boundary integral for the Ramachandran index", Rend. Semin. Mat. Univ. Padova, 131 (2014), 1–14 This page: 1682 First page: 4
CommonCrawl
Concurrence in Equilateral Triangle What Might This Be About? $\ell$ is a line through the center $G$ of an equilateral $\Delta ABC;$ $A',$ $B',$ $C'$ are the midpoints of the sides $BC,$ $AC,$ and $AB;$ $A_1,$ $B_1,$ $C_1$ are the feet of the perpendiculars from the vertices of $\Delta ABC$ to $\ell.$ Prove that $A'A_1,$ $B'B_1,$ and $C'C_1$ are concurrent. The solution makes use of complex numbers. Assume $G$ is the center of the coordinate system, $\ell$ coincides with the $x$-axis, and the vertices are given by $A=2e^{it},$ $B=2\omega e^{it},$ $C=2\omega^{2} e^{it},$ where $\omega = e^{\displaystyle i\frac{2\pi}{3}}.$ Note that $\omega$ satisfies the equation $\omega^{2}+\omega +1=0.$ We further find $A'=(\omega +\omega^{2} )e^{it}=-e^{it},$ $B'=-\omega e^{it},$ $C'=-\omega^{2}e^{it}.$ In addition, the projections on $\ell$ are given by their coordinates: $A_1 =(2\cos t,0),$ $\displaystyle B_1=(2\cos \left(t+\frac{2\pi}{3}\right), 0),$ $\displaystyle C_1=(2\cos \left(t+\frac{4\pi}{3}\right), 0).$ We are thus in a position to write the equations of the three lines: $\displaystyle\begin{cases} A'A_1: & x\sin t - 3y\cos t =\sin 2t,\\ B'B_1: & x\sin\left(t+\frac{2\pi}{3}\right)-3y\cos\left(t+\frac{2\pi}{3}\right)=\sin 2\left(t+\frac{2\pi}{3}\right),\\ C'C_1: & x\sin\left(t+\frac{4\pi}{3}\right)-3y\cos\left(t+\frac{4\pi}{3}\right)=\sin 2\left(t+\frac{4\pi}{3}\right). \end{cases}$ The three lines concur iff $\displaystyle\left|\begin{array}{ccc} \,\sin t & \cos t & \sin 2t\\ \sin\left(t+\frac{2\pi}{3}\right) & \cos\left(t+\frac{2\pi}{3}\right) & \sin 2\left(t+\frac{2\pi}{3}\right)\\ \sin\left(t+\frac{4\pi}{3}\right) & \cos\left(t+\frac{4\pi}{3}\right) & \sin 2\left(t+\frac{4\pi}{3}\right) \end{array} \right|=0.$ (Here the common factor $-3$ has been removed from the second column, which does not affect whether the determinant vanishes.) To verify that this is indeed so, add the last two rows to the first one, whose terms then become $\displaystyle Im\left(( 1+\omega +\omega^{2}) e^{it}\right),\; Re\left(( 1+\omega +\omega^{2}) e^{it}\right),\; Im\left(( 1+\omega^{2} +\omega^{4}) e^{2it}\right),$ each clearly equal to $0$ (for the last one, note that $\omega^{4}=\omega,$ so $1+\omega^{2}+\omega^{4}=1+\omega+\omega^{2}=0$). The above problem has been posted by Dào Thanh Oai at the GeoGebra tube; the solution, posted at the CutTheKnotMath facebook page, is by Leo Giugiuc. Angle Trisectors on Circumcircle Equilateral Triangles On Sides of a Parallelogram Pompeiu's Theorem Pairs of Areas in Equilateral Triangle The Eutrigon Theorem Equilateral Triangle in Equilateral Triangle Seven Problems in Equilateral Triangle Spiral Similarity Leads to Equilateral Triangle Parallelogram and Four Equilateral Triangles A Pedal Property in Equilateral Triangle Miguel Ochoa's van Schooten Like Theorem Two Conditions for a Triangle to Be Equilateral Incircle in Equilateral Triangle When Is Triangle Equilateral: Marian Dinca's Criterion Barycenter of Cevian Triangle Excircle in Equilateral Triangle Converse Construction in Pompeiu's Theorem Wonderful Trigonometry In Equilateral Triangle 60° Angle And Importance of Being The Other End of a Diameter One More Property of Equilateral Triangles Van Khea's Quickie Equilateral Triangle from Three Centroids
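For readers who want a quick independent check of the determinant identity, here is a short SymPy sketch. It is my own addition and not part of the original solution: it builds the coefficient rows $(\sin s,\,-3\cos s,\,\sin 2s)$ for $s=t,\,t+2\pi/3,\,t+4\pi/3$ and confirms, numerically and symbolically, that the determinant vanishes for every $t$, which is the concurrency condition.

```python
import sympy as sp

t = sp.symbols('t', real=True)

def row(shift):
    # Coefficients (a, b, c) of the line a*x + b*y = c joining the midpoint
    # -exp(i*(t+shift)) to the foot of the perpendicular (2*cos(t+shift), 0).
    s = t + shift
    return [sp.sin(s), -3*sp.cos(s), sp.sin(2*s)]

M = sp.Matrix([row(0), row(2*sp.pi/3), row(4*sp.pi/3)])
det = M.det()

# Numerical spot checks: the determinant is zero for arbitrary values of t.
for val in (0.3, 1.1, 2.5):
    assert abs(float(det.subs(t, val).evalf())) < 1e-12

print(sp.simplify(det))  # expected to reduce to 0 symbolically as well
```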
CommonCrawl
Heat capacity - Einstein and Debye models Harmonic oscillator Phonons are a crystal's way of storing thermal energy. Each chemical bond is treated as an oscillator (effectively a spring) whose fundamental frequency is determined by the elastic constant (spring constant) and the masses of the attached atoms. However, since no bond is isolated but rather part of a crystal lattice, oscillations do not occur on individual bonds. Instead, vibration modes are excited along certain lattice directions. As long as the oscillations can be treated as elastic, i.e. Hooke's law can be taken to be valid, the quantum-mechanical harmonic oscillator model determines the energy levels of a particular mode and its harmonics: $$E_n=\left(n+\frac{1}{2}\right)\hbar\omega\qquad,$$ where $n$ is the vibrational quantum number. The ½ term represents the zero-point energy corresponding to the fundamental mode and is a consequence of the uncertainty principle. The Boltzmann distribution allows us to determine the relative population of each energy level as a function of temperature: $$\frac{N_n}{N}=\frac{\exp{\left(\frac{-n\hbar\omega}{k_BT}\right)}}{\sum_s\exp{\left(\frac{-s\hbar\omega}{k_BT}\right)}}$$ We can use this to calculate the average quantum number, $\langle n\rangle$, i.e. the number of phonons exciting a particular mode. The average quantum number is equal to the weighted sum of the quantum numbers of each level $s$, where each level is weighted with its relative population: $$\langle n\rangle=\sum_ss\frac{N_s}{\sum_sN_s}\quad\color{grey}{=0\frac{N_0}{N_0+N_1+N_2+\dots}+1\frac{N_1}{N_0+N_1+N_2+\dots}+2\frac{N_2}{N_0+N_1+N_2+\dots}+\dots}$$ The first three terms of the sum are shown in grey to illustrate the workings of the summation. Since the partition function in the denominator is the same for each term of the outer sum, we can move the outer sum sign into the numerator - again, the first terms of the ratio of sums are shown in grey to demonstrate the validity of this move: $$\qquad=\frac{\sum_ss\exp{\left(\frac{-s\hbar\omega}{k_BT}\right)}}{\sum_s\exp{\left(\frac{-s\hbar\omega}{k_BT}\right)}} \quad\color{grey}{=\frac{0N_0+1N_1+2N_2+\dots}{N_0+N_1+N_2+\dots}}\qquad.$$ By taking the factor $s$ out of the exponential function based on the fact that an exponent comprising a product is equivalent to applying the two powers sequentially, we can re-write the ratio as $$\qquad=\frac{\sum_ss\left(\exp{\left(\frac{-\hbar\omega}{k_BT}\right)}\right)^s}{\sum_s\left(\exp{\left(\frac{-\hbar\omega}{k_BT}\right)}\right)^s} =\frac{\bbox[lightblue]{\sum_ssx^s}}{\bbox[lightgreen]{\sum_sx^s}}\qquad\color{grey}{\left[a^{bc}=(a^b)^c\right]},$$ using $x=\exp{\left(\frac{-\hbar\omega}{k_BT}\right)}$ as temporary shorthand. This emphasises that the denominator is the geometric series, which converges to $$\bbox[lightgreen]{\sum_sx^s}=\frac{1}{1-x}\qquad(\textrm{for}\,x\lt 1)\qquad.$$ The numerator is equal to the product of $x$ and the derivative of the geometric series: $$\bbox[lightblue]{\sum_ssx^s}=x\frac{{\rm d}}{{\rm d}x}\sum_sx^s=x\frac{{\rm d}}{{\rm d}x}\frac{1}{1-x}=x\left(-\frac{1}{(1-x)^2}\right)\frac{{\rm d}}{{\rm d}x}(1-x)=\frac{x}{(1-x)^2}\qquad,$$ as can be seen by comparing the first few terms of both sums: $$\color{grey}{0x^0+1x^1+2x^2+\dots=x\frac{{\rm d}}{{\rm d}x}(x^0+x^1+x^2+\dots)}\qquad.$$ These terms can be substituted for the sums in the denominator and numerator, respectively. 
Since they are quite similar, they simplify nicely: $$\langle n\rangle=\frac{\bbox[lightblue]{\sum_ssx^s}}{\bbox[lightgreen]{\sum_sx^s}} =\frac{\frac{x}{(1-x)^2}}{\frac{1}{1-x}} =\frac{x}{1-x} =\frac{\exp{\left(\frac{-\hbar\omega}{k_BT}\right)}}{1-\exp{\left(\frac{-\hbar\omega}{k_BT}\right)}}\qquad.$$ We can further simplify by expanding the fraction by the positive counterpart of the exponential: $$\qquad=\frac{\exp{\left(\frac{-\hbar\omega}{k_BT}\right)}}{1-\exp{\left(\frac{-\hbar\omega}{k_BT}\right)}}\cdot\frac{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}} =\frac{1}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1}\qquad,$$ leaving a function with a single positive exponential. This form is known as the Planck distribution. With it, it becomes possible to calculate the average quantum number of a system of oscillators without having to consider each individual energy level individually; only the temperature and the fundamental frequency have to be known. The total vibrational energy of the lattice can then be calculated by substituting the average quantum number into the formula for the energy of a specific level of the harmonic oscillator: $$E_{vib}=\sum_i\left(\langle n_i\rangle+\frac{1}{2}\right)\hbar\omega_i\approx\sum_i\langle n_i\rangle\hbar\omega_i =\sum_i\frac{\hbar\omega_i}{\exp{\left(\frac{\hbar\omega_i}{k_BT}\right)}-1}\qquad.$$ The sum in this formula is over the different modes (oscillations having different wave vectors) and polarisations (longitudinal and two transversal) rather than over different energy levels of the same mode (as the average quantum number takes care of that). Since there are many modes in a crystal, we can substitute the sum with an integral: $$E_{vib}=\int D(\omega)\frac{\hbar\omega}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1}{\rm d}\omega\qquad.$$ In it, the density of states, $D(\omega)$, describes how many different modes exist in a frequency band ${\rm d}\omega$. The energy formula in this form is reasonably general - the only severe assumption made is that the oscillators are harmonic, i.e. operate in an elastic regime according to Hooke's law. Beyond that, the particular form chosen for the density of states reflects our model of the crystal and therefore contains our understanding of the vibrational properties of the crystal. We will be looking at two relatively simple models which can be solved analytically in the next two boxes on this page, followed by a more general approach requiring numerical solution strategies. Once we have determined the vibrational energy and its dependence on the temperature, we can calculate the vibrational heat capacity, $c_{vib}$, by differentiating the energy with respect to temperature: $$c_{vib}=\frac{{\rm d}E_{vib}}{{\rm d}T} =\frac{{\rm d}}{{\rm d}T}\int D(\omega)\frac{\hbar\omega}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1}{\rm d}\omega$$ As the integral and the derivative in this equation relate to different variables (frequency and temperature, respectively), we can swap them around. Since the density of states (unlike their population!) 
does not change with temperature, it can be taken outside the derivative along the other factors independent of temperature: $$\qquad=\int D(\omega)\hbar\omega\frac{{\rm d}}{{\rm d}T}\left(\frac{1}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1}\right){\rm d}\omega\qquad.$$ Differentiating involves applying the chain rule twice: $$\qquad=\int D(\omega)\hbar\omega\left(-\frac{1}{\left(\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1\right)^2}\right)\frac{{\rm d}}{{\rm d}T}\left(\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1\right){\rm d}\omega$$ $$\qquad=\int D(\omega)\hbar\omega\left(-\frac{1}{\left(\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1\right)^2}\right)\exp{\left(\frac{\hbar\omega}{k_BT}\right)}\frac{\hbar\omega}{k_B}\left(-\frac{1}{T^2}\right){\rm d}\omega\qquad.$$ With some tidying up, we get a formula for the heat capacity within the same limits of applicability as for the vibrational energy: $$c_{vib}=\int D(\omega)\frac{\hbar^2\omega^2\exp{\left(\frac{\hbar\omega}{k_BT}\right)}}{k_BT^2\left(\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1\right)^2}{\rm d}\omega=k_B\int D(\omega)\frac{x^2{\rm e}^x}{({\rm e}^x-1)^2}{\rm d}\omega\qquad\left(\textrm{where}\,x=\frac{\hbar\omega}{k_BT}\right)\qquad.$$ The subsitution $x$ is made frequently in the literature for brevity's sake. Einstein model In order to calculate the vibrational heat capacity of a solid we have to find a suitable model representing the solid and infer the appropriate density of states from it. A simple model for this purpose is the Einstein model. It is based on the assumption that all oscillators have the same frequency $\omega_0$. The density of states is then simply $$D(\omega)=3N\delta(\omega-\omega_0)\qquad,$$ where the delta function rejects all frequencies that are not equal to $\omega_0$. The factor $3N$ arises from the fact that each of the $N$ atoms in the crystal has three degrees of freedom of motion. The resulting vibrational energy and heat capacity therefore become $$E_{vib}=\int D(\omega)\frac{\hbar\omega}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1}{\rm d}\omega=\frac{3N\hbar\omega_0}{\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}-1}\qquad,\,\textrm{and}$$ $$c_{vib}=\int D(\omega)\frac{\hbar^2\omega^2\exp{\left(\frac{\hbar\omega}{k_BT}\right)}}{k_BT^2\left(\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1\right)^2}{\rm d}\omega=\frac{3N\hbar^2\omega_0^2\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}}{k_BT^2\left(\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}-1\right)^2}\qquad,$$ respectively. To assess whether this rather simple model produces realistic results, we investigate the high- and low-temperature limits of the heat capacity. In the high-temperature limit, we expect the heat capacity to approach the value of 3R (where R is the gas constant): $$\lim_{T\to\infty}c_{vib}=3N_Ak_B=3R\qquad.$$ This is known as the rule of Dulong and Petit and agrees quite well with experimental observations. It is based on the equipartition theorem of thermodynamics, which assigns $\frac{1}{2}k_BT$ per degree of freedom to both the potential and kinetic energies of an atom or molecule. 
Unfortunately, in the case of the Einstein model, it is not immediately clear what the high-temperature limit might be as the different temperature-dependent terms pull $c_{vib}$ in different directions: $$\lim_{T\to\infty}c_{vib}=\lim_{T\to\infty}\frac{3N\hbar^2\omega_0^2\color{red}{\cancel{\color{black}{\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}}}^1}}{k_B\color{red}{\cancel{\color{black}{T^2}}^{\infty}}\color{red}{\cancel{\color{black}{\left(\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}-1\right)^2}}^0}}\qquad.$$ We can use de l'Hôpital's rule to resolve this problem. It states that the limit of the ratio of two functions is equal to the limit of their derivatives (as long as both numerator and denominator are differentiable): $$\lim_{x\to x_0}\frac{f(x)}{g(x)}=\lim_{x\to x_0}\frac{\quad\frac{{\rm d}f(x)}{{\rm dx}}\quad}{\frac{{\rm d}g(x)}{{\rm d}x}}\qquad.$$ If we bring the $T^2$ factor onto the numerator, both numerator and denominator approach zero as the temperature goes to infinity, and we can use de l'Hôpital's rule and differentiate both separately: $$\lim_{T\to\infty}c_{vib}=\lim_{T\to\infty}\frac{\frac{3N\hbar^2\omega_0^2}{k_BT^2}\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}}{\left(\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}-1\right)^2}=\lim_{T\to\infty}\frac{\frac{{\rm d}}{{\rm d}T}\left(\frac{3N\hbar^2\omega_0^2}{k_BT^2}\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}\right)}{\frac{{\rm d}}{{\rm d}T}\left(\left(\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}-1\right)^2\right)}\qquad.$$ The numerator takes a combination of the chain rule and the product rule to solve, the denominator requires two consecutive applications of the chain rule: $$\qquad=\lim_{T\to\infty}\frac {\frac{3N\hbar^2\omega_0^2}{k_B}\left[\left(-\frac{2}{T^3}\right)\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}+\frac{1}{T^2}\frac{\hbar\omega_0}{k_B}\left(-\frac{1}{T^2}\right)\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}\right]} {2\left(\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}-1\right)\left(-\frac{\hbar\omega_0}{k_BT^2}\right)\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}}$$ After tidying up, we are still left with a ratio of two functions which both approach zero in the high-temperature limit. 
At least, the temperature dependence has simplified, so we should be able to find the limit by applying de l'Hôpital's rule again: $$\qquad=\lim_{T\to\infty}\frac {\frac{3N\hbar\omega_0}{T}\left(1+\frac{\hbar\omega_0}{2k_BT}\right)} {\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}-1} =\lim_{T\to\infty}\frac {\frac{{\rm d}}{{\rm d}T}\left[\frac{3N\hbar\omega_0}{T}\left(1+\frac{\hbar\omega_0}{2k_BT}\right)\right]} {\frac{{\rm d}}{{\rm d}T}\left(\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}-1\right)}\qquad.$$ This takes another application of the product and chain rule, respectively: $$\qquad=\lim_{T\to\infty}\frac {3N\hbar\omega_0\left[\left(-\frac{1}{T^2}\right)\left(1+\frac{\hbar\omega_0}{2k_BT}\right)+\frac{1}{T}\frac{\hbar\omega_0}{2k_B}\left(-\frac{1}{T^2}\right)\right]} {\frac{\hbar\omega_0}{k_B}\left(-\frac{1}{T^2}\right)\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}}\qquad,$$ and after tidying up, we finally have a ratio of finite values in the high-temperature limit: $$\qquad=\lim_{T\to\infty}\frac {3N\hbar\omega_0\left(-\frac{1}{T^2}-\frac{\hbar\omega_0}{2k_BT^3}-\frac{\hbar\omega_0}{2k_BT^3}\right)} {-\frac{\hbar\omega_0}{k_BT^2}\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}} =\lim_{T\to\infty}3Nk_B\frac{1+\color{red}{\cancel{\color{black}{\frac{\hbar\omega_0}{k_BT}}}^0}}{\color{red}{\cancel{\color{black}{\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}}}^1}}=3Nk_B$$ Even better, the predicted limit matches the expectation according to Dulong and Petit! The plot of the Einstein heat capacity above shows that the Dulong-Petit limit is approached faster at relatively low frequencies than at higher ones. Note that even a frequency considered "low" in phonon terms is typically in the low THz range. Clearly, the Einstein model is spot on in the high-temperature limit, despite its simplicity. What about the low-temperature limit, though? In this case, we're not just looking at the value of the heat capacity in the limit of $T\to 0$ but rather at the shape of the curve as it begins to rise. By inspecting the order of magnitude of the argument of the exponential, we find that $$O\left(\frac{\hbar\omega_0}{k_BT}\right)=\frac{10^{-34}\,2\pi\,10^{12}}{10^{-23}O(T)}\approx\frac{10^2}{O(T)}\qquad,$$ where the $2\pi$ comes from the fact that the formula contains the angular frequency. In the limit of temperatures of no more than a few Kelvins, the argument of the exponential will be of the order of 100, so the exponential will be a very large number, compared to which we can neglect the 1 in the denominator: $$c_{vib}=\frac{\frac{3N\hbar^2\omega_0^2}{k_BT^2}\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}}{\left(\exp{\left(\frac{\hbar\omega_0}{k_BT}\right)}\color{red}{\cancel{\color{black}{-1}}}\right)^2}\approx\frac{3N\hbar^2\omega_0^2}{k_BT^2}\exp{\left(-\frac{\hbar\omega_0}{k_BT}\right)}\qquad.$$ The diagram below shows the behaviour of the Einstein model (bold colours) and its low-temperature approximation (soft colours). The temperature at which the low-temperature approximation deviates from the full model depends on the frequency, but as long as the heat capacity is below R (the gas constant), the approximation is quite good in relation to the model. However, it demonstrates that the Einstein model produces a $T^{-2}{\rm e^{-1/T}}$ dependence. This doesn't reflect the experimentally observed $T^3$ dependence at all. The deviation from the observed relationship is shown magnified in the figure on the right. 
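Before turning to the Debye model, a short numerical sketch (my own illustrative addition, not part of the original notes) makes the two Einstein limits explicit: the molar heat capacity $c_{vib}=3R\,x^{2}{\rm e}^{x}/({\rm e}^{x}-1)^{2}$ with $x=\hbar\omega_0/(k_BT)$ approaches $3R$ at high temperature and is exponentially suppressed at low temperature. The fundamental frequency of 5 THz below is an assumed value chosen only for illustration.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
R    = 8.314462618       # J/(mol K)

omega0  = 2 * np.pi * 5e12        # assumed fundamental frequency (5 THz)
theta_E = hbar * omega0 / kB      # corresponding Einstein temperature, ~240 K

def c_einstein(T):
    x = theta_E / T
    return 3 * R * x**2 * np.exp(x) / np.expm1(x)**2   # J/(mol K)

for T in (5, 50, 300, 3000, 30000):
    print(f"T = {T:6d} K   c = {c_einstein(T):10.4f} J/(mol K)")

# At high T the values approach 3R = 24.94 J/(mol K) (Dulong-Petit);
# at low T they collapse roughly like (theta_E/T)^2 * exp(-theta_E/T),
# which is much faster than the observed T^3 behaviour.
```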
We have to conclude that the Einstein model is too simplistic to describe low-temperature modes in crystals adequately, even though it is perfectly fine at higher temperatures. Debye model To capture the low-temperature behaviour of the heat capacity correctly, it is necessary to go beyond the simple assumption of the Einstein model that all modes in the crystal have the same fundamental frequency. Instead, we need to identify and count the different modes present based on a model of the crystal structure. Modes are standing waves in the crystal. Therefore, the possible wave numbers for a particular crystallographic orientation are given by $$k=0,\pm\frac{2\pi}{L},\pm\frac{4\pi}{L},\dots,\frac{N\pi}{L}\qquad,$$ where $L$ and $N$ are the size of the grain and the number of atoms along that orientation, respectively. The figure shows the situation for a small crystal six unit cells wide. Oscillations with wave vectors $|k|\gt\frac{N\pi}{L}$ fall outside the Brillouin zone boundary, i.e. they replicate the same atom displacements that are already represented by another, smaller, wavevector. Counting the different wave numbers, it is clear that there is one $k$ value per $\frac{2\pi}{L}$, or, analoguously in 3D, one $k$ per $\left(\frac{2\pi}{L}\right)^3$. To count the total number of modes (wave vectors) contained in the crystal, we need to multiply the number density of modes, $\left(\frac{L}{2\pi}\right)^3$, by the reciprocal space volume taken up by all wave vectors within a sphere of radius $k$: $$N=\left(\frac{L}{2\pi}\right)^3\left(\frac{4}{3}\pi k^3\right)=\frac{Vk^3}{6\pi^2}\qquad,$$ where $V$ is the macroscopic volume of the crystal grain. The density of states is the derivative of $N$ with respect to frequency - literally how densely the states are spaced along the frequency axis: $$D(\omega)=\frac{{\rm d}N}{{\rm d}\omega}=\frac{V}{6\pi^2}\frac{{\rm d}}{{\rm d}\omega}k^3=\frac{V}{6\pi^2}3k^2\frac{{\rm d}k}{{\rm d}\omega} =\frac{Vk^2}{2\pi^2}\frac{{\rm d}k}{{\rm d}\omega}\qquad.$$ From this point onward, we need to have a model of the crystal which lets us determine the dispersion relation, $\omega(k)$, from which $\frac{{\rm d}k}{{\rm d}\omega}$ can be calculated. A relatively simple but analytically solvable model is the Debye model, which assumes that the crystal can be treated as an isotropic elastic medium. Of course this is a rather coarse approximation at least on short length scales, where the periodic pattern of atoms is anything but isotropic. In this model, wave vector and frequency are proportional to one another. The constant of proportionality is the speed of sound, $v_s$, in the medium: $$\omega=v_sk\qquad.$$ This is analogous to the dispersion relation in optics, where the speed of light takes on the role of the speed of sound. With this dispersion relation, we can calculate the Debye density of states: $$D(\omega)=\frac{V\omega^2}{2\pi^2v_s^2}\frac{{\rm d}}{{\rm d}\omega}\frac{\omega}{v_s}=\frac{V\omega^2}{2\pi^2v_s^3}\qquad,$$ and with it the vibrational energy: $$E_{vib}=\int D(\omega)\frac{\hbar\omega}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1}{\rm d}\omega =\int_0^{\omega_D}3\frac{V\omega^2}{2\pi^2v_s^3}\frac{\hbar\omega}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1}{\rm d}\omega\qquad.$$ Here, the factor 3 reflects the fact that we have three different polarisations (one longitudinal, two transversal) for each wave vector in a primitive lattice. 
The upper integration boundary is reduced from infinity to the Debye frequency, the maximum (fundamental) frequency that we can expect in the crystal lattice. This is based on the observation (above) that there is one $k$ per primitive unit cell, producing a total number of modes (as seen above) of $$N=\frac{Vk^3}{6\pi^2}=\frac{V\omega^3}{6\pi^2v_s^3}\qquad.$$ The cutoff frequency is therefore $$\omega_D=v_s\sqrt[3]{\frac{6\pi^2N}{V}}\qquad,$$ and the corresponding cutoff wave vector is $$k_D=\frac{\omega_D}{v_s}=\sqrt[3]{\frac{6\pi^2N}{V}}\qquad.$$ With the substitution $$x=\frac{\hbar\omega}{k_BT}\quad\Rightarrow\quad \omega=\frac{k_BTx}{\hbar}\quad\Rightarrow\quad \frac{{\rm d}\omega}{{\rm d}x}=\frac{k_BT}{\hbar}\quad\Rightarrow\quad {\rm d}\omega=\frac{k_BT}{\hbar}{\rm d}x\qquad,$$ the vibrational energy becomes $$E_{vib}=\frac{3V\hbar}{2\pi^2v_s^3}\frac{k_B^3T^3}{\hbar^3}\frac{k_BT}{\hbar}\int_0^{x_D}\frac{x^3}{{\rm e}^x-1}{\rm d}x =\frac{3Vk_B^4T^4}{2\pi^2v_s^3\hbar^3}\int_0^{x_D}\frac{x^3}{{\rm e}^x-1}{\rm d}x\qquad,$$ which is often given in the literature. The Debye temperature, $\Theta_D$, is defined as $$\Theta_D=\frac{\hbar\omega_D}{k_B}=\frac{\hbar v_s}{k_B}\sqrt[3]{\frac{6\pi^2N}{V}}\qquad.$$ It is a material constant which allows us to normalise the temperature dependence of the vibrational properties of different crystals to observe common behaviour on a reduced temperature scale (i.e. when plotting thermodynamic functions against $\frac{T}{\Theta_D}$). Finally, we can calculate the Debye heat capacity by differentiating the vibrational energy with respect to temperature: $$c_{vib}=\frac{\partial E_{vib}}{\partial T} =\frac{3V\hbar}{2\pi^2v_s^3}\frac{\partial}{\partial T}\int_0^{\omega_D}\frac{\omega^3}{\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1}{\rm d}\omega =\frac{3V\hbar}{2\pi^2v_s^3}\int_0^{\omega_D}\omega^3\left(-\frac{1}{\left(\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1\right)^2}\right)\frac{\hbar\omega}{k_B}\exp{\left(\frac{\hbar\omega}{k_BT}\right)}\left(-\frac{1}{T^2}\right){\rm d}\omega$$ $$\qquad=\frac{3V\hbar^2}{2\pi^2v_s^3k_BT^2}\int_0^{\omega_D}\frac{\omega^4\exp{\left(\frac{\hbar\omega}{k_BT}\right)}}{\left(\exp{\left(\frac{\hbar\omega}{k_BT}\right)}-1\right)^2}{\rm d}\omega\qquad.$$ With the substitutions above, this can be expressed in terms of the Debye temperature as: $$c_{vib}=\frac{9Nk_BT^3}{\Theta_D^3}\int_0^{x_D}\frac{x^4{\rm e}^x}{({\rm e}^x-1)^2}{\rm d}x$$ For very low temperatures, the upper integration boundary $x_D=\frac{\Theta_D}{T}$ approaches infinity, and the infinite integral expands into a convergent series, i.e. it reduces to a constant factor. Therefore, the experimentally observed $T^3$ dependence of the heat capacity at low temperatures is reproduced faithfully by the Debye model. The Debye model assumes that the phonon density of states follows a parabolic, i.e. monotonously increasing, dependence of the fundamental frequency of lattice oscillations. This is a consequence of the approximation of the crystal lattice as an isotropic medium. Because a crystal lattice is in fact highly anisotropic, with different wave vectors in different crystallographic orientations, the actual density of states is a lot more complex. 
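A companion numerical sketch (again my own illustrative addition) shows the same two limits for the Debye formula: evaluating $c_{vib}=9Nk_B(T/\Theta_D)^3\int_0^{\Theta_D/T}x^4{\rm e}^x/({\rm e}^x-1)^2\,{\rm d}x$ per mole, the heat capacity tends to $3R$ at high temperature, while at low temperature the integral tends to the constant $4\pi^4/15$ and the $T^3$ law emerges. The Debye temperature of 400 K is an assumed value for illustration only.

```python
import numpy as np
from scipy.integrate import quad

R       = 8.314462618     # J/(mol K)
theta_D = 400.0           # assumed Debye temperature (K), illustrative only

def c_debye(T):
    xD = theta_D / T
    # Start slightly above 0 to avoid the (removable) 0/0 at the lower endpoint.
    integral, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x)**2, 1e-12, xD)
    return 9 * R * (T / theta_D)**3 * integral        # J/(mol K)

for T in (2, 5, 10, 100, 400, 2000):
    print(f"T = {T:6.0f} K   c = {c_debye(T):12.6f} J/(mol K)")

# Low-T check: the Debye T^3 law, c ~ (12*pi**4/5) * R * (T/theta_D)**3.
print(12 * np.pi**4 / 5 * R * (2 / theta_D)**3)   # matches c_debye(2) closely
# High-T check: c_debye(2000) is already very close to 3R = 24.94 J/(mol K).
```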
The Debye model predicts both the high and low temperature limits correctly, but in order to model the rich behaviour of a real crystal in the medium temperature range, it is necessary to establish realistic dispersion relations and determine the actual density of states, via $\frac{{\rm d}k}{{\rm d}\omega}$, from them. The Debye model describes the heat capacity of solids well in both the low and high temperature limits. However, the assumption made that the medium is isotropic, i.e. that the vibrational states do not depend on the spatial direction of the wave is far too simplistic to describe the rich vibrational properties of real crystals at intermediate temperatures. Phonon spectroscopy produces experimental dispersion relations that show the variation with crystallographic orientation and capture this richness.
CommonCrawl
Does one square centimeter of the sun core really radiate this amount of energy? I have been thinking that since the core of the sun maintains its temperature at 15 million degrees Kelvin, then every cubic centimeter of this core is receiving a certain amount of energy to keep it at this temperature. So I was thinking what would happen if we could take a small sphere, say, 0.6 cm in radius (so the area is 1 square cm) and put it in the earth's atmosphere. Would it radiate the same energy that it consumed to get to 15 million degrees in the first place? I made some calculations using the Stefan-Boltzmann law, here's what I got: $$ P = Area \times 5.67 \times 10^{-8} \times e \times T^4 $$ So for easier calculations let's assume that it is a black body, and the surrounding temperature is negligible compared to 15 million degrees, so I got: $$ P = 0.0001 \times 5.67\times10^{-8} \times (15\times10^{6})^4 \approx 2.9\times10^{17} \,\mathrm{joules/second} $$ Now, isn't this a huge amount of energy? I know the body has to be maintained at 15 million degrees to keep radiating energy at this rate, about 0.8 cubic cm of the sun core is about 120 grams, so the more important question is for how long will this rate of radiation continue. I honestly feel that there are mistakes in what I have calculated, so please any correction would be greatly appreciated. homework-and-exercises thermodynamics thermal-radiation sun Abanob Ebrahim Worth noting is that while the temperature at the sun's core is quite high, the power density is surprisingly low — only a few hundred watts per cubic meter. This adds up to a lot of power because the core of the sun is still very, very large. – rob♦ May 24 '14 at 22:52 Yes I understand that. But it takes energy to heat the material in the core to that degree, so shouldn't we get the same amount of energy when we let this material cool again? If using the Stefan-Boltzmann equation for calculating that is unclear because we don't know the time it takes for the material to cool, then should we use the specific heat capacity instead? – Abanob Ebrahim May 25 '14 at 8:02 Yes, heat capacity matters; if your computed power is correct, the next step is to ask how long it would take to cool off. – rob♦ May 25 '14 at 14:32 Yes a small marble of material from the center of the sun would radiate hugely, and cool off quickly, if it was not at the center of the sun. But since it is at the center, it receives an almost equal radiation from its surroundings and does not cool off. The marble does not contain all that much energy. It would be something like a nuclear bomb. A bomb creates a momentary nuclear reaction that heats a small amount of material to a temperature around 15 million degrees. It then cools off, transferring that energy to the surrounding countryside. See this and this. In the sun, the marble produces power continuously, but the power is tiny. Volume for volume, the sun produces heat at about the same rate as a compost heap. It produces a lot of heat because it is large. See this Wikipedia article on the Sun. The reason the center of the Sun is so hot is that it has very poor cooling. Energy produced in the center either leaves or raises the temperature of the center. It takes millions of years for energy to reach the surface. Likewise, the surface of the Sun has poor cooling. The power radiated by the surface of the sun is large because the surface is large.
The power per square meter is that of a black body radiator at 6000 K. See this calculator. It is about $7 MW/m^2$, or $7 W/mm^2$, which is not small. Energy leaves the surface of the Sun only through radiation. This makes the surface heat up until there is enough radiation. Some computer chips give off more than 7 Watts of waste heat through each square millimeter of surface. With good cooling, they stay below 200C. If space was full of air and you had a giant fan, you could keep the surface of the sun at that temperature. mmesser314 Actually the link you provided says that at 6000 K, the power is 70 MW rather than 7 MW. But I also don't understand this number, shouldn't we just divide the sun luminosity by its surface area? That actually yields ~260 MW per meter cubed. – Abanob Ebrahim May 25 '14 at 8:08 I can't make the numbers work out. If I change to 5778K, that becomes 63 MW/sq m. Luminosity/area = $3.8*10^{26} W$/$6.1 * 10^{24} m^2$ = $62 W/m^2$. There is a missing factor of mega somewhere. Also, mean intensity = $2.0 * 10^7 W/m^2 sr$. Ignoring darkening of the limb, assume the sun radiates over a hemisphere = $2\pi$ sr. That gives $12.6 MW/m^2$. The intensity at 90 deg will be higher, but not by a factor of 5. – mmesser314 May 25 '14 at 16:01 @AbanobEbrahim 260 MW per meter cubed or squared? – Jens May 31 '18 at 18:16 The core is under a lot of pressure, so you couldn't remove it from the sun's gravity well without it expanding and cooling to the point fusion stops. Only a small part of the core's energy makes it to the surface. Each second the sun's core converts 600 million tons of hydrogen into 595 million tons of helium. The missing 5 million tons is converted to energy. In one second this is equivalent to 1 billion mega-ton hydrogen bombs. This is enough energy to fuel America's energy needs for 7 million years. This energy though takes a long time to escape the core. In fact, by the time a photon from the core travels 320 km to a point where the sun is cool enough that electrons bind to atoms again and works its way to the surface, it can take a million years. (link) The energy managing to escape from the entire surface of the sun is on the order of $3.9*10^{26}J/s$ (based on 5800K and 696,000km radius of sun). Fusion does produce a lot of energy and is the reason you hear it discussed a lot. user6972 From your statement, one can say that the 5 million tons of mass energy is (roughly) $3.9\times 10^{26} J$, since the surface temperature and the surface area are not changing. – LDC3 May 24 '14 at 17:52 @LDC3 no because that is at the core which is a different temperature and size. – user6972 May 24 '14 at 18:28 The surface doesn't generate energy, only the core. Therefore, the energy dissipated at the surface must come from the core. The only source of energy is from fusion, unless you think some of the energy is coming from the temperature increase due to compression. – LDC3 May 24 '14 at 18:39 @LDC3 Sorry you're right, it's about the same. I meant that some is lost directly from the core in neutrino production in the ppI, ppII and ppIII chains. – user6972 May 24 '14 at 21:19 Did you really mean 210 Kelvinmeter? Or was that a typo? Perhaps you meant to write "210km", though that sounds like too short a distance for the sentence to make sense to me.
– kasperd Jun 14 '15 at 6:45 Your $15\times10^6$ temperature is at the Sun's core, where H fusion takes place. The radiated energy occurs at the Sun's "surface", at a temperature of only about 5700 K. Michael Luciuk
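To tie the numbers in this thread together, here is a small Python sketch that I am adding for illustration; it is not taken from any of the answers. It recomputes the Stefan-Boltzmann power for the full surface of the 0.6 cm sphere from the question (about 4.5 cm², so somewhat more than the 1 cm² used there) and then makes a crude estimate of how quickly such a blob would cool. The core density of roughly 150 g/cm³ and the mean particle mass of about 0.6 atomic mass units for the ionized plasma are assumptions of mine, not values given above.

```python
import numpy as np

sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
k_B   = 1.381e-23    # J/K
m_u   = 1.661e-27    # kg

T = 15e6             # K, assumed core temperature
r = 0.6e-2           # m, radius of the small sphere from the question

area   = 4 * np.pi * r**2        # ~4.5e-4 m^2, i.e. ~4.5 cm^2
volume = 4 / 3 * np.pi * r**3    # ~0.9 cm^3

P = sigma * area * T**4          # radiated power, ~1.3e18 W for this area
print(f"radiated power ~ {P:.2e} W")

# Rough thermal energy content, treating the blob as an ideal monatomic
# plasma with an assumed density of 150 g/cm^3 and mean particle mass 0.6 u.
rho  = 150e3                     # kg/m^3 (assumed core density)
mass = rho * volume              # ~0.14 kg
N    = mass / (0.6 * m_u)
E    = 1.5 * N * k_B * T
print(f"thermal energy ~ {E:.2e} J, naive cooling time ~ {E / P:.1e} s")
```

On this crude estimate the blob would radiate away its heat content in a tiny fraction of a second, which is consistent with the first answer's point that the marble only stays hot because everything around it is just as hot.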
CommonCrawl
Understanding socio-economic inequalities in the prevalence of asthma in India: an evidence from national sample survey 2017–18 Rashmi Rashmi ORCID: orcid.org/0000-0003-4709-3569 1, Pradeep Kumar ORCID: orcid.org/0000-0003-4259-820X 2, Shobhit Srivastava ORCID: orcid.org/0000-0002-7138-4916 2 & T. Muhammad ORCID: orcid.org/0000-0003-1486-7038 3 Today, over 300 million people live with asthma worldwide and India alone is home to 6% of the children and 2% of the adults suffering from this chronic disease. A common notion of disparity persists in terms of health outcomes across the poor and better-off sections of society. Thus, there is a need to explore the contribution of various factors to socio-economic inequality in asthma prevalence in India. Data for the study were carved out from the 75th round of National Sample Survey (NSS), collected by the National Sample Survey Organization (NSSO) during 2017–18. The sample for this study consisted of 555,289 individuals, whose data were used for the analysis. Descriptive statistics were used to show the distribution of the study population. Further, bivariate and multivariate analysis was performed to identify the factors associated with asthma prevalence. The concentration index was used to measure the inequality. Further, we used decomposition analysis to find the contribution of factors responsible for socio-economic status-related inequality in asthma prevalence. The prevalence of asthma was 2 per 1000 in the whole population; however, the prevalence differed significantly across age groups. Age, sex, educational status, place of residence, cooking fuel, source of drinking water, household size and garbage disposal facility were significantly associated with asthma prevalence in India. It was found that asthma was more concentrated among individuals from higher socioeconomic status (concentration index: 0.15; p < 0.05). While exploring socio-economic inequality in asthma, the richest wealth status (53.9%) was the most significant contributor in explaining the majority of the inequality, followed by urban place of residence (37.9%) and individuals in the 45–65 years age group (33.3%). Additionally, individuals aged 65 years and above (27.9%) and household size of less than four members (14.7%) contributed to explaining the socio-economic inequality in asthma. Due to the heterogeneous nature of asthma, associations between different socio-economic indicators and asthma can be complex and may point in different directions. Hence, considering the concentration of asthma prevalence in vulnerable populations and its long-term effect on general health, a comprehensive programme to tackle chronic respiratory diseases and asthma, in particular, is urgently needed. With the passage of time and changing lifestyles, the world is combating the growing threat of non-communicable diseases (NCDs). According to the World Health Organization (WHO), NCDs are responsible for 71% of all deaths worldwide [1] and add a substantial health and economic burden to nations that are already battling communicable and infectious diseases. In India, Global Burden of Disease (GBD) Collaborators showed a state-level variation in epidemiological transition and found that the burden of NCDs like cardiovascular diseases, respiratory diseases and diabetes had escalated at an alarming rate [2].
While the contribution of cardiovascular diseases to total mortality in India was found to be the largest [3], the prevalence of the respiratory disease asthma also increased by 8.6% during 1990–2016 [4]. According to the WHO definition, asthma is a chronic, long-term condition that inflames and narrows the airways in the lungs, from time to time causing chest tightness, shortness of breath, wheezing and coughing [5]. Today, over 300 million people live with asthma worldwide [6] and India alone is home to 6% of the children and 2% of the adults suffering from this chronic disease [7]. Although asthma contributes a smaller share of total mortality among non-communicable diseases, it still poses a serious concern as most of the deaths caused are preventable [8]. With no exact cure, this disease can be triggered by genetic and environmental factors. An Australian cross-sectional study found that asthma among children was highly associated with the direct and indirect effects of genetics, environment and allergens [9]. A longitudinal cohort study from Tucson provides evidence that chronic asthma among adults was highly associated with onset in childhood and persistent wheezing in early life [10]. Past research has also indicated that early onset of asthma is linked with age [11], sex [12], genetic factors [13], parental smoking [14], active smoking in childhood [15], preterm birth [16], larger families [17] and childhood obesity [18]. Besides these factors, a few psychological determinants have been significantly linked with asthma among individuals [19]. Extant research from India has documented the prevalence, trends and socio-economic, demographic and environmental predictors of asthma morbidity across different sections of the population [20,21,22,23]. A study using the second round of India Human Development Survey data linked the burden of asthma with households using unclean fuels, individuals who are less educated and those who belong to a poorer section of society [24]. Further, a study showed the role of various occupations among adults in shaping the risk of four non-communicable diseases, including asthma [23]. Studies have also linked the influence of stressful psychosocial circumstances and spatial heterogeneity with asthma prevalence in India [25, 26]. Across developed and developing countries, asthma mortality has declined while prevalence has steadily increased in the past few years, and the reasons for this increase are not yet well defined [27]. Despite advancements in technology to diagnose and manage asthma in developed countries, a study from New York City reveals that poor housing conditions, outdoor air pollution and noxious land uses can contribute to a higher incidence of asthma in urban neighbourhoods [28]. A further study revealed that adolescents residing in peri-urban areas of developing countries are more prone to asthma [29]. The same study shows that a history of cigarette smoking and indoor pollution increases the likelihood of reported asthma and asthma symptoms. This brings our attention to the situation of developing nations, which are already burdened by infectious diseases and additionally carry a population that is not yet diagnosed or is unaware of the risk it faces. The risk is further increased in a country like India, where the burden of non-communicable diseases is escalating [30] along with a sharp rise in urban settlements of the poor [31]. The rationale for the current analysis is as follows.
First, despite having minimal mortality, asthma remains a constant threat because there is no definitive cure. Moreover, growing urbanization and industrialization and changing lifestyles have increased the chances of asthma in both the poorer and richer sections of society. Second, so far little evidence from India has examined the extent of socioeconomic inequality in NCD prevalence, especially asthma, which remains highly prevalent in both younger and older age groups of society [32]. Lastly, due to variations in geographical, environmental, social, economic and cultural factors across the states of India, a state-wise analysis of inequality in asthma prevalence among different socio-economic groups can present reliable estimates across India. Therefore, as per the conceptual framework provided in Fig. 1, the current study aims to explore the factors associated with asthma and the contribution of those factors to socioeconomic inequality in the prevalence of asthma in India. Conceptual framework of asthma morbidity Data and methods Data for this study were drawn from the 75th round of the National Sample Survey (NSS), schedule 25.0 data on key indicators of household social consumption in India: health, collected by the National Sample Survey Organization (NSSO) during 2017–18. The 75th round survey was aimed at generating basic quantitative information on the health sector. The NSS adopted a multistage stratified sampling design with census villages and urban blocks as the first-stage units for the rural and urban areas, respectively, and households as the second-stage units, ensuring regional and social group representation. A detailed methodology of data collection and sampling design was published elsewhere [33]. The major objective of the survey was to determine the prevalence rate at the state and national level of general morbidity by age group and gender, as well as of specific categories of ailment. The survey collected data from 555,372 individuals. We removed the missing cases (83 cases) from the data to provide better estimates. The resulting sample size for this study was 555,289 individuals, whose data were used in the analysis. A direct question was asked of the respondent regarding the nature of ailment, such as particular medical treatment received as an in-patient of a medical institution during the last 365 days ('Reported Diagnosis and/or Main Symptom'). The survey collected data on 89 diseases/symptoms of the household members. Asthma was the binary outcome variable of this study; if a person reported a diagnosis of asthma it was coded as '1', and '0' otherwise. Exposure variables The predictor variables included age of the individual (less than 5, 5–14, 15–29, 30–44, 45–65, and 65 + years), sex (male and female), marital status (never married, currently married and others), educational status (no education, below primary, primary and middle, secondary and above), religion (Hindu, Muslim, and others), and caste (scheduled caste, scheduled tribe, other backward class, and others). The caste system in India has its roots in the earlier varna (color) system. The varnas represented a social hierarchy with purity- and pollution-related notions, based on the principle that some kinds of work were considered pure and others impure or polluted. Accordingly, the system was set up to delegate the various activities to particular groups of people. 
Thus, the Scheduled Caste includes a group of the population that is socially and financially/economically segregated by its low status in the Hindu caste hierarchy. The Scheduled Castes (SCs) and Scheduled Tribes (STs) are among the most disadvantaged socio-economic groups in India. The OBC is a group of intermediate categories identified as "educationally, economically and socially backward". The "other" caste category is identified as having higher social status [34, 35]. Other predictors were place of residence (rural and urban), monthly per capita consumption expenditure (MPCE) (poorest, poorer, middle, richer, and richest), cooking fuel (clean and others), source of drinking water (improved and unimproved), type of toilet facility (improved and unimproved), household size (less than 4 members and 4 or more members), and garbage disposal (have an arrangement and no arrangement). Descriptive statistics were used to show the distribution of the study population. Further, bivariate and multivariable analyses were used to identify the factors associated with asthma. Moreover, wealth quintile was the key variable to measure the economic status of the household. To study the variation in asthma, health expenditure, choice of healthcare facility etc. across the population at different levels of living, a measure of the level of living was derived for each surveyed household based on information collected on its usual monthly consumer expenditure. This allowed estimates to be generated separately for five equal-sized classes of the population, i.e., quintile classes of household expenditure, also known as monthly per capita consumption expenditure (MPCE) [36]. The study used household monthly per capita expenditure (in Rupees) for the decomposition analysis, and for the calculation of the concentration index (CI) it used MPCE divided into five equal-sized population groups. Concentration index The concentration index represents the magnitude of inequality by measuring the area between the concentration curve and the line of equality, and is calculated as twice the weighted covariance between the outcome and the fractional rank in the wealth distribution, divided by the variable mean. The concentration index can be written as follows: $$C = \frac{2}{\mu}\,\mathrm{cov}\left( y_{i}, R_{i} \right)$$ where \(C\) is the concentration index; \(y_{i}\) is the outcome variable for individual \(i\); \(R_{i}\) is the fractional rank of individual \(i\) in the distribution of socio-economic position; \(\mu\) is the mean of the outcome variable in the sample and \(\mathrm{cov}\) denotes the covariance [37]. The index value lies between −1 and +1. If the curve lies above the line of equality, the concentration index takes a negative value, indicating a disproportionate concentration of the outcome among the poor (pro-poor). Conversely, if the curve lies below the line of equality, the concentration index takes a positive value, indicating a disproportionate concentration of the outcome among the rich (pro-rich). In the absence of socio-economic-related inequality, the concentration index is zero. Decomposition of the concentration index The study used Wagstaff decomposition analysis to decompose the concentration index. Wagstaff's decomposition demonstrated that the concentration index can be decomposed into the contributions of each factor to the income-related inequalities [38]. 
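Before turning to the decomposition details below, a minimal computational sketch of the concentration index just defined may be helpful. This is not the authors' code (the paper uses survey-weighted estimation in Stata); the data and variable names here are illustrative assumptions only.

```python
# Illustrative sketch of C = (2/mu) * cov(y, R): y is a binary outcome
# (e.g. reported asthma) and R is the weighted fractional rank of each
# individual in the living-standard (MPCE) distribution.
import numpy as np

def concentration_index(outcome, ses, weights=None):
    y = np.asarray(outcome, dtype=float)
    s = np.asarray(ses, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()

    order = np.argsort(s, kind="stable")   # poorest first
    y, w = y[order], w[order]
    R = np.cumsum(w) - w / 2.0              # weighted fractional rank
    mu = np.sum(w * y)
    cov = np.sum(w * (y - mu) * (R - np.sum(w * R)))
    return 2.0 * cov / mu

# toy data in which illness is slightly concentrated among the better-off,
# so the index should come out positive (pro-rich)
rng = np.random.default_rng(0)
mpce = rng.lognormal(mean=8.0, sigma=0.5, size=50_000)
risk = np.where(mpce > np.median(mpce), 0.003, 0.001)
asthma = rng.binomial(1, risk)
print(round(concentration_index(asthma, mpce), 3))
```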
The decomposition is based on the linear regression relationship between the outcome variable \(y_{i}\), the intercept \(\alpha\), the relative contribution of \(x_{ki}\) and the residual error \(\varepsilon_{i}\): $$y_{i} = \alpha + \sum \beta_{k} x_{ki} + \varepsilon_{i}$$ where \(\varepsilon_{i}\) is an error term. Given the relationship between \(y_{i}\) and \(x_{ki}\), the CI for \(y\) (C) can be rewritten as: $$C = \sum \left( \frac{\beta_{k} \overline{x}_{k} }{\mu } \right)C_{k} + \frac{GC_{\varepsilon} }{\mu }$$ where \(\mu\) is the mean of \(y_{i}\), \(\overline{x}_{k}\) is the mean of \(x_{k}\), \(\beta_{k}\) is the coefficient from a linear regression of the outcome variable, \(C_{k}\) is the concentration index for \(x_{k}\) (defined analogously to C), and \(GC_{\varepsilon}\) is the generalized concentration index for the error term (\(\varepsilon_{i}\)). Here C is the sum of two components. First, the determinants or 'explained' factors: these indicate the proportion of inequality in the outcome (asthma) variable that is explained by the selected explanatory factors, i.e., the \(x_{k}\). Second, a residual or 'unexplained' factor \(\left( \frac{GC_{\varepsilon} }{\mu } \right)\), indicating the inequality in the health variable that cannot be explained by the selected explanatory factors across various socioeconomic groups. The analysis was adjusted for the complex survey design (in this case multistage sampling) by using the svyset command in Stata 14. The svyset command also adjusted the estimates for survey weights. Table 1 presents the socio-economic profile of the study population in India. About 3.3% of the population belong to the age group 65 years and above. About 51.7% of the population was male and 48.3% was female. Nearly 50.5% of the population was currently married and 44.4% was never married. Almost 26.1% of the population was not educated and 30.3% had secondary or higher education. About 8 in 10 people in India belong to the Hindu religion. About one-tenth of the population was from the Scheduled Tribe category and, additionally, about 2 in 10 people belong to the Scheduled Caste category. About 70.5% of the population belong to a rural place of residence. Nearly 20.5% of the population belong to the poorest wealth quintile and 19.9% of the population belong to the richest wealth quintile. Nearly 55.2% of households used clean cooking fuel, 96.5% used an improved source of drinking water and 75.2% used improved toilet facilities. About 83% of households had a household size of four or more. Nearly 59% of households had no arrangement for garbage disposal. Table 1 Socio-economic and demographic profile of study population, 2017–18 Table 2 represents the prevalence of asthma and its logistic regression estimates by background characteristics in India. Only the logistic regression estimates will be interpreted, as they provide the adjusted figures. Individuals aged 65 + years had a significantly higher likelihood (67.92 times) of suffering from asthma in comparison to individuals aged less than five years [OR: 67.92; CI: 37.75–122.2]. Females had a 14% significantly lower likelihood to suffer from asthma than males [OR: 0.86; CI: 0.76–0.98]. Individuals who were divorced/separated/widowed had a 47% significantly higher likelihood to suffer from asthma in comparison to currently married individuals. Individuals with no educational status had an 81% significantly higher likelihood to suffer from asthma than individuals who had secondary and above educational status [OR: 1.81; CI: 1.50–2.20]. 
Individuals from the Muslim religion had a 29% significantly higher likelihood to suffer from asthma than individuals from the Hindu religion [OR: 1.29; CI: 1.10–1.52]. Individuals from an urban place of residence had a 45% significantly higher likelihood to suffer from asthma than individuals from a rural place of residence. Individuals from the Scheduled Tribe category had a 54% lower likelihood to suffer from asthma in reference to individuals from the 'other' caste category [OR: 0.46; CI: 0.34–0.61]. Table 2 Prevalence of Asthma and logistic regression estimates by background characteristics, 2017–18 Further, individuals from the richest wealth quintile had a 76% significantly higher likelihood to suffer from asthma than individuals from the poorest wealth quintile [OR: 1.76; CI: 1.43–2.17]. Individuals from households with an unclean source of cooking fuel had a 37% significantly higher likelihood to suffer from asthma than individuals from households with a clean source of cooking fuel [OR: 1.37; CI: 1.19–1.59]. Individuals from households with an unimproved source of drinking water had a 34% significantly higher likelihood to suffer from asthma than individuals from households with an improved source of drinking water [OR: 1.34; CI: 1.03–1.75]. Individuals from households with fewer than 4 members had a 29% significantly lower likelihood to suffer from asthma than individuals from households with 4 or more members [OR: 0.71; CI: 0.62–0.82]. Table 3 represents the state-wise prevalence and concentration index (CCI) value for asthma in India. Daman and Diu had the highest prevalence of asthma (20.7 per 1000), followed by Kerala (8.1 per 1000) and West Bengal (4.9 per 1000). Additionally, the highest value of the concentration index for asthma was for Chandigarh (0.694; p < 0.05), followed by Mizoram (0.560; p < 0.05) and West Bengal (0.395; p < 0.05). Table 3 State-wise prevalence and concentration index value for Asthma, 2017–18 Figure 2 reveals the concentration curve for asthma prevalence among the Indian population, and it was found that asthma was more concentrated among individuals from higher socioeconomic status (CCI: 0.15; p < 0.05). The adjusted (Erreygers normalization) CCI was 0.005. Concentration curve for Asthma prevalence among the Indian population, 2017–18 Table 4 represents decomposition analysis estimates for asthma prevalence in India. Coefficients were obtained by applying logit regression. The absolute contribution is the product of the elasticity and the CCI, whereas the percentage contribution is the absolute contribution as a proportion of the total, multiplied by 100. In explaining socio-economic inequality for asthma, the richest wealth status (53.9%) was the most significant contributor, explaining the majority of the inequality, followed by urban place of residence (37.9%) and individuals from the age group 45–65 years (33.3%). Additionally, individuals aged 65 years and above (27.9%) and household size of less than four members (14.7%) contributed to explaining socio-economic inequality for asthma. Table 4 Decomposition analysis estimates for asthma prevalence among Indian population, 2017–18 There is a common notion that some disparities persist in terms of health outcomes across the poor and better-off sections of society. And the problem intensifies when individuals are left undiagnosed due to lack of awareness and access to health care services. Ample evidence reveals that changing lifestyles and growing levels of stress in day-to-day life easily trigger asthma in the richer section of society [39,40,41]. 
Thus, the study explored the factors associated with asthma, the socio-economic inequality in asthma prevalence, and the contribution of various factors to that inequality in India. Additionally, we have shown significant differences in state-wise prevalence rates of asthma in India. The study reported that females had a lower asthma prevalence as compared to males. In general, childhood asthma prevalence is higher in boys than in girls (especially before puberty). However, asthma becomes more prevalent in females than in males in adulthood. The inconsistent finding of our study can be a result of not stratifying by age groups while analysing the gender factor. We have also highlighted an unexpected pattern of higher prevalence of asthma among individuals with higher economic status measured by MPCE. The finding is contrary to many Western studies that showed poor economic status and low income to be risk factors for the development of asthma, asthmatic wheeze and chronic productive cough [42,43,44]. It has been revealed that the higher prevalence of asthma found in the poor compared to the affluent population in developed nations, and in the affluent compared to the poor population in developing nations, reflects cultural and contextual differences [45]. The increased access to healthcare among people with higher economic status may explain the current finding, where an improved healthcare system contributes to the ascertainment of diseases such as asthma among the economically better-off populations. Studies have also reported a higher likelihood of under-diagnosis and under-reporting of non-communicable diseases including asthma among lower socioeconomic groups in India [32]. Consistent with previous studies [43, 46], the contribution of educational status to the socioeconomic inequality in asthma prevalence was higher than that of any other socioeconomic variable in the study. The results are also in accordance with several studies that found low educational level to be strongly associated with asthma and respiratory symptoms [42, 44, 47]. Importantly, the positive association of wealth quintile with asthma prevalence and, simultaneously, the negative association of education with asthma in the current analysis suggest future investigation of the underlying mechanisms in such associations. Further, the finding that the larger the household size, the greater the prevalence of asthma is inconsistent with previous studies that reported an inverse association of the number of siblings with the prevalence of asthma and called it the 'sibling effect' [48, 49]. Similarly, the odds of suffering from asthma were higher among men than women in our study, which is contrary to several earlier studies in India that revealed a higher prevalence of the disease among women, who are more exposed to poor housing conditions [50, 51]. Besides, urban rates of asthma prevalence were higher than rural rates in our study, which is consistent with the finding that urbanization, through which exposure to biomass fuel smoke increases, is an environmental risk factor for asthma [52]. In a country where a large proportion of the population still relies on solid and biomass fuels for cooking, a significant association was found between cooking fuel and the prevalence of asthma in the present study. It was found that clean cooking fuel is a protective factor against asthma, which is consistent with earlier studies [51, 53, 54]. Further, an increased risk of asthma was found among people from households that have no garbage disposal arrangements. 
Another study that evaluated the prevalence of asthma in relation to residence in houses built on a former dumping area containing industrial and household wastes showed a similar finding, namely that the risk of asthma was higher in the dump cohort than in people living outside the site [55]. Consistent with a recent study in India [26], a significant association of an improved source of drinking water with lower asthma prevalence was also found in the present study, which is supported by evidence showing that exposure to heavy metals and arsenic in drinking water increases the prevalence of respiratory illnesses [56,57,58]. Other potential mechanisms for such associations, including exposure to allergens, need to be further explored in future studies. The highest prevalence of asthma in the coastal states of Kerala and West Bengal and the UT of Daman & Diu may be attributed to their geographical features; people there consume more fish, which may contribute to the higher burden of asthma in these States/UT [51]. We also found a relative difference between the lowest and highest region-wise prevalence rates that ranged from 0 to 20.7 individuals suffering from asthma per thousand individuals, indicating that there are wide regional variations in the prevalence of asthma in India. Although similar findings are shown in previous studies [26, 51], the reasons for the variations are unclear and require further investigation. The large nationally representative sample is a strength of our study, which allows comparisons between states and urban–rural settings, and the ability to examine socio-economic and housing patterns of asthma risk. However, the study is limited by its cross-sectional design. Additionally, biological or social factors related to asthma were not measured in this study, which may have influenced and contributed to the gender and place-of-residence-related differences observed. Besides, the higher prevalence of asthma in older age groups in comparison to the younger population might be due to potential bias in reporting the disease, such as potential misclassification between chronic obstructive pulmonary disease (COPD) and asthma in the older age groups. Advancing age, male sex, residence in an urban area, lower education, higher MPCE and poor housing conditions such as unclean cooking fuel, an unimproved source of drinking water and unarranged garbage disposal were associated with significantly higher odds of having asthma. Due to the heterogeneous nature of asthma, associations between different SES indicators and asthma can be complex and may point in different directions. Hence, considering the concentration of asthma prevalence in vulnerable populations and its long-term effect on general health, a comprehensive programme to tackle chronic respiratory diseases and asthma, in particular, is urgently needed. Future studies are warranted on the higher prevalence of asthma among wealthy people observed in the current study. Besides, a state-specific analysis must be conducted to explore the substantial differences in asthma prevalence and the different socioeconomic and environmental risk factors in Indian states. Further longitudinal studies should be conducted to confirm the temporal sequence of the results and further elucidate the impact of socioeconomic and contextual disadvantages on the incidence of asthma over the course of time. The study utilises a secondary source of data which is freely available in the public domain through http://mospi.nic.in/NSSOa. World Health Organization. 
Noncommunicable diseases [Internet]. 2018 [cited 2020 Dec 16]. Available from: https://www.who.int/health-topics/noncommunicable-diseases#tab=tab_1 Dandona L, Dandona R, Kumar GA, Shukla DK, Paul VK, Balakrishnan K, et al. India state-level disease burden initiative collaborators. Nations within a nation: Variations in epidemiological transition across the states of India 1990–2016 in the global burden of disease study. Lancet. 2017;390(10111):2437–60. Prabhakaran D, Jeemon P, Sharma M, Roth GA, Johnson C, Harikrishnan S, et al. The changing patterns of cardiovascular diseases and their risk factors in the states of India: the Global Burden of Disease Study 1990–2016. Lancet Glob Heal. 2018;6(12):e1339–51. Salvi S, Kumar GA, Dhaliwal RS, Paulson K, Agrawal A, Koul PA, et al. The burden of chronic respiratory diseases and their heterogeneity across the states of India: the Global Burden of Disease Study 1990–2016. Lancet Glob Heal. 2018;6(12):e1363–74. WHO. Asthma Factsheets. World Health Organization. 2020. Beasley R, Hancox RJ. Reducing the burden of asthma: time to set research and clinical priorities. Lancet Respir Med. 2020. Global Asthma Network. The Global Asthma Report. 2018. Sadatsafavi M, Rousseau R, Chen W, Zhang W, Lynd L, FitzGerald JM. The preventable burden of productivity loss due to suboptimal asthma control: a population-based study. Chest. 2014;145(4):787–93. Gold DR, Wright R. Population disparities in asthma. Annu Rev Public Health. 2005;26(107):89–113. Stern DA, Morgan WJ, Halonen M, Wright AL, Martinez FD. Wheezing and bronchial hyper-responsiveness in early childhood as predictors of newly diagnosed asthma in early adulthood: a longitudinal birth-cohort study. Lancet. 2008;372(9643):1058–64. Guddattu V, Swathi A, Nair NS. Household and environment factors associated with asthma among Indian women: a multilevel approach. J Asthma. 2010;47(4):407–11. Melgert BN, Ray A, Hylkema MN, Timens W, Postma DS. Are there reasons why adult asthma is more common in females? Curr Allergy Asthma Rep. 2007;7(2):143–50. Martinez FD. Genes, environments, development and asthma: a reappraisal. Eur Respir J. 2007;29(1):179–84. Mitchell EA, Beasley R, Keil U, Montefort S, Odhiambo J, Group IPTS. The association between tobacco and the risk of asthma, rhinoconjunctivitis and eczema in children and adolescents: analyses from Phase Three of the ISAAC programme. Thorax 2012;67(11):941–9. Gilliland FD, Islam T, Berhane K, Gauderman WJ, McConnell R, Avol E, et al. Regular smoking and asthma incidence in adolescents. Am J Respir Crit Care Med. 2006;174(10):1094–100. Sonnenschein-Van Der Voort AMM, Arends LR, de Jongste JC, Annesi-Maesano I, Arshad SH, Barros H, et al. Preterm birth, infant weight gain, and childhood asthma risk: a meta-analysis of 147,000 European children. J. Allergy Clin. Immunol. 2014;133(5):1317–29. Strachan DP, Aït-Khaled N, Foliaki S, Mallol J, Odhiambo J, Pearce N, et al. Siblings, asthma, rhinoconjunctivitis and eczema: a worldwide perspective from the International Study of Asthma and Allergies in Childhood. Clin Exp Allergy. 2015;45(1):126–36. Mitchell EA, Beasley R, Björkstén B, Crane J, Garcia-Marcos L, Keil U, et al. The association between BMI, vigorous physical activity and television viewing and the risk of symptoms of asthma, rhinoconjunctivitis and eczema in children and adolescents: ISAAC Phase Three. Clin Exp Allergy. 2013;43(1):73–84. Van Lieshout RJ, MacQueen G. Psychological factors in asthma. Allergy Asthma Clin Immunol. 2008;4(1):12. 
To T, Stanojevic S, Moores G, Gershon AS, Bateman ED, Cruz AA, et al. Global asthma prevalence in adults: findings from the cross-sectional world health survey. BMC Public Health. 2012;12(1):204. Aggarwal AN, Chaudhry K, Chhabra SK, D Souza GA, Gupta D, Jindal SK, et al. Prevalence and risk factors for bronchial asthma in Indian adults: a multicentre study. Indian J. Chest Dis. Allied Sci. 2006;48(1):13. Arokiasamy P, Karthick K, Pradhan J. Environmental risk factors and prevalence of asthma, tuberculosis and jaundice in India. Int J Environ Health. 2007;1(2):221–42. Patel S, Ram U, Ram F, Patel SK. Socioeconomic and demographic predictors of high blood pressure, diabetes, asthma and heart disease among adults engaged in various occupations: evidence from India. J Biosoc Sci. 2020;52(5):629–49. Kumar P, Ram U. Patterns, factors associated and morbidity burden of asthma in India. PLoS ONE. 2017;12(10):e0185938. Subramanian SV, Ackerson LK, Subramanyam MA, Wright RJ. Domestic violence is associated with adult and childhood asthma prevalence in India. Int J Epidemiol. 2007;36(3):569–79. Singh SK, Gupta J, Sharma H, Pedgaonkar SP, Gupta N. Socio-economic correlates and spatial heterogeneity in the prevalence of asthma among young women in India. BMC Pulm Med. 2020;20(1):1–12. Beasley R, Semprini A, Mitchell EA. Risk factors for asthma: Is prevention possible? Lancet. 2015;386(9998):1075–85. Corburn J, Osleeb J, Porter M. Urban asthma and the neighbourhood environment in New York City. Health Place. 2006;12(2):167–79. Robinson CL, Baumann LM, Romero K, Combe JM, Gomez A, Gilman RH, et al. Effect of urbanisation on asthma, allergy and airways inflammation in a developing country setting. Thorax. 2011;66(12):1051–7. Arokiasamy P. India's escalating burden of non-communicable diseases. Lancet Glob Health. 2018;6(12):e1262–3. Ooi GL, Phua KH. Urbanization and slum formation. J Urban Health. 2007;84(1):27–34. Vellakkal S, Subramanian S V, Millett C, Basu S, Stuckler D, Ebrahim S. Socioeconomic inequalities in non-communicable diseases prevalence in India: Disparities between self-reported diagnoses and standardized measures. PLoS ONE. 2013;8(7):e68219. National Sample Survey Office. Key Indicators of Social Consumption in India Health. Ministry of Statistics and Programme Implementation New Delhi; 2017. Subramanian S V., Nandy S, Irving M, Gordon D, Lambert H, Smith GD. The mortality divide in India: The differential contributions of gender, caste, and standard of living across the life course. Am J Public Health. 2006. Jensen R. Caste, culture, and the status and well-being of widows in India. Anal Econ Aging. 2005;I(August):357–76. Singh L, Arokiasamy P, Singh PK, Rai RK. Determinants of gender differences in self-rated health among older population: Evidence from India. SAGE Open. 2013;3(2):1–12. O'donnell O, Van Doorslaer E, Wagstaff A, Lindelow M. Analyzing health equity using household survey data: a guide to techniques and their implementation. The World Bank; 2007. Wagstaff A. Socioeconomic inequalities in child mortality: Comparisons across nine developing countries. Bull World Health Organ. 2000;78(1):19–28. Nunes C, Pereira AM, Morais-Almeida M. Asthma costs and social impact. Asthma Res Pract. 2017;3(1):1–11. Rodriguez A, Brickley E, Rodrigues L, Normansell RA, Barreto M, Cooper PJ. Urbanisation and asthma in low-income and middle-income countries: a systematic review of the urban-rural differences in asthma prevalence. Thorax. 2019;74(11):1020–30. 
Barros R, Moreira A, Padrão P, Teixeira VH, Carvalho P, Delgado L, et al. Dietary patterns and asthma prevalence, incidence and control. Clin Exp Allergy. 2015;45(11):1673–80. Hedlund U, Eriksson K, Rönmark E. Socio-economic status is related to incidence of asthma and respiratory symptoms in adults. Eur Respir J. 2006;28(2):303–10. Chittleborough CR, Taylor AW, Dal Grande E, Gill TK, Grant JF, Adams RJ, et al. Gender differences in asthma prevalence: Variations with socioeconomic disadvantage. Respirology. 2010;15(1):107–14. Schyllert C, Lindberg A, Hedman L, Stridsman C, Andersson M, Ilmarinen P, et al. Low socioeconomic status relates to asthma and wheeze, especially in women. ERJ Open Res. 2020;6(3):1–11. GINA. Global Strategy for Asthma Management and Prevention: 2017 Guidelines. Vol. 126, Global Initiative for Asthma. 2017. Sinharoy A, Mitra S, Mondal P. Socioeconomic and Environmental Predictors of Asthma-Related Mortality. J Environ Public Health. 2018;2018. Eagan TML, Gulsvik A, Eide GE, Bakke PS. The effect of educational level on the incidence of asthma and respiratory symptoms. Respir Med. 2004;98(8):730–6. Karmaus W, Botezan C. Does a higher number of siblings protect against the development of allergy and asthma? A review. J Epidemiol Community Health. 2002;56(1):209–17. Farfel A, Tirosh A, Derazne E, Garty BZ, Afek A. Association between socioeconomic status and the prevalence of asthma. Ann Allergy Asthma Immunol. 2010;104(6):490–5. Mishra V. Effect of indoor air pollution from biomass combustion on prevalence of asthma in the elderly. Environ Health Perspect. 2003;111(1):71–8. Agrawal S, Pearce N, Ebrahim S. Prevalence and risk factors for self-reported asthma in an adult Indian population: A cross-sectional survey. Int J Tuberc Lung Dis. 2013;17(2):275–82. Gaviola C, Miele CH, Wise RA, Gilman RH, Jaganath D, Miranda JJ, et al. Urbanisation but not biomass fuel smoke exposure is associated with asthma prevalence in four resource-limited settings. Thorax. 2016;71(2):154–60. Trevor J, Antony V, Jindal SK. The effect of biomass fuel exposure on the prevalence of asthma in adults in India—Review of current evidence. J Asthma. 2014;51(2):136–41. Agrawal S. Effect of indoor air pollution from biomass and solid fuel combustion on prevalence of self-reported asthma among adult men and women in India: Findings from a nationwide large-scale cross-sectional survey. J Asthma. 2012;49(4):355–65. Pukkala E, Pönkä A. Increased incidence of cancer and asthma in houses built on a former dump area. Environ Health Perspect. 2001;109(11):1121–5. Arain MB, Kazi TG, Baig JA, Jamali MK, Afridi HI, Jalbani N, et al. Respiratory effects in people exposed to arsenic via the drinking water and tobacco smoking in southern part of Pakistan. Sci Total Environ. 2009;407(21):5524–30. Dauphiné DC, Ferreccio C, Guntur S, Yuan Y, Hammond SK, Balmes J, et al. Lung function in adults following in utero and childhood exposure to arsenic in drinking water: Preliminary findings. Int Arch Occup Environ Health. 2011;84(6):591–600. Chowdhury S, Mazumder MAJ, Al-Attas O, Husain T. Heavy metals in drinking water: Occurrences, implications, and future needs in developing countries. Sci Total Environ. 2016;569–570:476–88. Authors did not receive any funding to carry out this research. 
International Institute for Population Sciences, Mumbai, India Rashmi Rashmi Department of Survey Research & Data Analytics, International Institute for Population Sciences, Mumbai, India Pradeep Kumar & Shobhit Srivastava Department of Population Policies and Programmes, International Institute for Population Sciences, Mumbai, India T. Muhammad Shobhit Srivastava RR, PK, SS and MT made a substantial contribution to the concept, design of the work, acquisition, analysis and interpretation of data. RR, PK, SS and MT drafted the article or revised it critically for important intellectual content. RR, PK, SS and MT approved the version to be published. Each of RR, PK, SS and MT has participated sufficiently in the work to take public responsibility for appropriate portions of the content. All authors read and approved the final manuscript. Correspondence to T. Muhammad. Rashmi, R., Kumar, P., Srivastava, S. et al. Understanding socio-economic inequalities in the prevalence of asthma in India: an evidence from national sample survey 2017–18. BMC Pulm Med 21, 372 (2021). https://doi.org/10.1186/s12890-021-01742-w Asthma; Socio-economic inequality; Decomposition; India
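As a companion to the Methods section above, the following is a minimal sketch of a Wagstaff-style decomposition of a concentration index. It is not the authors' Stata code: the data are synthetic, the regressors are illustrative, and an unweighted linear probability model stands in for the survey-weighted logit used in the paper.

```python
import numpy as np

def fractional_rank(ses):
    order = np.argsort(ses, kind="stable")
    r = np.empty(len(ses))
    r[order] = (np.arange(len(ses)) + 0.5) / len(ses)
    return r

def conc_index(y, ses):
    r = fractional_rank(ses)
    return 2.0 * np.cov(y, r, bias=True)[0, 1] / np.mean(y)

rng = np.random.default_rng(1)
n = 50_000
mpce = rng.lognormal(8.0, 0.5, n)                              # living-standard measure
urban = rng.binomial(1, np.where(mpce > np.median(mpce), 0.5, 0.2))
age65 = rng.binomial(1, 0.05, n)
risk = 0.001 + 0.002 * (mpce > np.percentile(mpce, 80)) + 0.004 * age65
asthma = rng.binomial(1, risk)

# linear probability model y = a + sum_k beta_k x_k + e
X = np.column_stack([np.ones(n), urban, age65, np.log(mpce)])
beta, *_ = np.linalg.lstsq(X, asthma, rcond=None)
mu = asthma.mean()

C_total = conc_index(asthma, mpce)
explained = 0.0
for k, name in [(1, "urban"), (2, "age 65+"), (3, "log MPCE")]:
    # contribution of x_k: elasticity (beta_k * mean(x_k) / mu) times CI of x_k
    contrib = beta[k] * X[:, k].mean() / mu * conc_index(X[:, k], mpce)
    explained += contrib
    print(f"{name:9s} contribution {contrib:+.4f} ({contrib / C_total:+.0%})")
print(f"total C {C_total:+.4f}   residual {C_total - explained:+.4f}")
```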
All talks are at noon on Monday in E575 Sept 10 everyone Open problem session in E575 (University of Lethbridge) Sept 17 Joy Morris Calculating partition numbers The partition number \(p(n)\) is the number of ways that \(n\) can be partitioned into a sum of smaller positive integers. At the SIAM Discrete Math conference in June, I attended a plenary talk by Ken Ono of Emory on how to calculate partition numbers. This topic incorporates both combinatorics and number theory. Ken Ono was kind enough to give me a copy of his slides so that I could present this topic in our seminar, and I will be using those slides for this talk. Sept 24 Amir Akbary On a Conjecture of Erdös Let \(m\) be an integer bigger than 1 and let \(P(m)\) denote the largest prime divisor of \(m\). In 1962, Erdös conjectured that $$\lim_{n\rightarrow \infty} \frac{P(2^n-1)}{n}=\infty.$$ In 2000, Ram Murty and Siman Wong conditionally resolved this conjecture, under the assumption of a celebrated conjecture in number theory. In this talk I will describe their work. Oct 1 Dave Morris Hamiltonian paths in solvable Cayley digraphs Cayley graphs are very nice graphs that are constructed from finite groups. If the group is abelian, then it is easy to show that the graph has a hamiltonian cycle. It is conjectured that the nonabelian Cayley graphs also have hamiltonian cycles. We will discuss a few recent results (both positive and negative) on the related problem where the graph is replaced by a directed graph, and the finite group is assumed to be solvable (which means it is not too far from being abelian). Oct 15 Soroosh Yazdani Local Szpiro Conjecture The Szpiro conjecture is one of the big conjectures in number theory and Diophantine equations. It is equivalent to the ABC conjecture, and so it implies many interesting results. In this talk I will mention a conjecture that is motivated by the Szpiro conjecture, which seems much less strong than the Szpiro conjecture, even though it still has many interesting Diophantine applications. We will also present a few cases where we can prove this conjecture. Oct 22 Nathan Ng Additive Divisor Sums The divisor function \( d(n) \) equals the number of divisors of an integer \( n \). In this talk I will discuss what is known about additive divisor sums of the shape $$D(N,r)=\sum_{n \le N} d(n) d(n+r)$$ where \( r \) is a fixed positive integer. These sums were introduced by Ingham in 1926, who proved an upper bound for \( D(N,r) \). This was later refined to an asymptotic formula by Estermann and over the years was further sharpened by a succession of authors, including Heath-Brown, Deshouilliers and Iwaniec, Motohashi, and Meurman. More recent evaluations of \( D(N,r) \) makes use of the spectral theory of automorphic forms. I will also discuss more general additive divisor sums of the shape $$D_k(N,r) = \sum_{n \le N} d_k(n) d_k(n+r)$$ where \( k \) is a natural number larger than 2 and where \( d_k(n) \) equals the number of ordered \( k \)-tuples \( (n_1, \ldots, n_k) \) such that \( n = n_1 \cdots n_k \). Oct 29 Majid Shahabi Weil Conjectures In 1949, Weil proposed a set of conjectures about the generating functions which are derived from counting the number of points on an algebraic variety over a finite field. Solving Weil's conjectures was one of the central mathematics projects of the twentieth century. These problems were totally solved by a group of people including Dwork, Grothendieck, and Deligne. 
In this talk, we present a historical background and state the assertions of the Weil conjectures. We further say a few words about the ideas of the proofs. Nov 5 Farzad Aryan The distribution of \(k\)-tuples of reduced residues Let \(q\) be a natural number, and write \(P = \varphi(q)/q\), that is \(P\) is the probability that a randomly chosen integer is relatively prime to \(q\). Let $$ 1 = a_1 < a_2 < \cdots < a_{\phi(q)} < q $$ be the reduced residues mod \(q\) (integers co-prime to \(q\) in increasing order). A quantity of central interest is $$V_\gamma (q) = \sum_{i=1}^{\phi(q)} (a_{i+1}- a_i )^ \gamma .$$ In 1940, Erdős conjectured that $$V_{\gamma }(q) \ll qP^{1-\gamma }.$$ Let \(\mathcal{D}=\lbrace h_1, h_2 , \cdots, h_s \rbrace\) be an admissible set. We call \(a+h_1,\ldots, a+h_s\) an \(s\)-tuple of reduced residues if each of these numbers is co-prime with \(q\). The study of \(s\)-tuples of reduced residues is an analogue of the study of \(s\)-tuples of primes. In this talk we prove estimates about the distribution of \(s\)-tuples of reduced residues and finally we prove an extension of Erdős's conjecture for \(s\)-tuples: $$V^{\mathcal{D}}_{\gamma }(q):=\sum_{a_i < q} ( a_{i+1} - a_i )^ \gamma \ll qP^{-s(\gamma-1) }, $$ where the sum runs over the integers \(1 = a_1 < a_2 < \cdots < q \) for which \(a_i+h_1,\ldots, a_i+h_s\) is an \(s\)-tuple of reduced residues. Nov 23 Chris Godsil Continuous Quantum Walks on Graphs in room B650 (University of Waterloo) If \(A\) is the adjacency matrix of a graph \(X\), then the matrix exponential \(U (t) = \exp(itA)\) determines what physicists term a continuous quantum walk. They ask questions such as: for which graphs are there vertices \(a\) and \(b\) and a \(t\) such that \(| U (t)_{a,b} | = 1\)? The basic problem is to relate the physical properties of the system with properties of the underlying graphs, and to study this we make use of results from the theory of graph spectra, number theory, ergodic theory.... My talk will present some of the progress on this topic. Nov 30 Heinz Bauschke An Invitation to Projection Models in room D610 (UBC Okanagan) Feasibility problems, i.e., finding a solution satisfying certain constraints, are common in mathematics and the natural sciences. If the constraints have simple projectors (nearest point mappings), then one popular approach to these problems is to use the projectors in some algorithmic fashion to approximate a solution. In this talk, I will survey three methods (alternating projections, Dykstra, and Douglas-Rachford), and comment on recent advances and remaining challenges. Dec 3 Mark Thom Squarefree Values of Trinomial Discriminants (UBC) The discriminant of a trinomial of the form \(x^n \pm x^m \pm 1\) has the form \(\pm n^n \pm (n-m)^{n-m} m^m\) when \(n\) and \(m\) are co-prime. We determine necessary and sufficient conditions for identifying primes whose squares never divide the discriminants arising from coprime pairs \((n,m)\). These conditions are adapted into an exhaustive search method, which we use to corroborate a heuristic estimate of the density of all such primes among the odd primes. The same results are used to produce a heuristic estimate of the density of squarefree values of these discriminants. We'll also look at an unlikely-seeming family of divisors of the discriminants, arising from an elementary identity on them. This is joint work with David Boyd and Greg Martin. Past semesters: Fall 2007 Fall 2008 Fall 2009 Fall 2010 Fall 2011
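(Not material from any of the talks above — just a small illustration, for the curious, of the partition numbers \(p(n)\) mentioned in the first abstract, computed in Python with Euler's pentagonal-number recurrence.)

```python
def partitions(n_max):
    """Return [p(0), p(1), ..., p(n_max)] via Euler's pentagonal-number recurrence."""
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = 1 if k % 2 else -1
            for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):  # generalized pentagonal numbers
                if g <= n:
                    total += sign * p[n - g]
            k += 1
        p[n] = total
    return p

print(partitions(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```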
Morsels These are "morsels" — little ideas or results that can be expressed in a short space. They may be exported or extended to full articles at a later time. Convex shadows. Any convex 3D object will have a silhouette of a certain area. Averaged over all viewing angles, the size of the silhouette is exactly a quarter of the object's surface area. Why? By dimensional analysis, the size of the silhouette must be proportional to surface area. By subdividing the surface into shadow-casting pieces, you can prove that the constant doesn't depend on shape as long as the shape is convex. So use the example of a unit sphere to compute its value. The same principle applies in higher dimensions \(n\). The formulas for sphere surface area and volume involve the gamma function, which simplifies if you consider odd and even dimensions separately. Let \(\lambda_n\) designate the constant for \(n\) dimensions. Then: $$\lambda_{n+1} = \begin{cases}\frac{1}{2^{n+1}} {n \choose n/2} & n\text{ even}\\\frac{1}{\pi}\frac{2^n}{n+1} {n \choose {\lfloor n/2\rfloor} }^{-1} & n\text{ odd}\end{cases}$$ $$\lambda_n = \frac{1}{2},\; \frac{1}{\pi},\; \frac{1}{4},\; \frac{2}{3\pi},\; \frac{3}{16},\; \frac{8}{15\pi},\; \ldots .$$ M'aide!. One afternoon, you call all your friends to come visit you. If they all depart their houses at once, travel in straight lines towards you, and are uniformly distributed throughout the surrounding area, how many people should you expect to arrive as time goes by? The expected number of arrivals at each moment is proportional to the time since you called. Imagine a circle, centered on you, whose radius expands at the same rate everyone travels. Whenever the circle engulfs someone's house, that's the exact moment the person arrives at your door. Uniform distribution implies number of new arrivals proportional to circumference; number of cumulative arrivals proportional to area. Same principle generalizes to higher dimensions \(n\). Connect infinity. Extend the board game connect four so that it has infinitely many columns. You win after infinitely many moves if you've constructed an unbroken rightwardly-infinite horizontal sequence of pieces in any row and your opponent hasn't. (You don't need to fill the whole row, and diagonal/vertical chains don't count in this game.) Prove that you can force a draw. Nailing down the proof is subtle, because blocking strategies for lower rows don't always generalize to higher rows. The general principle is to block your opponent infinitely many times in each row, preventing an unbroken chain. The specific principle is: to stop your opponent from reaching height \(k\) in a certain column, don't put any pieces there; wait until they build a tower of height \(k-1\) then put your piece on top. For each row, choose infinite disjoint subsets of \(\mathbb{N}\): \(T_1, T_2, T_3, T_4\). Each \(T_k\) is a predetermined list of columns where you will interrupt your opponent's towers of height \(k\). Whenever your opponent moves, if their piece builds a tower of height \(k-1\) in a column in \(T_k\), put your next piece on top. Otherwise, put your piece in an empty column of \(T_1\) to the right of all pieces played so far. You never put pieces on your opponent's except to block them, and this blocking process cannot be stopped. A key insight is you can't stymie your opponent by building your own interrupting towers, only by stopping theirs. There is no way to build a tower of height 3 or 4 if the other player wants to prevent it. 
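A quick numerical check of the shadow constants \(\lambda_n\) from the convex-shadows morsel above (not part of the original post). It uses the unit ball, where the silhouette is a unit \((n-1)\)-ball and the surface area is \(nV_n\), so \(\lambda_n = V_{n-1}/(nV_n)\):

```python
import math

def ball_volume(n):
    """Volume of the unit n-ball."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def lam(n):
    # silhouette of the unit ball = unit (n-1)-ball; surface area = n * V_n
    return ball_volume(n - 1) / (n * ball_volume(n))

for n in range(1, 7):
    print(n, round(lam(n), 6))
# 0.5, 1/pi ~ 0.318310, 0.25, 2/(3 pi) ~ 0.212207, 3/16 = 0.1875, 8/(15 pi) ~ 0.169765
```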
Blackout Turing machines. A "blackout machine" is a Turing machine where every transition prints a 1 on the tape. The halting problem for blackout machines is decidable. The tape of a blackout machine always consists of a contiguous region of 1s surrounded by infinite blank space on the left and right. Any machine that reads sufficiently many ones in a row must enter a control loop. By measuring the net movement during the loop, you can decide whether the machine will overall go left, right, or stay in place each iteration. Regardless of how many ones are in a row, using modular arithmetic, you can efficiently predict what state the machine will be in when it exits. By this reasoning, a state-space graph will help us fast-forward the looping behavior of the machine. A q-state machine can enter a loop with period 1, 2, 3, …, or at most q. To anticipate its behavior on any-sized string of ones, it's therefore enough to know what the string size is mod 2, mod 3, …, mod q. Thus, for each option of left-side/right-side, each possible state of the machine's control, and each possible value of string length mod 2, mod 3, mod 4,… mod q, the graph has a node. By simulating the machine, you can efficiently compute the transitions between nodes of this graph, recording how the machine-and-tape will look whenever the machine exits the string of ones. (Add a special node for "loops forever within the string of ones" and "halts"). There are (on the order of) q! nodes in this graph—overkill, but still finite. You can simulate the machine in fast-forward using these graph transitions. After simulating a number of transitions equal to the number of nodes in this graph, the machine must either halt, loop forever within the region of ones, or revisit a node (in which case it loops forever, growing the region of ones). Lopped polygons. Lop off the corners of a regular polygon, forming a regular polygon with twice as many edges. Repeat this process until, in the limit, you obtain a circle. As you might expect, that circle turns out to be the incircle of the polygon (i.e. the largest circle contained in it). Interestingly, the cutting process multiplies the original perimeter of the object by \((\pi/n) \cot(\pi/n)\), where \(n\) is the original number of sides. It multiplies the area by the same amount (!). NP-complete morphemes. If you have a language (set) of even-length strings, you can form a language comprising just the first half of each string, or just the second half. Is there a language which is in P, but where each half-language is NP-complete? Is there a language which is NP-complete, but where each half-language is in P? (What are P and NP?1) The answer to both questions is yes. For the first question, make a language of strings where each half contains a hard problem and a solution to the problem in the other half. Then each half-language is hard because it's a hard problem paired with a solution to an unrelated problem. But the combined language is easy because then both problems come with solutions. For the second question, make a language by pairing up easy problems according to a complicated criterion. So the half-languages are easy, because they're just sets of easy problems. But the overall language is hard because deciding whether the pairing is valid is hard. Concrete examples: In the first question, take strings of the form GxHy, where G and H are isomorphic Hamiltonian graphs; x is a Hamiltonian path through G; and y is an isomorphism from G to H. 
(Pad as necessary so Gx and Hy have the same length.) When G and H are separated, the strings x and y provide no useful information, so each half-language is as hard as HAMPATH. When united, GxHy is easy because x and y easily allow you to confirm that G and H are Hamiltonian. In the second question, take strings of the form GH where G and H are graphs and G is isomorphic to a subgraph of H. This is an NP-complete problem. But each half-language is just the set of well-formed graphs, which is easy to check. Juking jukebox. I couldn't tell whether my music player was shuffling songs or playing them in a fixed sequential order starting from the same first song each session. Part of the problem was that I didn't know how many songs were in the playlist, and all I could retain about songs is whether I had heard them before (not, e.g., which songs usually follow which). You can model this situation as a model-selection problem over bitstrings representing whether played songs have been heard (0) or not-heard (1). Every bitstring is consistent with random play order, but only a handful are consistent with sequential play order (...0001111...). Therefore, the longer you listen and remain unsure, the vastly more probable the sequential model is. (Specifically, if you play n songs, there are n+1 outcomes consistent with sequential play out of \(2^n\) total outcomes.) (And yes; turns out my music player was sequential, which was strongly indicated by the fact that I felt unsure it was random in the first place.) Loop counting. If there's a unique two-step path between every pair of nodes in a directed graph, then every node has \(k\) neighbors and the graph has \(k\) loops, where \(k^2\) is the number of nodes in the graph. Astonishingly, you can prove this result using the machinery of linear algebra. You represent the graph as a matrix \(M\) whose \((i,j)\) entry is one if nodes \(i\) and \(j\) are neighbors, or 0 otherwise. The sum of the diagonal entries (the trace of the matrix) tells you the number of loops. The trace is also equal to the sum of the generalized eigenvalues, counting repetitions, so you can count loops in a graph by finding all the eigenvalues of the corresponding matrix. The property about paths translates into the matrix equation \(M^2 = J\), where \(J\) is a matrix of all ones. (The r-th power of a matrix counts r-step paths between nodes.) This matrix \(J\) has a number of special properties—multiplying a matrix by \(J\) computes its row/column totals (i.e. the number of neighbors for a graph!), multiplying \(J\) by \(J\) produces a scalar multiple of \(J\), and \(J\) zeroes-out any vector whose entries sum to zero; this is an \(n-1\) dimensional subspace. The property that \(M^2=J\), along with special properties of \(J\), allows you to conclude that higher powers of \(M\) are all just multiples of \(J\); in particular, examining \(M^3=MJ=JM\) reveals that every node in the graph has the same number of neighbors \(k\). So \(M^3 = kJ\). (And \(k^2 = n\) because \(k^2\) is the number of two-step paths, which lead uniquely to each node in the graph.) Notice that because of this neighbor property, \(M\) sends a column vector of all ones to a column vector of all k's, so \(M\) has an eigenvalue \(k\). Based on the powers of \(M\), \(M\) furthermore has \((n-1)\) generalized eigenvectors with eigenvalue zero. There are always exactly \(n\) generalized eigenvalues (counted with multiplicity), and we've just found all of them. Their sum is \(0+0+0+\ldots+0+k = k\), which establishes the result. 
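A concrete check of the loop-counting claim (not from the original post): the de Bruijn graph on binary strings of length 2 is a directed graph, loops allowed, with a unique two-step path between every ordered pair of nodes, so it should have \(k=2\) out-neighbours per node and \(k=2\) loops among its \(k^2=4\) nodes.

```python
import numpy as np

nodes = ["00", "01", "10", "11"]
# edge u -> v whenever the last symbol of u equals the first symbol of v
M = np.array([[1 if u[1] == v[0] else 0 for v in nodes] for u in nodes])

J = np.ones((4, 4), dtype=int)
assert (M @ M == J).all()           # unique two-step path between every pair
assert (M.sum(axis=1) == 2).all()   # every node has k = 2 (out-)neighbours
assert np.trace(M) == 2             # exactly k = 2 loops (at "00" and "11")
print("checks pass")
```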
The same procedure establishes a more general result: If there's a unique r-step path between every pair of nodes in a graph, then every node has \(k\) neighbors and the graph has \(k\) loops, where \(k^r\) is the number of nodes in the graph. Cheap numbers. The buttons on your postfix-notation calculator each come with a cost. You can push any operator (plus, minus, times, floored division, modulo, and copy) onto the stack for a cost of one. You can also push any integer (say, in a fixed range 1…9) onto the stack; the cost is the value of the integer itself. Find the smallest-cost program for producing any integer. For example, optimal programs for the first few integers are 1, 2, 3, 2@+, 5, 3@+, 13@++, 2@@**, 3@*. (Here @ denotes the operator which puts a copy of the top element of the stack onto the stack.) You can find short programs using branch-and-bound search to search the space of stack states. There are some optimizations: Using dynamic programming (Dijkstra's algorithm), you can cache the lowest-cost path to each state so that you never have to expand a state more than once. Second, you can use the size of the stack state as an admissible heuristic: if a stack has \(n\) items in it, it will take at least \(n-1\) binary operators to collapse it into a single number. There's a nice theorem, too: For any integer, the lowest-cost program only ever needs the constants 1 through 5 (additional constants never offer additional savings). The program never costs more than the integer itself, and in fact for sufficiently large integers (over 34), it costs less than half of the integer. To prove the theorem, first show that every integer \(n\) has a (possibly nonoptimal) program which costs at most \(n\) and which uses the constants 1…5 2. Next, suppose you have an optimal-cost program which uses any set of constants. By the previous result, you can replace each constant with a short script that uses only 1…5, and this substitution won't increase the program's cost. Hence the program-with-substitutions is still optimal, but only uses the digits 1…5, QED. Rolls with Advantage. In D&D5e, certain die rolls have advantage, meaning that instead of rolling the die once, you roll it twice and use the larger result. Similarly, a modifier is a constant you add to the result of a die roll. There's a simple probability nomogram for deciding whether you should prefer to roll (d20 + x) with advantage, or (d20 + y). To use: draw a line from the DC on the first axis to the modifier on the second axis and continue until you hit the third axis. Read the right-hand value for the probability of success when rolling d20, or the left-hand value when rolling d20 with advantage. For example, the orange line shows a situation where DC=16, modifier=+1. With advantage, success rate is around 50%. Without, it's around 30%. The equation governing the odds \(P\) of beating a DC of \(D\) with a modifier of \(X\) when rolling a g-sided die is: $$g (1-P) = D - X - 1 $$ With advantage, the equation is surprisingly similar: $$g \sqrt{1-P} = D - X - 1 $$ And so both can be plotted on the same sum nomogram. Categorical orientations. The pan and ace orientations are optimal in a certain mathematically rigorous sense. They factor out gender (in the definition, you don't need to know the person's gender or the person's model of gender), and they form what is called a categorical adjunction or Galois connection: Consider the space of possible orientations. 
If we assume that each person has exactly one gender and that an orientation comprises a subset of attractive genders, then the space of possible orientations becomes a set \(G \times 2^G\). We can equip \(2^G\) with subset ordering; because genders aren't ordered, we'll equip \(G\) with the identity order (each gender is comparable only to itself). \(G\times 2^G\) has the induced product order. There is a disorientation functor \(\mathscr{U}:G\times 2^G\rightarrow G\) which forgets a person's orientation but not their gender. Observe that the left adjoint of \(\mathscr{U}\) assigns an ace orientation to each person: \(g\mapsto \langle g, \emptyset\rangle\). And the right adjoint of \(\mathscr{U}\) assigns the pan orientation to each person: \(g\mapsto \langle g, G\rangle\). Each one represents a certain optimally unassuming solution to recovering an unknown orientation3. $$\mathbf{Ace} \dashv \mathbf{Disorient} \dashv \mathbf{Pan}$$ P.S. The evaluation map \(\epsilon\) ("apply") detects same-gender attraction: In programming, evaluation sends arguments \(\langle f, x\rangle\) to \(f(x)\). Every subset of \(G\) is [equivalent to] a characteristic function \(G\rightarrow \{\text{true}, \text{false}\}\). So if we apply eval to an orientation in \(G\times 2^G\), we determine whether it includes same-gender attraction. Adjunctions as rounding. (A fun math fact followup.) There is a map \(i:\mathbb{N}\hookrightarrow \mathbb{R}\) which includes the natural numbers into the reals (by "typecasting"). Considering \(\mathbb{N}\) and \(\mathbb{R}\) as poset categories, the inclusion has both a left adjoint and a right adjoint. As you can guess, they're the floor and ceiling functions: if the floor of a number is at least \(n\), then the number itself is at least \(n\), and conversely. This makes floor, perhaps surprisingly, the right adjoint, as \(i(n) \leq x \iff n \leq \lfloor x \rfloor\). $$\lceil \cdot \rceil\;\; \dashv \;\; i \;\;\dashv\;\; \lfloor \cdot \rfloor$$ Geometric orientations. There is a vast potential for amusing orientation-based results in differential geometry, which is a subject I don't yet have the expertise to formalize. For example, if you conceive of a manifold of possible genders with a designated base point for one's own gender, then you can define attraction to a gender by the existence of a geodesic joining them (orientation assigns an overall shape to the gender manifold); closed geodesics, indicative of same-gender attraction, exist only in the presence of curvature and therefore imply that the manifold is not flat. Pyramid puzzles The objective of the number-tower puzzle is to fill in numbers in a triangular structure so that every number is equal to the sum of the two numbers below it. Some numbers are specified in advance, providing constraint. Example: The question is, when designing such a puzzle, how many and which spaces should you specify in advance to guarantee that there is a single unique solution? For example, specifying all values at the base nodes is sufficient. How do you characterize all such arrangements? Solution: Let \(h\) be the height of the tower. If you place \(h\) values in the tower such that you can't solve for any one value in terms of the others, then those \(h\) values uniquely determine a solution. You never require more or fewer than \(h\) values. You can prove this using methods of linear algebra. The addition constraints for the tower define a system of linear equations (\(x_1 + x_2 = x_3\), and so on), which you can represent as a matrix. 
Because filling out the \(h\) base nodes of the tower uniquely determines the solution, and because the problem is linear, it always takes \(h\) independent values to determine a solution. You can formalize the "solve for one value in terms of the other" in terms of a matrix determinant. Construct the ancestral path matrix \(N\) with one column for every node, and one row for every node specifically in the tower base. In each entry, put the number of distinct upward paths to that node from that base node. Next, if you choose \(h\) tower positions to fill in, pick out the corresponding columns of \(N\) and combine them into an \(h\times h\) matrix, \(D\). Those \(h\) positions uniquely determine the entire pyramid if and only if \(D\) is invertible, i.e. has a nonzero determinant. This is because the rows of the ancestral path matrix \(N\) span the solution space—each row is a primitive solution. \(N\) sends the \(h\) parameters of the solution space to a complete filled-out solution. Selecting \(h\) columns of \(N\) amounts to forgetting (quotienting out) all but the \(h\) filled-in values in the complete solution. If you can undo the forgetting effect of \(D\)—if you can recover the \(h\) parameters of the solution space (e.g. the base tower values) from the filled-in values—then you can uniquely determine the complete solution from the filled in values. Charting the mirror world. A parabolic mirror distorts shapes in the real world. By creating strange anti-distorted shapes, you can create shapes whose reflections "in the mirror world" look normal. With the right coordinate system, the anti-distortion transformation has surprisingly simple form. It's: $$\widehat{x} = \frac{1 + \sin{\alpha}}{\cos{\alpha}}$$ $$\widehat{\alpha} = \frac{1/4x^2 - f}{x}$$ Here (see figure), the parabola's focus is at the origin, the \(x\)-axis is perpendicular to the optical axis, and the angles \(\alpha\) are measured relative to the origin and x-axis. Every point, real or virtual, is uniquely defined by an \(\langle x, \alpha\rangle\) pair. Parabolas characteristically reflect vertical rays into rays through the focus and vice-versa; this is how you prove the above transformation, and why the transformed \(\widehat x\) coordinate is defined purely in terms of the original angle \(\alpha\) and vice-versa. Miraculously, even in 3D these anti-distorted figures will have reflections that look correct from any viewing angle (!). I imagine you could do this same trick for mirrors shaped like other conic sections (ellipse, hyperbola) but I haven't tried yet. By the way, P problems are ones that are easy to solve. For example: is this string a well-formed graph without any syntax errors? In contrast, NP-complete problems are the hardest problems that still have easy-to-verify solutions. For example, is there a path through this graph that visits every node exactly once? It's a hard problem, but verifying a candidate solution is easy. Here's another NP-complete problem: Is this graph a subgraph of that one? Hard in general, but verifying a proposed correspondence is easy. Proof: Check, by hand, that you can make the first 34 numbers with programs that don't cost more than the number and that use only the digits 1…5 (For any smaller set of digits, this part fails because 5 costs more than 5 to make.) For cases larger than 34, a stronger property emerges: the optimal 1…5 program for each integer costs less than half the integer itself, minus one (!). You can prove this by induction. Check cases 34 through 256 by hand. 
Then if the theorem is true for every integer between 34 and some power of two, it is true for every integer between that power of two and the next one. Indeed, any integer in that range can be written as the sum of two numbers larger than 34, and by the induction hypothesis, the program which generates those two numbers then adds them costs less than half the integer. Incidentally, the ace/pan functors have further adjunctions when, and only when, there are no genders. In this case, pan and ace become equivalent and the adjunctions form a cycle. ♡2019 Dylan Holmes. Morsels by Dylan Holmes is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.(You may make and distribute verbatim copies of this work, even commercially, as long as you preserve this notice.)
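The "Cheap numbers" search sketched above is easy to prototype. Below is a minimal, illustrative implementation of the Dijkstra-style (uniform-cost) search over stack states that the note describes; it is not the author's code. For brevity it only offers the plus, times, and copy buttons, and the value_cap and depth_cap pruning parameters are assumptions added purely to keep the state space finite.

```python
import heapq

DIGITS = range(1, 10)           # pushing digit d costs d
BINARY_OPS = {                  # pushing an operator costs 1
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
}

def cheapest_program_cost(target, value_cap=10_000, depth_cap=5):
    """Uniform-cost (Dijkstra) search over stack states.

    A state is a tuple holding the stack contents (bottom ... top).
    Returns the minimum total button cost of a program whose final
    stack is exactly (target,).  value_cap and depth_cap are pruning
    knobs that keep the search space finite.
    """
    start = ()
    best = {start: 0}
    heap = [(0, start)]
    while heap:
        cost, stack = heapq.heappop(heap)
        if cost > best.get(stack, float("inf")):
            continue                       # stale heap entry
        if stack == (target,):
            return cost
        moves = []
        # push a constant
        if len(stack) < depth_cap:
            for d in DIGITS:
                moves.append((cost + d, stack + (d,)))
        # copy the top of the stack ('@' in the post's notation)
        if stack and len(stack) < depth_cap:
            moves.append((cost + 1, stack + (stack[-1],)))
        # apply a binary operator to the top two items
        if len(stack) >= 2:
            a, b = stack[-2], stack[-1]
            for op in BINARY_OPS.values():
                v = op(a, b)
                if 0 < v <= value_cap:
                    moves.append((cost + 1, stack[:-2] + (v,)))
        for new_cost, new_stack in moves:
            if new_cost < best.get(new_stack, float("inf")):
                best[new_stack] = new_cost
                heapq.heappush(heap, (new_cost, new_stack))
    return None

if __name__ == "__main__":
    # e.g. 8 should cost 6 (program 2@@**) and 9 should cost 5 (program 3@*)
    for n in (4, 7, 8, 9, 33):
        print(n, cheapest_program_cost(n))
```

Because states are expanded in order of increasing cost, the first time the state (target,) is popped its cost is optimal, which is exactly the caching argument made in the note.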
Advances in Homotopy Theory II
Time: May 2-4, 2022
Venue: Zoom (Meeting ID: 361 038 6975, Passcode: BIMSA)
Speakers: Anthony Bahri, Matthew Burfitt, Sebastian Chenery, Xin Fu, Alexander Grigor'yan
Organizer: The Beijing Institute of Mathematical Sciences and Applications (BIMSA)
This is the second edition of a twice-yearly workshop that will alternate between the Southampton Centre for Geometry, Topology and Applications (CGTA) and the Beijing Institute of Mathematical Sciences and Applications (BIMSA). The aims are to promote exciting new work in homotopy theory, with an emphasis on work by younger mathematicians, and to showcase the wide relevance of the subject to other areas of mathematics and science.
Speakers: Anthony Bahri (Rider), Matthew Burfitt (Aberdeen), Sebastian Chenery (Southampton), Xin Fu (Ajou University), Alexander Grigor'yan (Bielefeld), Fedor Pavutnitskiy (HSE), Tseleung So (Regina), Vladimir Vershinin (Montpellier), Juxin Yang (Hebei)
Schedule of talks (London Time / Beijing Time / Speaker):
12:00-12:50 / 19:00-19:50 / Alexander Grigor'yan
13:00-13:50 / 20:00-20:50 / Matthew Burfitt
14:00-14:50 / 21:00-21:50 / Fedor Pavutnitskiy
12:00-12:50 / 19:00-19:50 / Xin Fu
13:00-13:50 / 20:00-20:50 / Sebastian Chenery
14:00-14:50 / 21:00-21:50 / Anthony Bahri
12:00-12:50 / 19:00-19:50 / Vladimir Vershinin
13:00-13:50 / 20:00-20:50 / Juxin Yang
14:00-14:50 / 21:00-21:50 / Tseleung So
Titles and Abstracts
Title: Symmetric products and a realization of generators in the cohomology of a polyhedral product
Speaker: Anthony Bahri
Polyhedral products, which are determined by a simplicial complex and a family of CW pairs, behave sufficiently well with respect to symmetric products as to allow for a description of the cohomology in terms of the link structure of the simplicial complex and the cohomology of the CW pairs. The process works particularly well under certain freeness conditions, which include the use of field coefficients. Moreover, generators obtained via this process are robust enough to compute products. This talk, however, will focus on the way in which symmetric products are used to obtain the additive structure. Applications include the computation of Poincaré series. The results are the culmination of a project which had its genesis in 2014 and is joint work with Martin Bendersky, Fred Cohen and Sam Gitler.
Title: Topological data analysis of Fast Field-Cycling MRI images
Speaker: Matthew Burfitt
Fast Field-Cycling MRI (FFC MRI) has the potential to recover new biomarkers for a range of diseases by scanning at a number of low magnetic field strengths simultaneously. The images produced by an FFC scanner can be interpreted as a sequence of time series of 2-dimensional grey-scale images, with each time series corresponding to one of the magnetic field strengths. I will investigate the applications of topological data analysis and machine learning to brain stroke images obtained by the FFC MRI scanner. A main obstacle to achieving good results lies in multiplicative brightness errors occurring in the data.
A simple solution might be to consider pixelwise image feature vectors initially only up to multiplication by a constant. This can be thought of as splitting the data point cloud within a product by first embedding into standard n-simplices. We observe that this point cloud embedding can provide good information on tissue types, which, when modelled against the other component of the data point cloud, can be used to highlight stroke-damaged tissue. A drawback of the first method is that it discards the pixel spatial location information of the image. However, this can be captured by persistent homology in a parameter-choice-free process. A direct comparison between pixel intensity histograms and the Betti curves reveals that homology captures and emphasises tissue signals. Moreover, additional geometric and topological information about the images is captured with persistent homology. From here the ultimate aim is to extract persistent homology features useful for machine learning and to develop new visual diagnostics for the FFC data.
Title: The rational homotopy type of homotopy fibrations over connected sums
Speaker: Sebastian Chenery
We provide a simple condition on rational cohomology for the total space of a pullback fibration over a connected sum to have the rational homotopy type of a connected sum, after looping. This takes inspiration from recent work of Jeffrey and Selick, in which they study pullback fibrations of this type, but under stronger hypotheses compared to our result.
Title: The homotopy classification of four-dimensional toric orbifolds
Speaker: Xin Fu
Quasitoric manifolds are compact, smooth 2n-manifolds with a locally standard $T^n$-action whose orbit space is a simple polytope. The cohomological rigidity problem in toric topology asks whether quasitoric manifolds are distinguished by their cohomology rings. A toric orbifold is a generalized notion of a quasitoric manifold, and there are examples of toric orbifolds that do not satisfy cohomological rigidity. In this talk, we see that certain toric orbifolds in four dimensions, though not cohomologically rigid, are homotopy equivalent if their integral cohomology rings are isomorphic. We achieve this goal by decomposing those spaces up to homotopy. This is joint work with Tseleung So (University of Regina) and Jongbaek Song (KIAS).
Title: Path homology and join of digraphs
Speaker: Alexander Grigor'yan
We introduce the path homology theory on digraphs (= directed graphs) and present Künneth-like formulas for the path homology of various joins of digraphs.
Title: Homology of Lie rings
Speaker: Fedor Pavutnitskiy
Homology of Lie algebras over fields is usually defined in terms of the Chevalley-Eilenberg chain complex. Similarly to group homology, there are other equivalent definitions in terms of simplicial resolutions and Tor functors. It turns out that these definitions are in general no longer equivalent for Lie algebras over commutative rings. In the talk we will discuss these different approaches to homology of Lie rings and some theorems relating them. This is joint work with Sergei O. Ivanov, Vladislav Romanovskii and Anatoliy Zaikovskii.
Title: Suspension splittings of manifolds and their applications
Speaker: Tseleung So
In order to study the topology of a space, it is useful to decompose the space into smaller pieces, analyse the pieces and reassemble them into a whole. We say that a space has a suspension splitting if its suspension decomposes into a wedge of smaller spaces.
In this talk I will talk about suspension splittings of 4-dimensional and 6-dimensional smooth manifolds, and their applications to computing generalized cohomology theories and gauge groups of 4- and 6-dimensional smooth manifolds.
Title: On homotopy braids
Speaker: Vladimir Vershinin
The homotopy braid group $\widehat{B}_n$ is the subject of the work. First, linearity of $\widehat{B}_n$ over the integers is proved. Then we prove that the group $\widehat{B}_3$ is torsion-free. Also we conjecture that the homotopy braid groups are torsion-free for all n. The talk is based on joint work with Valerii Bardakov and Wu Jie, Forum Math. 34 (2022), no. 2, 447-454.
Title: On the homotopy groups of the suspended quaternionic projective plane
Speaker: Juxin Yang
In this talk, I'll report on my computation of the homotopy groups $\pi_{r+k}(\Sigma^k\mathbb{H}P^2)$ (for r ≤ 15 and k ≥ 0) localized at 2 or 3, especially the unstable ones. Then I'll give some applications of them, including two classification theorems for a kind of 3-local CW complexes, and some decompositions of the self smash products.
The link of the first Advances in Homotopy Theory Workshop: https://www.southampton.ac.uk/cgta/pages/soton-bimsa-biannual-2021-09.page
Iterative detection for frequency-asynchronous distributed Alamouti-coded (FADAC) OFDM
Bong-seok Kim and Kwonhue Choi
EURASIP Journal on Wireless Communications and Networking 2017, 2017:39
Accepted: 1 February 2017
Abstract
We propose a near intercarrier interference (ICI)-free and very low complexity iterative detector for frequency-asynchronous distributed Alamouti-coded (FADAC) orthogonal frequency division multiplexing (OFDM). In the previous cancelation schemes, the entire subcarrier signals from one transmit (TX) antenna are estimated and canceled in the received signal from the other TX antenna and vice versa. However, the reliability of the estimated symbols is revealed to vary significantly across the subcarriers, and thus the poorly estimated symbols lead to incorrect cancelation. Motivated by this, we first propose a scheme which does not cancel the interfering subcarrier(s) at the half-band edges, which undergo very high interference in FADAC-OFDM. For further improvement, we propose a so-called selective scheme which instantly measures the reliability of the detected symbols at each iteration and then excludes the unreliable symbols from the estimated interference generation. Moreover, the proposed scheme has a drastically reduced complexity, obtained by converting the cancelation process from the subcarrier domain to the time domain. In accordance with the analysis of the considered reliability measures, the numerical results show that the proposed scheme reaches the near ICI-free level within only three or four iterations for wide ranges of SNR, frequency offset, and delay spread.
Keywords: Iterative MIMO, Alamouti, ICI cancelation, OFDM, Distributed antennas, Frequency offset
1 Introduction
Recently, several studies on Alamouti-coded OFDM (orthogonal frequency division multiplexing) for cooperative systems have been reported. One of the main challenging issues in this area is to mitigate the self-interference due to the carrier frequency offset (CFO) between the distributed transmit antennas [1–8]. Very recently in [4], the so-called FADAC-OFDM (frequency-asynchronous distributed Alamouti-coded OFDM) has been proposed and shown to outperform the other existing approaches in [3–5]. In contrast to the conventional distributed Alamouti-coded OFDM, FADAC-OFDM is free from ICI (intercarrier interference) terms from the nearby subcarriers thanks to its ICI self-cancelation property, obtained only by a simple Alamouti-decoding process. In particular, [4] tried to exploit this ICI self-cancelation property even in the selective fading channel by dividing the entire set of subcarriers into multiple subblocks. However, in severely frequency-selective fading channels, FADAC-OFDM gets worse due to non-negligible inter-block ICI terms. Meanwhile, in [5–7], typical types of iterative ICI cancelation schemes for the conventional Alamouti-coded OFDM with distributed antennas have been proposed. Since the conventional Alamouti-coded OFDM [9] has no ICI self-cancelation property for frequency- and timing-asynchronous distributed antennas, the accuracy of the initial detection is poor. Thus, a considerable number of cancelation iterations has to be performed until the performance converges. Moreover, the converged performances are not so impressive.
Although in [5] they derived a performance result close to the no-CFO case, they assumed perfect ICI cancelation, which has not been justified. The schemes in [6, 7] rapidly break down as the CFO gets larger than 0.5. Moreover, they have high computation overheads because, at each iteration, the required number of complex multiplications for the interference reconstruction is \(4N^{2}\), where N denotes the total number of OFDM subcarriers. Recently, in [10] and [14], decision-directed iterative ICI cancelation schemes for distributed multiple-input multiple-output (MIMO) systems have been proposed. In [14], the authors considered spatial modulation MIMO as the application system model, and they showed performance results only for small values of the CFO. In [10], we, the authors of this paper, combined a typical decision-directed iterative ICI cancelation scheme with FADAC-OFDM. We achieved better performance compared with [5], even with lower complexity, due to the better initial detection performance of FADAC-OFDM compared to the conventional Alamouti-coded OFDM. This scheme uses all the detected symbols in the interference reconstruction step without any consideration of the reliability of the detected symbols. However, even in FADAC-OFDM, some of the detected symbols may still have a relatively high probability of error due to severe ICI terms such as the inter-block ICI mentioned before. It is shown that after the first iteration the performance improves fairly well, but from the second iteration onward, the performance is stuck at the same value. This is because the reconstructed interference term for cancelation is not updated anymore due to the erroneous portion of the constructed interference. Consequently, the performance gap between this scheme [10] and the case of no ICI is still considerable. In order to overcome the drawbacks of the previous cancelation scheme [10], it is important to carefully decide whether or not to use each of the detected symbols in the interference reconstruction step at each iteration. In other words, we have to use or devise a certain measure to assess the reliability of the detected symbols at each iteration, based on which we exclude the unreliable symbols from the interference reconstruction. To this end, we first propose a deterministic scheme in which a fixed number of data symbols at the half-band edges is not used in the interference reconstruction, because they themselves undergo very high interference in FADAC-OFDM and are thus less reliable. In order to further improve the cancelation performance, we propose a method which instantaneously measures the reliability of the soft detected symbols [12] at each iteration and then excludes the unreliable symbols from the estimated interference generation. We jointly employ two reliability check measures: (1) the square error of the decision variable from the corresponding constellation point and (2) the detection consistency over two consecutive iterations. Apart from the ICI cancelation performance itself, the computational complexity of the scheme should be feasible from the implementation viewpoint. In the proposed scheme, we employ a drastically low complexity structure which reduces the complexity by a polynomial order. The remainder of this paper is structured as follows: First, we review FADAC-OFDM in Section 2. In Section 3, we first review our previous iterative cancelation scheme and its problem, and then we propose two types of iterative cancelation schemes.
In Section 4, we propose a complexity-reduced ICI cancelation structure. In Section 5, we provide the various performance results which support the improved ICI cancelation capability of the proposed schemes. 2 Review on FADAC-OFDM In this paper, we are considering the iterative ICI cancelation schemes for FADAC-OFDM. In this section, we describe the motivation of FADAC-OFDM and give a self-contained review on the TX and receive (RX) structures of FADAC-OFDM. In addition, we revisit its residual ICI terms for the subsequent sections. FADAC-OFDM has been proposed for FO-tolerant Alamouti-coded OFDM for frequency-asynchronous distributed antenna systems [4]. FADAC-OFDM employs a frequency reversal structure at the TX side. Then, FADAC-OFDM detects the symbols by performing simple linear combining after two separate DFT operations with local carriers synchronized to each TX antenna. By doing so, FADAC-OFDM cancels the major parts of intra-block ICI terms from neighboring subcarriers, and thus, FADAC-OFDM significantly improves the performance for the distributed antenna systems. However, despite the ICI self-cancelation property of FADAC-OFDM, two kinds of ICI terms, i.e., intra-block ICI and inter-block ICI, still remain non-negligible. This leads us to consider a further cancelation of the remaining ICI terms by using an iterative cancelation scheme which will be introduced in Section 3. 2.1 The system model and the OFDM symbol structure of FADAC-OFDM In this section, we introduce the system model and the OFDM symbol structure of FADAC-OFDM. We consider the distributed antenna system that is composed of two TX antennas and one RX antenna. In each TX antenna, OFDM-modulated signals are transmitted with N total subcarriers, as in [1–6]. Let the variable x b,l denote the lth data symbol of the bth subblock and the variables \(X_{b,k}^{(A)}\) and \(X_{b,k}^{(B)}\) denote the Alamouti-coded symbols at the kth subcarrier of bth subblock of TX antennas A and B, respectively. Figures 1 and 2 show the OFDM symbol structures of the conventional distributed Alamouti-coded (CDAC) OFDM [5, 6] and FADAC-OFDM [4], respectively. OFDM symbol structure of CDAC-OFDM OFDM symbol structure of FADAC-OFDM In CDAC-OFDM, Alamouti code pairs are mapped to the neighboring subcarriers just like the typical space-frequency Alamouti code structure [15], i.e., \(X_{b,k}^{(A)}\) and \(X_{b,k}^{(B)}\) are set to $$ X_{b,k}^{(A)} = \left\{ \begin{array}{ll} x_{b,1} & \text{if}~k=1, \\ - x_{b,2}^{*} & \text{if}~k=2, \end{array} \right. $$ $$ X_{b,k}^{(B)} = \left\{ \begin{array}{ll} x_{b,2} & \text{if}~k=1, \\ x_{b,1}^{*} & \text{if}~k=2, \\ \end{array} \right. $$ where x b,1 and x b,2 denote the two data symbols for the bth subblock. Meanwhile, in FADAC-OFDM, the subblock size (the number of subcarriers per subblock) is larger than 2, and then, Alamouti-coded symbol pairs are packed into the mirror images in each subblock as shown in Fig. 2. Specifically, N-total subcarriers are partitioned into N b subblocks, and thus, the block size n c is equal to N/N b . Thus, in FADAC-OFDM, \(X_{b,k}^{(A)}\) and \(X_{b,k}^{(B)}\) for 1≤b≤N b are set as follows: $$ X_{b,k}^{(A)} = \left\{ \begin{array}{cc} x_{b,2k - 1} & \mathrm{for~ 1} \le k \le n_{c}/2, \\ - x_{b,2(n_{c} - k + 1)}^{*} & \mathrm{for }~n_{c}/2 + \mathrm{1} \le k \le n_{c}, \end{array} \right. $$ $$ X_{b,k}^{(B)} = \left\{ \begin{array}{cc} x_{b,2k} & \mathrm{for~ 1} \le k \le n_{c}/2, \\ x_{b,2(n_{c} - k) + 1}^{*} & \mathrm{for }~n_{c}/2 + 1 \le k \le n_{c}. 
\end{array} \right. $$ We assume that the fading is locally flat over the Alamouti-coded block. To justify this assumption, the block size n c is set smaller than the coherent bandwidth. From Figs. 1 and 2, it is straightforward that as an extreme case, the FADAC-OFDM with n c =2 is equivalent to CDAC-OFDM. 2.2 RX structure of FADAC-OFDM Figure 3 shows the overall structures for the previous and the proposed ICI cancelation schemes, which will be explained later. The parts inside the bold boxes which are common to both structures correspond to the RX structure of FADAC-OFDM. We assume that there exists an inevitable carrier frequency offset (CFO) between \(f_{c}^{(A)}\) and \(f_{c}^{(B)}\) which denote the received carrier frequencies from distributed TX antennas A and B, respectively. Two FFTs (fast Fourier transforms) are performed on the RX signal by separately synchronizing to two asynchronous TX antennas' carrier frequencies and arrival timings. Receiver structures for the previous scheme in [10] (a) and the proposed scheme (b) Let the variables r (A) and r (B) denote two FFT input vectors synchronized to TX antennas A and B, respectively, as shown in Fig. 3, then the FFT outputs corresponding to the kth elements of the bth subblock of two TX antennas are expressed as [10] $$\begin{array}{@{}rcl@{}} R_{b,k}^{(A)} &=& H_{b,k}^{(A)}X_{b,k}^{(A)} + I_{b,k}^{(A)} + w_{b,k}^{(A)} \end{array} $$ $$\begin{array}{@{}rcl@{}} R_{b,k}^{(B)} &=& H_{b,k}^{(B)}X_{b,k}^{(B)} + I_{b,k}^{(B)} + w_{b,k}^{(B)} \end{array} $$ where \(w_{b,k}^{(A)}\) and \(w_{b,k}^{(B)}\) are AWGN terms and \(H_{b,k}^{(A)}\) and \(H_{b,k}^{(B)}\) are channel fading coefficients at the kth subcarrier of the bth subblock from TX antennas A and B, respectively, and they are each independent and follow zero mean, unit variance complex Gaussian distribution. The variables \(I_{b,k}^{(A)}\) and \(I_{b,k}^{(B)}\) denote ICI terms due to CFO and they are expressed as follows: $$\begin{array}{*{20}l} I_{b,k}^{(A)} = \sum\limits_{\beta = 1}^{N_{b}} \sum\limits_{m = 1}^{n_{c}} Q\left(\left({\beta - b} \right){n_{c}} + m - \varepsilon - k \right) H_{\beta,m}^{(B)}X_{\beta,m}^{(B)}, \end{array} $$ $$\begin{array}{*{20}l} I_{b,k}^{(B)} = \sum\limits_{\beta = 1}^{N_{b}} \sum\limits_{m = 1}^{n_{c}} Q\left(\left({\beta - b} \right){n_{c}} + m + \varepsilon - k \right) H_{\beta,m}^{(A)}X_{\beta,m}^{(A)} \end{array} $$ where ε is the normalized CFO between two transmit antennas, i.e., \(\varepsilon =(f_{c}^{(B)} - f_{c}^{(A)})/f_{\Delta }\) where f Δ is the subcarrier spacing and Q(x) is the ICI coefficient given as [16] $$\begin{array}{@{}rcl@{}} Q(x) = \frac{\sin (\pi x)}{N\sin ((\pi /N)x)}\exp \left[ {j\pi (1 - 1/N)x} \right]. \end{array} $$ With a typical Alamouti decoding, the normalized decision variables (DVs) \(\tilde {x}_{b,2k-1}\) and \(\tilde {x}_{b,2k}\) corresponding to x b,2k−1 and x b,2k , respectively, are obtained as follows: $$ \tilde x_{b,2k - 1} = \frac{H_{b,k}^{*(A)}R_{b,k}^{(A)} + H_{b,n_{c} - k + 1}^{(B)}R_{b,n_{c} - k + 1}^{*(B)}}{\left| H_{b,k}^{(A)} \right|^{2} + \left| H_{b,n_{c} - k + 1}^{(B)} \right|^{2}}, $$ $$ \tilde x_{b,2k} = \frac{H_{b,k}^{*(B)}R_{b,k}^{(B)} - H_{b,n_{c} - k + 1}^{(A)}R_{b,n_{c} - k + 1}^{*(A)}}{{\left| H_{b,k}^{(B)} \right|^{2}} + \left| {H_{b,{n_{c}} - k + 1}^{(A)}} \right|^{2}}. 
$$ Substituting (5) and (6) into (10), with \(X_{b,k}^{(A)}\) and \(X_{b,k}^{(B)}\) replaced by (3) and (4), results in \({\tilde x}_{b,2k - 1}\) as the summation of the data symbol x b,2k−1 and degrading effect terms, i.e., interference term i b,2k−1 and noise term w b,2k−1, respectively, as follows [4]: $$ \begin{aligned} {\tilde{x}}_{b,2k - 1} = x_{b,2k-1} &+ \underbrace{\frac{{H_{b,k}^{*(A)}I_{b,k}^{(A)} + H_{b,{n_{c}} - k + 1}^{(B)}I_{b,{n_{c}} - k + 1}^{*(B)}}}{{{{\left| {H_{b,k}^{(A)}} \right|}^{2}} + {{\left| {H_{b,{n_{c}} - k + 1}^{(B)}} \right|}^{2}}}}}_{{\text{interference~term},} \triangleq i_{b,2k-1}} \\ &+\underbrace{\frac{{H_{b,k}^{*(A)}w_{b,k}^{(A)} + H_{b,{n_{c}} - k + 1}^{(B)} w_{b,{n_{c}} - k + 1}^{*(B)}}}{{{{\left| {H_{b,k}^{(A)}} \right|}^{2}} + {{\left| {H_{b,{n_{c}} - k + 1}^{(B)}} \right|}^{2} }}}.}_{{\text{noise~term},} \triangleq w_{b,2k-1}} \end{aligned} $$ In (12), w b,2k−1 still follows Gaussian distribution because w b,2k−1 is the linear combination of two i.i.d noise samples with the same weighting factors. Therefore, w b,2k−1 has the identical static to that of \(w_{b,k}^{(A)}\) and \(w_{b,n_{c} -k+1}^{*(B)}\). Meanwhile, substituting (7) and (8) into i b,2k−1 in (12), i b,2k−1 is expressed as the summation of the intra-block ICI and inter-block ICI terms, as follows: $$ {\begin{aligned} i_{b,2k - 1} &= \underbrace{{\left(\begin{array}{l} H_{b,k}^{*(A)}\sum\limits_{m = 1}^{{n_{c}}} {Q\left({m + \varepsilon - k} \right)H_{b,m}^{(B)}X_{b,m}^{(B)}} \\ + H_{b,{n_{c}} - k + 1}^{(B)}\sum\limits_{m = 1}^{{n_{c}}} {Q\left({m - \varepsilon - ({n_{c}} - k + 1)} \right)H_{b,m}^{*(A)}X_{b,m}^{*(A)}} \\ \end{array} \right)} }_{\triangleq i_{b,2k-1}^{\text{intra}},{\text{~intra \text{-} block~~ICI}}} \\ &\quad + \underbrace{{\left(\begin{array}{l} H_{b,k}^{*(A)}\sum\limits_{\scriptstyle \beta \neq b \hfill}^{{N_{b}}} {\sum\limits_{m = 1}^{{n_{c}}} {Q\left({\left({\beta - b} \right){n_{c}} + m + \varepsilon - k} \right)H_{\beta,m}^{(B)}X_{\beta,m}^{(B)}}} \\ + H_{b,{n_{c}} - k + 1}^{(B)}\times\\ \sum\limits_{\scriptstyle \beta \neq b \hfill}^{{N_{b}}} {\sum\limits_{m = 1}^{{n_{c}}} {Q\left({\left({\beta - b} \right){n_{c}} + m - \varepsilon - \left({{n_{c}} - k + 1} \right)} \right)H_{\beta,m}^{*(A)}X_{\beta,m}^{*(A)}}} \\ \end{array} \right).} }_{\triangleq i_{b,2k-1}^{\text{inter}}, {\text{inter} -\text {block}\;\text{ICI}}} \end{aligned}} $$ We assume that the block size n c is set smaller than coherent bandwidth, that is, the fading is locally flat over the Alamouti-coded block. By our assumption, \(H_{b,k}^{(A)}\) and \(H_{b,k}^{(B)}\) are replaced by \(H_{b}^{(A)}\) and \(H_{b}^{(B)}\), respectively. 
Therefore, \(i_{b,2k-1}^{\text {intra}}\) is expressed as the summation of the four interference terms \(i_{b,2k-1}^{\text {intra},(1)}, i_{b,2k-1}^{\text {intra},(2)}, i_{b,2k-1}^{\text {intra},(3)}\), and \(i_{b,2k-1}^{\text {intra},(4)}\), as follows [4]: $$ {\begin{aligned} {i_{b, 2k - 1}^{\text{intra}}} &= \underbrace {{H_{b}^{*(A)}}{H_{b}^{(B)}}\sum\limits_{m = 1}^{n_{c} /2} {Q\left({m + \varepsilon - k} \right){X_{b,m}^{(B)}}} }_{\triangleq {i_{b,2k - 1}^{\text{intra},(1)}}}\\ &\quad+ \underbrace {{H_{b}^{*(A)}}{H_{b}^{(B)}}\sum\limits_{m = n_{c} /2 + 1}^{n_{c}} {Q\left({m + \varepsilon - k} \right) }{X_{b,m}^{(B)}}}_{\triangleq {i_{b,2k - 1}^{\text{intra},(2)}}}\\ &\quad+ \underbrace {{H_{b}^{*(A)}}{H_{b}^{(B)}}\sum\limits_{m = 1}^{n_{c} /2} {Q\left({m - \varepsilon - \left({n_{c} - k + 1} \right)} \right){X_{b,m}^{*(A)}}} }_{\triangleq {i_{b,2k - 1}^{\text{intra},(3)}}}\\ &\quad+ \underbrace {{H_{b}^{*(A)}}{H_{b}^{(B)}}\sum\limits_{m = n_{c} /2 + 1}^{n_{c}} {Q\left({m - \varepsilon - \left({n_{c} - k + 1} \right)} \right)X_{b,m}^{*(A)}} }_{\triangleq {i_{b,2k - 1}^{\text{intra},(4)}}}. \end{aligned}} $$ Figure 4 shows the variances of the four interference terms mentioned above for ε=0.2 and ε=0.5. Intuitively, the variances of \({i_{b,2k - 1}^{\text {intra},(1)}}\) and \({i_{b,2k - 1}^{\text {intra},(4)}}\) are identical and the variances of \({i_{b,2k - 1}^{\text {intra},(2)}}\) and \({i_{b,2k - 1}^{\text {intra},(3)}}\) are identical. Note that the variances of \({i_{b,2k - 1}^{\text {intra},(1)}}\) and \({i_{b,2k - 1}^{\text {intra},(4)}}\) are significantly dominant over those of \({i_{b,2k - 1}^{\text {intra},(2)}}\) and \({i_{b,2k - 1}^{\text {intra},(3)}}\). This is because \({i_{b,2k - 1}^{\text {intra},(1)}}\) and \({i_{b,2k - 1}^{\text {intra},(4)}}\) are the ICI from the subcarriers of the half subblock where the desired subcarrier belongs to, whereas \({i_{b,2k - 1}^{\text {intra},(2)}}\) and \({i_{b,2k - 1}^{\text {intra},(3)}}\) are the ICI from the other half subblock [4]. The variances of the four terms in intra-block ICI with n c =32 for ε=0.2 and ε=0.5 Let us focus on one of the dominant intra-block ICI terms \(i_{b,2k-1}^{\text {intra},(4)}\). From (9), Q(x)=Q ∗(−x), and thus, Q ∗(−(m−ε−k))=Q(m+ε−k), and from (3) and (4), \(X_{b,n_{c} -m +1}^{(A)} = -X_{b,m}^{*(B)}\). Using these properties and introducing a new indexing variable, i.e., m ′=n c −m+1 to rearrange n c /2+1≤m≤n c in reverse order, \(i_{b,2k-1}^{\text {intra},(4)}\) in (14) can be rewritten as $$\begin{array}{@{}rcl@{}} i_{b,2k-1}^{\text{intra},(4)} &=& H_{b}^{*(A)}H_{b}^{(B)} \sum\limits_{m' = 1}^{n_{c} /2} {Q^{*}}(n_{c} - m' + 1 - \varepsilon\\ &&- (n_{c} - k + 1))X_{b,n_{c} - m' +1}^{*(A)} \\ &=& H_{b}^{*(A)}H_{b}^{(B)} \sum\limits_{m = 1}^{N/2} {{Q^{*}}(- m - \varepsilon + k)\left(-X_{b,m}^{*(B)}\right)^{*}} \\ &=&-H_{b}^{*(A)}H_{b}^{(B)} \sum\limits_{m = 1}^{N/2} {Q\left({m + \varepsilon - k} \right)X_{b,m}^{(B)}} \\ &=&-i_{2k-1}^{\text{intra},(1)} \end{array} $$ which concludes that \(i_{b,2k-1}^{\text {intra},(1)}\) and \(i_{b,2k-1}^{\text {intra},(4)}\) in (14) cancel each other. By canceling \(i_{b, 2k-1}^{\text {intra},(1)}\) and \(i_{b, 2k-1}^{\text {intra},(4)}\) which are dominant terms in intra-block ICI, the performance degradation due to ICI can be substantially ameliorated. 
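The cancelation of the two dominant intra-block terms can be checked numerically. The sketch below (ours, not from the paper) builds a single isolated flat-fading subblock with the mirror mapping of Eqs. (3)-(4), forms the two synchronized FFT-branch outputs and the combining of Eqs. (10)-(11), and verifies that the dominant term pair sums to zero while only the weak cross-half term survives. The CFO sign convention (TX B appearing at +ε on the branch synchronized to TX A, and TX A at −ε on the other branch) and all variable names are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def Q(x, N):
    # ICI coefficient of Eq. (9): sin(pi x)/(N sin(pi x/N)) * exp(j pi (1 - 1/N) x)
    x = np.asarray(x, dtype=float)
    return (np.sinc(x) / np.sinc(x / N)) * np.exp(1j * np.pi * (1.0 - 1.0 / N) * x)

n_c = N = 16                 # one isolated subblock, so only intra-block ICI exists
eps = 0.5                    # normalized CFO between the two TX antennas
H_A, H_B = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # flat over the block
x = rng.choice([-1.0, 1.0], size=n_c).astype(complex)                   # BPSK data x_1..x_{n_c}

# Mirror mapping of Eqs. (3)-(4), 1-based subcarrier index k.
X_A = np.zeros(n_c, dtype=complex)
X_B = np.zeros(n_c, dtype=complex)
for k in range(1, n_c // 2 + 1):
    X_A[k - 1] = x[2 * k - 2]                          # x_{2k-1}
    X_B[k - 1] = x[2 * k - 1]                          # x_{2k}
for k in range(n_c // 2 + 1, n_c + 1):
    X_A[k - 1] = -np.conj(x[2 * (n_c - k + 1) - 1])    # -x*_{2(n_c-k+1)}
    X_B[k - 1] = np.conj(x[2 * (n_c - k)])             # x*_{2(n_c-k)+1}

# Noiseless FFT outputs of the two branches (Eqs. (5)-(6), single subblock).
kk = np.arange(1, n_c + 1)
mm = np.arange(1, n_c + 1)
R_A = H_A * X_A + H_B * (Q(mm[None, :] + eps - kk[:, None], N) @ X_B)
R_B = H_B * X_B + H_A * (Q(mm[None, :] - eps - kk[:, None], N) @ X_A)

# Alamouti combining of Eqs. (10)-(11).
den = abs(H_A) ** 2 + abs(H_B) ** 2
x_hat = np.zeros(n_c, dtype=complex)
for k in range(1, n_c // 2 + 1):
    km = n_c - k + 1
    x_hat[2 * k - 2] = (np.conj(H_A) * R_A[k - 1] + H_B * np.conj(R_B[km - 1])) / den
    x_hat[2 * k - 1] = (np.conj(H_B) * R_B[k - 1] - H_A * np.conj(R_A[km - 1])) / den

# The two dominant intra-block contributions to the DV of x_1 (k = 1) cancel exactly.
k0 = 1
T1 = np.conj(H_A) * H_B * sum(Q(m + eps - k0, N) * X_B[m - 1]
                              for m in range(1, n_c // 2 + 1))
T2 = np.conj(H_A) * H_B * sum(np.conj(Q(m - eps - (n_c - k0 + 1), N)) * np.conj(X_A[m - 1])
                              for m in range(n_c // 2 + 1, n_c + 1))
print("dominant pair |T1 + T2|      :", abs(T1 + T2))        # ~0 up to floating point error
print("residual |x_hat - x| mean/max:",
      np.mean(np.abs(x_hat - x)), np.max(np.abs(x_hat - x)))
```

In this sketch, with ε = 0 the residual is exactly zero (ordinary Alamouti decoding), and with ε = 0.5 only the weak cross-half term survives, largest near the half-subblock edges, which is consistent with the average ICI profile of Fig. 5a.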
On the other hand, we can show that \(i_{2k-1}^{\text {intra},(2)} = i_{2k-1}^{\text {intra},(3)}\) without difficulty, and thus, the overall intra-block ICI term \(i_{b,2k-1}^{\text {intra}}\) can be expressed as the relatively weak ICI term \(i_{b,2k-1}^{\text {intra},(2)}\) as follows: $$\begin{array}{@{}rcl@{}} i_{b,2k-1}^{\text{intra}}&=&2{H_{b}^{*(A)}}{H_{b}^{(B)}} {\sum\limits_{m = n_{c} /2 + 1}^{n_{c}} {Q\left({m - \varepsilon - k} \right)X_{b,m}^{(B)}} } \\ &=& 2i_{b,2k-1}^{\text{intra},(2)}. \end{array} $$ Therefore, (12) is rewritten as the summation of the data symbol, the minor part of the intra-block ICI term, inter-block ICI term, and additive noise term, as follows: $$\begin{array}{@{}rcl@{}} {\tilde x}_{b,2k - 1} = x_{b,2k-1} + 2i_{b,2k-1}^{\text{intra},(2)} + i_{b,2k-1}^{\text{inter}} + w_{b,2k-1} \end{array} $$ Using a similar calculation and notation, x b,2k is represented without difficulty and loss of generality as $$\begin{array}{@{}rcl@{}} {\tilde{x}}_{b,2k} = x_{b,2k} + 2i_{b,2k}^{\text{intra},(2)} + i_{b,2k}^{\text{inter}} + w_{b,2k}. \end{array} $$ Figure 5 a shows the normalized average ICI powers in the decision variables according to the pair of subblock number and subcarrier index, i.e., (b,k) with N=256 and n c =32. The term i in the legend denotes the number of the iterative cancelations whose detailed algorithm will be proposed in the subsequent subsection. In Fig. 5 a, it is shown that even without the iterative cancelation (i=0), the ICI power of FADAC-OFDM is maintained lower than −15 dB in the middle of half subblocks even for a large ε, i.e., 0.5. This is due to the intrinsic property of FADAC-OFDM, i.e., the major ICI terms from the neighboring subcarriers in the considered subblock are completely self-canceled. Meanwhile, despite intra-block ICI self-cancelation, the ICI power sharply increases at the half band edges (k=n c /2 or k=n c ). This is because in the vicinity of the half subblock edges, the frequency distances between the considered subcarrier and the subcarriers belonging to the counterpart (the other side) half subblock or the consecutive subblocks decrease and the interferences from these subcarriers are not canceled by FADAC-OFDM as shown in (17) and (18). Normalized average ICI power in decision variables of FADAC-OFDM with n c =32 (a) and n c=2 (CDAC-OFDM) (b), according to the pair of subblock number and subcarrier index and the number of iterations for cancelation i, ε=0.5 and N=256 Motivated from this, by using an iterative cancelation step which will be introduced in the next section, we try to cancel further the remaining ICI terms. In Fig. 5 a, by employing iterative cancelation, it is shown that ICI powers at half band edge and band edge significantly decrease compared to the case before iterative cancelation. However, the ICI powers at half subblock edges are still relatively large compared to the middle band. This is because the iterative ICI cancelation is not perfect, and thus, the reason for high interferences at the half subblock edges mentioned above still holds. For a reference, Fig. 5 b shows the normalized ICI power of FADAC-OFDM with n c =2 which is equivalent to CDAC-OFDM. With n c =2, the feature of FADAC-OFDM, i.e., self-cancelation of the intra-block ICI term, is meaningless because there exists only one subcarrier in each half subblock. Thus, the ICI powers over all subcarriers are very high as shown in Fig. 5 b, and the iterative cancelation is not so effective either. 
This implies that CDAC-OFDM is not suitable for the frequency-asynchronous distributed antenna systems. Figure 6 shows the bit error rate (BER) results of FADAC-OFDM according to the subblock size n c with ε=0.5 for the several cases of T max which denotes the maximum delay spread of multi-path, and T denotes the OFDM symbol duration. It is shown that the optimal n c is larger than 2 and is getting larger as the delay spread decreases, which accords with our expectation. In addition, as the delay spread decreases, the suboptimal zone where the performance is rather insensitive to n c is getting wider. However, if n c is set excessively large, the performance is getting worse. The BER results according to n c for the several T maxs, ε=0.5 for BPSK with E b /N 0=20 dB 3 Combining iterative ICI cancelation schemes to FADAC-OFDM 3.1 The previous iterative ICI cancelation scheme for FADAC-OFDM The procedure of the previous iterative ICI cancelation scheme to FADAC-OFDM in [10] is shown in Fig. 3 a. First, FADAC-OFDM is performed for initial detection. Then, the estimated ICI terms are generated by using initial detection symbols and channel information to subtract ICI terms from RX signal. This is iteratively performed by updating the detection symbols at each iteration. Denote \({\hat {x}}_{b,2k-1}^{(i)}\) and \({\hat {x}}_{b,2k}^{(i)}\) as the detection symbols obtained by slicing \({\tilde {x}}_{b,2k-1}\) and \({\tilde {x}}_{b,2k}\) in (17) and (18), respectively, at the ith iteration. By substituting \({\hat {x}}_{b,2k-1}^{(i)}\) and \({\hat {x}}_{b,2k}^{(i)}\) into (3) and (4) and then into (7) and (8), we reconstruct the estimated versions of \({I}_{b,k}^{(A)}\) and \({I}_{b,k}^{(B)}\), respectively, at the ith iteration. Denote \({\hat I}_{b,k}^{(A)}(i)\) and \({\hat I}_{b,k}^{(B)}(i)\) as the estimated versions of \({I}_{b,k}^{(A)}\) and \({I}_{b,k}^{(B)}\) at the ith iteration, respectively. We update the FFT outputs \(R_{b,k}^{(A)}\) and \(R_{b,k}^{(B)}\) at the ith iteration as follows: $$\begin{array}{*{20}l} \begin{array}{l} R_{b,k}^{(A,i)} \leftarrow R_{b,k}^{(A)} - \hat{I}_{b,k}^{(A,i)}, \end{array} \end{array} $$ $$\begin{array}{*{20}l} \begin{array}{l} R_{b,k}^{(B,i)} \leftarrow R_{b,k}^{(B)} - \hat{I}_{b,k}^{(B,i)}, \end{array} \end{array} $$ where \(R_{b,k}^{(A,i)}\) and \(R_{b,k}^{(B,i)}\) denote the updated versions of \(R_{b,k}^{(A)}\) and \(R_{b,k}^{(B)}\), respectively, at the ith iteration. Finally, at each iteration, we perform the Alamouti combining in (10) and (11) using \(R_{b,k}^{(A,i)}\) and \(R_{b,k}^{(B,i)}\) to obtain the updated detection symbols \(\hat {x}_{b,2k-1}^{(i+1)}\) and \(\hat {x}_{b,2k}^{(i+1)}\), respectively, for the next ((i+1)th) iteration. In [10], it is shown that due to the good performance of FADAC-OFDM by intra-subblock ICI self-cancelation, this basic iterative scheme for FADAC-OFDM achieves better performance with lower complexity compared with [5]. However, this scheme still has room to be improved. Due to high ICI power at the subband edges shown in Fig.5, the detection symbols at those edges are more likely to be erroneously detected compared to the other detection symbols. The incorrect detection symbols result in the incorrect ICI term reconstruction and thus the incorrect ICI cancelation. 
As a result, even with increasing iterations, the improvement of performance is limited and the error probability is stuck in a certain point where the non-negligible incorrect contribution to the reconstructed ICI term is not self-corrected by the iterations any more. This will be checked out again in the simulation results. 3.2 The proposed iterative ICI cancelation schemes In the previous section, we addressed the issue of the previous iterative ICI cancelation in [10], i.e., the drawback of using the entire detection symbols for ICI reconstruction and cancelation. To tackle this issue, we propose two types of selective ICI cancelation schemes. 3.2.1 Scheme I. DS scheme As the first scheme to avoid the problem of using the unreliable symbol detection at the subband edges, we simply do not use the fixed number of symbols at the subband edges for ICI term reconstruction. In other words, if we denote \(\hat {x}_{b,2k-1}^{(i,\text {used})}\) and \({\hat {x}}_{b,2k}^{(i,\text {used})}\) as the symbol estimates which will be finally used to reconstruct the ICI terms for cancelation at the ith iteration, they are set as follows: $$ {\hat{x}}_{b,2k-1}^{(i,\text{used})} = \left\{ \begin{array}{ll} 0&{\text{if}}\; k \in E \\ {\hat{x}}_{b,2k-1}^{(i)},&{\text{elsewhere}} \\ \end{array} \right., $$ $$ {\hat{x}}_{b,2k}^{(i,\text{used})} = \left\{ \begin{array}{ll} 0&{\text{if}}\; k \in E \\ {\hat{x}}_{b,2k}^{(i)},&{\text{elsewhere}} \\ \end{array} \right., $$ where E is a set of indices of edge subcarriers, i.e., \(E=\{1, 2, \ldots, M, \frac {n_{c}}2-M+1, \frac {n_{c}}2-M+2,\ldots, \frac {n_{c}}2\}\), and M is the number of data symbols (subcarriers) at each edge to be excluded in the ICI term reconstruction. For example, if M is set to 2 with n c =16, then set E is equal to {1,2,7,8}. Consequently, 2M(=M pairs of Alamouti code) data symbols are not used, and they are replaced by null data symbols in the ICI term reconstruction. Simply by excluding 2M detection symbols at the edge in each subblock which are severely interfered by inter-block ICIs, we can avoid the performance degradation due to wrong ICI term cancelation. Another merit of this scheme is that it does not need any additional hardware or computations compared to [10]. This scheme excludes the data symbols in the deterministic carrier positions, i.e., predetermined positions based on the average ICI power distribution across the subcarriers as shown in Fig. 5. However, we know from (7) and (8) that the ICI term at each subcarrier contains lots of random variables such as the data symbols in the other subcarriers and their fading coefficients and thus the ICI power at each subcarrier instantaneously varies. This implies that some of the edge subcarriers can eventually undergo rather small instantaneous ICI despite the high average ICI power. As we need the instantaneous reliability of the detection symbols to decide whether or not to use each detection symbol, the proposed DS (deterministically selective) scheme which simply excludes the fixed number of band edge subcarriers still has room to be improved if we can accommodate the instantaneous reliability of the detection symbols. 3.2.2 Scheme II. AS scheme To alleviate the problem of the proposed DS scheme mentioned in the previous paragraph, we propose another so-called adaptively selective (AS) scheme. In the proposed AS scheme, we use two measures for the instantaneous reliability of the detection symbols. 
As one of the reliability measures, we use the square error between the soft decision variable \(\tilde {x}\) and its nearest constellation point \(\hat {x}\) as the tentative decision value [11]. Let us denote this reliability measure for a certain detection symbol \(\hat {x}\) by γ, then it is calculated as $$\begin{array}{@{}rcl@{}} \begin{array}{l} {\gamma}= |\tilde{x} - \hat{x}|^{2}. \end{array} \end{array} $$ In order to check whether or not this measure well reflects the reliability of the detection symbol, we simulated the cumulative distribution function (CDF) of γ for the correct detection case and the incorrect detection case. Figure 7 shows CDFs of γ for two (correct and incorrect) cases when N=256 and n c =16. It is clear in Fig. 7 that γ for the correct case is distributed in the quite low range whereas γ for the incorrect case is distributed in the quite high range. For example, in the initial (i=0) detection, for the correct detection case, 96% of γs is smaller than 0.5 whereas for the incorrect detection case, 95% of γs is larger than 0.5. This feature becomes even more remarkable as the iteration goes on. This implies that by simply comparing γ with a threshold, we can properly measure the reliability of the corresponding detection symbol. We use the following criterion to decide whether or not to use the detection symbol in the ICI reconstruction. $$\begin{array}{@{}rcl@{}} {\hat{x}^{\text{(used)}}} = \left\{ \begin{array}{ll} \hat{x},&{\text{if}}\;\gamma \le {\rho} \\ 0,&{\quad \text{else}} \\ \end{array} \right. \end{array} $$ a–d CDF of γ at each iteration where \({\hat {x}^{\text {(used)}}}\) denotes the actual value which will be used in the ICI reconstruction and ρ is a threshold value which determines whether or not the detection is sufficiently reliable or not. Note in Fig. 7 that this criterion possibly misses the correct symbols or possibly uses the incorrect symbols in the ICI reconstruction step. The threshold ρ should be set by considering both of these two possibilities. The optimum value will vary according to the channel parameters, the system parameters, or even the iteration layer. However, in the practical system, it is likely to use, rather, a constant threshold, i.e., global suboptimal setting, and thus, we cannot avoid the performance loss compared to the optimized case. To complement this, we use another measure to assess the reliability of the detection symbols, i.e., detection consistency between two consecutive iterations. If a certain detection symbol is sufficiently reliable at the ith iteration, the detection result would not change in the (i+1)th iteration. Hence, we treat a detection symbol as the reliable one if its detection result is maintained between two consecutive iterations. This measure well compromises the probability that the first criterion in (24) uses the incorrect symbol(s) in the ICI reconstruction step. For example, when a certain incorrect detection symbol has a small γ, we conclude that it eventually has the small γ and it is unreliable if the detection result in the previous iteration is not equal to the detection result in the current iteration. Summing up, the proposed AS scheme uses the following criterion for selecting the detection symbols in the ICI reconstruction step. 
$$ {\begin{aligned} {\hat{x}_{b,2k - 1}^{(i,\text{used})}} = \left\{ \begin{array}{lll} {\hat{x}_{b,2k - 1}^{(i)},} & {{\text{if}}\;\gamma_{b,2k - 1}^{(i)} \le {\rho}}, &{\text{for}}\;i < 2\\ {0,} & {{\text{else}}} &\\ {\hat{x}_{b,2k - 1}^{(i)},} & {{\text{if}}\;\hat{x}_{b,2k - 1}^{(i)} = \hat{x}_{b,2k - 1}^{(i - 1)} {\text{and}}\;\gamma_{b,2k - 1}^{(i)} \le {\rho}}, &\text{for}\;i \ge 2,\\ {0,} &\text{else}&\\ \end{array}\right. \end{aligned}} $$ $$ {\begin{aligned} {\hat{x}_{b,2k}^{(i,\text{used})}} = \left\{ \begin{array}{lll} {\hat{x}_{b,2k}^{(i)},} & {{\mathrm{if~~}}\gamma_{b,2k}^{(i)} \le {\rho}}, &{\text{for}}\;i < 2\\ {0,} & {{\text{else}}} &\\ {\hat{x}_{b,2k}^{(i)},} & {{\mathrm{if~~}}\hat{x}_{b,2k}^{(i)} = \hat{x}_{b,2k}^{(i - 1)} {\text{and}}\;\gamma_{b,2k}^{(i)} \le {\rho}}, &{\text{for}}\;i \ge 2.\\ {0,} & \text{else} &\\ \end{array} \right. \end{aligned}} $$ In order to check whether or not the two conditions in (25) and (26) well discriminate the correct or incorrect detections, we have to see the two kinds of conditional probabilities: (1) \(p_{\text {false}}^{\text {condition}}\) denoting the probability of incorrect symbol detection despite the condition being satisfied ("false rate" in short) and (2) \(p_{\text {miss}}^{\text {condition}}\) denoting the probability of correct symbol detection despite the condition being unsatisfied ("miss rate" in short). First, \(p_{\text {false}}^{\text {condition}}\) for two conditions are written as follows: $$ p_{\text{false}}^{\mathrm{C1}} = \Pr \left[ {{\hat{x}}_{b,2k - 1}^{(i)} \ne {x_{b,2k - 1}}|\gamma_{b,2k - 1}^{(i)} \le \rho} \right] $$ $$ p_{\text{false}}^{\mathrm{C2}} = \Pr \left[ {\hat{x}_{b,2k - 1}^{(i)} \ne {x_{b,2k - 1}}|\hat{x}_{b,2k - 1}^{(i)} = \hat{x}_{b,2k - 1}^{(i - 1)}} \right] $$ and \(p_{\text {miss}}^{\text {condition}}\) for two conditions are written as follows: $$ p_{\text{miss}}^{\mathrm{C1}} = \Pr \left[ {{\hat{x}}_{b,2k - 1}^{(i)} = {x_{b,2k - 1}}|\gamma_{b,2k - 1}^{(i)} > \rho} \right] $$ $$ p_{\text{miss}}^{\mathrm{C2}} = \Pr \left[ {\hat{x}_{b,2k - 1}^{(i)} = {x_{b,2k - 1}}|\hat{x}_{b,2k - 1}^{(i)} \neq \hat{x}_{b,2k - 1}^{(i - 1)}} \right] $$ where C1 denotes the condition \({\gamma _{b,2k - 1}^{(i)} \le \rho }\) and C2 denotes the condition \({\hat {x}_{b,2k - 1}^{(i)} = \hat {x}_{b,2k - 1}^{(i - 1)}}\). As the similar expressions hold for x b,2k , we exclude the expressions for x b,2k−1 without loss of generality. To avoid the wrong cancelation, we have to lower \(p_{\text {false}}^{\text {condition}}\), and to avoid missing the correct detection symbols, we have to lower \(p_{\text {miss}}^{\text {condition}}\). Figure 8 shows the four conditional probabilities in (27)–(30) with ρ=0.4 and ε=0.5. Due to symmetry, x b,2k should have the same results. The condition C1 has smaller false rate but much larger miss rate compared to the condition C2. We can expect the performance improvement by jointly using the two conditions. To confirm this, the results for the joint condition \(\left \{C1 \bigcap C2\right \}\) which is adopted in the proposed AS scheme are also plotted in Fig. 8. Note that this joint condition achieves much lower miss rate than that of using the condition C1 alone while achieving the false rate as low as that of using the condition C1 alone. Compared to the condition C2, the joint condition has the similar level of miss rate in the practically high (signal-to-noise ratio) SNR region while achieving quite smaller false rate. 
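A compact sketch of the two selection rules may help here (the function and variable names are ours, not the paper's): the DS rule of Eqs. (21)-(22) simply nulls a fixed set of edge positions, while the AS rule of Eqs. (24)-(26) combines the square-error test of Eq. (23) with the consistency check between consecutive iterations.

```python
import numpy as np

def ds_edge_set(n_c, M):
    """Index set E of Eqs. (21)-(22): M positions at each edge of a half subblock."""
    half = n_c // 2
    return set(range(1, M + 1)) | set(range(half - M + 1, half + 1))

def as_select(soft_dv, hard_now, hard_prev, iteration, rho=0.4):
    """AS screening of Eqs. (24)-(26): keep a detected symbol for interference
    reconstruction only if gamma <= rho and, from the second iteration onward,
    the decision agrees with the previous iteration; otherwise replace it by 0."""
    gamma = np.abs(soft_dv - hard_now) ** 2            # Eq. (23)
    reliable = gamma <= rho                            # condition C1
    if iteration >= 2:
        reliable &= (hard_now == hard_prev)            # condition C2
    return np.where(reliable, hard_now, 0.0)

# toy usage with BPSK decision variables
rng = np.random.default_rng(1)
x_true = rng.choice([-1.0, 1.0], size=8)
soft = x_true + 0.35 * rng.normal(size=8)              # noisy decision variables
hard = np.sign(soft)
hard_prev = np.sign(x_true + 0.35 * rng.normal(size=8))
print(ds_edge_set(n_c=16, M=2))                        # {1, 2, 7, 8}, as in the text
print(as_select(soft, hard, hard_prev, iteration=2))
```

The default rho=0.4 mirrors the fixed threshold used later in the simulations; in practice it could be tuned per iteration, at the cost of extra implementation complexity.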
False and miss rates, ρ=0.4,ε=0.5 4 Complexity reduction The hardware structures of the iterative cancelations in [5–7, 10] are basically the same. They all include the calculations for reconstructing the interference term at each FFT outputs expressed in (7) and (8) at each iteration. The interference term in (7) and (8) have 2N complex multiplications. As there are two FFTs with N outputs, the overall required number of complex calculations for interference term reconstruction per iteration is equal to 4N 2. Meanwhile in the proposed scheme, we modify this complexity-expensive structure into a mathematically equivalent but low complexity structure. Figure 3 shows the receiver structures for the previous iterative cancelation scheme in [10] and the proposed scheme. Instead of performing cancelation at the FFT output stage (subcarrier domain), we can equivalently cancel the interference at the FFT input stage (time domain). Hence, the reconstructed interference corresponds to the time domain version. The reduced computation is intuitive due to the fact that the time domain interference takes the form of just a single sampled vector but it contains the N parallel interfering subcarriers. Recall that r (A) and r (B) denote the original input vectors to N-point FFTs which are synchronized to TX A and TX B frequencies, respectively. Then, at the ith iteration of the proposed scheme, they are replaced by r (A,i) and r (B,i), respectively, which are the updated versions given as follows: $$ {\begin{aligned} \mathbf{r}^{(A,i)} &= \mathbf{r}^{(A)} - \text{IFFT} \left[H_{1,1}^{(B)} {\hat{X}}_{1,1}^{(B,i)},H_{1,2}^{(B)} {\hat{X}}_{1,3}^{(B,i)}, \cdots, H_{N_{b},n_{c}}^{(B)} {\hat{X}}_{N_{b},n_{c}}^{(B,i)} \right] \\ &\quad\odot \left[ e^{\frac{-j 2\pi \varepsilon} N }, e^{\frac{-j 2\pi 2\varepsilon} N} \cdots e^{-j 2\pi \varepsilon} \right] \end{aligned}} $$ $$ {\begin{aligned} \mathbf{r}^{(B,i)} &= \mathbf{r}^{(B)} - \text{IFFT} \left[H_{1,1}^{(A)} {\hat{X}}_{1,1}^{(A,i)},H_{1,2}^{(A)} {\hat{X}}_{1,3}^{(A,i)}, \cdots, H_{N_{b},n_{c}}^{(A)} {\hat{X}}_{N_{b},n_{c}}^{(A,i)} \right]\\ &\quad\odot \left[ e^{\frac{j 2\pi \varepsilon} N }, e^{\frac{j 2\pi 2\varepsilon} N} \cdots e^{j 2\pi \varepsilon} \right]. \end{aligned}} $$ In (31) and (32), \({\hat {X}}_{\beta,m}^{(A,i)}\) and \({\hat {X}}_{\beta,m}^{(B,i)}\) denote the estimated versions of \({X}_{\beta,m}^{(A)}\) and \({X}_{\beta,m}^{(B)}\), respectively, at the ith iteration, \( \left [ e^{\frac {-j 2\pi \varepsilon } N }, e^{\frac {-j 2\pi 2\varepsilon } N} \cdots e^{-j 2\pi \varepsilon } \right ]\) and \( \left [ e^{\frac {-j 2\pi \varepsilon } N }, e^{\frac {-j 2\pi 2\varepsilon } N } \cdots e^{-j 2\pi \varepsilon } \right ]\) denote the sampled versions of the residual complex exponentials by CFO, and ⊙ denotes the element-wise multiplication. Note that the term \(\text {IFFT} \left [H_{1,1}^{(B)} {\hat {X}}_{1,1}^{(B,i)}, H_{1,2}^{(B)} {\hat {X}}_{1,3}^{(B,i)}, \cdots, H_{N_{b},n_{c}}^{(B)} {\hat {X}}_{N_{b},n_{c}}^{(B,i)} \right ] \odot \left [ e^{\frac {-j 2\pi \varepsilon } N }, e^{\frac {-j 2\pi 2\varepsilon } N} \cdots e^{-j 2\pi \varepsilon } \right ]\) in (31) corresponds to the reconstructed time domain-sampled signal from TX antenna B to the received signal synchronized to TX antenna A. The similar remark holds for (32). In order to prove that the time domain ICI cancelation of the proposed scheme is identical to the frequency (subcarrier) domain cancelation, let us expand (31) first. 
We denote the nth output of IFFT, i.e., \({\text {IFFT}}\left [ H_{1,1}^{({\mathrm {B}})}{\hat {X}}_{1,1}^{({\mathrm {B,}}i)}\right., H_{1,2}^{({\mathrm {B}})}{\hat {X}}_{1,2}^{({\mathrm {B,}}i)}, \cdots,\left. H_{{N_{b}},{n_{c}}}^{({\mathrm {B}})} {\hat {X}}_{{N_{b}},{n_{c}}}^{({\mathrm {B,}}i)} \right ]\), by η (A,i)(n), then η (A,i)(n) for 1≤n≤N is calculated as follows: $$\begin{array}{@{}rcl@{}} {\eta}^{({\mathrm{A}},i)}(n) = \frac{1}{N}\sum\limits_{\beta = 1}^{{N_{b}}} {\sum\limits_{m = 1}^{{n_{c}}} {H_{\beta,m}^{({\mathrm{B}})}{\hat{X}}_{\beta,m}^{({\mathrm{B,}}i)}} {e^{- j2\pi \left({(\beta - 1){n_{c}} + m} \right)n/N}}}. \end{array} $$ For simplicity, we set l=(β−1)n c +m, and then, (33) is expressed as $$\begin{array}{@{}rcl@{}} {\eta}^{({\mathrm{A}},i)}(n) = \frac{1}{N}\sum\limits_{l = 1}^{N} {H_{l}^{({\mathrm{B}})}{\hat{X}}_{l}^{({\mathrm{B,}}i)}} {e^{j2\pi nl/N}}. \end{array} $$ Then, to reconstruct the received signal from the other TX antenna (antenna B here) in the time domain, the sampled versions of the residual complex exponential term are multiplied to η (A,i)(n). The nth sample of the received signal from the other TX antenna in the time domain is denoted by \(\mathfrak {i}^{({\mathrm {A}},i)}(n)\), and then, \(\mathfrak {i}^{({\mathrm {A}},i)}(n)\) is expressed as follows: $$\begin{array}{@{}rcl@{}} \mathfrak{i}^{({\mathrm{A}},i)}(n)&=&{\eta}^{({\mathrm{A}},i)}(n) \odot {e^{- j2\pi n\varepsilon /N}}\\ &=& \frac{1}{N}\sum\limits_{l = 1}^{N} {H_{l}^{({\mathrm{B}})}{\hat{X}}_{l}^{({\mathrm{B,}}i)}} {e^{j2\pi n(l - \varepsilon)/N}}. \end{array} $$ Denote the kth output of \({\text {FFT}}\left [ \mathfrak {i}^{({\mathrm {A}},i)}(1),\mathfrak {i}^{({\mathrm {A}},i)}(2),\ldots,\right.\left.\mathfrak {i}^{({\mathrm {A}},i)}(N) \right ]\) by \(F_{k}^{(A)}\), then \(F_{k}^{(A)}\) for 1≤k≤N is expressed as $$\begin{array}{@{}rcl@{}} F_{k}^{(A)} &=& \sum\limits_{n = 1}^{N} {\frac{1}{N}\sum\limits_{l = 1}^{N} {H_{l}^{({\mathrm{B}})}{\hat{X}}_{l}^{({\mathrm{B,}}i)}} {e^{j2\pi n(l - \varepsilon)/N}}{e^{- j2\pi nk/N}}} \\ &=&\frac{1}{N}\sum\limits_{l = 1}^{N} {H_{l}^{({\mathrm{B}})}{\hat{X}}_{l}^{({\mathrm{B,}}i)}} \sum\limits_{n = 1}^{N} {{e^{j2\pi n(l - k - \varepsilon)/N}}}. \end{array} $$ By using the summation formula for the geometric series, (36) is rewritten as $$ {\begin{aligned} F_{k}^{(A)} &= \frac{1}{N}\sum\limits_{l = 1}^{N} {H_{l}^{({\mathrm{B}})}{\hat{X}}_{l}^{({\mathrm{B,}}i)}\frac{{{e^{j\pi n(l - k - \varepsilon)}}\left({{e^{- j\pi n(l - k - \varepsilon)}} - {e^{j\pi n(l - k - \varepsilon)}}} \right)}}{{{e^{j\pi n(l - k - \varepsilon)/N}}\left({{e^{- j\pi n(l - k - \varepsilon)/N}} - {e^{j\pi n(l - k - \varepsilon)/N}}} \right)}}}\\ &=\sum\limits_{l = 1}^{N} {\frac{{\sin \left({\pi \left({l - k - \varepsilon} \right)} \right)}}{{N\sin \left({\pi \left({l - k - \varepsilon} \right)/N} \right)}}H_{l}^{({\mathrm{B}})}{\hat{X}}_{l}^{({\mathrm{B,}}i)}}\\ &=\sum\limits_{l = 1}^{N} {Q\left({l - k - \varepsilon} \right)H_{l}^{({\mathrm{B}})}{\hat{X}}_{l}^{({\mathrm{B,}}i)}}. \end{aligned}} $$ By reusing the relation l=(β−1)n c +m and setting k=(b−1)n c +k ′, (37) is expressed as $$ F_{k}^{(A)} = \sum\limits_{\beta = 1}^{{N_{b}}} {\sum\limits_{m = 1}^{{n_{c}}} {Q\left({(\beta - b){n_{c}} + m - k' - \varepsilon} \right)H_{\beta,m}^{({\mathrm{B}})}{\hat{X}}_{\beta,m}^{({\mathrm{B,}}i)}} }. 
From (38), we see that \(F_{k}^{(A)}\) is equal to (7), and thus it is proved that the proposed time domain cancelation is equivalent to the previous subcarrier domain cancelation.

Note that in each of (31) and (32), N, (N/2) log₂ N, and N multiplications are required for the IFFT input vector generation, the N-point IFFT operation, and the N-point complex sinusoid multiplication, respectively. In addition, we have to include the computations for the two FFT blocks of the original OFDM demodulation, which are now inside the cancelation loop (see Fig. 3 a), unlike in the previous subcarrier domain cancelation schemes (see Fig. 3 b). Consequently, 4N + 2N log₂ N multiplications are required in total at each iteration to reconstruct the interference in the time domain cancelation, as shown in Table 1.

Table 1 The number of multiplications required in the operations for the interference term reconstruction at each branch per iteration of the proposed scheme
IFFT input vector generation: N
IFFT for time domain conversion: (N/2) log₂ N
N-point complex sinusoid multiplication: N
FFT for the original OFDM demodulation: (N/2) log₂ N
Total at each iteration (both branches): 4N + 2N log₂ N

Note that as N increases to the practical range, the computational complexity of the proposed structure is proportional to N log₂ N, whereas that of the previous scheme is proportional to N². Figure 9 and Table 2 compare the complexity of the previous subcarrier domain cancelation schemes and the proposed time domain cancelation scheme. (Figure 9 caption: Number of complex multiplications per iteration. Table 2: Comparison of the number of multiplications between the previous and the proposed schemes according to N, with columns Previous (frequency domain), Proposed (time domain), and Proposed/previous; the numerical entries are not reproduced here.) It is remarkable that the proposed structure drastically reduces the complexity compared with the previous subcarrier domain cancelation schemes while maintaining mathematical equivalence to them.

5 Simulation results

In this section, we provide simulation results to evaluate the performance of the proposed scheme. In all simulations, we set N=256. Regarding the multi-path profile used to generate \(H_{b,k}^{(A)}\) and \(H_{b,k}^{(B)}\), the number of multi-paths is 8 and their delays are distributed uniformly in [0, T_max], where T_max is the maximum delay spread. The guard interval is set to be larger than T_max. The subcarrier spacing (= 1/T, where T is the OFDM symbol duration prior to the guard time insertion) is set to 15 kHz by reference to the Long-Term Evolution (LTE) standard. For the proposed AS (adaptive selective) scheme, the threshold value ρ is set to 0.4 regardless of the iteration number and the other parameters.

Figure 10 shows the BERs of the iterative ICI cancelation schemes according to the number of iterations i for ε=0.5 with binary phase shift keying (BPSK) and T_max = T/100, T/50, and T/10. The subblock size of the FADAC-OFDM frame, n_c, is set to 8 irrespective of T_max. We exclude the extremely frequency-selective fading channels for which the optimal n_c of FADAC-OFDM equals 2, since in that case the transmitter structure of FADAC-OFDM is trivially the same as that of CDAC-OFDM. (Figure 10 caption: a–c BER according to the number of iterations for ε=0.5 and several values of T_max, with n_c = 8.)

As a baseline for the performance comparison, CDAC-OFDM with iterative ICI cancelation using all the detected symbols in the OFDM frame is included. Note that the iterative ICI cancelation yields almost no improvement for CDAC-OFDM.
This is because the initial detection performance of CDAC-OFDM in a frequency-asynchronous environment is poor, and thus ICI cancelation based on unreliable initial detection does not work properly. On the other hand, the iterative ICI cancelation works better when applied to FADAC-OFDM, which has a superb initial detection performance. However, the performance gain of the ICI cancelation scheme in [10] is still not significant. This scheme uses all the detected symbols in the interference reconstruction step without any consideration of the reliability of those symbols. Even in FADAC-OFDM, some of the detected symbols may have a relatively high probability of error due to severe ICI terms such as the inter-block ICI mentioned before. It is shown that after the first iteration the performance improves appreciably, but from the second iteration onward it remains stuck at the same value. This is because the reconstructed interference term used for cancelation is no longer updated, owing to the erroneous portion of the reconstructed interference. Consequently, the performance gap between the scheme in [10] and the ICI-free case remains significant.

Note that the two schemes proposed in this paper achieve significantly improved performance compared with the scheme in [10]. In the first iteration, the proposed DS scheme with M=1 achieves a substantially lower BER than the scheme in [10]. This implies that simply excluding the band edge subcarriers can efficiently avoid erroneous ICI reconstruction, which results in a significant improvement from canceling the ICI of the remaining subcarriers in the first iteration. However, the band edge subcarriers are not canceled in the subsequent iterations either, and the BER therefore converges to a level still significantly higher than that of the ICI-free case. Meanwhile, the proposed AS scheme performs significantly better than the proposed DS scheme. Within only three or four iterations, the proposed AS scheme approaches the ICI-free level. This means that adaptively selecting the detection symbols for ICI reconstruction works properly and that, as the iterations proceed, even the band edge subcarriers with high ICI power are gradually canceled. Despite its inferior performance relative to the proposed AS scheme, the proposed DS scheme has the merit of being easy to implement with almost no complexity overhead.

The performance of the proposed schemes could be further improved by more carefully optimizing or adaptively changing the system parameters, such as the subblock size n_c of FADAC-OFDM, M for the proposed DS scheme, or ρ for the proposed AS scheme. However, we do not cover this case because the main point of this paper is to demonstrate the improved performance of the proposed schemes even with suboptimal parameters. In addition, adaptively changing the system parameters is practically a burden in terms of system implementation.

Figure 11 shows the BERs according to E_b/N_0 with i=4 and ε=0.5. Based on Fig. 10, i is set to 4 since the performance of all cases roughly converges at i=4. The results for the schemes without ICI cancelation are also included to show the improvement obtained by adding the iterative ICI cancelation. Although there are slight deviations in the high-SNR region, the proposed AS scheme achieves nearly ICI-free performance with fixed system parameters over the wide SNR range and the considered delay spread range.
(Figure 11 caption: a–c BER according to E_b/N_0 with i=4 for several values of T_max and ε=0.5, with n_c = 8.)

In Fig. 12, the BER results are plotted for the large-CFO (>0.5) cases. Although the performance gradually degrades and moves away from the ICI-free level as the CFO increases, the proposed schemes still attain a significant ICI reduction. In particular, the proposed AS scheme maintains the BER at a meaningful level even for CFO >0.5. This is because FADAC-OFDM retains its intrinsic feature, i.e., intra-block ICI self-cancelation irrespective of the CFO, even though the inter-block ICI level increases as the CFO increases. On the other hand, the ICI cancelation schemes applied to CDAC-OFDM break down abruptly as the CFO grows beyond 0.5. For reference, see Fig. 5 in [6] and Fig. 5 in [7], which we cannot overlay on Fig. 12 of this paper because the system parameters and the channel parameters are not the same. (Figure 12 caption: a–c BER according to ε with i=4 for several values of T_max, with n_c = 8.)

To investigate the performance under practical conditions, we also consider the case in which there is a channel estimation error. Figure 13 shows the BER results of each cancelation scheme according to the variance of the channel estimation error with ε=0.5, i=4, E_b/N_0 = 20 dB, T_max = T/250, and n_c = 32. The model of imperfect channel estimation in [13] is employed, and the channel estimation error refers to the error normalized by the mean channel gain. The results show that the performance degradation of the proposed scheme increases, and the performance gaps among the schemes accordingly decrease, as the variance of the error exceeds 0.2. Note, however, that in the practical range of channel estimation error, say below 0.2, the performance of all the schemes is almost insensitive to the channel estimation error, and thus the significant performance gap between the proposed AS scheme and the other schemes remains. (Figure 13 caption: BER according to the variance of the channel estimation error with ε=0.5, i=4, E_b/N_0 = 20 dB, T_max = T/250, and n_c = 32.)

6 Conclusion

We proposed an iterative ICI cancelation scheme for distributed Alamouti-coded OFDM that is enhanced both in performance and in complexity. By avoiding incorrect cancelation due to incorrectly detected symbols, the proposed scheme achieves better performance than the other ICI cancelation schemes. Within only three or four iterations, the proposed scheme achieves near-ICI-free performance by instantaneously reflecting the reliability of the detected symbols at each iteration. As for complexity, by converting the ICI cancelation into a functionally equivalent structure, the proposed scheme has a drastically reduced computational complexity. The performance results shown in this paper make it a promising solution for current and future cooperative transmit antenna systems using the OFDM waveform and the Alamouti code. The proposed scheme could be further improved by combining it with more sophisticated techniques, such as adaptively selective cancelation based on soft decision feedback [12]. We leave this as one of our future works.

Acknowledgements

This work was supported in part by the DGIST R&D Program of the Ministry of Science, ICT and Future Planning, Korea (17-IT-01), by the Basic Science Research Program through the National Research Foundation (2015R1D1A3A01015970) funded by the Ministry of Education, and by the Information Technology Research Center support program (IITP-2016-R2718-16-0035) supervised by the Institute for Information & Communications Technology Promotion and funded by the Ministry of Science, ICT and Future Planning, Korea.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Author affiliations: Advanced Radar Technology Laboratory, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea; Department of ICE, Yeungnam University, 280 Daehak-Ro, Gyeongsan, Gyeongbuk, 38541, Republic of Korea

References
[1] H Wang, XG Xia, Distributed space-frequency codes for cooperative communication systems with multiple carrier frequency offsets. IEEE Trans. Wirel. Commun. 8(2), 1045–1055 (2009).
[2] Z Li, XG Xia, An Alamouti coded OFDM transmission for cooperative systems robust to both timing errors and frequency offsets. IEEE Trans. Wirel. Commun. 7(5), 1839–1844 (2008).
[3] K Choi, Inter-carrier interference-free Alamouti-coded OFDM for cooperative systems with frequency offsets in non-selective fading environments. IET Commun. 5(15), 2125–2129 (2011).
[4] B Kim, K Choi, FADAC-OFDM: frequency asynchronous distributed Alamouti-coded OFDM. IEEE Trans. Veh. Technol. 64(2), 466–480 (2015).
[5] Y Zhang, J Zhang, in Proc. IEEE WCNC 2009. Multiple CFOs compensation and BER analysis for cooperative communication systems (2009), pp. 1–6.
[6] T Lu, H Lin, T Sang, in Proc. IEEE ISPMRC 2010. An SFBC-OFDM receiver to combat multiple carrier frequency offsets in cooperative communications (2010).
[7] J Lee, H Lin, T Sang, in Proc. IEEE ISCAS 2012. An SFBC-OFDM receiver with MLSE equalizer to combat multiple carrier frequency offsets (2012).
[8] Y Yao, X Dong, Multiple CFO mitigation in amplify-and-forward cooperative OFDM transmission. IEEE Trans. Commun. 60(12), 3844–3854 (2012).
[9] K Lee, D Williams, in Proc. IEEE GLOBECOM '00. A space-frequency transmitter diversity technique for OFDM systems (2000), pp. 1473–1477.
[10] B Kim, J Lee, D Jeong, K Choi, in Proc. ITNG 2014. Combining successive ICI cancelation to ICI suppressed Alamouti coded OFDM for frequency asynchronous distributed antenna systems (2014).
[11] OE Agazzi, N Seshadri, On the use of tentative decisions to cancel intersymbol interference and nonlinear distortion (with application to magnetic recording channels). IEEE Trans. Inf. Theory 43(2), 394–408 (1997).
[12] SH Muller, WH Gerstacker, JB Huber, in Proc. GLOBECOM '96. Reduced-state soft-output trellis-equalization incorporating soft feedback (1996), pp. 95–100.
[13] Y Chen, C Tellambura, Performance analysis of maximum ratio transmission with imperfect channel estimation. IEEE Commun. Lett. 9(4), 322–324 (2005).
[14] B Zhou, Y Xiao, P Yang, S Li, in Proc. WiCOM 2011. An iterative CFO compensation algorithm for distributed spatial modulation OFDM systems (2011).
[15] SM Alamouti, A simple transmitter diversity scheme for wireless communications. IEEE J. Select. Areas Commun. 16(8), 1451–1458 (1998).
[16] P Dharmawansa, N Rajatheva, H Minn, An exact error probability analysis of OFDM systems with frequency offset. IEEE Trans. Commun. 57(1), 26–31 (2009).
DOI: 10.1126/SCIENCE.288.5465.462
Orbital physics in transition-metal oxides
Tokura, Nagaosa
An electron in a solid, that is, bound to or nearly localized on the specific atomic site, has three attributes: charge, spin, and orbital. The orbital represents the shape of the electron cloud in solid. In transition-metal oxides with anisotropic-shaped d-orbital electrons, the Coulomb interaction between the electrons (strong electron correlation effect) is of importance for understanding their metal-insulator transitions and properties such as high-temperature superconductivity and colossal…
Nutritional disturbance in acid–base balance and osteoporosis: a hypothesis that disregards the essential homeostatic role of the kidney

Contents:
Review of acid–base homeostasis
Is there a reason to question the traditional, accepted approach to analyse acid–base chemistry?
Nutrition and acid–base balance
Claude Bernard's nutrition experiments in rabbits
Origin of the hypothesis involving bone as regulator of acid–base balance
From severe renal metabolic acidosis to the hypothesis of 'latent' acidosis of nutritional origin
Bone alkali store and overestimated proton retention in chronic renal failure
Extrapolation from severe metabolic acidosis in rodents to the putative nutritional protein origin of osteoporosis in the general human population
Evaluation of the acid and alkali nutritional load
Mathematical model to estimate the potential renal acid load of foods
Relationship between bone health and the acid or alkaline load of the diet
Reviews of studies dealing with the dietary acid load hypothesis of osteoporosis
Randomised clinical trials with potassium alkali in postmenopausal women
Phosphate intake and calcium balance
Age decline in renal function and osteoporosis: are they causally related?
British Journal of Nutrition, Volume 110, Issue 7, 14 October 2013, pp. 1168–1177
Jean-Philippe Bonjour
Division of Bone Diseases, Geneva University Hospitals and Faculty of Medicine, Rue Gabrielle-Perret-Gentil, CH-1211 Geneva 14, Switzerland
Copyright: © The Author 2013. The online version of this article is published within an Open Access environment subject to the conditions of the Creative Commons Attribution licence.
DOI: https://doi.org/10.1017/S0007114513000962
Published online by Cambridge University Press: 04 April 2013

Abstract

The nutritional acid load hypothesis of osteoporosis is reviewed from its historical origin to most recent studies with particular attention to the essential but overlooked role of the kidney in acid–base homeostasis. This hypothesis posits that foods associated with an increased urinary acid excretion are deleterious for the skeleton, leading to osteoporosis and enhanced fragility fracture risk. Conversely, foods generating neutral or alkaline urine would favour bone growth and Ca balance, prevent bone loss and reduce osteoporotic fracture risk. This theory currently influences nutrition research, dietary recommendations and the marketing of alkaline salt products or medications meant to optimise bone health and prevent osteoporosis. It stemmed from classic investigations in patients suffering from chronic kidney diseases (CKD) conducted in the 1960s. Accordingly, in CKD, bone mineral mobilisation would serve as a buffer system to acid accumulation. This interpretation was later questioned on both theoretical and experimental grounds. Notwithstanding this questionable role of bone mineral in systemic acid–base equilibrium, not only in CKD but even more in the absence of renal impairment, it is postulated that, in healthy individuals, foods, particularly those containing animal protein, would induce 'latent' acidosis and result, in the long run, in osteoporosis. Thus, a questionable interpretation of data from patients with CKD and the subsequent extrapolation to healthy subjects converted a hypothesis into nutritional recommendations for the prevention of osteoporosis. In a historical perspective, the present review dissects out speculation from experimental facts and emphasises the essential role of the renal tubule in systemic acid–base and Ca homeostasis.

'It is no exaggeration to say that the composition of the body fluids is determined not by what the mouth takes in but by what the kidneys keep:
When, among other duties, they excrete the ashes of our body fires, or remove from the blood the infinite variety of foreign substances that are constantly absorbed from our indiscriminate gastrointestinal tracts, these excretory operations are incidental to the major task of keeping our internal environment in an ideal, balanced state.' Homer W. Smith (From Fish to Philosopher)(1) The hypothesis suggesting that a diet increasing the urinary excretion of acid ion (proton = H+) could be a risk factor for osteoporosis was proposed more than 40 years ago(2). Conversely, the contention that a diet rich in alkaline or basic (OH−) functions would be beneficial to bone health continues to generate a substantial scientific interest. The recurring resurgence of this interest is relayed in the general population by various mass media spreading the belief of small but very active groups of opponents to the use of any animal products(3). The same view can be expressed by the scientific community via analyses or meta-analyses of studies which suggest that certain nutrients, particularly animal protein, or foods such as meat or dairy products, by virtue of their supposed 'acidogenic' properties, may increase the risk of osteoporosis. At the same time, or in response to these suggestions appearing in the scientific literature, there is growing interest in miraculous benefits claimed for so-called 'alkalinogenic' diets or nutritional products, such as those proposed on numerous websites. Consuming 'alkalis' will bring about a number of benefits, expanding from hair loss treatment to the prevention of cancers, infections, allergies, obesity, 'all types of rheumatism' and, ultimately, osteoporosis, the subject of the present review. The keen interest in alkali has also found followers among certain anthropologists who argue that the contemporary diet, when compared with that which prevailed before the Neolithic period, has led to osteoporosis together with other diseases linked to the modern way of life, several being hypothetically caused by nutrition-induced metabolic acidosis(4). One may be surprised by this keen interest in alkalis and the associated fear of acid, forgetting that the skin or the oesophagus tolerates caustic soda (Na+OH−) as poorly as hydrochloric acid (H+Cl−). Yet, basic physiology shows that our bodies are equipped with several systems capable of neutralising or generating protons, such as the bicarbonate–CO2 buffer: $$\begin{eqnarray} H^{ + } + HCO_{3}^{ - }\leftrightarrow H_{2}CO_{3}\leftrightarrow H_{2}O + CO_{2} \end{eqnarray}$$ This system enables very effective neutralisation of the excess of H+ ions by moving this reaction to the right and therefore increasing the production of CO2, which, in physiological conditions, is easily eliminated via the respiratory tract(5). In addition to this pulmonary mechanism, the renal tubular system is extremely well equipped to maintain the acid–base balance of the extracellular compartment by modulating the reabsorption of bicarbonate and the secretion of protons. These processes are linked to buffer systems able to eliminate the excess of H+ ions produced by cellular metabolism, without substantially lowering urinary pH(5). 
The main urinary buffer systems are: $$\begin{eqnarray} (i)\,HPO_{4}^{2 - } + H^{ + }\leftrightarrow H_{2}PO_{4}^{ - }\,(divalent\,phosphates\leftrightarrow monovalent\,phosphates) \end{eqnarray}$$ $$\begin{eqnarray} (ii)\,NH_{3} + H^{ + }\leftrightarrow NH_{4}^{ + }\,(ammonia\leftrightarrow ammonium) \end{eqnarray}$$ The composition of the extracellular fluid in which the cells of the body exert their specific functions must deviate towards neither the acid nor the alkaline side. Measurable deviations are due to pathological disturbances that affect primarily the digestive tract, intermediary metabolism, the pulmonary system or renal functions. The four classic disturbances of acid–base balance with clinically significant consequences are, on the one hand, acidosis and alkalosis of metabolic origin and, on the other hand, acidosis and alkalosis of respiratory origin(5, 6). Furthermore, deviations from an extracellular pH of 7·35 can be corrected or attenuated by both the capacity of chemical buffers and the physiological regulation at the respiratory and renal tubular levels. The mobilisation of such compensatory mechanisms is expressed by changes in the distribution of buffer system components. These basic concepts are essential for the understanding of the relationship between nutrition and bone health. As discussed below, the notion of latent acidosis(7), as well as the relationship between ageing, renal functional decline and blood acid–base composition(8), have been suggested to be causally related to the increased prevalence of osteoporosis in the elderly population. However, alterations in blood pH, [HCO3−] and/or pCO2 have not been documented in relation to changes in the foods or nutrients purported to cause osteoporosis in otherwise healthy individuals(9, 10). The traditional, accepted bicarbonate-centred formulation of acid–base interpretation was questioned about 25 years ago by Stewart(11), who promoted the so-called 'strong ion difference' (SID) approach. According to the mathematical model from which this theory was worked out, the components of the volatile bicarbonate–CO2 buffer system (CO2, HCO3−, H2CO3 and CO3^2−) were dependent variables of the difference in the net charges of fixed cations and anions fully dissociated in solution. Thus, according to Stewart(11, 12), the strong ion difference [Na+] − [Cl−] or SID would be a determinant of [H+]. However, 30 years after Stewart(11, 12), Kurtz et al. (13) thoroughly analysed the physico-chemical, physiological and clinical aspects of Stewart's theory when compared with the traditional, accepted bicarbonate-centred approach. In this very comprehensive review(13) it was underscored that Stewart's theory(11, 12) reintroduced the confusion in the acid–base literature that existed from the beginning of the twentieth century and had prevailed until the early 1950s. During that period, clinical chemists considered Na+ as a base and Cl− as an acid(13). Such a consideration entirely disregarded the key position of H+ in acid–base reactions. This misconception in clinical acid–base chemistry was dispelled in the mid-late 1950s by Relman(14) and Christiensen(15), whose 'prescient analysis foreshadows in some sense the current issues in the literature as they relate to the Stewart framework'(13). Furthermore, the bicarbonate-centred approach utilising the Henderson–Hasselbalch equation is a mechanistic formulation that reflects the underlying acid–base situation(13).
It remains the most reliable and used method for physiologists and clinicians to assess acid–base chemistry in human blood(13). Therefore, it is inaccurate to claim an absence of consensus as to how to assess acid–base balance by referring primarily to the SID and bicarbonate-centred approaches without emphasising the most cogent arguments developed by Kurtz et al. (13). Adherents to the notion of diet-induced acidosis as an essential mechanism for the high prevalence of osteoporosis in the Western world suggest that if no change is observed, this does not mean there is none. However, in order to support the diet-induced acidosis hypothesis of osteoporosis, it would seem necessary to objectively measure whether diet alters blood acid–base equilibrium, and, particularly, whether such alteration can be found in association with bone fragility. In the presence of one of the above-mentioned acid–base balance disturbances, foods, depending on their nutrient composition, can either slightly accentuate or ameliorate a pathological condition. However, in the absence of such pathologies, food components trigger neither extracellular fluid acidosis nor alkalosis. Any influence of nutritional origin that slightly disrupts the acid–base equilibrium is at once corrected by biochemical buffering systems operating in both the extracellular and intracellular compartments. Then, as indicated above, come into play the homeostatic systems involved in the regulation of pulmonary ventilation and urinary acid excretion via modulation of the renal tubular reabsorption or 'reclamation' of filtered bicarbonates and of proton secretion(5). Over the last two decades, tremendous progress has been achieved in understanding the cellular and molecular mechanisms involved in renal tubular acidification (see for reviews Weiner & Hamm(16), Hamm et al. (17), Koeppen(18) and Weiner & Verlander(19)). Nevertheless, the fundamental concepts elucidated several decades ago on the overall renal control of extracellular proton homeostasis remain valid. Homeostasis is defined as the stabilisation of the various physiological constants of the 'internal environment'. It has played an essential part in the evolution of life, from the most elementary unicellular organism to Homo sapiens, both in its phylogenetic and ontogenetic trajectories. Bearing in mind the capacity of physiological systems to adapt in response to environmental changes, homeostasis provides a scientific explanation for the basic mechanism of biological evolution(1). Homeostasis includes the maintenance of a constant extracellular concentration of protons. Extracellular levels of other ions such as Na, K, Ca and inorganic phosphate are also barely affected by fluctuations in their respective nutritional intakes, unless their variations are very large in quantity and extend over prolonged periods. That diet alters urinary acidity had already been demonstrated in the nineteenth century by Bernard(20) in his fundamental experiments on rabbits. By substituting cold boiled beef for their usual dietary regimen (consisting essentially of grass), cloudy, alkaline urine became clear and acidic, like the urine of carnivores(20). For this eminent physiologist, whose major contribution was to the elucidation of the homeostasis of the internal environment, these experiments carried out on rabbits represented a particularly cogent example of functional adaptation to environmental variations(20). 
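As a concrete illustration of the bicarbonate-centred assessment referred to above, the Henderson–Hasselbalch relationship for the bicarbonate–CO2 pair can be evaluated directly. The short sketch below uses standard textbook constants (pK′ = 6·1, CO2 solubility 0·03 mmol/l per mmHg) and illustrative arterial values; none of these numbers are data from the studies cited in this review, and the snippet is only meant to show how blood pH follows from [HCO3−] and pCO2 in this formulation.

```python
import math

def blood_ph(hco3_mmol_l, pco2_mmhg, pk=6.1, s_co2=0.03):
    """Henderson-Hasselbalch for the bicarbonate-CO2 buffer:
    pH = pK' + log10([HCO3-] / (s_CO2 * pCO2))."""
    return pk + math.log10(hco3_mmol_l / (s_co2 * pco2_mmhg))

print(round(blood_ph(24, 40), 2))   # typical arterial values -> about 7.40
print(round(blood_ph(18, 40), 2))   # lowered bicarbonate at unchanged pCO2 -> about 7.28,
                                    # the pattern of an uncompensated metabolic acidosis
```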
The urinary acidity changes observed in response to food substitution are particularly relevant to the considerations discussed below. A century after Bernard's(20) observations in rabbits, Relman and his colleagues(21–24) in Boston carried out a series of classical experiments with the objective of establishing, via quantitative data, the vital role of the kidney in acid–base balance. First, in healthy human subjects, i.e. those with normal renal function, Relman et al. (21) demonstrated that acid urinary excretion perfectly counterbalanced the net production of non-volatile acid. These experiments showed that the regulation mechanisms for the proton balance were indeed functioning. They signified that, in the absence of renal insufficiency, there was no argument for the involvement of organs other than the kidney in the maintenance of the homeostasis of non-volatile acids. They then applied their technique to patients suffering from acidosis through chronic renal insufficiency(22, 24). In these studies, carried out on a small number of patients with a pathologically decreased but stable serum level of bicarbonates, their method of calculation indicated a positive balance of protons(22, 24). This led to the hypothesis that the quantity of acid retained in the body, indirectly estimated and not measured, was neutralised by the release of bicarbonates by the dissolution of bone mineral(22, 24). (Bone mineral is not pure hydroxyapatite. The apatite crystals contain impurities, most notably carbonate (CO32 −) in place of the phosphate group. The concentration of carbonate (4–6 %) makes bone mineral similar to a carbonate apatite. Other documented substitutions are K, Mg, Sr and Na in place of the Ca ions, and Cl and F in place of the hydroxyl groups. These impurities reduce the crystallinity and solubility of the apatite(25).) In order to document this hypothetical bone mobilisation of bicarbonates, the Relman team carried out an initial study on five normal subjects(23). The administration of large doses of NH4Cl, drastically decreasing the blood level of bicarbonates from 26·5 to 18·8 mEq/l, was associated with a negative Ca balance, attributed to the mobilisation of calcium carbonate of skeletal origin(23). This interpretation was therefore based on the measurement of a decreased but stable level of bicarbonates, whereas during the same period, the estimate of acid balance indicated a progressive accumulation of protons(23). The negative Ca balance was due to an increase in urinary losses, the intestinal Ca absorption being unchanged(23). The change in the rate of urinary Ca excretion was therefore interpreted as a consequence of the mobilisation of Ca from the bones, associated with the release of buffer substances due to the dissolution of bone mineral in the presence of severe metabolic acidosis(23). The authors did not consider the possibility that the mobilisation of bone Ca might be secondary to an effect on the renal tubular reabsorption of Ca. In several subsequent studies, it turned out that acidosis is a factor that considerably inhibits the tubular reabsorption of Ca(26). Consequently, the mobilisation of bone Ca observed in these earlier experiments(23) may therefore actually represent a secondary phenomenon, compensating for the tendency towards hypocalcaemia rather than being the cause of the negative Ca balance(26). In a subsequent study from the same group, Ca balance was determined in eight patients suffering from severe renal insufficiency(24). 
In the majority of these patients, there were signs of osteodystrophy including generalised skeletal demineralisation and radiological evidence of secondary hyperparathyroidism as expressed at the phalanges by the presence of sub-periosteal resorption(24). Administration of NaHCO3, causing an increase in the concentration of serum bicarbonates from 18·7 to 27·4 mEq/l and thereby correcting metabolic acidosis, was associated with modest improvement in the negative Ca balance, from − 5·3 to − 1·5 mEq/d. Moreover, this correction was essentially due to a decrease in the faecal excretion of Ca, the urinary excretion being considerably reduced in these patients(24). The three above-mentioned studies, two conducted in patients with chronic renal insufficiency(22, 24) and one in normal subjects rendered severely acidotic through the administration of NH4Cl(23), are the basis of the hypothesis that bone mineral plays an important part in whole-body acid–base balance. This role would rely on the mobilisation of alkaline ions from the bone, thereby offsetting the excess of acid. This hypothesis is still being considered as a well-established scientific fact. The putative bone buffer mobilisation would be operational not only in the case of severe renal insufficiency, but also in the absence of any pathology, affecting the respiratory and/or renal regulatory systems involved in the maintenance of acid–base balance. According to this hypothesis, the 'Western diet', in particular, would be a risk factor for osteoporosis, as it may supply an excess of protons that the pulmonary and renal systems would no longer be in a position to eliminate and which, therefore, would require the mobilisation of calcium bicarbonate from the bone tissue. However, it has been demonstrated that, in subjects in good health, blood pH and serum level of bicarbonates are not altered following dietary manipulations that induce alterations in urinary proton excretion, such as quantitative variations in the protein intake or qualitative differences in the diet, when comparing omnivorous and vegetarian subjects(27–29). In the absence of studies demonstrating the existence of an acid–base imbalance in the extracellular fluid, the notion of a 'latent' metabolic acidosis state has been put forward(7). This expression appears to be a misuse of language. The term 'latent' in a medical context is used to describe a state during which a clearly identified pathological disturbance or a pathogenic agent of a disease is detectable but remains inactive. A good example is the varicella zoster virus that remains latent after the initial bout of chicken pox has ended. When the virus becomes reactivated, usually several decades later, it causes herpes zoster. However, this phenomenon does not apply to the putative relationship between metabolic acidosis, the incriminated state of nutritional origin and osteoporosis. Therefore, the systemic acidosis of pure dietary origin remains a hypothesis that has not been scientifically demonstrated but which, in a certain number of publications (see below), is considered to be a proven pathophysiological mechanism leading to osteoporosis. Even the hypothesis that bone is very important in maintaining stable serum HCO3− in established chronic metabolic acidosis has been challenged on the grounds of both theory and experimental data(30–32). 
Even if one admits that in the experiments conducted in patients suffering from acidosis due to chronic renal insufficiency(22, 24), the stability of low serum bicarbonates would be the consequence of some alkali mobilisation from an endogenous source, the origin cannot be the bone mineral(30–32). Indeed, the quantity of buffering substances released from the bone would be largely insufficient to neutralise the acid assumed to have accumulated in the course of years when chronic renal insufficiency has been developing(30–32). It was estimated that about 50 % of bone mineral would have to be dissolved over approximately 1·8 years in order to achieve such an acid neutralisation(30–32). In other words, calculation based on the total Ca and alkali content in the skeleton indicates that with a supposed proton retention of 12–19 mEq daily in chronic renal acidosis(22, 24), it would take 3·6 years for the bone alkali store to be exhausted in order to buffer this amount of acid(32). Thus, a quantitative estimate of the bone alkali content rules out that mobilisation of apatite mineral would be implicated in the maintenance of the low serum level of bicarbonates observed in the metabolic acidosis of chronic renal insufficiency. A re-evaluation of the various components of acid–base balance(32, 33) made highly questionable the hypothesis that bone alkali mobilisation is an important process in maintaining a stable low level of serum bicarbonate in chronic metabolic acidosis(30–32). Important technical progress has made possible the determination of the net gastrointestinal absorption of alkali, applying a method that avoids imprecise measurements of the quantities consumed and excreted in the faeces(30–32). With the use of this technique, as well as taking into account the urinary excretion of organic cations and anions (see below), the acid–base balance appeared to be neutral in end-stage renal disease patients(33, 34). Consequently, with no excess of protons to be neutralised, there was no reason to invoke the mobilisation of alkali from the bone tissue in chronic renal insufficiency with stable metabolic acidosis. Thus, a technical error, corresponding either to an underestimate of the net quantity of acid excreted, or to an overestimate of the net acid production, has perpetuated the incorrect concept that bone mineral plays a substantial part in acid–base balance in patients suffering from chronic renal acidosis. This incorrect concept does not mean that the acidosis generated by severe chronic kidney disease would not contribute to renal osteodystrophy. Nevertheless, other mechanisms probably play a more important part than acidosis per se in the deterioration of bone integrity in the case of severe chronic renal failure (for a review, see Hruska & Mathew(35)). The effects of metabolic acidosis on the skeleton were examined both in vitro and in vivo in animal experiments(36–40). The results of these studies have been interpreted as supporting the hypothesis of an acid-buffering role of bone mineral. They are considered as experimental evidence in favour of the putative causal relationship between the so-called 'Western diet' and the prevalence of osteoporosis in the general population(4, 7, 41–44). 
Furthermore, these observations, whether on isolated bone cells or on rodents(36, 37, 38–40), taken together with the fact that food intake modifies the degree of acidification of urine, as already demonstrated by Bernard(20) in the mid-nineteenth century, provided the rationale for exploring whether there would be a possible relationship between protein intake and osteoporosis, and, particularly, whether protein from animal v. vegetable sources would be more detrimental to bone health. To this end, many epidemiological studies have been published in the course of the last 16 years(45–56). Several of these reports appear to present some methodological flaws. Examples include the following: the age of the included subjects (varying between 35 and 74 years); the absence of an analytical distinction between sex; the inclusion of both pre-menopausal and postmenopausal women; the scarce or rather poor estimation of physical activity; the non-appreciation of the risk of falls; the variable levels of protein intake, often with average consumption above the recommended nutritional intake, therefore limiting the impact of protein malnutrition. In such disparate clinical conditions, it seems questionable to draw a synthesis from these studies by calculating an average relative risk with regard to the development of low bone mineral density (BMD)/content and/or fracture risk. Furthermore, in some reports testing the a priori hypothesis that acidic urinary excretion (particularly when positively related to protein intakes) would reflect metabolic acidosis and thereby should be associated with poor bone health, the data were a posteriori equivocally handled in favour of the postulated assumption. Thus, when the whole cohort did not show any associated relationship, further analysis focused on subgroups as computed by cross-tabulation combining highest protein with lowest Ca intakes(53), or on subjects with a history of fracture exclusively(54), or else on participants with high, but not with low urinary acid excretion(57). Starting from the hypothesis that the quantity of residual acid in the diet would influence the bone integrity of subjects otherwise in good health, several methods were proposed, based on studies conducted in the context of chronic renal insufficiency. First, it should be specified that measuring the pH of foods does not reflect the acid or alkali load they provide to the body. For example, orange juice has a low pH, by virtue of its high citric acid content, whereas once it has been ingested, it adds an alkali load to the body. Sulphurous amino acids (R-S) are neutral, but add acid loads once they have been metabolised, the reaction being: $$\begin{eqnarray} \text{R-S}\rightarrow CO_{2} + urea + H_{2}SO_{4} \end{eqnarray}$$ Foods contain numerous chemical substances. Their absorption depends not only on the type of substances ingested, but also on interactions with gastric acid and other nutrients in simultaneously ingested foods. Therefore, it is almost impossible to predict the impact of food ingestion on the regulation of acid–base balance(32). Moreover, since the intestinal absorption of the acid or alkali loads of food is incomplete, it is still necessary to be able to measure their quantity when excreted in the faeces. Taking into account both the experimental and analytical difficulties associated with such measurements, a simplified method has been developed and validated among subjects with chronic renal acidosis(31, 32–34, 58).
According to this method, in the steady state, the total amount of inorganic cations \((Na^{ + } + K^{ + } + Ca^{2 + } + Mg^{2 + })\) minus the total amount of anions \((Cl^{ - } + P^{1\cdot 8 - })\) measured in the urine over 24 h can be used to estimate the net gastrointestinal absorption of alkalis. This measurement has the advantage of also including any other source of alkalis translocated into the extracellular environment, hypothetically including those from the bone tissue(31). The principle according to which, at the steady state, the quantity of electrolytes excreted in the urine equals their quantity absorbed by the intestine has led to the development of mathematical models in order to estimate the relationship between food intake and net renal acid excretion (NAE)(59). NAE includes the daily urinary excretion of both inorganic and organic acids. This measurement provides an estimate of net endogenous acid production (NEAP)(60). The analytical difficulty relating to the measurement of urinary organic acids (OA), which include citric, lactic, oxalic, malic and succinic acids, as well as glutamic and aspartic amino acids, has been circumvented by an estimate derived from the body surface. The equation used is: $$\begin{eqnarray} OA\,(mEq/d) = body\,surface\times (41/1\cdot 73), \end{eqnarray}$$ in which the value 41 corresponds to the median daily urinary excretion of OA for an average body surface of 1·73 m² among subjects in good health(60, 61). This anthropometric estimate of OA is included in the calculation of the potential renal acid load (PRAL) of foods(62). This calculation avoids the direct measurement of NAE, which is already an indirect measurement in itself of the NEAP. The PRAL can be estimated relatively easily from dietary studies, using weekly diaries or regular questionnaires, in which the quantities ingested are analysed according to nutritional composition tables. The nutrients taken into account for the PRAL calculation are: (phosphorus+protein) − (K+Ca+Mg). The estimate of endogenous acid production has been further simplified by considering only protein and K intakes(63). An analysis of about twenty different diets followed by 141 subjects aged 17–73 years showed a coefficient of correlation (R²) of 0·36 (P= 0·006) with a positive slope between protein intake and renal net acid excretion (RNAE, taken as a NEAP index), whereas it was 0·14, with a negative slope, for K intake(63). By the regression of the protein:K ratio, the R² became 0·72 (P< 0·001)(63). The use of this simple ratio estimates the acid load of foods according to the following equation(63): $$\begin{eqnarray} RNAE\,(mEq/d) = - 10\cdot 2 + 54\cdot 5\,(protein\,(g/d)/K\,(mEq/d)). \end{eqnarray}$$ Physiologically, the meaning of the protein:K ratio remains obscure. Indeed, K per se cannot be considered as an alkalinising ion, since hyperkalaemic states are usually the generator of acidosis and not of metabolic alkalosis(6). Of note, the development of a tool enabling the estimation of the PRAL of foods was aimed at modifying the urinary pH by dietetic means, particularly in the context of preventing recurrent urinary lithiasis(62). Thus, taking into account the differences in pH-dependent mineral solubility, the nutritional approach for preventing the recurrence of calcium phosphate or uric acid lithiasis, for example, has consisted in promoting acidification or alkalinisation of urine, respectively (see for a review Grases et al. (64) and Moe et al. (65)).
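Returning to the two simplified estimates described above, both lend themselves to direct computation. The short sketch below implements the body-surface scaling of urinary organic acids and the regression-based RNAE from the protein:K ratio exactly as given in the two equations; the example intakes (80 g protein/d, 70 mEq K/d) are arbitrary illustrative values, not data from the cited studies.

```python
def organic_acids_meq_per_day(body_surface_m2):
    # OA (mEq/d) = body surface x (41 / 1.73), as given above.
    return body_surface_m2 * 41 / 1.73

def estimated_rnae_meq_per_day(protein_g_per_day, potassium_meq_per_day):
    # RNAE (mEq/d) = -10.2 + 54.5 x (protein (g/d) / K (mEq/d)), as given above.
    return -10.2 + 54.5 * (protein_g_per_day / potassium_meq_per_day)

print(round(organic_acids_meq_per_day(1.73), 1))      # 41.0 mEq/d for an average 1.73 m2 body surface
print(round(estimated_rnae_meq_per_day(80, 70), 1))   # ~52.1 mEq/d for the illustrative intakes
```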
Over the last two decades, several reports have considered the relationship between K intake, from foods or from the administration of potassium bicarbonate or citrate salts, and the Ca economy and bone metabolism(42, 66–75). In the context of osteoporosis, human intervention studies have been designed to test whether the administration of alkalinising salts may favourably affect Ca and bone metabolism and therefore eventually be developed as anti-osteoporotic therapy(66, 67, 69–75). The results obtained by the end of relatively short-term interventions suggested that taking alkalinising salts may transiently reduce bone turnover markers and/or shift the balance of bone health, thus '…tipping the scales in favour of potassium-rich, bicarbonate-rich foods'(42). However, prolonged randomised studies did not confirm such a positive influence on the Ca economy and bone loss prevention(72, 73). Decreased intestinal Ca absorption can explain reduced calciuria (UCa), with K salts yielding no significant net change in Ca balance(70, 73). Furthermore, in terms of skeletal health, in a 2-year randomised placebo-controlled trial in healthy postmenopausal women aged 55–65 years, potassium citrate administered in two doses (moderate: 18·5 mEq/d and high: 55·5 mEq/d) had no persistent effect on biochemical markers of bone remodelling measured at regular intervals. In line with this negative assessment, the reduction in areal BMD observed at the end of the intervention was not slowed, despite an increase in urinary pH and excretion of K over the 2 years of treatment(72). In this trial, the consumption of additional fruits and vegetables (+300 g/d), which increased the urinary excretion of K, neither reduced bone turnover nor prevented areal BMD decline when compared with the placebo group(72). As reported in short-term studies, a temporary reduction in bone markers was observed 4–6 weeks after the start of the treatment(72). In other words, the classical study supporting the 'benefits' of nutritional alkalinisation for bone health(66) was not confirmed by a long-term clinical trial measuring not only bone remodelling, but also bone loss following the menopause, at two skeletal sites of extreme importance in the risk of osteoporotic fractures – spine and proximal femur(72). Despite this negative evidence from a well-designed clinical trial(72) and long-term preclinical investigations showing no relationship between urinary acid excretion and either bone status (density and strength) or remodelling(76), the idea that taking bicarbonates or alkaline K salts would be beneficial to the Ca economy and might result in better bone health and thereby prevent osteoporotic fractures continues to generate reports aimed at demonstrating such a therapeutic possibility(43, 57, 74, 77). In the context of osteoporosis prevention in postmenopausal women and the elderly, modification of dietary habits could be plausible, provided long-term efficacy can be clearly demonstrated. In contrast, the daily consumption of alkaline salt preparations over several decades appears to be hazardous in the absence of an evaluation of possible long-term toxicity. For example, the risk of enhancing vascular calcifications cannot be ruled out, particularly when alkaline salts are combined with Ca and vitamin D supplementation. In the study by Jehle et al. 
(71), the difference in lumbar spine BMD between the potassium citrate and potassium chloride groups, which occurred without a consistent change in bone remodelling markers, could, as suggested by the authors, be fully attributed to enhanced non-cellular matrix mineralisation and thus be largely independent of bone cell-mediated events. Whether such Ca deposition in soft tissues, resulting from the consumption of alkaline salt supplements and an increased supply of Ca–vitamin D, could also occur in the cardiovascular system(78, 79) is unclear and is a risk that could outweigh the small and inconsistent benefit over placebo observed on bone integrity with alkaline supplements after 1 or 2 years of intervention(71, 72, 80). Recent reports have not supported the existence of a pathophysiological mechanism linking the consumption of some nutrients, particularly animal protein, to the induction of a biologically significant metabolic acidosis that would result in a negative Ca balance, bone loss and eventually osteoporotic fracture. A first meta-analysis including twenty-five clinical trials, and adhering to rigorous pre-defined quality criteria, focused on the association between NAE and UCa(81). The analysed trials consisted of nutritional treatments and were carried out on healthy subjects in order to test the effect of either two types of food (meat v. soya), or certain nutrients (quantity of protein or dairy protein v. soya protein), or even acidifying (NH4Cl) or alkalinising (citrate, sodium bicarbonate or K) salt supplements. A significant linear relationship was found between net acid excretion and Ca excretion for both acidic and alkaline urine(81). Whether this increase in UCa when associated with net acid excretion would correspond to a decrease in Ca balance was examined in another meta-analysis(82). The included studies had all employed stringent methods to measure Ca balance and bone metabolism in relation to changes in NAE(82). The treatments were carried out on adult subjects in good health and consisted of modifications of protein intake, in terms of quantity or quality(82). Despite an increase in UCa in response to the nutritional treatment, Ca balance, as well as bone resorption evaluated by measuring the type I collagen N-telopeptide, did not show any correlation with the acid load of the dietary regimens tested(82). This meta-analysis did not suggest that protein-induced UCa associated with increased NAE would exert a negative impact on bone health, leading to osteoporosis in the long term. Therefore, it does not argue in favour of the theory advocating alkaline diets. Furthermore, two other recent original reports did not support the hypothesis that a high dietary acid load might be detrimental to bone integrity. In the Framingham Osteoporosis Study, dietary acid load, estimated by the NEAP and PRAL, was not associated with BMD at any skeletal site among 1069 'Original' and 2919 'Offspring' cohort participants(83). A possible exception was in older men, with a trend between the NEAP and femoral neck, but not lumbar spine, BMD, whereas no association was found with the PRAL(83). Moreover, there was no interaction between either the NEAP or PRAL and total Ca intake(83). Thus, this study did not support the hypothesis that a high dietary acid load combined with a relatively low Ca intake might accelerate bone loss and increase the risk of fragility fracture(83). Another report was quite consistent with the detailed analysis of the data from the two Framingham generation cohorts(83). 
Indeed, in a prospective investigation covering approximately 6800 person-years (age at baseline: approximately 59 years; 70 % female), no apparent relationship was found between urinary pH or urinary acid excretion and either the change in lumbar or femoral BMD or the incidence of fractures after 5 years of monitoring(9). Another recent and comprehensive review reported on a systematic search of the published literature for randomised intervention trials, prospective cohort studies and meta-analyses of the acid–ash or acid–base hypothesis in relation to bone-related outcomes. In these studies, the dietary acid load was altered, or an alkaline diet or alkaline salts were provided to healthy human adults(10). The objective of this systematic review was to evaluate the relationship between the dietary acid load and osteoporosis using Hill's epidemiological criteria of causality(84). It was concluded that a causal association between the dietary acid load and osteoporotic bone disease is not supported by the evidence, nor is there evidence that an alkaline diet favourably influences bone health(10). Furthermore, assuming that fruit and vegetables are beneficial to bone health, such a positive influence would be mediated by mechanisms other than those related to their alkalinising potential, as experimentally demonstrated several years ago(85). The bone data from two independent long-term randomised clinical trials testing K alkali supplements against placebo in healthy postmenopausal women(69, 72) have been analysed in a single publication(80). This analysis clearly shows, after 2 years of intervention, that K alkali treatment does not alter BMD changes at either the lumbar spine or the hip and has no effect on markers of bone resorption(80). Therefore, the previously reported long-term persistence of the urine Ca-lowering effect of potassium bicarbonate(69) was not associated with a significant benefit in terms of postmenopausal osteoporosis prevention(80). Likewise, both the greater spinal or hip BMD and the lower bone resorption markers, which were found to be associated with reduced estimates of NEAP and higher dietary K intakes in cross-sectional population studies of pre- and postmenopausal women(86, 87), were not confirmed in long-term randomised trials(72, 80). In contrast with the null findings of these two trials(72, 80), a report still in press(77) describes a positive effect on BMD of potassium citrate combined with supplements of calcium carbonate and vitamin D3. This effect, recorded in a 2-year randomised trial carried out in healthy, elderly men and women studied together, remains to be mechanistically explained, since it was observed, as in the two above-mentioned studies(72, 80), in the absence of any persistent reduction in bone resorption markers(77). The dietary acid load hypothesis also postulates that increasing the urinary excretion of phosphate, considered as an 'acidic' ion, enhances UCa and contributes to the loss and fragility of bones with ageing(59, 88, 89). In sharp contrast with this hypothesis, but in full agreement with physiological notions on the phosphate–Ca interaction(90), analysis of twelve human studies indicated that higher phosphate intakes were associated with decreased UCa and improved Ca balance(91). 
It can be argued that the age-related decline in renal function, with its associated trend towards metabolic acidosis, would be sufficiently important to accelerate bone resorption while reducing bone formation(8), and thus could eventually explain the increased incidence of osteoporotic fractures with ageing. According to this putative pathophysiological mechanism, it would be justified to treat age-related osteoporosis by potassium bicarbonate administration or by appropriate modifications of the net dietary acid–base load(8, 66). However, there is no evidence that elderly patients with established osteoporosis, as documented by either spine or hip BMD T-score ≤ − 2·5 or by one prevalent vertebral fracture, have a lower glomerular filtration rate and more severe metabolic acidosis(92) compared with age- and sex-matched non-osteoporotic subjects(93, 94). Furthermore, in the National Health and Nutrition Examination Survey (NHANES) III population, a much larger number of subjects have osteoporosis/osteopenia(95) rather than a low glomerular filtration rate(93) or metabolic acidosis(94). In the analysis of the NHANES III survey, BMD was not found to be diminished by mild or moderate renal insufficiency(96). In fact, renal function itself was not independently associated with BMD, after taking into account sex, age and body weight(96). Furthermore, in this large survey, changes in serum bicarbonate were not apparent until chronic renal insufficiency, as estimated by the Cockcroft–Gault creatinine clearance, was ≤ 20 ml/min(97). Taken together, these results do not support the notion that age-related metabolic acidosis that would result from the deterioration of renal function could be pathophysiologically implicated in the marked increase in the prevalence of osteoporosis observed with ageing in the general population. It is a well-established biological fact that the degree of urinary acidity varies according to the type of consumed foods. In the middle of the nineteenth century, Bernard(20) considered this variation to be an example of physiological control in the internal environment. A century later, experiments carried out among patients suffering from severe metabolic acidosis caused by renal insufficiency, or among healthy subjects made acidotic by administering NH4Cl, suggested the involvement of bone tissue in maintaining the acid–base balance. This hypothesis was later refuted on the basis of both theoretical and experimental arguments. Despite this rebuttal, the hypothesis was put forward that bone could play a buffering role, with the consideration that nutrients, particularly animal proteins with their acid load, could be a major cause of osteoporosis. Several recent human studies have shown that there is no relationship between nutritionally induced variations of urinary acid excretion and Ca balance, bone metabolism and the risk of osteoporotic fractures. Variations in human diets across a plausible range of intakes have been shown to have no effect on blood pH. Consistent with this lack of a mechanistic basis, long-term studies of alkalinising diets have shown no effect on the age-related change in bone fragility. Consequently, advocating the consumption of alkalinising foods or supplements and/or removing animal protein from the human diet is not justified by the evidence accumulated over the last several decades. The author is grateful to Professor Robert P. Heaney, Creighton University, USA, for reading and providing helpful comments on the manuscript. 
The author received no financial support for writing the present review. There is no conflict of interest to disclose. 1Smith, HW (1961) From Fish to Philosopher. Garden City, NY: Anchor Books, Doubleday. 2Barzel, US & Jowsey, J (1969) The effects of chronic acid and alkali administration on bone turnover in adult rats. Clin Sci 36, 517–524. 3Heaney, RP (2001) Protein intake and bone health: the influence of belief systems on the conduct of nutritional science. Am J Clin Nutr 73, 5–6. 4Cordain, L, Eaton, SB, Sebastian, A, et al. (2005) Origins and evolution of the Western diet: health implications for the 21st century. Am J Clin Nutr 81, 341–354. 5Davenport, HW (1958) The ABC of Acid–Base Chemistry, 4th ed.Chicago, IL: University of Chicago Press. 6Valtin, H (1979) Renal Dysfunction: Mechanisms Involved in Fluid and Solute Imbalance. Boston, MA: Little, Brown and Company. 7Vormann, J & Goedecke, T (2006) Acid–base homeostasis: latent acidosis as a cause of chronic diseases. Swiss J Integr Med 18, 255–266. 8Frassetto, LA, Morris, RC Jr & Sebastian, A (1996) Effect of age on blood acid–base composition in adult humans: role of age-related renal functional decline. Am J Physiol 271, F1114–F1122. 9Fenton, TR, Eliasziw, M, Tough, SC, et al. (2010) Low urine pH and acid excretion do not predict bone fractures or the loss of bone mineral density: a prospective cohort study. BMC Musculoskelet Disord 11, 88. 10Fenton, TR, Tough, SC, Lyon, AW, et al. (2011) Causal assessment of dietary acid load and bone disease: a systematic review & meta-analysis applying Hill's epidemiologic criteria for causality. Nutr J 10, 41. 11Stewart, PA (1978) Independent and dependent variables of acid–base control. Respir Physiol 33, 9–26. 12Stewart, PA (1983) Modern quantitative acid–base chemistry. Can J Physiol Pharmacol 61, 1444–1461. 13Kurtz, I, Kraut, J, Ornekian, V, et al. (2008) Acid–base analysis: a critique of the Stewart and bicarbonate-centered approaches. Am J Physiol Renal Physiol 294, F1009–F1031. 14Relman, AS (1954) What are acids and bases? Am J Med 17, 435–437. 15Christiensen, HN (1959) Anion–cation balance. In Diagnostic Biochemistry: Quantitative Distribution of Body Constituents and their Physiological Interpretation, pp. 128–134. New York: Oxford University Press. 16Weiner, ID & Hamm, LL (2007) Molecular mechanisms of renal ammonia transport. Annu Rev Physiol 69, 317–340. 17Hamm, LL, Alpern, RJ & Preisig, PA (2008) Cellular mechanisms of renal tubular acidification. In Seldin and Giebisch's The Kidney, 4th ed. [Alpern, RJ and Hebert, SC, editors]. London: Academic Press. 18Koeppen, BM (2009) The kidney and acid–base regulation. Adv Physiol Educ 33, 275–281. 19Weiner, ID & Verlander, JW (2011) Role of NH3 and NH4+ transporters in renal acid–base transport. Am J Physiol Renal Physiol 300, F11–F23. 20Bernard, C (1865) Introduction à l'étude de la médecine expérimentale (Introduction to the Study of Experimental Medicine). Paris: Garnier Flammarion. 21Relman, AS, Lennon, EJ & Lemann, J Jr (1961) Endogenous production of fixed acid and the measurement of the net balance of acid in normal subjects. J Clin Invest 40, 1621–1630. 22Goodman, AD, Lemann, J Jr, Lennon, EJ, et al. (1965) Production, excretion, and net balance of fixed acid in patients with renal acidosis. J Clin Invest 44, 495–506. 23Lemann, J Jr, Litzow, JR & Lennon, EJ (1966) The effects of chronic acid loads in normal man: further evidence for the participation of bone mineral in the defense against chronic metabolic acidosis. 
J Clin Invest 45, 1608–1614. 24Litzow, JR, Lemann, J Jr & Lennon, EJ (1967) The effect of treatment of acidosis on calcium balance in patients with chronic azotemic renal disease. J Clin Invest 46, 280–286. 25Morgan, EF, Barnes, GL & Einhorn, TA (2008) The bone organ system: form and function. In Osteoporosis, 3rd ed., pp. 3–25 [Marcus, R, Feldman, D, Nelson, DA and Rosen, CJ, editors]. Amsterdam, Boston: Elsevier, Academic Press. 26Rizzoli, R & Bonjour, JP (2006) Physiology of calcium and phosphate homeostasis. In Dynamics of Bone and Cartilage Metabolism: Principles and Clinical Applications, 2nd ed., pp. 345–360 [Seibel, MJ, Robins, SP and Bilezikian, JP, editors]. San Diego, CA: Academic Press. 27Lutz, J (1984) Calcium balance and acid–base status of women as affected by increased protein intake and by sodium bicarbonate ingestion. Am J Clin Nutr 39, 281–288. 28Ball, D & Maughan, RJ (1997) Blood and urine acid–base status of premenopausal omnivorous and vegetarian women. Br J Nutr 78, 683–693. 29Fenton, TR & Lyon, AW (2011) Milk and acid–base balance: proposed hypothesis versus scientific evidence. J Am Coll Nutr 30, 471S–475S. 30Oh, MS (1991) Irrelevance of bone buffering to acid–base homeostasis in chronic metabolic acidosis. Nephron 59, 7–10. 31Uribarri, J, Douyon, H & Oh, MS (1995) A re-evaluation of the urinary parameters of acid production and excretion in patients with chronic renal acidosis. Kidney Int 47, 624–627. 32Oh, MS & Carroll, HJ (2008) External balance of electrolytes and acids and alkalis. In Seldin and Giebisch's The Kidney, 4th ed. [Alpern, RJ and Hebert, SC, editors]. London: Academic Press. 33Oh, MS (2000) New perspectives on acid–base balance. Semin Dial 13, 212–219. 34Uribarri, J (2000) Acidosis in chronic renal insufficiency. Semin Dial 13, 232–234. 35Hruska, KA & Mathew, S (2009) Chronic Kidney Disease Mineral Bone Disorder (CKD-MBD). In Primer on the Metabolic Bone Diseases and Disorders of Mineral Metabolism, 7th ed., pp. 343–353 [Rosen, CJ, Compston, JE and Lian, JB, editors]. Washington, DC: The American Society for Bone and Mineral Research. 36Barzel, US (1969) The effect of excessive acid feeding on bone. Calcif Tissue Res 4, 94–100. 37Arnett, TR & Dempster, DW (1986) Effect of pH on bone resorption by rat osteoclasts in vitro. Endocrinology 119, 119–124. 38Bushinsky, DA & Frick, KK (2000) The effects of acid on bone. Curr Opin Nephrol Hypertens 9, 369–379. 39Bushinsky, DA, Smith, SB, Gavrilov, KL, et al. (2003) Chronic acidosis-induced alteration in bone bicarbonate and phosphate. Am J Physiol Renal Physiol 285, F532–F539. 40Frick, KK, Krieger, NS, Nehrke, K, et al. (2009) Metabolic acidosis increases intracellular calcium in bone cells through activation of the proton receptor OGR1. J Bone Miner Res 24, 305–313. 41Barzel, US (1995) The skeleton as an ion exchange system: implications for the role of acid–base imbalance in the genesis of osteoporosis. J Bone Miner Res 10, 1431–1436. 42Lanham-New, SA (2008) The balance of bone health: tipping the scales in favor of potassium-rich, bicarbonate-rich foods. J Nutr 138, 172S–177S. 43Wynn, E, Krieg, MA, Aeschlimann, JM, et al. (2009) Alkaline mineral water lowers bone resorption even in calcium sufficiency: alkaline mineral water and bone metabolism. Bone 44, 120–124. 44Pizzorno, J, Frassetto, LA & Katzinger, J (2010) Diet-induced acidosis: is it real and clinically relevant? Br J Nutr 103, 1185–1194. 45Feskanich, D, Willett, WC, Stampfer, MJ, et al. (1996) Protein consumption and bone fractures in women. 
Am J Epidemiol 143, 472–479. 46Meyer, HE, Pedersen, JI, Loken, EB, et al. (1997) Dietary factors and the incidence of hip fracture in middle-aged Norwegians. A prospective study. Am J Epidemiol 145, 117–123. 47Mussolino, ME, Looker, AC, Madans, JH, et al. (1998) Risk factors for hip fracture in white men: the NHANES I Epidemiologic Follow-up Study. J Bone Miner Res 13, 918–924. 48Munger, RG, Cerhan, JR & Chiu, BC (1999) Prospective study of dietary protein intake and risk of hip fracture in postmenopausal women. Am J Clin Nutr 69, 147–152. 49Hannan, MT, Tucker, KL, Dawson-Hughes, B, et al. (2000) Effect of dietary protein on bone loss in elderly men and women: The Framingham Osteoporosis Study. J Bone Miner Res 15, 2504–2512. 50Sellmeyer, DE, Stone, KL, Sebastian, A, et al. (2001) A high ratio of dietary animal to vegetable protein increases the rate of bone loss and the risk of fracture in postmenopausal women. Study of Osteoporotic Fractures Research Group. Am J Clin Nutr 73, 118–122. 51Promislow, JH, Goodman-Gruen, D, Slymen, DJ, et al. (2002) Protein consumption and bone mineral density in the elderly: The Rancho Bernardo Study. Am J Epidemiol 155, 636–644. 52Wengreen, HJ, Munger, RG, West, NA, et al. (2004) Dietary protein intake and risk of osteoporotic hip fracture in elderly residents of Utah. J Bone Miner Res 19, 537–545. 53Dargent-Molina, P, Sabia, S, Touvier, M, et al. (2008) Proteins, dietary acid load, and calcium and risk of postmenopausal fractures in the E3N French women prospective study. J Bone Miner Res 23, 1915–1922. 54Wynn, E, Lanham-New, SA, Krieg, MA, et al. (2008) Low estimates of dietary acid load are positively associated with bone ultrasound in women older than 75 years of age with a lifetime fracture. J Nutr 138, 1349–1354. 55Darling, AL, Millward, DJ, Torgerson, DJ, et al. (2009) Dietary protein and bone health: a systematic review and meta-analysis. Am J Clin Nutr 90, 1674–1692. 56Misra, D, Berry, SD, Broe, KE, et al. (2011) Does dietary protein reduce hip fracture risk in elders? The Framingham Osteoporosis Study. Osteoporos Int 22, 345–349. 57Shi, L, Libuda, L, Schonau, E, et al. (2012) Long term higher urinary calcium excretion within the normal physiologic range predicts impaired bone status of the proximal radius in healthy children with higher potential renal acid load. Bone 50, 1026–1031. 58Oh, MS (1989) A new method for estimating G-I absorption of alkali. Kidney Int 36, 915–917. 59Remer, T & Manz, F (1994) Estimation of the renal net acid excretion by adults consuming diets containing variable amounts of protein. Am J Clin Nutr 59, 1356–1361. 60Berkemeyer, S & Remer, T (2006) Anthropometrics provide a better estimate of urinary organic acid anion excretion than a dietary mineral intake-based estimate in children, adolescents, and young adults. J Nutr 136, 1203–1208. 61Remer, T, Dimitriou, T & Manz, F (2003) Dietary potential renal acid load and renal net acid excretion in healthy, free-living children and adolescents. Am J Clin Nutr 77, 1255–1260. 62Remer, T & Manz, F (1995) Potential renal acid load of foods and its influence on urine pH. J Am Diet Assoc 95, 791–797. 63Frassetto, LA, Todd, KM, Morris, RC Jr, et al. (1998) Estimation of net endogenous noncarbonic acid production in humans from diet potassium and protein contents. Am J Clin Nutr 68, 576–583. 64Grases, F, Costa-Bauza, A & Prieto, RM (2006) Renal lithiasis and nutrition. Nutr J 5, 23. 
65Moe, OW, Pearle, MS & Sakhaee, K (2011) Pharmacotherapy of urolithiasis: evidence from clinical trials. Kidney Int 79, 385–392. 66Sebastian, A, Harris, ST, Ottaway, JH, et al. (1994) Improved mineral balance and skeletal metabolism in postmenopausal women treated with potassium bicarbonate. N Engl J Med 330, 1776–1781. 67Sellmeyer, DE, Schloetter, M & Sebastian, A (2002) Potassium citrate prevents increased urine calcium excretion and bone resorption induced by a high sodium chloride diet. J Clin Endocrinol Metab 87, 2008–2012. 68Maurer, M, Riesen, W, Muser, J, et al. (2003) Neutralization of Western diet inhibits bone resorption independently of K intake and reduces cortisol secretion in humans. Am J Physiol Renal Physiol 284, F32–F40. 69Frassetto, L, Morris, RC Jr & Sebastian, A (2005) Long-term persistence of the urine calcium-lowering effect of potassium bicarbonate in postmenopausal women. J Clin Endocrinol Metab 90, 831–834. 70Rafferty, K, Davies, KM & Heaney, RP (2005) Potassium intake and the calcium economy. J Am Coll Nutr 24, 99–106. 71Jehle, S, Zanetti, A, Muser, J, et al. (2006) Partial neutralization of the acidogenic Western diet with potassium citrate increases bone mass in postmenopausal women with osteopenia. J Am Soc Nephrol 17, 3213–3222. 72Macdonald, HM, Black, AJ, Aucott, L, et al. (2008) Effect of potassium citrate supplementation or increased fruit and vegetable intake on bone metabolism in healthy postmenopausal women: a randomized controlled trial. Am J Clin Nutr 88, 465–474. 73Rafferty, K & Heaney, RP (2008) Nutrient effects on the calcium economy: emphasizing the potassium controversy. J Nutr 138, 166S–171S. 74Ceglia, L, Harris, SS, Abrams, SA, et al. (2009) Potassium bicarbonate attenuates the urinary nitrogen excretion that accompanies an increase in dietary protein and may promote calcium absorption. J Clin Endocrinol Metab 94, 645–653. 75Dawson-Hughes, B, Harris, SS, Palermo, NJ, et al. (2009) Treatment with potassium bicarbonate lowers calcium excretion and bone resorption in older men and women. J Clin Endocrinol Metab 94, 96–102. 76Mardon, J, Habauzit, V, Trzeciakiewicz, A, et al. (2008) Long-term intake of a high-protein diet with or without potassium citrate modulates acid–base metabolism, but not bone status, in male rats. J Nutr 138, 718–724. 77Jehle, S, Hulter, HN & Krapf, R (2013) Effect of potassium citrate on bone density, microarchitecture, and fracture risk in healthy older adults without osteoporosis: a randomized controlled trial. J Clin Endocrinol Metab 98, 207–217. 78Cannata-Andia, JB, Roman-Garcia, P & Hruska, K (2011) The connections between vascular calcification and bone health. Nephrol Dial Transplant 26, 3429–3436. 79Wang, L, Manson, JE & Sesso, HD (2012) Calcium intake and risk of cardiovascular disease: a review of prospective studies and randomized clinical trials. Am J Cardiovasc Drugs 12, 105–116. 80Frassetto, LA, Hardcastle, AC, Sebastian, A, et al. (2012) No evidence that the skeletal non-response to potassium alkali supplements in healthy postmenopausal women depends on blood pressure or sodium chloride intake. Eur J Clin Nutr 66, 1315–1322. 81Fenton, TR, Eliasziw, M, Lyon, AW, et al. (2008) Meta-analysis of the quantity of calcium excretion associated with the net acid excretion of the modern diet under the acid–ash diet hypothesis. Am J Clin Nutr 88, 1159–1166. 82Fenton, TR, Lyon, AW, Eliasziw, M, et al. (2009) Meta-analysis of the effect of the acid–ash hypothesis of osteoporosis on calcium balance. 
J Bone Miner Res 24, 1835–1840. 83McLean, RR, Qiao, N, Broe, KE, et al. (2011) Dietary acid load is not associated with lower bone mineral density except in older men. J Nutr 141, 588–594. 84Hill, AB (1965) The environment and disease: association or causation? Proc R Soc Med 58, 295–300. 85Muhlbauer, RC, Lozano, A & Reinli, A (2002) Onion and a mixture of vegetables, salads, and herbs affect bone resorption in the rat by a mechanism independent of their base excess. J Bone Miner Res 17, 1230–1236. 86New, SA, MacDonald, HM, Campbell, MK, et al. (2004) Lower estimates of net endogenous non-carbonic acid production are positively associated with indexes of bone health in premenopausal and perimenopausal women. Am J Clin Nutr 79, 131–138. 87Macdonald, HM, New, SA, Fraser, WD, et al. (2005) Low dietary potassium intakes and high dietary estimates of net endogenous acid production are associated with low bone mineral density in premenopausal women and increased markers of bone resorption in postmenopausal women. Am J Clin Nutr 81, 923–933. 88New, SA (2002) Nutrition Society Medal lecture. The role of the skeleton in acid–base homeostasis. Proc Nutr Soc 61, 151–164. 89Sebastian, A, Frassetto, LA, Sellmeyer, DE, et al. (2002) Estimation of the net acid load of the diet of ancestral preagricultural Homo sapiens and their hominid ancestors. Am J Clin Nutr 76, 1308–1316. 90Bonjour, JP (2011) Calcium and phosphate: a duet of ions playing for bone health. J Am Coll Nutr 30, 438S–448S. 91Fenton, TR, Lyon, AW, Eliasziw, M, et al. (2009) Phosphate decreases urine calcium and increases calcium balance: a meta-analysis of the osteoporosis acid–ash diet hypothesis. Nutr J 8, 41. 92Miller, PD, Schwartz, EN, Chen, P, et al. (2007) Teriparatide in postmenopausal women with osteoporosis and mild or moderate renal impairment. Osteoporos Int 18, 59–68. 93Coresh, J, Astor, BC, Greene, T, et al. (2003) Prevalence of chronic kidney disease and decreased kidney function in the adult US population: Third National Health and Nutrition Examination Survey. Am J Kidney Dis 41, 1–12. 94Eustace, JA, Astor, B, Muntner, PM, et al. (2004) Prevalence of acidosis and inflammation and their association with low serum albumin in chronic kidney disease. Kidney Int 65, 1031–1040. 95Looker, AC, Orwoll, ES, Johnston, CC Jr, et al. (1997) Prevalence of low femoral bone density in older U.S. adults from NHANES III. J Bone Miner Res 12, 1761–1768. 96Hsu, CY & Chertow, GM (2002) Elevations of serum phosphorus and potassium in mild to moderate chronic renal insufficiency. Nephrol Dial Transplant 17, 1419–1425. 97Hsu, CY, Cummings, SR, McCulloch, CE, et al. (2002) Bone mineral density is not diminished by mild to moderate chronic renal insufficiency. Kidney Int 61, 1814–1820.
ILovePhilosophy.com Philosophical Discussion Forums http://ilovephilosophy.com/ An update on Universal Basic Income (UBI) http://ilovephilosophy.com/viewtopic.php?f=1&t=193577 Posted: Tue Nov 14, 2017 9:28 am by thinkdr One way, perhaps, to apply Ethics in practice is to initiate in a nation, state, or municipality, a form of Universal Basic Income - akin to what citizens of Alaska have with their trust fund. Critics argue that after UBI is granted people would laze about, would give up exerting themselves on any worthwhile project, or exercising any skill; they all would stagnate. They would not be productive, would not contribute to the progress of the economy. The factual evidence shows that this has not turned out to be the case. What has happened in on-the-ground actual UBI experiments is that people continue to work, but less at jobs they hate, and more in jobs and projects that they consider to be interesting. The beauty of it is that overall productivity increases in the regions where the experiments have been tried. Do the research yourself and you will discover how it works out and why [b]a Universal Basic Income is necessary[/b] - since automation and robotics are displacing many, many traditional jobs and vocations. Check out these links, and learn: http://basicincome.org/news/2017/11/rep ... onference/ http://basicincome.org/news/category/features/blogs/ https://www.youtube.com/watch?v=bdHOZCy ... e=youtu.be Your views on these matters are most welcome! Re: An update on Universal Basic Income (UBI) by Silhouette I am absolutely in favour of the UBI, not just "because it sounds like a nice idea" but because it's increasingly becoming an economic necessity, as thinkdr said. One has to ask oneself the question: what is the purpose and motivation of creating better and better technologies? Given the "mixed economy" model of the West and not some radical transformation that half the population consistently votes against, we are in an economic environment of being monetarily motivated to supply ever more innovative ways of meeting demands. But simply eating into the market share of already-established businesses, which might only provide a relatively basic version of what people want in some way or other, is not as lucrative as enhancing technologies to make the provision of goods and services even more efficient. Enhancing technologies to improve the provision of goods and services is what the West does. The point, it seems, is to shirk the classical liberal self-regulating ideal of "perfect competition" and manufacture one's very own monopoly or at least oligopoly through product differentiation: "my product isn't just the same as all the others in the same market" - even if it takes psychological tricks to force this, through advertising a unique association with your product. You're rewarded for abusing the system in your favour, and better technology can give actual substance to claims of "a better product". That's the motivation - it's built into our economic model. But what is the purpose? Where is it headed? Obviously technology enhances what mere people can do on their own; it removes the necessity for people to perform a certain aspect of a required role. Continually. Obviously again, the tendency is towards the removal of the whole role altogether. So far, the human element in jobs has been sustained by there still being room for people to augment the role in most cases. 
The human requirement, when not improving technology, is ever shrunk to more and more menial tasks of smaller and smaller consequence. Some roles are even on the verge of being taken over completely, such as with drivers. This one will cause a sudden huge squeeze of people into an already squeezed job market of increasingly pointless roles, and might be the turning point. We create technologies in order to remove the need for humans to work. And yet the economic necessity is still stuck in "you have to have a job". A job for a job's sake, to uphold the individualist ideal of self-sustenance. It's basically a modern-day sin to be unemployed, because it is perceived that government "steals" from the employed through taxation in order to provide for the unemployed. This is actually a form of the "fundamental attribution error" where one tends to attribute one's (e.g. financial) success far more in favour of their own actions than to those of others and to one's environment. It's actually the economy as a whole that provides the platform for you to become rich, taxation is more like a fee for being privileged enough to take part. The more you benefit from it, the more you are in debt to it. I find the lack of gratitude of the Libertarian sort to be particularly disgraceful in this regard. There isn't even any appreciation for the fact that provision for the unemployed goes straight back into businesses when it is spent, paying for the rich once more. It's just channeled temporarily through other human beings first before it goes back to them. And what are we supposed to do? Let the incapable die off by denying them any income? We are more than easily able to maintain a certain level of civilisation. And that point is an important one. Simply gaining a better understanding of economics will enable nay-sayers to see how not only is UBI necessary but there is in fact no moral or economic problem in bringing it about. The final point may be a way off: when all work is replaced by technology. Then everyone will be unemployed. There won't even be income to tax at this point, you may simply use the technologies at your disposal to get what you want. You're not going to be paying machines to do what they are programmed to do, so you need no income to pay anybody with. But how are we going to evolve to this point economically? 1) The ratio of unemployed to employed is going to steadily increase simply as a matter of course. We just let our current economy do what it does. 2) These increasing numbers of unemployed are going to need income for as long as there are people who want to be paid to provide a product or service. 3) The money can only come from where it currently is: the employed. Therefore it must be diverted by government force unless the employed can learn to give in accordance with what our civilisation can reasonably afford. 4) The money supply is going to steadily decrease, along with prices (both tending to zero) so comparative richness is going to decrease. The only quarrel left will be how much government is allowed to take from the employed to give to the redundant - how much is "reasonable" to maintain what level of civilisation for the unemployed class (how much of a human being are they)? On a light note, I don't even want everyone to work - dealing with stupid employed people is annoying. Let just the best and most motivated work. There is no need for the incapable and unwilling to work, even now in my opinion. 
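To put rough numbers on points 1 to 3 above, here is a minimal toy sketch. It is an illustration added here rather than anything proposed in the thread: it assumes a single average wage, a flat wage tax as the only funding source, and a fixed per-person basic income, and all of the figures are hypothetical placeholders.

```python
# Toy illustration (not from the thread): how the tax rate needed to fund a flat
# basic income rises as the employed share of the population falls.
# All figures are hypothetical placeholders.

def required_tax_rate(employed_share: float, avg_wage: float, basic_income: float) -> float:
    """Flat tax rate on wages needed so that taxes collected from the employed
    cover a universal basic income paid to every person (employed or not).
    Per capita wage income is employed_share * avg_wage; per capita UBI is basic_income."""
    return basic_income / (employed_share * avg_wage)

if __name__ == "__main__":
    for share in (0.8, 0.6, 0.4, 0.2):
        rate = required_tax_rate(employed_share=share, avg_wage=40_000, basic_income=10_000)
        print(f"employed share {share:.0%}: required wage-tax rate ~ {rate:.0%}")
```

The sketch ignores other income and funding sources, netting the UBI off against wages, and any behavioural effects; its only purpose is to make visible the direction of the argument, namely that as the employed share falls, the share of output that has to be routed through the employed rises steeply.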
by WendyDarling Do we need technology to replace our efficiency when we are on a planet with finite resources? Do we need technology to excavate the resources to their end more quickly? To end the ability for human survival, in essence, more quickly? Technology is not our road to salvation, it's more our road to annihilation at breakneck speeds. I don't even understand the concept of UBI that doesn't point to some super evil culling of the population due to technology heading us towards oblivion on a planet that can no longer sustain unproductive life forms such as humans who only use and waste unsustainably. UBI may be what the communist global government gives folks as they cull the populations quietly using advanced medical technology. There is no need for the incapable and unwilling to work... ...or live. Eugenics, which has taken to the underground, will resurface and be interwoven with how civilization judges human life forms as worthy of their existence, worthy of what's left of the resources, which each life will have to prove, have to contribute to what society (or a few old men) deem vital. If UBI happens, it will be short-lived, and those who accept it may be targeted as unfit, a waste of resources. There is no UBI utopia on a planet of finite resources. Could this continued striving be due to strong ingrained work ethics (be productive or look bad) still pervasive the rest of the world over? Once ambition fizzles without any status rewards, why would folks continue to work? Even if their work becomes more play-inspired, these projects that interest them, why would they bother rather than vacationing with their families and building memories at home? So... finite resources. The answer is quite obviously sustainable energy resources and other practices? The only reasons we still use non-renewable energy are that they are embedded so strongly in our infrastructures, they have been around for longer so are efficient and prolific, they are still making the people in power rich so they won't want to change - e.g. the supply of limited resources is much easier to regulate in your favour - and our economy rewards short-sighted and desperate approaches to competition such that reckless measures are resorted to. This all unravels once you consider those inevitable consequences that I numbered. With technology replacing the human element in the workplace, any human irresponsibility will be phased out, renewable energy technology will catch up with non-renewable, and changes to infrastructure will follow to accommodate this. Unsustainability will be replaced by sustainability whether or not the former has a chance to run out. If anything, UBI will enable us to find a balance with our finite resources. There is no need for the incapable and unwilling to live?! Some super evil culling of the population by some communist global government?! Forgive me for saying this sounds absolutely hysterical and unfounded. I don't even know where to start with this one, it's so removed from reality - maybe it's the exciting sensationalism of conspiracy-type media, but in reality the vast majority aren't comfortable with the actual killing of real people and won't let it happen, as history has proven - and with communication and availability of news and information like we have today, it's going to be even harder to get genocide on familiar territory off the ground. 
For one, as soon as religion has finally died out enough, we're much more likely to transition into genetic modification to prevent more unwilling and incapables from entering the world, and more prominently, as technology replaces all need for humans to work, the unemployed will be everyone. What, do you think we'll all be killed off as each and every person becomes redundant until none of us are left?! Some level-headed factual and logical explanation is going to be needed on your part for any of your fears to be remotely feasible. Everyone begins life motivated, all kids have energy, creativity and engage in activity. That's where ambition comes from - it only fizzles out when restrictions like lack of money and opportunity wear you down. You're pressured to leave behind work that you enjoy and are passionate about, to pursue "a proper job". People just return to what they thought they weren't supposed to do, but actually care about. The more necessary but undesirable stuff will be done by technology before anything else. Again, your fears are unfounded. Silhouette wrote: in reality the vast majority aren't comfortable with the actual killing of real people and won't let it happen, as history has proven - and with communication and availability of news and information like we have today, it's going to be even harder to get genocide on familiar territory off the ground. Sorrily, the vast majority don't make the behind-the-scenes decisions made by governments that favor the more productive or wealthy over the less productive and poor. No one asks for permission to genocide segments of populations or even entire populations; the few in charge sign classified orders and it's done with very few even knowing that it's happening. Vaccines are an easy in to end people. People are dumb enough to let doctors inject them with whatever they are told is in the vaccines. Is that statement a question of disbelief? How can sustainable energy resources recreate the entire food chain or the plants and minerals that sustain it? Are adults supposed to behave as children...playing all the livelong day? WendyDarling wrote: Sorrily, the vast majority don't make the behind-the-scenes decisions made by governments that favor the more productive or wealthy over the less productive and poor. No one asks for permission to genocide segments of populations or even entire populations; the few in charge sign classified orders and it's done with very few even knowing that it's happening. Vaccines are an easy in to end people. People are dumb enough to let doctors inject them with whatever they are told is in the vaccines. 
Doctors are normal people but with an unbelievable dedication to helping people and with fine scientific mindsets - they have to keep up to date with what they're administering and all the tests that go into its legitimacy, pros and cons - only on TV and maybe in the odd isolated case are they corrupt and susceptible. That's public healthcare at least; in private healthcare they're often supposed to push whatever the private companies who own them tell them to recommend - private companies are rewarded for the absolute opposite to doctors... WendyDarling wrote: My apologies, it was a rather rude sigh that I didn't have to put into words before tackling your response. Farming is already incentivised to be sustainable, they have a finite amount of land on which to produce as much as possible - except of course in the case that aggressive expansion is permitted into other habitats like rainforests - but that is that same greedy human input that will inevitably be replaced by technology. It's entirely possible to avoid that, so much is wasted after all; it's all in the name of making far more than we actually need to make more money at the expense of people's health and waistlines. Food chains and plants are easy to sustain. Minerals just need recycling - they don't die, they just get chucked in landfills. All the minerals that ever were in the ground are still on the earth - except for the odd thing we send into space that didn't come back. They just need to get reused - technology could no doubt help recycling become competitive with simply throwing stuff away. But ummm..... UBI? Let's not get off topic now. Have we produced enough for 7+ billion people a day? I really don't know. Your answers are easy coming, but written while the oceans, the reefs, are suffocating in an ocean that is polluted with human-use by-products in a warming cycle that is destined to flood the Earth with rising tides. Doctors trust that what is in vaccines is what's in vaccines, but some scientists have started testing vaccines to see what else might be in there and they found unnecessary additives in a vast majority of vaccines that cause a host of problems, but namely and more pervasively neurological damages, diseases, and disorders to progress unnaturally, ending people's lives much earlier with, for instance, Alzheimer's, while others die still in their mothers' wombs. I can't remember in which thread I spoke of vaccine research and its revelations. UBI will be irrelevant if products to buy simply run out on a permanent basis. If we're going to live in lala land, sure, UBI all the way...I vote for 7+ billion people to net one million a year each. Posted: Tue Nov 14, 2017 10:33 pm by James S Saint Is everyone forgetting that above all others, money is relative. Give everyone a $million a year and a loaf of bread would probably cost $200,000. It is the same as the raising of minimum wage; it merely causes the prices of everything to increase. And during the shuffle, the rich get even richer. by Innovice I think we should approach the problem from the perspective of need. We have universal basic needs, and money is not one of them. When technology has eliminated many jobs, the basic needs will not have changed much - food, water, shelter. I think it would be better to universally supply food, water, and shelter before giving people free money from which poorer decisions can be made. Money would serve in the realm of desire. 
Television, internet, luxury items I think humanity would be well served with sustainable solutions (in terms of population growth and limited resources) to the needs of food, water, and shelter. Maybe start with expanding things like food stamps WendyDarling wrote: If we're going to live in lala land, sure UBI all the way...I vote for 7+ billion people to net one million a year each. Why does everything have to be hyperbole with you? All your points of view are so extreme... I did request level-headed factual and logical explanation but I think you're more concerned with doom-saying. Yes, there do appear to be environmental issues that might disrupt the whole thing, but my point isn't just to say that UBI would be easy to pull off worldwide and we'll definitely last long as a species to see it, just that increased unemployment tending towards total unemployment IS happening, and UBI is a neat and simple solution to this inevitability, even if we're a way away from getting enough people such as yourself to appreciate this fact. Whether or not you think UBI is a ridiculous prospect is irrelevant if something to its effect is going to be necessary whether you like it or not. Glib statements like "oh yes let's just give 7+ billion people a million each" can't detract from this. Posted: Wed Nov 15, 2017 12:21 am As long as money is the media of control, you are going to be impoverished. That cannot and will not ever be avoided. Money DOES NOT solve social problems. It CREATES them. And the ONLY good use for technology is Per Individual, not to maintain national or global power structures, but to maintain each individual independently (much like an Iron Man suit). And if you want to help "everyone" then give everyone an Iron man suit. LOL...an iron man suit? I'll take an iron maiden suit with all the sound tracks preprogrammed in. Your use of the word "everything" is hyperbole. Actually, my statements are far from exaggeration. Global Warming Is Heating Up the Deep Ocean http://time.com/4184898/global-warming-oceans-hot/ Pollution killing world's coral reefs https://www.reuters.com/article/us-mexi ... 1G20080930 Is sea level rising? Yes, sea level is rising at an increasing rate. https://oceanservice.noaa.gov/facts/sealevel.html Neurosurgeon issues public challenge to vaccine zealots: Inject yourselves with all shots you say children should get! https://www.naturalnews.com/035335_vacc ... ldren.html Ecosystem destruction costing hundreds of billions a year https://www.theguardian.com/science/200 ... nservation There are plenty of articles about the nature of our world that cover, just as I have, a myriad of issues that are actual events, not doom-saying paranoia. Wake up buster! by Only_Humean Silhouette wrote: We create technologies in order to remove the need for humans to work. This is not the capitalist way. We create technologies in order to remove the need to pay humans to work. Technological growth is the only significant factor in economic output growth, besides population growth. The money is not with the employed, but with the technology owners. As the return on capital more and more outstrips the return on labour, the money will drain towards the owners, who will have to provide enough to the labourers that they can buy things and make them money - otherwise there's no return on capital. It's not that binary, of course: most "labourers" have pensions and savings that profit from the capital returns. 
But money will stagnate, and certainly if there's limited or no inheritance tax, society will too and we'll be back to a feudal hierarchy. James S Saint wrote: Is everyone forgetting that above all others, money is relative. Money is relative to economic capacity. If you just print an extra million dollars per person, that will happen. If you use a more redistributive/less regressive financial system, it won't on any significant scale. Only_Humean wrote: "More redistributive"?? Are you talking about forming a flat wealth distribution??? Anything even close to that is de-globalization. You could get burned at the stake for that. James S Saint wrote: Only_Humean wrote: Money is relative to economic capacity. If you just print an extra million dollars per person, that will happen. If you use a more redistributive/less regressive financial system, it won't on any significant scale. "Decelerating? You mean bringing your car to a dead stop on the motorway?" The current system is regressive, and growing more so all the time. Wealth inequalities and capital mobility are at pre-WWI levels. by Arminius The increased prices of everything can be caused by giving everyone more and more money or by the raising of wages, thus also by minimum wages. Then (a new) immigration of poor people has to start in order to curb this process a bit, only a bit, and for a short time, only for a short time. So, indeed, in the long run, more and more humans become poorer and poorer, whereas less and less humans become richer and richer. This development is unfair, destructive, dangerous, stupid, and it is going to be stopped (the question is only: when?). Even the question of how is not relevant, because at last nature is going to stop it. James S Saint wrote: As long as money is the media of control, you are going to be impoverished. That cannot and will not ever be avoided. WendyDarling wrote: LOL...an iron man suit? I'll take an iron maiden suit with all the sound tracks preprogrammed in. Or what about an "iron horse" suit? Again: If not the human beings, then nature itself is going to stop that unfair, destructive, dangerous and - last but not least - stupid development. Infinite growth is not possible on our planet. So, globalism also means the last step of economic growth on our globe. What you have is this: [attachment: Natural Power Distribution Curve.png] That is the natural stable power distribution of a free flowing aggregating substance (money through absolute Free-Trade). The inanimate universe itself conforms to this law (A true Philosopher's Stone). $$ e = \frac{1}{(1 + r^2)}$$ What you want is this, except times a million wide: [attachment: SAM Power Distribution.png] That is an organic distribution of power or information and wealth (similar to the cell structure in a living body). Note that it no longer has free flow of the wealth throughout the system. The wealth distribution is compartmentalized ("cells"). With such a structure (SAM Coops), a supremely stable, intelligent, and capable Man can form, but not be predesigned, rather allowed to grow by discovered need. Both are stable distributions of wealth, but the first, inanimate distribution, has an upper limit, after which it can no longer contain or control any more mass (people and androids), thus chooses to eliminate the excess to avoid potential instability (Globalism). The organic structure allows for much, much greater stable growth with almost unlimited mass potential. 
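The two attached curves did not survive extraction, so here is a minimal illustrative sketch of one reading of the description above, assuming the first figure plots the quoted profile e = 1/(1 + r^2) and the second figure a row of many narrower copies of that profile ("cells") side by side. The variable names and the use of Python/NumPy are assumptions made for this sketch, not the poster's.

```python
# Illustrative sketch only: one way to render the two shapes described above.
# Assumes the single-peak curve is e(r) = 1/(1 + r^2) and the "compartmentalised"
# curve is a row of narrower copies of the same profile centred at different points.
import numpy as np

def single_peak(r):
    """The quoted profile e = 1/(1 + r^2): one central concentration of wealth/power."""
    return 1.0 / (1.0 + r**2)

def compartmentalised(r, n_cells=10, width=0.1):
    """A sum of n_cells narrower copies of the same profile, evenly spaced:
    many local concentrations instead of one global one."""
    centres = np.linspace(r.min(), r.max(), n_cells)
    return sum(1.0 / (1.0 + ((r - c) / width) ** 2) for c in centres)

if __name__ == "__main__":
    r = np.linspace(-5, 5, 1001)
    # Compare how concentrated each shape is: share of the total area lying
    # within the central 10 % of the range.
    for name, y in (("single peak", single_peak(r)), ("cells", compartmentalised(r))):
        total = np.trapz(y, r)
        central = np.trapz(np.where(np.abs(r) < 0.5, y, 0.0), r)
        print(f"{name}: {central / total:.0%} of the area in the central 10% of the range")
```

On this reading, the single-peak profile keeps roughly a third of its area in the central slice, whereas the compartmentalised sum spreads it across many local peaks; whether that corresponds to what the missing figures actually showed is an assumption.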
If formed of living creatures, it is like the first is an amoeba and the second is a homosapien, billions of times more massive (more populated) and capable. What is now being called "Universal Basic Income" cannot be profitable to the masses unless distributed as the second structure, simply because there are too many people for the first structure to remain stable. People in excess of the needs of the structure must be eliminated .. and will be (are being). The challenge is one of how to prevent those still using their extreme power from continuing to lust for the first, inanimate (inevitably lifeless, Clock-Work Orange style) technological structure. In theory, the answer is simple - just compartmentalize opportunity for wealth. Businesses, companies and corporations already do that in their limited way and thus become powerful and organized. But businesses are not the entirety of life and hopefully never become such without first learning of MIJOT and stable distributions. If it isn't being done the right way, it necessarily is being done a wrong way. Compartmentalize wealth via "cell structures", SAM Coops, don't just flood humanity with more syrup. Absolute Free Trade doesn't and can't exist, because money is a human convention governed by other human conventions. Nature has no laws to aggregate matter more due to derivatives of matter, for example, no government bond particles, no energy generation in the firm expectation of future energetic transfer, and no consideration of minimal requirements that a society certainly has. In addition, a power law scales according to constants; the height of the peak and the width of the spread depend on arbitrary choices, and there is no closed system to validate it. So people can pick and argue post hoc for a profile that supports their politics, but it has little predictive power or relevance. In short: that's nice, as far as Just-So Stories go. Only_Humean wrote: Absolute Free Trade doesn't and can't exist, because money is a human convention governed by other human conventions. A distribution that perfectly fits that curve doesn't exist either. And I hate to break it to you at your age in life, but humans and their wild imaginings are still all a part of nature and governed by natural law. But the reason that I specified "absolute free trade" was that humans can naturally interfere with the natural free-flow distribution and aggregation of power, most specifically the religions have that capacity. That is what national boundaries are about (despite the pretense that they don't exist). But without mindful intention, power, even among humans, will follow natural laws of fluid mechanics and aggregation, especially money. Why do you think they call it "liquidating" and "amassing". Free flow is a free flow and amassing is aggregation. They occur at predictable speeds and due to specific principles. Nothing is actually random except to the naive. Only_Humean wrote: Nature has no laws to aggregate matter more due to derivatives of matter, for example, no government bond particles, no energy generation in the firm expectation of future energetic transfer, and no consideration of minimal requirements that a society certainly has. That is your theory is it? Human affairs are totally independent of nature and the result of only free-will? Is that a theory that you have put a great deal of thought into or is it more like, "Of course no one can sail around the world. They would fall off!" 
Only_Humean wrote: In addition, a power law scales according to constants; the height of the peak and the width of the spread depend on arbitrary choices, and there is no closed system to validate it. Again, a well-studied conclusion, or .. just another off the top opinion? You seriously believe that there is no such thing as economic theory and economic science?? The Economic Science Association (ESA) is a professional organization devoted to economics as an observational science, using controlled experiments to learn about economic behavior. The ESA welcomes participation by economists interested in the results of such experiments, as well as scholars in psychology, business, political science, and other related fields. The Journal of the Economic Science Association (JESA) is dedicated to advancing theoretical, empirical, methodological and policy-relevant knowledge using experimental economic methods. JESA promotes research pioneering and advancing laboratory and field methods to address important economic questions that are difficult to examine using naturally occurring data. JESA is open to all areas of inquiry in economics and at the intersection of economics and other disciplines including but not limited to psychology, political science, statistics, finance, marketing, and organizational behavior. 2018 ESA Asia Pacific Meeting, Brisbane, Australia The 2018 Asia Pacific ESA Meeting will be held at the Queensland University of Technology, Brisbane, Australia from Wednesday, February 7th to Friday, February 9th, 2018. The conference is jointly organised by the Queensland Behavioural Economics group and the QUT School of Economics and Finance. Following the ESA conference, there is the opportunity to stay for another day and attend a regional meeting of the Society for Experimental Finance. Attendees of the ESA conference wishing to stay on can register for the SEF "add on" at a discounted rate. The SEF conference fee is $190/$140/$50 for faculty and professionals/students/non-registered guests, which covers all lectures, welcome reception, conference dinner and catering during breaks. After the ESA early registration cutoff (Dec 20, 2017) the conference fees will increase by $50 in all categories. Get your tickets soon. Only_Humean wrote: So people can pick and argue post hoc for a profile that supports their politics, but it has little predictive power or relevance. So you're an avid supporter of the theory, "no one can know", much like Feynman and the Quantum Magi. Is that a derivative of the theory, "Ignorance is bliss"? Or perhaps that "ignorance in others is power"? Let me guess... You believe that the current power distribution among people today is very close to that first graph merely by accidental coincidence? Or is it due merely to evil Republicans? Touché. Except it really seems like you have been spooked by one side of a story and you just aren't aware that this is giving you an exaggerated perspective. I happen to agree that we're a bit fucked environmentally, I am absolutely not a climate change denier, but you need to put all these reports into context. Yes the planet is heating up, reefs are dying, sea levels are rising, and it's all pretty economically costly. But does this all mean that taking care of the future of the economy is all for naught? It's a slow process and more serious in some places than others.
But regardless of how far we get and doomsaying aside, we are still tending towards full unemployment and, beyond psychotic solutions like genocide, things like UBI will be necessary. By the way, the vaccine article was pretty laughable, somebody issues a dare and there's no mention whatsoever of anyone taking it up or otherwise. It's just implied that nobody would when it would be fine if someone actually did. You're missing tricks like these, because you're only listening to what you want to hear and feeding your own bias. Try and stay critical and you'll come across less hysterical. Only_Humean wrote: This is not the capitalist way. We create technologies in order to remove the need to pay humans to work. Technological growth is the only significant factor in economic output growth, besides population growth. But besides slavery and voluntary work, isn't removing the need to pay humans to work the same as removing the need for humans to work? Unless that was your point - that the latter won't happen and we are tending back towards slavery and/or voluntary work (you mentioned going back to feudal hierarchy)? I agree that the capitalist way is to reduce expenses, i.e. reduce wages, so we are currently in the process of gradually removing pay from wage labourers and transferring it to capitalists, but the result of this will be that our economy will be reduced to a circulation of money amongst just them. How then will wage labourers continue to acquire things they need to live unless they are free or money is taxed away from capitalists and a basic income given to all the rest? UBI seems like a fairly obvious solution here. Only_Humean wrote: "Decelerating? You mean bringing your car to a dead stop on the motorway?" Posted: Fri Nov 17, 2017 10:39 am I guess the reason why many people do not understand this is that they do not really know the logic, especially the mathematics behind it. The "universal basic income" will never lead to a better economic/social status, but always to more injustice, because the same minority will become even richer, whereas the same poor majority will become even poorer. If you have physical evidence of energy being created in response to confidence in future energy, or of matter accumulating at hyper-gravitational rates due to additional force generated by the existence of gravity and not mass, please disabuse me of my theory and show you're not handwaving. My theory doesn't rely on free will at all, though, just relations and processes - which are different in financial markets and economies to gas laws or physical relations. Complete strawman. You present a scale-free power distribution as evidence of the free market following physical laws. That's not an argument you can call "economic science". But I'm certainly aware of economics, econometrics and economic science. Those latter two use large bodies of statistical data to investigate hypotheses, so please provide your evidence. Saying "you're wrong" is not saying "no-one can possibly know". Saying "people can claim whatever they like with vague graphs and handwaving" is not an ontological claim about the nature of fiscal transactions. Is it? You have no axes, so it's an easy claim to make. Are you plotting capital or income? What's the population - world, US, US income earners? As a probability curve, wealth within a society shows a long-tail distribution: The curve parameters are clearly changing over time - compare the changes in the middle and the ends. What governs these parameters?
What are the parameters for "pure free trade"? What are the optimal parameters for a society, maximised to which ends? Power laws are used to approximate wealth distributions: I'm not denying that. Depending on the society, a power law could be almost uniformly egalitarian or profoundly tyrannical - that's why you have measures like the Lorenz curve and Gini coefficient, which basically assume a Pareto power distribution. Neither of which depend on pure free market capitalism or any other political philosophy. If you have physical evidence of energy being created in response to confidence in future energy, Well, I actually can do that, but you have never been very good at analogies, I suspect you're not bright enough to follow along. Prove me wrong about that. It is a proven fact that ultra-minuscule EMR pulses veer into a higher density of ultra-minuscule EMR pulses. That is energy accumulating due to the detection of higher potential of energy. Energy, whether inanimate or social, is always merely freeing from one place or type and propagating toward another. It is never actually "created" in either case. Of course you might want to argue that the original pulses were not conscious and thus did not have emotional "confidence" in the same sense as human behavior, but then we are talking about an analogy of the same outward behavior, not the exact same mechanism for that behavior. Laws and principles are about the behavior, not the reasons for the behavior. Now, did you follow that? Was it too complicated? Are you "disabused"? Only_Humean wrote: or of matter accumulating at hyper-gravitational rates due to additional force generated by the existence of gravity and not mass I'd give you an analogy for that too if I knew what the hell you meant by it. Although I suspect that you have no idea that "gravity" is merely a gradient of an ultra low-density mass field. Mass particles are made of "gravity-field" and vice versa. Mass particles are merely extremely dense whereas the ambient "gravity field" is much, much lower density (precisely following that first graph). And interestingly all due to the prior explanation of ultra-minuscule EMR ("Affectance") veering into a traffic jam of the same, forming a "mass particle", cluster. The "hyper-gravitational rates" would be due to the fact that the affectance propagates at the speed of light (it IS "light"). Particles migrate toward each other ("gravitate") at a much, much slower speed than the affectance (the ultra-minuscule EMR) that veers into mass particles. Economically, the analogy is that a mass particle is analogous to a bank with money flowing in and out of the ambient population. The bank accumulates to a maximum set by the region in which the bank does business, the "ambient field". Subatomic mass particles do that exact same thing with "energy" or "ultra-minuscule EMR pulses" or "affectance". As far as "gravity without mass", the thing they now call "dark matter" is exactly that, a higher density mass field than normal, but not close to that of actual mass particles. And when approaching a higher density field, such as stated prior, random EMR veers into the higher density at literally "light speeds", not gravitational speeds. I don't know how much that really fit what you were trying to say, but it seems to cover the bases. Only_Humean wrote: My theory doesn't rely on free will at all, though, just relations and processes - which are different in financial markets and economies to gas laws or physical relations.
How different they are is merely a matter of one being able to see the analogous patterns. As I said, that isn't something you have shown to be very good at doing. One can't prove to a dog that the Internet is real. Only_Humean wrote: Power laws are used to approximate wealth distributions: I'm not denying that. Well, you DID deny that, but okay .. moving on... Only_Humean wrote: Depending on the society, a power law could be almost uniformly egalitarian or profoundly tyrannical - that's why you have measures like the Lorenz curve and Gini coefficient, which basically assume a Pareto power distribution. Neither of which depend on pure free market capitalism or any other political philosophy. Well, you were doing good until that last statement. You began by saying "depending on the society" but then ended with "independent of political philosophy". How can you have both? The social construct and/or political structure determine how free the monetary flow actually is. It isn't merely about being capitalist or not. How secure people feel has a great deal to do with it, thus even their religious beliefs and physical comforts get into the game. My statement was that when the monetary flow is free of obstructions, regardless of whatever causes, banks and banking will cause an accumulation (a "mass particle") that will reach a saturation level for the region that the banks encompass. Those who own the banks accumulate the wealth (because it is a usury con game). That is exactly what has taken place over the last 100 years as all economies gradually got absorbed into the global economy. And that is exactly why you now have that first graph with only 1-3% of the population with a fantastic wealth and only 1-3% of those with ultra-extreme wealth. That is just the money part. I was actually referring more to the complete power package, not merely monetary gains. Money is a crude and not always accurate measure of social power, which is formed of directed social energy or effort (the very source of money's value). Only_Humean wrote: https://archive.is/P3srX/624cd34be8fa1a5784b43b9eb9714eb693a4616a.jpg This is what I found amongst others: 1) W O R L D : World Map of Wealth Distribution. [Attachment: Reichtum_Weltkarte.jpg] 2) U. S. A. : Arminius wrote: Alf wrote: POVERTY almost everywhere IN THE USA MAKE IT GREAT AGAIN? 1% of all US people have 40% of all the nation's wealth. And the poorest 80% of all US people have merely 7% of all the nation's wealth. Watch the video Serendipper posted (especially 4:41–4:54): Serendipper wrote: Thought experiment: Is there anything that one human can do 400X better than another human? Can someone be 400X smarter? Even if the dumbest guy had an iq of 1, a 400 iq is off the chart. Can someone lift 400X more weight? 1000lb is the record bench press, so the weakest person would have to only bench 2.5lbs for a 400X differential. What could possibly justify someone making 400X more money than the AVERAGE person? Being 400X more sleazy I reckon. According to your video the richest 20% of the US have more than 80% of all the US wealth, the richest 1% of the US have 40% of all the US wealth, the poorest 80% of the US have merely 7% of all the US wealth. Maybe I will have to change my thoughts about the wealth inequality in the USA. Arminius wrote: .... 2006: The richest Finnish 20% have 35% of the Finnish income (GNP). The poorest Finnish 80% have 65% of the Finnish income (GNP). The richest German 20% have 40% of the German income (GNP).
The poorest German 80% have 60% of the German income (GNP). The richest US 20% have 47% of the US income (GNP). The poorest US 80% have 53% of the US income (GNP). The richest Brazilian 20% have 65% of the Brazilian income (GNP). The poorest Brazilian 80% have 35% of the Brazilian income (GNP). Maybe the richest Brazilian 20% already have 80% of the Brazilian income (GNP). So eventually we will possibly see the following scenario in the world: 20% of all humans have 80% of the global income. So 80% of all humans have merely 20% of the global income. (Cp. Pareto distribution.) ....
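Since the exchange above leans on the Lorenz curve, the Gini coefficient and the Pareto-style 20/80 split, a small numerical sketch may help make those quantities concrete. The wealth vector below is invented purely for illustration (it reproduces a "richest 20% hold 80%" split of the kind quoted in the posts); it is not data taken from any of the sources mentioned.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative income/wealth vector (0 = perfect equality, 1 = one person holds everything)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # Closed form equivalent to 1 - 2 * (area under the Lorenz curve)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

# Hypothetical population of 100 people: the poorest 80 share 20 units, the richest 20 share 80 units
poor = np.full(80, 0.25)
rich = np.full(20, 4.0)
print(round(gini(np.concatenate([poor, rich])), 2))  # 0.6
```

For this toy 80/20 population the Gini coefficient comes out at 0.6, which illustrates the point made in the thread: the same "power law" family of shapes can correspond to anything from near-equality to extreme concentration, depending on its parameters.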
Structural Cryptanalysis of McEliece Schemes with Compact Keys Jean-Charles Faugère and Ayoub Otmani and Ludovic Perret and Frédéric de Portzamparc and Jean-Pierre Tillich Abstract: A very popular trend in code-based cryptography is to decrease the public-key size by focusing on subclasses of alternant/Goppa codes which admit a very compact public matrix, typically quasi-cyclic (QC), quasi-dyadic (QD), or quasi-monoidic (QM) matrices. We show that the very same reason which allows one to construct a compact public key makes the key-recovery problem intrinsically much easier. The gain on the public-key size induces an important security drop, which is as large as the compression factor $p$ on the public-key. The fundamental remark is that from the $k\times n$ public generator matrix of a compact McEliece, one can construct a $k/p \times n/p$ generator matrix which is -- from an attacker point of view -- as good as the initial public-key. We call this new smaller code the {\it folded code}. Any key-recovery attack can be deployed equivalently on this smaller generator matrix. To mount the key-recovery in practice, we also improve the algebraic technique of Faugère, Otmani, Perret and Tillich (FOPT). In particular, we introduce new algebraic equations allowing us to include codes defined over any prime field in the scope of our attack. We describe a so-called ``structural elimination'' which is a new algebraic manipulation which simplifies the key-recovery system. As a proof of concept, we report successful attacks on many cryptographic parameters available in the literature. All the parameters of CFS-signatures based on QD/QM codes that have been proposed can be broken by this approach. In most cases, our attack takes a few seconds (the hardest case requires less than $2$ hours). In the encryption case, the algebraic systems are harder to solve in practice. Still, our attack succeeds against several cryptographic challenges proposed for QD and QM encryption schemes, but there are still some parameters that have been proposed which are out of reach of the methods given here. However, regardless of the key-recovery attack used against the folded code, there is an inherent weakness arising from Goppa codes with QM or QD symmetries. It is possible to derive from the public key a much smaller public key corresponding to the folding of the original QM or QD code, where the reduction factor of the code length is precisely the order of the QM or QD group used for reducing the key size. To summarize, the security of such schemes is not based on the bigger compact public matrix but on the small folded code, which can be efficiently broken in practice with an algebraic attack for a large set of parameters. Category / Keywords: public-key cryptography, McEliece cryptosystem, algebraic cryptanalysis, folded code Date: received 22 Mar 2014, last revised 22 Mar 2014 Contact author: frederic urvoy-de-portzamparc at polytechnique org
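As a rough illustration of the folding idea summarized in the abstract, the sketch below sums the coordinates of each length-$p$ block (one orbit of the quasi-cyclic shift) of a generator matrix and then row-reduces the result over GF(2). This is only my schematic reading of the operation, restricted to a binary toy case; the actual attack works over larger prime fields and exploits the alternant/Goppa structure, none of which is reproduced here.

```python
import numpy as np

def fold_generator(G: np.ndarray, p: int) -> np.ndarray:
    """Toy folding of a binary k x n generator matrix whose coordinates are grouped
    into blocks of size p: sum each block mod 2, then keep a row basis over GF(2)."""
    k, n = G.shape
    assert n % p == 0, "code length must be a multiple of the block size"
    folded = G.reshape(k, n // p, p).sum(axis=2) % 2  # one column per orbit
    # Plain Gaussian elimination over GF(2) to extract the independent rows
    M = folded.copy()
    pivots = []
    for col in range(M.shape[1]):
        candidates = [r for r in range(len(pivots), M.shape[0]) if M[r, col] == 1]
        if not candidates:
            continue
        M[[len(pivots), candidates[0]]] = M[[candidates[0], len(pivots)]]  # swap pivot row up
        for r in range(M.shape[0]):
            if r != len(pivots) and M[r, col] == 1:
                M[r] ^= M[len(pivots)]
        pivots.append(col)
    return M[:len(pivots)]
```

For a genuinely quasi-cyclic matrix with compression factor p, the rank of this folded matrix drops to roughly k/p, which is the observation the abstract builds the key-recovery on; for a random (unstructured) matrix no such drop occurs.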
BMC Medicine Mortality and cancer in relation to ABO blood group phenotypes in the Golestan Cohort Study Arash Etemadi1,2, Farin Kamangar3, Farhad Islami1,4, Hossein Poustchi5, Akram Pourshams5, Paul Brennan6, Paolo Boffetta7, Reza Malekzadeh1, Sanford M Dawsey2, Christian C Abnet2 & Ashkan Emadi8 BMC Medicine volume 13, Article number: 8 (2015) A few studies have shown an association between blood group alleles and vascular disease, including atherosclerosis, which is thought to be due to the higher level of von Willebrand factor in these individuals and the association of blood group locus variants with plasma lipid levels. No large population-based study has explored this association with overall and cause-specific mortality. We aimed to study the association between ABO blood groups and overall and cause-specific mortality in the Golestan Cohort Study. In this cohort, 50,045 people 40- to 70-years old were recruited between 2004 and 2008, and followed annually to capture all incident cancers and deaths due to any cause. We used Cox regression models adjusted for age, sex, smoking, socioeconomic status, ethnicity, place of residence, education and opium use. During a total of 346,708 person-years of follow-up (mean duration 6.9 years), 3,623 cohort participants died. Non-O blood groups were associated with significantly increased total mortality (hazard ratio (HR) = 1.09; 95% confidence interval (CI): 1.01 to 1.17) and cardiovascular disease mortality (HR = 1.15; 95% CI: 1.03 to 1.27). Blood group was not significantly associated with overall cancer mortality, but people with group A, group B, and all non-O blood groups combined had increased risk of incident gastric cancer. In a subgroup of cohort participants, we also showed higher plasma total cholesterol and low-density lipoprotein (LDL) in those with blood group A. Non-O blood groups have an increased mortality, particularly due to cardiovascular diseases, which may be due to the effect of blood group alleles on blood biochemistry or their effect on von Willebrand factor and factor VIII levels. Please see related commentary http://dx.doi.org/10.1186/s12916-014-0250-y. E.B. Ford, the renowned geneticist, was quoted in 1945 as saying, 'It is reasonable to conclude, from what we know of polymorphisms, that individuals belonging to the different blood groups are not equally viable…' [1]. Although blood group antigens have been widely recognized because of the complications they produce in transfusion medicine [2], their conservation through evolution and their presence on many cells in the human body [1] suggest they are also critical to human physiology. However, the only documented roles, so far, include susceptibility to certain infections such as Plasmodium falciparum [3] and Helicobacter pylori [4], and the level and structure of the von Willebrand Factor (vWF)-FVIII complex in blood [2]. The ABO(H) blood group system was the first genetic polymorphism discovered in humans [5]. So, it is not surprising that it has been studied in the context of many chronic diseases. Many vascular disorders (especially venous thromboembolism and atherosclerotic disease) have been linked to non-O blood group status [6]. This association is thought to be mainly due to the higher level of factor VIII and vWF in these individuals, and to some extent the association of blood group locus variants with plasma lipid levels, especially cholesterol [7].
Higher levels of factor VIII and vWF lead to increased thrombotic tendency [8], and plasma cholesterol is a known risk factor for atherosclerosis. ABO blood groups have also been extensively studied in association with cancer. Some of the most consistent associations observed so far include the associations between non-O blood groups and pancreatic cancer (which was also confirmed in a genome-wide association study (GWAS) [9]), and between the A blood group and gastric cancer [10] and atrophic gastritis [11]. Despite their discovery in 1900, the critical role of ABO blood groups in transfusion medicine, and their apparent link to multiple diseases, the association of blood groups with mortality in the general population has not been evaluated in a large prospective study [6]. Therefore, we decided to examine the hypothesis that blood groups are associated with overall and cause-specific mortality, using the data from the large prospective Golestan Cohort Study. Details of the Golestan Cohort Study (GCS) have been published before [12]. This study is a population-based cohort in northeastern Iran which has followed 50,045 people above the age of 40 since 2004. At cohort recruitment, between the years 2004 and 2008, all participants were interviewed by trained cohort staff and underwent blood group determination. ABO blood group and Rh could not be determined for four and two individuals, respectively, who were excluded from analyses. The GCS was approved by the Institutional Review Boards of the Digestive Disease Research Center (DDRC), the US National Cancer Institute (NCI), and the International Agency for Research on Cancer (IARC), and all participants gave written informed consent before enrollment. Details of the GCS follow-up procedures have been published before [13]. Annual follow-up has had a 99% success rate so far. In these follow-up contacts, any case that is suspicious for cancer is evaluated and documented, and the records are complemented by linkage to local and national registries. Any reported death is also followed by a visit from a physician who completes a verbal autopsy questionnaire, validated for this population [13], by interviewing the closest relative of the deceased. At the same time, death certificates and all available medical documents are collected. The cause of death is classified according to the International Classification of Diseases, 10th revision (ICD-10) codes. For this analysis, causes of death were categorized as medical or external (that is, accidents, intoxication, suicide or other types of injury). Medical causes of death were further divided into cardiovascular disease (ischemic heart disease (ICD-10 codes I20-I25), cerebrovascular disease (I60-I69), and other diseases of the circulatory system); death due to cancer (ICD-10 codes C00-C97); and death due to other medical causes. Follow-up for this analysis continued until the subject was lost to follow-up, death occurred, or 28 February 2014, whichever came first. In a random subgroup of the original cohort (n = 11,418), a second round of risk factor assessment and blood biochemistry tests was done four to five years after the initial enrollment. These results were used to analyze the association of blood groups with cardiovascular risk factors, including plasma lipids, blood glucose, blood pressure and anthropometric measurements. 
We used Cox proportional hazards models, with age as the time variable, to estimate unadjusted and adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for mortality and cancer incidence, in relation to blood groups. Participants were left-censored at the age of enrollment, and all models (crude or adjusted) were adjusted for age at cohort baseline. The adjusted models also included potential confounders (sex, ethnicity, place of residence (urban or rural), education, quartiles of smoking in pack-years, opium use and an index of socioeconomic status [14]). These variables were selected because they have been shown to affect mortality in general or in this population [15]. Models for cancer incidence were adjusted for the same variables. The cancers used as outcomes were those having strong a priori associations with blood groups (gastric and pancreatic cancer) and the most common cancer in this population (esophageal squamous cell carcinoma). Follow-up for these models continued until loss to follow-up, death, cancer diagnosis, or 28 February 2014, whichever came first. Population attributable fractions (AFp) were calculated using Levin's formula [16]: $$ AF_p = \frac{(RR-1) \times PF_1}{1 + (RR-1) \times PF_1} $$ where PF1 is the proportion of the population in any given blood group, and RR is the relative risk of the outcome in that blood group compared with the reference risk (group O). All statistical tests were two-sided and a P value of 0.05 or smaller was considered significant. The most common blood group phenotype in this population was A (33.4%) followed by O (29.9%), and 93.5% were Rh positive. Table 1 shows the characteristics of the population across different blood groups. During a total of 346,708 person-years of follow-up (mean ± SD: 6.9 ± 1.5 years) through February 2014, 3,623 cohort participants died. The most common cause of death was cardiovascular disease (n = 1,879, 51.9%), followed by cancer (n = 775, 21.4%). Table 1 Baseline characteristics of Golestan Cohort Study participants by blood group phenotypes For mortality analyses, 209 deaths due to external causes were excluded, and only medical causes of death were considered. As Table 2 shows, non-O blood groups were associated with a significantly increased total mortality (HR = 1.09; 95% CI: 1.01 to 1.17), which was most pronounced for cardiovascular disease mortality (HR = 1.15; 95% CI: 1.03 to 1.27). In this population, 5.9% of total mortality due to medical causes, and 8.9% of those due to cardiovascular disease could be attributed to having non-O blood group. Among the different non-O blood groups, both groups A and B were individually associated with both higher overall and cardiovascular mortality, compared with group O. Cancer mortality was, to some extent, more frequent in individuals with non-O blood group alleles, although this difference was not statistically significant. Table 2 Overall and cause-specific mortality by blood group in the Golestan Cohort Study People with a non-O blood group had an increased risk of gastric cancer (HR = 1.55; 95% CI: 1.09 to 2.21). The significantly increased risk was seen in both blood group A (HR = 1.57; 95% CI: 1.06 to 2.32) and B (HR = 1.59; 95% CI: 1.06 to 2.39). Other cancers were not associated with blood group phenotype (Table 3).
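To make Levin's formula concrete, the following sketch plugs in round numbers taken from the text: the cardiovascular HR of 1.15 used as the RR, and a non-O prevalence of about 70% (the complement of the 29.9% group O frequency). These inputs are illustrative simplifications, so the result only approximates the 8.9% attributable fraction reported by the authors, who presumably used more precise adjusted values.

```python
def levin_afp(rr: float, pf1: float) -> float:
    """Population attributable fraction via Levin's formula."""
    return (rr - 1.0) * pf1 / (1.0 + (rr - 1.0) * pf1)

# Illustrative inputs, not the authors' exact figures:
rr_cvd = 1.15            # HR for cardiovascular mortality in non-O groups, used here as the RR
pf_non_o = 1.0 - 0.299   # prevalence of non-O phenotypes (complement of the 29.9% group O share)
print(f"AFp (CVD) ~ {levin_afp(rr_cvd, pf_non_o):.1%}")  # about 9.5%, close to the reported 8.9%
```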
Table 3 Incidence of three main cancer types by blood group in the Golestan Cohort Study Absence of the D antigen of Rhesus (Rh) blood group (Rh-) was less common in Turkmen than in non-Turkmen (5.9% versus 8.2%), but there was no association between Rh positivity and any of the study outcomes. We also tested the interaction between Rh group and ABO blood type, and the results were not significant (data not shown). In a subgroup of 11,418, for whom biochemistry test results were available, the association between cardiovascular risk factors and ABO blood groups was assessed. As Table 4 shows, compared with blood group O, group A was associated with higher total cholesterol and low-density lipoprotein (LDL), while group B had lower lipid levels. In total, the only significant difference between non-O and O groups was higher blood glucose in the former (Table 4). Table 4 Cardiovascular risk factors in a random subgroup (n = 11,418) of Golestan Cohort Study participants according to blood group phenotype Our results show that individuals with non-O blood groups have higher overall and cardiovascular mortality. They also demonstrate an association between group A and B blood groups and gastric cancer. The association of non-O blood groups with the incidence of different vascular diseases has been known for some time, although there has been some controversy because of the paucity of information from large prospective studies [17]. In one of the few such studies before the current report, the combined analysis of the Nurses' Health Study and the Health Professionals Follow-up Study, 6.27% of the coronary heart disease (CHD) cases were attributable to non-O blood groups [7]. However, this study did not report all-cause or cardiovascular mortality. In another study, among 4,901 patients with ischemic heart disease, those with non-O blood groups had higher cardiac mortality [18]. Non-O blood groups are also associated with earlier onset of coronary artery disease [19], more extensive myocardial necrosis and visible thrombus [20]. The two main mechanisms proposed for these associations include the effect of ABO blood groups on serum cholesterol and their influence on hemostasis [8]. Some studies have shown an association between non-O blood groups, particularly group A, and hypercholesterolemia [21], while others have failed to show such an association [22]. The association of variation at the ABO blood group locus with plasma lipid levels has been seen in a GWAS of more than 100,000 individuals of European descent [23]. We also observed higher total cholesterol and LDL levels in people with blood group A, but we think that this alone cannot explain the increased mortality among non-O blood groups mainly because, compared with blood group O, plasma lipid levels were only higher in group A, and these levels were actually lower in other non-O blood types. A recent study also estimated that about 10% of the CHD risk associated with non-O blood groups, is mediated by its influence on LDL cholesterol levels [24]. The association of non-O blood groups with cardiovascular mortality may also be due to the higher levels of vWF and factor VIII in these individuals [8]. A GWAS study found that ABO locus showed the top signal for myocardial infarction in patients with angiographic coronary artery disease (CAD), and concluded that the variation linked to group O and reduced vWF, was protective against myocardial infarction in CAD patients [25]. 
vWF levels are approximately 25% to 30% higher in people with non-O blood groups [2]. This effect is a direct functional effect and is not due to an association of ABO locus with another gene [26], and the ABH antigenic structures are present on the circulating vWF [2]. Higher levels of vWF have been shown to be independently associated with increased cardiovascular and all-cause-mortality in humans [27], and atherosclerotic plaque progression in mice [28]. The reasons for this association may be the direct role of vWF in platelet adhesion, aggregation and thrombogenesis, although some investigators believe that other mechanisms might be involved as well [29]. Evidence suggests that blood groups A, B and AB probably have a similar effect on the circulating vWF. [30] Blood group A consists of two major subgroups, A1 (about 80%) and A2 (about 20%). In this study, we did not check for differentiation between these subtypes. However, it has been reported that A2 blood group has approximately 47% lower risk of venous thromboembolism compared to other non-O blood groups [31,32]. This lower risk has been suggested to be caused by decreased glycosylation of H antigen, due to a 30- to 50-fold lower glycosyltransferase activity associated with the A2 allele compared to the A1 allele [33]. The impact of such differences on CVD risk and mortality is not clear. To the best of our knowledge, this is the first study to investigate ABO blood group in relation to cancer mortality, and unlike CVD mortality, we did not observe a significant association for mortality due to all cancers combined. However, incident gastric cancer, which is the second most common cancer in our population, was associated with both blood groups A and B. The association of gastric cancer with blood group A has been observed in many previous studies [10] and is thought to be linked to an altered inflammatory response to Helicobacter pylori, particularly cagA positive strains [4]. In the largest study so far, a 35-year follow-up of one million Swedish and Danish blood donors showed an increased risk of gastric cancer among individuals with blood group A compared to group O [34]. However, most previous reports have shown risk estimates of around 1.2 [6], while we observed a stronger association. Our study is also one of the few studies to show an association between blood group B and gastric cancer [11]. One limitation of our study is the left censoring of the mortality data, although most deaths in our population before age 40 (our cohort's minimum enrollment age) are due to accidents which were not the focus of our evaluation [35]. Also, there were not enough events in the subgroup with available biochemistry data (because of the short follow-up duration in this subgroup), to allow direct analysis of the mediation effect of biochemical changes in the ABO-mortality association. We showed that, in apparently healthy individuals, 5.9% of total deaths due to medical causes and 8.9% of cardiovascular deaths were attributable to having non-O blood groups, and these blood groups were also associated with a higher risk of gastric cancer. These findings support the clinical importance of blood group determination in assessing health risks beyond its application in transfusion medicine. Garratty G: Blood groups and disease: a historical perspective.Transfus Med Rev 2000, 14:291–301. Jenkins PV, O'Donnell JS: ABO blood group determines plasma von Willebrand factor levels: a biologic function after all?Transfusion 2006, 46:1836–1844. 
Cserti CM, Dzik WH: The ABO blood group system and Plasmodium falciparum malaria.Blood 2007, 110:2250–2258. Sharara AI, Abdul-Baki H, ElHajj I, Kreidieh N, Kfoury Baz EM: Association of gastroduodenal disease phenotype with ABO blood group and Helicobacter pylori virulence-specific serotypes.Dig Liver Dis 2006, 38:829–833. Yamamoto F, Cid E, Yamamoto M, Blancher A: ABO research in the modern era of genomics.Transfus Med Rev 2012, 26:103–118. Liumbruno GM, Franchini M: Beyond immunohaematology: the role of the ABO blood group in human diseases.Blood Transfus 2013, 11:491–499. He M, Wolpin B, Rexrode K, Manson JE, Rimm E, Hu FB, Qi L: ABO blood group and risk of coronary heart disease in two prospective cohort studies.Arterioscler Thromb Vasc Biol 2012, 32:2314–2320. Franchini M, Mannucci PM: ABO blood group and thrombotic vascular disease.Thromb Haemost 2014, in press. Iodice S, Maisonneuve P, Botteri E, Sandri MT, Lowenfels AB: ABO blood group and cancer.Eur J Cancer 2010, 46:3345–3350. Liumbruno GM, Franchini M: Hemostasis, cancer, and ABO blood group: the most recent evidence of association.J Thromb Thrombolysis 2014, 38:160–166. Nakao M, Matsuo K, Ito H, Shitara K, Hosono S, Watanabe M, Ito S, Sawaki A, Iida S, Sato S, Yatabe Y, Yamao K, Ueda R, Tajima K, Hamajima N, Tanaka H: ABO genotype and the risk of gastric cancer, atrophic gastritis, and Helicobacter pylori infection.Cancer Epidemiol Biomarkers Prev 2011, 20:1665–1672. Pourshams A, Khademi H, Malekshah AF, Islami F, Nouraei M, Sadjadi AR, Jafari E, Rakhshani N, Salahi R, Semnani S, Kamangar F, Abnet CC, Ponder B, Day N, Dawsey SM, Boffetta P, Malekzadeh R: Cohort profile: The Golestan Cohort Study–a prospective study of oesophageal cancer in northern Iran.Int J Epidemiol 2010, 39:52–59. Khademi H, Etemadi A, Kamangar F, Nouraie M, Shakeri R, Abaie B, Pourshams A, Bagheri M, Hooshyar A, Islami F, Abnet CC, Pharoah P, Brennan P, Boffetta P, Dawsey SM, Malekzadeh R: Verbal autopsy: reliability and validity estimates for causes of death in the Golestan Cohort Study in Iran.PLoS One 2010, 5:e11183. Islami F, Kamangar F, Nasrollahzadeh D, Aghcheli K, Sotoudeh M, Abedi-Ardekani B, Merat S, Nasseri-Moghaddam S, Semnani S, Sepehr A, Wakefield J, Møller H, Abnet CC, Dawsey SM, Boffetta P, Malekzadeh R: Socio-economic status and oesophageal cancer: results from a population-based case–control study in a high-risk area.Int J Epidemiol 2009, 38:978–988. Khademi H, Malekzadeh R, Pourshams A, Jafari E, Salahi R, Semnani S, Abaie B, Islami F, Nasseri-Moghaddam S, Etemadi A, Byrnes G, Abnet CC, Dawsey SM, Day NE, Pharoah PD, Boffetta P, Brennan P, Kamangar F: Opium use and mortality in Golestan Cohort Study: prospective cohort study of 50,000 adults in Iran.BMJ 2012, 344:e2502. Hanley JA: A heuristic approach to the formulas for population attributable fraction.J Epidemiol Community Health 2001, 55:508–514. Wu O, Bayoumi N, Vickers MA, Clark P: ABO(H) blood groups and vascular disease: a systematic review and meta-analysis.J Thromb Haemost 2008, 6:62–69. Carpeggiani C, Coceani M, Landi P, Michelassi C, L'Abbate A: ABO blood group alleles: a risk factor for coronary artery disease. An angiographic study.Atherosclerosis 2010, 211:461–466. Cesena FH, da Luz PL: ABO blood group and precocity of coronary artery disease.Thromb Res 2006, 117:401–402. Ketch TR, Turner SJ, Sacrinty MT, Lingle KC, Applegate RJ, Kutcher MA, Sane DC: ABO blood types: influence on infarct size, procedural characteristics and prognosis.Thromb Res 2008, 123:200–205. 
Garrison RJ, Havlik RJ, Harris RB, Feinleib M, Kannel WB, Padgett SJ: ABO blood group and cardiovacular disease: the Framingham study.Atherosclerosis 1976, 25:311–318. Amirzadegan A, Salarifar M, Sadeghian S, Davoodi G, Darabian C, Goodarzynejad H: Correlation between ABO blood groups, major risk factors, and coronary artery disease.Int J Cardiol 2006, 110:256–258. Teslovich TM, Musunuru K, Smith AV, Edmondson AC, Stylianou IM, Koseki M, Pirruccello JP, Ripatti S, Chasman DI, Willer CJ, Johansen CT, Fouchier SW, Isaacs A, Peloso GM, Barbalic M, Ricketts SL, Bis JC, Aulchenko YS, Thorleifsson G, Feitosa MF, Chambers J, Orho-Melander M, Melander O, Johnson T, Li X, Guo X, Li M, Shin Cho Y, Jin Go M, Jin Kim Y, et al: Biological, clinical and population relevance of 95 loci for blood lipids.Nature 2010, 466:707–713. Chen Y, Chen C, Ke X, Xiong L, Shi Y, Li J, Tan X, Ye S: Analysis of circulating cholesterol levels as a mediator of an association between ABO blood group and coronary heart disease.Circ Cardiovasc Genet 2014, 7:43–48. Reilly MP, Li M, He J, Ferguson JF, Stylianou IM, Mehta NN, Burnett MS, Devaney JM, Knouff CW, Thompson JR, Horne BD, Stewart AF, Assimes TL, Wild PS, Allayee H, Nitschke PL, Patel RS, Myocardial Infarction Genetics Consortium, Wellcome Trust Case Control Consortium, Martinelli N, Girelli D, Quyyumi AA, Anderson JL, Erdmann J, Hall AS, Schunkert H, Quertermous T, Blankenberg S, Hazen SL, Roberts R, et al: Identification of ADAMTS7 as a novel locus for coronary atherosclerosis and association of ABO with myocardial infarction in the presence of coronary atherosclerosis: two genome-wide association studies.Lancet 2011, 377:383–392. Souto JC, Almasy L, Muniz-Diaz E, Soria JM, Borrell M, Bayen L, Mateo J, Madoz P, Stone W, Blangero J, Fontcuberta J: Functional effects of the ABO locus polymorphism on plasma levels of von Willebrand factor, factor VIII, and activated partial thromboplastin time.Arterioscler Thromb Vasc Biol 2000, 20:2024–2028. Jager A, van Hinsbergh VW, Kostense PJ, Emeis JJ, Yudkin JS, Nijpels G, Dekker JM, Heine RJ, Bouter LM, Stehouwer CD: Von Willebrand factor, C-reactive protein, and 5-year mortality in diabetic and nondiabetic subjects: the Hoorn Study.Arterioscler Thromb Vasc Biol 1999, 19:3071–3078. Gandhi C, Ahmad A, Wilson KM, Chauhan AK: ADAMTS13 modulates atherosclerotic plaque progression in mice via a VWF-dependent mechanism.J Thromb Haemost 2014, 12:255–260. van Schie MC, van Loon JE, de Maat MP, Leebeek FW: Genetic determinants of von Willebrand factor levels and activity in relation to the risk of cardiovascular disease: a review.J Thromb Haemost 2011, 9:899–908. Thompson SG, Kienast J, Pyke SD, Haverkate F, van de Loo JC: Hemostatic factors and the risk of myocardial infarction or sudden death in patients with angina pectoris. European Concerted Action on Thrombosis and Disabilities Angina Pectoris Study Group.N Engl J Med 1995, 332:635–641. Tregouet DA, Heath S, Saut N, Biron-Andreani C, Schved JF, Pernod G, Galan P, Drouet L, Zelenika D, Juhan-Vague I, Alessi MC, Tiret L, Lathrop M, Emmerich J, Morange PE: Common susceptibility alleles are unlikely to contribute as strongly as the FV and ABO loci to VTE risk: results from a GWAS approach.Blood 2009, 113:5298–5303. Heit JA, Armasu SM, Asmann YW, Cunningham JM, Matsumoto ME, Petterson TM, De Andrade M: A genome-wide association study of venous thromboembolism identifies risk variants in chromosomes 1q24.2 and 9q.J Thromb Haemost 2012, 10:1521–1531. 
Yamamoto F, McNeill PD, Hakomori S: Human histo-blood group A2 transferase coded by A2 allele, one of the A subtypes, is characterized by a single base deletion in the coding sequence, which results in an additional domain at the carboxyl terminal.Biochem Biophys Res Commun 1992, 187:366–374. Edgren G, Hjalgrim H, Rostgaard K, Norda R, Wikman A, Melbye M, Nyren O: Risk of gastric cancer and peptic ulcers in relation to ABO blood type: a cohort study.Am J Epidemiol 2010, 172:1280–1285. Forouzanfar MH, Sepanlou SG, Shahraz S, Dicker D, Naghavi P, Pourmalek F, Mokdad A, Lozano R, Vos T, Asadi-Lari M, Sayyari AA, Murray CJ, Naghavi M: Evaluating causes of death and morbidity in Iran, global burden of diseases, injuries, and risk factors study 2010.Arch Iran Med 2014, 17:304–320. This work was supported in part by the intramural research program of the Division of Cancer Epidemiology and Genetics, National Cancer Institute; the Digestive Disease Research Center of Tehran University of Medical Sciences (grant No 82–603); Cancer Research UK (C20/A5860); and by the International Agency for Research on Cancer. Digestive Oncology Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran Arash Etemadi, Farhad Islami & Reza Malekzadeh Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Dr, Bethesda, MD, 20859, USA, Sanford M Dawsey & Christian C Abnet Department of Public Health Analysis, School of Community Health and Policy, Morgan State University, Baltimore, MD, USA Farin Kamangar Surveillance and Health Services Research, American Cancer Society, Atlanta, GA, USA Farhad Islami Liver and Pancreatobiliary Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran Hossein Poustchi & Akram Pourshams International Agency for Research on Cancer, Lyon, France Paul Brennan Institute for Translational Epidemiology and Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA Paolo Boffetta Greenebaum Cancer Center, University of Maryland, Baltimore, MD, USA Ashkan Emadi Correspondence to Arash Etemadi. AEt, FK, PBo, RM, SMD, CCA and AEm designed the study. AEt, FI, AP, HP and RM were involved in data collection and processing. CCA, PBr, PBo, RM and SMD are the study PI's. AEt did the statistical analysis with input from AEm and CCA, and wrote the first draft. All the authors revised and approved the paper. All authors had full access to all the data in the study and accept the responsibility for the data integrity and accuracy of the report. All authors read and approved the final manuscript. Christian C Abnet and Ashkan Emadi contributed equally. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Etemadi, A., Kamangar, F., Islami, F. et al. Mortality and cancer in relation to ABO blood group phenotypes in the Golestan Cohort Study. BMC Med 13, 8 (2015). https://doi.org/10.1186/s12916-014-0237-8 Accepted: 12 November 2014 Blood group
Investigating heartbeat-related in-plane motion and stress levels induced at the aortic root Wei Wei1, Morgane Evin1, Stanislas Rapacchi2, Frank Kober2, Monique Bernard2, Alexis Jacquier2, Cyril J. F. Kahn1 & Michel Behr1 The axial motion of aortic root (AR) due to ventricular traction was previously suggested to contribute to ascending aorta (AA) dissection by increasing its longitudinal stress, but AR in-plane motion effects on stresses have never been studied. The objective is to investigate the contribution of AR in-plane motion to AA stress levels. The AR in-plane motion was assessed on magnetic resonance imaging data from 25 healthy volunteers as the movement of the AA section centroid. The measured movement was prescribed to the proximal AA end of an aortic finite element model to investigate its influences on aortic stresses. The finite element model was developed from a patient-specific geometry using LS-DYNA solver and validated against the aortic distensibility. Fluid–structure interaction (FSI) approach was also used to simulate blood hydrodynamic effects on aortic dilation and stresses. The AR in-plane motion was 5.5 ± 1.7 mm with the components of 3.1 ± 1.5 mm along the direction of proximal descending aorta (PDA) to AA centroid and 3.0 ± 1.3 mm perpendicularly under the PDA reference system. The AR axial motion elevated the longitudinal stress of proximal AA by 40% while the corresponding increase due to in-plane motion was always below 5%. The stresses at the proximal AA were approximately 7% lower in the FSI simulation with blood flow. The AR in-plane motion was comparable with the magnitude of axial motion. Neither axial nor in-plane motion could directly lead to AA dissection. It is necessary to consider the heterogeneous pressures related to blood hydrodynamics when studying aortic wall stress levels. Aortic dissection is a rare but potentially life-threatening illness. Apart from hypertension and aortic dilation [1], the aortic root (AR) motion has also been proposed to be another factor leading to dissection [2, 3]. During the cardiac cycle, the aortic annulus is towed due to ventricular traction in systole and is relaxed in diastole. The traction force induces a spatial movement of the aortic annulus and is transmitted to the ascending aorta (AA). The AR motion has been shown to alter in parallel with such cardiovascular pathologies as left ventricular hypokinesis and aortic insufficiency [2]. Since supra-aortic vessels were relatively constrained compared to AA, different AR motions would bring about different levels of aortic wall stress, which was proposed as a risk prediction index for aortic dissection [4] and aortic aneurysm [5]. A mean value of 8.9 mm (range 6.4–11.3 mm) for aortic motion was observed along the lumen longitudinal direction with cine-magnetic resonance imaging (MRI) studies in healthy subjects [6]. The aortic downward displacement was also reported to range between 0 and 49% of the sino-tubular junction diameter in patients with coronary artery diseases [2]. In contrast, the mean in-plane (perpendicular to the lumen) displacement of AA was respectively reported as 5.2 ± 1.7 mm for patients with chronic aortic dissection type B [7] and 6.7 ± 1.8 mm for young healthy volunteers [8]. However, the component displacements in the anterior–posterior or the lateral direction were not reported in either study.
Aortic finite element (FE) models were previously used to evaluate the AR downward [2, 3, 9] and twisted [2, 3] motion effects on proximal AA stress levels. Studies of the influence of AR in-plane movement are, however, limited. A lack of model validation against physiological data might also undermine the accuracy of aortic stress. Moreover, a uniformly distributed loading was assumed on the aortic wall in these previous studies while the simulation fidelity could benefit from considering the inhomogeneous pressure environment due to blood flow [10,11,12]. Therefore, the aim of our study was threefold. Firstly, in order to add to the knowledge of AR physiological motion, the in-plane components of heartbeat-related AR displacement will be evaluated in healthy volunteers with MRI data. Secondly, the fluid–structure interaction (FSI) will be simulated between the aortic wall and blood to assess the fluid dynamic effects on aortic stress levels. Finally, to determine the in-plane motion effects on AA dissection risks, the AA stress levels will be studied under different AR motions with a validated FE model. The study was approved by the local ethics committee (CPP Sud Méditerranée I, Marseille, ID RCB 2012-A01093-40) and the written informed consent was granted by each volunteer. Twenty-five volunteers (15 men and 10 women, mean age 30.4 ± 9.7 years, mean height 175.8 ± 7.6 cm, mean weight 65.8 ± 13.0 kg) were recruited into this evaluation and the candidates had to match the following criteria: no history of cardiovascular disease, hypertension, diabetes or hypercholesterolemia. Image acquisition and evaluation The image acquisition was performed for all the subjects during a breath-hold with a 1.5 T MRI scanner (Avanto VB17, Siemens, Erlangen, Germany) under a protocol as previously described in [13]. A stack of segmented steady-state free precession (SSFP) bright blood images was acquired in axial and oblique sagittal planes (Fig. 1) to assess the aortic slice segmentation. SSFP cine images were subsequently obtained at three different levels [AA together with the proximal descending aorta (PDA) at the level of pulmonary trunk, the distal descending aorta 3 cm above the diaphragm (DDA) and above the coeliac trunk (CA)] perpendicular to the aortic lumen (Fig. 1) with the following parameters: TR = 21.7 ms to 24.7 ms, TE = 1.36 ms to 1.55 ms, α = 65°, recFOV = 210 mm × 263 mm to 280 mm × 340 mm, slice thickness = 7 mm, pixel size = 1.26 mm × 1.26 mm to 1.68 mm × 1.68 mm. It is worth noting that only the images at the AA section were used to evaluate the AR in-plane motion. Oblique sagittal and AA-perpendicular MRI images. The region in red colour corresponds to the aorta. Left: oblique sagittal MRI image showing the locations of different aortic sections. Right: the MRI image perpendicular to the AA for measuring its in-plane motion. MRI Sys and PDA Sys are the abbreviations for MRI reference system and PDA system respectively Dynamic datasets were loaded into a semi-automatic tool, Argus (Siemens, Erlangen, Germany), in which the region of interest (ROI) was created and the aortic lumen boundary was detected based on the intensity gradient. After being manually adjusted, the ROI was propagated and adapted for each phase of the cardiac cycle. The Cartesian coordinates of the points on the aortic contour could be provided by this software application. The ROI geometric centroid was obtained by averaging the coordinates of the aortic contour.
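A minimal sketch of the centroid measurement described above and of the in-plane displacement defined in the following paragraph. The data layout (one array of contour points per cardiac phase) and the function names are assumptions made for illustration; the study itself worked from the coordinates exported by the Argus tool. Note that averaging contour vertices gives the vertex centroid, which approximates the area centroid when the points are roughly evenly spaced.

```python
import numpy as np

def centroid(contour_xy: np.ndarray) -> np.ndarray:
    """Centroid of one aortic contour, taken as the mean of its (x, y) points."""
    return contour_xy.mean(axis=0)

def in_plane_displacement(contours_per_phase) -> np.ndarray:
    """Distance of each phase's centroid from the end-diastolic (first-phase) centroid."""
    centroids = np.array([centroid(c) for c in contours_per_phase])
    return np.linalg.norm(centroids - centroids[0], axis=1)

# The maximum over the cardiac cycle is the per-subject value that was then averaged across volunteers:
# subject_value = in_plane_displacement(list_of_contour_arrays).max()
```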
The AA in-plane displacement was defined as the distance between the centroid at the ending of diastole (initial) and the centroid on the analysed image. The mean value and the standard deviation among all the subjects were then calculated from the maximum in-plane displacement of each time series. A PDA system was constructed with its origin at the PDA centroid at the ending of diastole, with the positive Y direction from the origin to AA centroid and with positive X normal to Y axis pointing to the left (Fig. 1). Reconstruction of the aorta The aortic lumen was detected from end-diastolic 2-dimensional stack of SSFP images of a randomly-selected volunteer (25 years, male) using the in-plane region-growing method on Mimics software (Materialise, Louvain, Belgium). The sinus of Valsalva was not reconstructed since the AR was not the focus of the aortic wall stress analysis. As the 3-dimensional surface evolution was run through the stack of segmented contours, the aortic geometry was then extracted, smoothed and exported in stereolithography (STL) format for later processing. Since the resolution of the image acquisition was not high enough to detect the thickness of the aortic wall, the geometry reconstructed here was assumed to be the inner wall of the aorta. FE modelling and material properties LS-Prepost (LSTC, Livermore, CA, USA) was used to discretize the aortic inner wall with 4-node shell elements, which were subsequently offset outward with a uniform thickness of 1.6 mm to generate the hexahedral elements for the aortic wall. The assumed uniform aortic thickness was compatible with the ranges reported in the literature [14] and was also commonly assumed in previous works [2, 9]. The shell elements of the aortic inner wall were used only to generate the aortic wall brick elements and not for the following simulations. In order to determine the aortic model size, a mesh convergence analysis was performed with a pure structural simulation. A pressure of 80 mmHg was imposed on the aortic exterior walls of three models which were respectively discretized with 3.0e+4, 1.0e+5 and 3.0e+5 brick elements (resulting in 4.5e+4, 1.3e+5 and 3.8e+5 nodes). The aortic wall stresses were compared among these models. The model with 1.0e+5 elements and 1.3e+5 nodes was found to be converged (detailed in Additional file 1: Appendix S1) and was thus chosen for the following analysis. Since the initial aortic geometry was reconstructed at end-diastole, the intra-aortic pressure was about 80 mmHg [15] instead of a zero-pressure condition. The aorta stress-free configuration was achieved by extracting the resulting deformed mesh from the mesh convergence simulation in which a pressure of 80 mmHg was imposed on the aortic external wall to offset the end-diastole pressure, as previously done in [16]. In order to simulate the blood flow and study the hydrodynamic effects, a fluid domain (Fig. 2) was constructed to immerse the stress-free aorta. The fluid part was discretized with 250,000 hexahedral Eulerian elements (2.6e+5 nodes). This size was also decided after a mesh convergence analysis against the section-averaged blood velocity with a 1% threshold. Aortic FE model and boundary conditions. Aortic FE model for structural and FSI simulation (a). Boundary conditions for SA-Pre (b), SA-Down (c), SA-XY (d) and SA-2XY (e).
AA, PDA and DDA correspond to the sections of distensibility assessment The aortic wall was assumed to be a transversely isotropic and incompressible hyper-elastic material, the material equation and parameters (C1 = 191 kPa, C2 = 0.451 and C3 = C4 = 0.184) of which came from a previous numerical study [17]. The fluid Eulerian mesh was subdivided into two domains and defined as multi-material: the first domain was blood initiated inside the aorta; the second domain was the fluid part outside of the aortic wall and defined as vacuum. The two fluid domains were continuously updated as the FSI interface (i.e. the aortic wall) moved or deformed, maintaining the blood inside the aorta and vacuum outside. For simplification, the blood was assumed to be a Newtonian fluid [16] with a density of 1050 kg/m3, a dynamic viscosity set to 4.5e−3 Pa s and a bulk modulus of 2.5 GPa [16]. The relationship between blood pressure and volume (density) was described by a linear equation of state (detailed in Additional file 2: Appendix S2). Aortic FE model validation In order to validate the bio-fidelity of the aortic FE model, a structural simulation was performed on the stress-free configuration with three cycles of physiological time-dependent pressure [18] distributed on the inner surface of aortic wall. The aortic diameters of AA, PDA and DDA (see Fig. 2a) were recorded during the simulation and those during the third cycle were used to assess the aortic distensibility. The distensibility was calculated with Eq. (1) [19]: $$ \text{Distensibility}\;\left( 10^{-3}\;\text{mmHg}^{-1} \right) = \frac{A_{max} - A_{min}}{A_{min} \times P_{pulse}} = \left[ \left( \frac{D_{max}}{D_{min}} \right)^{2} - 1 \right] \Big/ P_{pulse} $$ where Amax and Amin represent the maximal and minimal aortic cross-sectional areas during the cycle, Ppulse is the pulse pressure and Dmax and Dmin are respectively the maximal and minimal aortic diameters. Boundary and loading conditions Four simulations were performed for structural analysis and two for FSI simulations, all of which were conducted on the stress-free configuration. The distal ends of the superior arteries and the descending aorta were constrained during all the simulations. The proximal end of AA was fully constrained only for the simulations without AR motion prescribed (Fig. 2b–e). All of the simulations (mesh convergence, FSI and structural analysis) were performed with the solver LS-DYNA 971 R7.1.1 (LSTC, Livermore, CA, USA) on an Intel Xeon (2.57 GHz) workstation with 40 processors. For FSI analysis, the solver LS-DYNA computes fluid and structure physics separately, and then couples the two physics until equilibrium is reached [20]. The interaction between fluid and structure domain was modelled by activating the penalty coupling algorithm in LS-DYNA, in which elastic forces were computed against the structure-fluid penetration and imposed on the structural elements. The inlet and outlet (Fig. 2a) fluid parts were subjected to a constant pressure of 120 mmHg for a static analysis (hereafter referred to as FSI-Sta). Another hydrodynamic simulation was conducted by pressurizing the inlet with 120 mmHg and prescribing an outflow of 300 mL/s at the outlet (referred to as FSI-Flow). A constant pressure and flow rate, rather than a pulsatile blood flow, were chosen for the FSI analysis in order to better compare the aortic stress levels in FSI and in structural analysis.
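As a quick check of Eq. (1), the sketch below computes distensibility from the end-diastolic and end-systolic diameters and the pulse pressure. The pulse pressure used here (about 44 mmHg) is an assumed, physiologically plausible value rather than one quoted in the paper; with the AA diameters reported in the Results below it reproduces a distensibility close to the stated 8.8e−3 mmHg−1.

```python
def distensibility(d_max_mm: float, d_min_mm: float, pulse_pressure_mmHg: float) -> float:
    """Aortic distensibility per Eq. (1), in mmHg^-1."""
    return ((d_max_mm / d_min_mm) ** 2 - 1.0) / pulse_pressure_mmHg

# Example with the AA diameters reported in the Results (11.7 mm at the beginning of systole,
# 13.8 mm at the end of systole) and an assumed pulse pressure of ~44 mmHg:
print(f"{distensibility(13.8, 11.7, 44.0) * 1e3:.1f}e-3 mmHg^-1")  # ~8.9e-3, close to the reported 8.8e-3
```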
For the structural analysis, a Cartesian coordinate system was constructed to prescribe the AA motion (Fig. 2) according to the local PDA system used for the AA in-plane measurement (Fig. 1). A pressure of 120 mmHg inside the aortic wall was the only loading for the control model (referred to as SA-Pre; see Fig. 2b). Besides the 120 mmHg pressurization inside the aortic wall, a displacement of 8.9 mm along −Z was applied to the AA proximal end to simulate the AR downward traction (referred to as SA-Down; see Fig. 2c). The corresponding displacements (3.0 mm along X, 3.1 mm along Y) obtained from the cine MRI analysis were further imposed on the AA proximal end to evaluate the effects of the AA in-plane displacement (referred to as SA-XY; see Fig. 2d). Finally, considering the hypothesis that the AA in-plane displacement equals the AR motion, the AA proximal end was prescribed with twice the magnitude of the in-plane displacement (6.0 mm along X, 6.2 mm along Y) in another simulation (referred to as SA-2XY; see Fig. 2e) to provide a conservative upper estimate of its influence.

AA in-plane motion

The mean value (± standard deviation) of the AA maximal in-plane resultant displacement was 5.5 ± 1.7 mm, with X and Y components of 3.1 ± 0.9 mm and −4.4 ± 1.7 mm respectively (Fig. 3a) under the MRI reference coordinate system. When measured in the local PDA system, the X and Y components were 3.0 ± 1.3 mm and 3.1 ± 1.5 mm respectively (Fig. 3a). The AA in-plane motion had two phases: the displacement increased and was oriented left-anteriorly during systole, and then regressed to its origin during diastole (Fig. 3b).

In-plane motion of the AA section. AA maximal in-plane motion in absolute value averaged among the volunteers (a); time-history of in-plane resultant and component displacements of a volunteer (25 years, male) (b). X-Disp-Abs and X-Disp-PDA: X component motion under the MRI and PDA reference systems; Y-Disp-Abs and Y-Disp-PDA: Y component motion under the MRI and PDA reference systems; Resultant Disp: in-plane resultant displacement

Aortic distensibility

During numerical validation, the diameters of AA, PDA and DDA were 11.7 mm, 8.5 mm and 7.5 mm respectively at the beginning of systole and 13.8 mm, 9.4 mm and 8.2 mm at the end of systole (Fig. 4a). The distensibility was correspondingly 8.8e−3 mmHg−1, 5.3e−3 mmHg−1 and 3.9e−3 mmHg−1 for AA, PDA and DDA (Fig. 4b).

Aortic diameters and distensibility for AA, PDA and DDA. Diameter time-history under the three-cycle pressure loading (a); aortic distensibility in the simulation compared with literature data (b). The vertical dotted lines indicate the moments when the diameters were recorded for the distensibility analysis

FSI and structural analysis

The distributions of von Mises, circumferential and longitudinal stress (see Additional file 2: Appendix S2) were similar among the control model (SA-Pre) and the FSI simulations (FSI-Sta and FSI-Flow). The peak von Mises and circumferential stresses occurred at the interior curvature of the aortic arch, with corresponding values of 0.24 MPa and 0.48 MPa for FSI-Sta and 0.22 MPa and 0.47 MPa for FSI-Flow. The peak longitudinal stress was 0.43 MPa for FSI-Sta and 0.42 MPa for FSI-Flow, both located at the superior artery intersection. Table 1 lists the aortic luminal volumes and the AA, PDA and DDA sectional diameters in the control and FSI simulations. The averaged stress levels, evaluated at the proximal AA section 2 cm above the AR, are also given in Table 1.
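The section-averaged stresses reported in Table 1 below (and later in Table 2) can be illustrated with a small post-processing sketch. This is indicative only and does not reproduce the authors' LS-DYNA post-processing: it assumes element-wise stress values, element volumes and element centroids have been exported from the solver, and all names are hypothetical.

```python
import numpy as np

def section_averaged_stress(element_centroids, element_volumes, element_stress,
                            section_origin, section_normal, half_thickness=1.0):
    """Volume-weighted average of an element stress measure (e.g. von Mises) over the
    elements lying within a thin slab around a cross-section plane.

    element_centroids : (N, 3) element centre coordinates (mm)
    element_volumes   : (N,)   element volumes
    element_stress    : (N,)   stress value per element (MPa)
    section_origin    : (3,)   a point on the section plane, e.g. 2 cm above the AR
    section_normal    : (3,)   unit normal of the section plane
    """
    distance = (element_centroids - section_origin) @ section_normal
    in_section = np.abs(distance) <= half_thickness      # elements close to the plane
    weights = element_volumes[in_section]
    return np.sum(weights * element_stress[in_section]) / np.sum(weights)
```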
Table 1 Aortic volumes, diameters of different sections and the averaged stress at the proximal AA section 2 cm above the AR in the control model and FSI simulations

The von Mises and circumferential stress contours were similar among all the structural simulations (Fig. 5), with corresponding peak values of approximately 0.25 MPa and 0.50 MPa located at the interior curvature of the aortic arch distal to the AA. The longitudinal stress distributions (Fig. 5) were also similar under the different loadings, with the superior artery intersection region always subjected to a peak stress of 0.43–0.51 MPa. The peak circumferential stretch ratio of the aortic wall (not shown) was 1.48 for all the structural simulations. The peak longitudinal stretch ratio (not shown) was 1.37 for SA-Pre and 1.41 for the other three structural simulations with AR motion (SA-Down, SA-XY and SA-2XY). The averaged stress levels at the proximal AA section 2 cm above the AR are given in Table 2 for the structural simulations.

von Mises, circumferential and longitudinal stress distribution for the structural simulations with different AA motions

Table 2 Averaged von Mises, circumferential and longitudinal stress for the proximal AA 2 cm above the AR in the structural simulations

Ascending aortic in-plane motion assessment

In this study, the AA in-plane motion was analysed under both the MRI and the PDA reference systems. Although it appears more explicit under the MRI system, the in-plane displacement under the PDA system may be more meaningful because of its specific anatomic reference at the PDA. The component motions under the PDA system were of similar magnitude. Under the MRI system, the Y component mean value was 42% higher than the X component, resulting in a left-anteriorly oriented motion. Similarly, the AA in-plane motion was reported to be left-anterior in 58% of cases and anterior in 43% [21]. Moreover, the Y component was found to be nearly twice the X component [21]. The resultant in-plane displacement in our study was also consistent with the published values of 5.2 ± 1.7 mm [7] and 6.7 ± 1.8 mm [8], all of which are comparable with the magnitude of the downward motion (8.9 ± 1.8 mm) [6]. This also justified the necessity of studying the effects of in-plane motion on aortic stresses. Weber et al. [21] did study the aortic 4-dimensional displacement with computed tomography angiography, but the final temporal resolution as well as the temporal reconstruction methods lacked description. In CT scanning, the normal range of temporal resolution is around 83–125 ms according to another study [22]. In contrast, the time resolution of our MRI dataset was about 15 ms, enabling a more detailed in-plane motion to be captured during a 300 ms-long systole. Admittedly, the influence of the AA through-plane displacement on its in-plane measurement had to be ignored owing to the limited computational capability to analyse the 4-dimensional dataset. Besides, the AR in-plane displacement also had to be assumed equal to the AA motion given the data currently accessible. Despite these hypotheses, our AA in-plane motion analysis can still add to the knowledge of aortic 3-dimensional motion related to cardiac pulsatility.

Model assessment against aortic distensibility

Before studying the effects of AA in-plane motion, the distensibility of the model was analysed to evaluate its bio-fidelity. Although lower than the published mean values [19, 23], the distensibility for AA and PDA was within their standard deviations. The DDA of the model seemed to be less compliant than reported [19].
The fixed boundary at the distal DDA could have limited the radial inflation of the DDA. The aorta was assumed to have a uniform thickness of 1.6 mm in this study, for modelling convenience and because of the difficulty of detecting the aortic thickness with our available MRI data. However, the descending aorta has been suggested to be about 15% thinner than the AA [24]. The relatively thicker descending aorta in the simulation was therefore speculated to cause its lower distensibility. Still, the reproduced circumferential stretch ratios at peak systole for AA, PDA and DDA were 1.32, 1.21 and 1.17 respectively. These values fall within the published range of 1.08–1.47 (median value 1.26) obtained from inflation tests at the same pressure level [14]. Therefore, to some extent, this aortic FE model was still believed to reflect realistic aortic compliance under physiological conditions.

Necessity to consider fluid–structure interactions

Fluid dynamic effects on the aortic response were also analysed by comparing the results of the control model and the FSI simulations with or without a constant blood flow. When a static pressure was imposed on the inlet and outlet, the differences against the control model were always no more than 0.1% in terms of aortic luminal volume, diameters or stresses. However, when the blood flow was simulated in the aorta, the aortic stresses, luminal volume and radial dilation were reduced by 6.4–7.5%, 3.0% and 1.1–2.9% respectively. In fact, the continual blood flow was maintained by the pressure gradient along the aortic course; in other words, the further along the aortic pathway, the lower the luminal pressure became. This could explain why the stresses and aortic diameters in the FSI simulation with flow were lower than in the control model, and why this tendency appeared to be more pronounced for the descending aorta (Table 1). Considering the different results between simulations with and without blood flow, it was necessary to mimic the non-uniform hydrodynamic pressure environment in the aorta as a consequence of the flowing blood. The wall stress resulting from blood pressure could reach 0.48 MPa, while the wall shear stress (WSS) due to the blood flow was < 1.5 Pa at the different aortic sections (see Additional file 3: Appendix S3). In other words, WSS was negligible in magnitude compared to the wall stress. Although WSS cannot lead to aortic dissection directly, its variable distribution has been suggested to induce aortic aneurysm progression and aortic tissue remodelling through a complex interplay between vascular cellular migration and extracellular matrix homeostasis [25,26,27]. Therefore, it is still essential to simulate the blood flow and its interaction with the aortic wall when studying WSS effects on aortic pathologies and diseases.

Relative contribution of aortic root motions to ascending aorta dissection

The effects of both AR axial and in-plane motion on the aortic response were evaluated by imposing downward and in-plane displacements on the proximal AA end. Similar to other studies [2, 9], the peak aortic von Mises and circumferential stresses were always located at the superior artery branches. The AR traction was previously postulated to increase the proximal AA transection risk by elevating its longitudinal stress [2, 9]. Therefore, the stress levels were also evaluated by averaging the von Mises, circumferential and longitudinal stresses of the AA section 2 cm above the AR under the different loading conditions (Table 2).
The AR axial motion contributed a 40% increase in the AA longitudinal stress, lower than the previously reported values of 50–150% [2, 9]. However, the longitudinal motion elevated the AA von Mises and longitudinal stress by 37.6% and 32.6% in our study, which contradicts the negligible influence on these stresses reported in [2, 9]. Another difference was the location of the peak aortic longitudinal stress, which was always at the interior curvature of the aortic arch in our study but at the superior artery intersections in the previous work [2, 9]. These discrepancies could be attributed to two reasons. On the one hand, both previous studies [2, 9] assumed the aortic wall to be a linear elastic material with a Young's modulus of 3 MPa. The aorta is actually a complex fiber-reinforced composite structure displaying a highly nonlinear response. Previous aortic uniaxial stretch tests [28, 29] suggested that the stiffness of young healthy samples increases continuously once the stretch ratio exceeds 1.20. With a luminal pressure of 120 mmHg, the peak aortic stretch ratio reached 1.36 in [28] and 1.48 in our study. Therefore, the aortic stiffness under a pressure of 120 mmHg, with or without AR motion, should not be treated as constant. Moreover, the elastic modulus of 3 MPa in these two studies [2, 9] may be stiff compared to the dynamic stiffness of the healthy aorta, which was found to be less than 1.5 MPa at a stress level of 74 kPa corresponding to a stretch ratio range of 1.18–1.49 [30]. Admittedly, the transversely isotropic material was a limitation of our study, but the behavior of the healthy aortic wall has been shown to be practically isotropic for stretch ratios less than 1.8 [28, 29]. The transversely isotropic hyper-elastic material, whose parameters were previously obtained by fitting aortic stretch curves [14, 17], was considered a good approximation of the aortic response within the loading levels of our study (maximal stretch ratio < 1.50). On the other hand, in both previous studies [2, 9] a toroidal coordinate system was constructed to convert the global stresses into local circumferential and longitudinal stresses. However, this approach may be questionable since the complex geometry of the aorta cannot be captured by a single global system when converting to local stresses. In this work, each element axis was oriented along the aortic longitudinal direction (see Additional file 3: Appendix S4) during the model discretization process. The circumferential and longitudinal stresses could then be obtained according to each local element system in post-processing. In this way, the conversion of the circumferential and longitudinal stresses was not affected by the aortic geometry. The circumferential stress at the AA in our work was always less than 0.20 MPa, with the longitudinal component only half of that magnitude (see Table 2). All the stresses in this study were found to be negligible compared with the yield stress (1.18 ± 0.12 MPa in the circumferential and 1.21 ± 0.09 MPa in the longitudinal direction) reported in [31] or the tensile rupture stress (1.27 MPa) of the thoracic aorta published in [28]. Furthermore, the peak stretch ratio of the AA was always less than 1.50 under all loading conditions, well below the previously recorded failure stretch of 2.1 [28]. Therefore, despite its effect of increasing the AA longitudinal stress, the AR downward motion associated with heart traction could hardly induce aortic transverse dissection or add injury risk in healthy populations.
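The local-frame stress conversion described above is essentially a standard tensor rotation. The snippet below is illustrative only and independent of any particular solver: it assumes the global Cauchy stress tensor of an element and the element's local circumferential, longitudinal and radial unit vectors are available; all names are hypothetical.

```python
import numpy as np

def to_local_stress(sigma_global, e_circ, e_long, e_rad):
    """Rotate a global Cauchy stress tensor (3x3, symmetric) into a local element frame.

    e_circ, e_long, e_rad : orthonormal unit vectors of the element's local axes.
    Returns the circumferential and longitudinal normal stress components.
    """
    R = np.vstack([e_circ, e_long, e_rad])        # rows are the local basis vectors
    sigma_local = R @ sigma_global @ R.T          # tensor transformation: sigma' = R sigma R^T
    return sigma_local[0, 0], sigma_local[1, 1]   # circumferential, longitudinal

def von_mises(sigma):
    """Von Mises equivalent stress of a 3x3 Cauchy stress tensor."""
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    return np.sqrt(1.5 * np.sum(dev * dev))
```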
The effects of AR downward motion remain to be investigated in other populations, since our results were obtained from healthy subjects (normal aortic material, morphology and hemodynamics). Compared with the downward motion, the AR in-plane displacement did not appear to alter the aortic stresses, especially for the AA segment, and this remained true even when the in-plane displacement magnitudes were doubled. Although comparable with the AR axial displacement (8.9 ± 1.8 mm), the AR in-plane motion (5.5 ± 1.7 mm) was negligible relative to the distance (130 mm in our model) between the AR and the brachiocephalic artery. Thus, the in-plane motion could barely change the aortic length (longitudinal deformation). Since aortic inflation is mainly the consequence of luminal pressure, the in-plane motion hardly induced circumferential deformation either. Without longitudinal or circumferential deformation, the stress level would not be modified. Although neither the AR axial nor the in-plane motion appeared to elevate the aortic dissection risk in this study, additional mechanisms should be considered to account for aortic dissection. This injury should still be related to factors that increase aortic wall stress and reduce aortic strength. The aortic stress can be increased by factors such as hypertension and aortic dilation. Cardiovascular diseases such as aortic insufficiency would increase the AR axial motion through ventricular compensation [2]. This increased motion could further elevate the aortic wall stress in subjects with higher aortic stiffness attributable to older age and vascular diseases (e.g. Marfan syndrome and atherosclerosis). Moreover, in these vascular diseases the aortic strength would also be jeopardized as the aortic tissue remodels. When the local aortic stress exceeds what the aortic tissue can resist, aortic dissection may occur.

The AR in-plane motion was analysed with the MRI data from 25 volunteers. The in-plane displacement increased during systole and regressed during diastole. The in-plane movement was found to be comparable to the axial motion, with a mean value (± standard deviation) of 5.5 ± 1.7 mm. The X and Y components of the in-plane motion were respectively 3.1 ± 0.9 mm and −4.4 ± 1.7 mm under the MRI reference system, and 3.0 ± 1.3 mm and 3.1 ± 1.5 mm under the PDA system. Blood flow should be simulated with an FSI approach, considering the lower values of aortic diameters, volumes and stresses obtained as a result of the hydrodynamics. The AR downward displacement did not increase the AA's vulnerability to dissection, since the resulting 40% increase in longitudinal stress was still trivial compared with the aortic yield stress. By inducing negligible aortic circumferential or axial deformation, the AR in-plane motion had no effect on the aortic stress levels.

Howard DP, Sideso E, Handa A, Rothwell PM. Incidence, risk factors, outcome and projected future burden of acute aortic dissection. Ann Cardiothorac Surg. 2014;3(3):278–84. Beller CJ, Labrosse MR, Thubrikar MJ, Robicsek F. Role of aortic root motion in the pathogenesis of aortic dissection. Circulation. 2004;109(6):763–9. Beller C, Labrosse M, Thubrikar M, Robicsek F. Finite element modeling of the thoracic aorta: including aortic root motion to evaluate the risk of aortic dissection. J Med Eng Technol. 2008;32(2):167–70. Nathan DP, Xu C, Gorman JH, Fairman RM, Bavaria JE, Gorman RC, Chandran KB, Jackson BM. Pathogenesis of acute aortic dissection: a finite element stress analysis. Ann Thorac Surg. 2011;91(2):458–63. Fillinger MF, Marra SP, Raghavan ML, Kennedy FE.
Prediction of rupture risk in abdominal aortic aneurysm during observation: wall stress versus diameter. J Vasc Surg. 2003;37(4):724–32. Kozerke S, Scheidegger MB, Pedersen EM, Boesiger P. Heart motion adapted cine phase-contrast flow measurements through the aortic valve. Magn Reson Med. 1999;42(5):970–8. Weber TF, Ganten M-K, Böckler D, Geisbüsch P, Kauczor H-U, von Tengg-Kobligk H. Heartbeat-related displacement of the thoracic aorta in patients with chronic aortic dissection type B: quantification by dynamic CTA. Eur J Radiol. 2009;72(3):483–8. Rengier F, Weber TF, Henninger V, Böckler D, Schumacher H, Kauczor H-U, von Tengg-Kobligk H. Heartbeat-related distension and displacement of the thoracic aorta in healthy volunteers. Eur J Radiol. 2012;81(1):158–64. Singh S, Xu X, Pepper J, Izgi C, Treasure T, Mohiaddin R. Effects of aortic root motion on wall stress in the Marfan aorta before and after personalised aortic root support (PEARS) surgery. J Biomech. 2016;49(10):2076–84. Liu X, Peng C, Xia Y, Gao Z, Xu P, Wang X, Xian Z, Yin Y, Jiao L, Wang D. Hemodynamics analysis of the serial stenotic coronary arteries. Biomed Eng Online. 2017;16(1):127–42. Shi C, Zhang D, Cao K, Zhang T, Luo L, Liu X, Zhang H. A study of noninvasive fractional flow reserve derived from a simplified method based on coronary computed tomography angiography in suspected coronary artery disease. Biomed Eng Online. 2017;16(1):43–57. Liu X, Gao Z, Xiong H, Ghista D, Ren L, Zhang H, Wu W, Huang W, Hau WK. Three-dimensional hemodynamics analysis of the circle of Willis in the patient-specific nonintegral arterial structures. Biomech Model Mechanobiol. 2016;15(6):1439–56. Bal-Theoleyre L, Lalande A, Kober F, Giorgi R, Collart F, Piquet P, Habib G, Avierinos J-F, Bernard M, Guye M. Aortic function's adaptation in response to exercise-induced stress assessing by 1.5 T MRI: a pilot study in healthy volunteers. PLoS ONE. 2016;11(6):e0157704. https://doi.org/10.1371/journal.pone.0157704. Labrosse MR, Beller CJ, Mesana T, Veinot JP. Mechanical behavior of human aortas: experiments, material constants and 3-D finite element modeling including residual stress. J Biomech. 2009;42(8):996–1004. Gasser TC, Nchimi A, Swedenborg J, Roy J, Sakalihasan N, Böckler D, Hyhlik-Dürr A. A novel strategy to translate the biomechanical rupture risk of abdominal aortic aneurysms to their equivalent diameter risk: method and retrospective validation. Eur J Vasc Endovasc Surg. 2014;47(3):288–95. Sundaram GBK, Balakrishnan KR, Kumar RK. Aortic valve dynamics using a fluid structure interaction model-the physiology of opening and closing. J Biomech. 2015;48(10):1737–44. Labrosse MR, Lobo K, Beller CJ. Structural analysis of the natural aortic valve in dynamics: from unpressurized to physiologically loaded. J Biomech. 2010;43(10):1916–22. Kim HJ, Vignon-Clementel IE, Figueroa CA, LaDisa JF, Jansen KE, Feinstein JA, Taylor CA. On coupling a lumped parameter heart model and a three-dimensional finite element aorta model. Ann Biomed Eng. 2009;37(11):2153–69. Voges I, Jerosch-Herold M, Hedderich J, Pardun E, Hart C, Gabbert DD, Hansen JH, Petko C, Kramer H-H, Rickers C. Normal values of aortic dimensions, distensibility, and pulse wave velocity in children and young adults: a cross-sectional study. J Cardiovasc Magn Reson. 2012;14(1):77–89. Hallquist JO. LS-DYNA® keyword user's manual: volumes I, II, and III LSDYNA R7. 1. Livermore Software Technology Corporation, Livermore (LSTC), Livermore, California 2014; 1265. 
Weber TF, Müller T, Biesdorf A, Wörz S, Rengier F, Heye T, Holland-Letz T, Rohr K, Kauczor H-U, von Tengg-Kobligk H. True four-dimensional analysis of thoracic aortic displacement and distension using model-based segmentation of computed tomography angiography. Int J Card Imaging. 2014;30(1):185–94. Lin E, Alessio A. What are the basic concepts of temporal, contrast, and spatial resolution in cardiac CT? J Cardiovasc Comput Tomogr. 2009;3(6):403–8. Redheuil A, Yu W-C, Wu CO, Mousseaux E, de Cesare A, Yan R, Kachenoura N, Bluemke D, Lima JA. Reduced ascending aortic strain and distensibility. Hypertension. 2010;55(2):319–26. Mensel B, Kühn J-P, Schneider T, Quadrat A, Hegenscheid K. Mean thoracic aortic wall thickness determination by cine MRI with steady-state free precession: validation with dark blood imaging. Acad Radiol. 2013;20(8):1004–8. Dua MM, Dalman RL. Hemodynamic influences on abdominal aortic aneurysm disease: application of biomechanics to aneurysm pathophysiology. Vasc Pharmacol. 2010;53(1):11–21. Papaioannou TG, Karatzis EN, Vavuranakis M, Lekakis JP, Stefanadis C. Assessment of vascular wall shear stress and implications for atherosclerotic disease. Int J Cardiol. 2006;113(1):12–8. Bäck M, Gasser TC, Michel J-B, Caligiuri G. Biomechanical factors in the biology of aortic wall and aortic valve diseases. Cardiovasc Res. 2013;99(2):232–41. García-Herrera CM, Celentano DJ. Modelling and numerical simulation of the human aortic arch under in vivo conditions. Biomech Model Mechanobiol. 2013;12(6):1143–54. García-Herrera CM, Celentano DJ, Cruchaga MA. Bending and pressurisation test of the human aortic arch: experiments, modelling and simulation of a patient-specific case. Comput Methods Biomech Biomed Eng. 2013;16(8):830–9. Azadani AN, Chitsaz S, Mannion A, Mookhoek A, Wisneski A, Guccione JM, Hope MD, Ge L, Tseng EE. Biomechanical properties of human ascending thoracic aortic aneurysms. Ann Thorac Surg. 2013;96(1):50–8. Vorp DA, Schiro BJ, Ehrlich MP, Juvonen TS, Ergin MA, Griffith BP. Effect of aneurysm on the tensile strength and biomechanical behavior of the ascending thoracic aorta. Ann Thorac Surg. 2003;75(4):1210–4. WW carried out all the FSI simulations and drafted the manuscript. WW and ME performed the statistics. WW, ME, CJFK and M. Behr performed the study design and overall investigation. SR, FK, M. Bernard and AJ conducted the subject recruitment and the clinical data collection. All authors read and approved the final manuscript. The dataset used and analyzed in the current study are available from the corresponding author on reasonable request. Laboratoire de Biomécanique Appliquée, Aix-Marseille Université, IFSTTAR, LBA, UMR T24, 51 Bd. P. Dramard, 13015, Marseille, France Wei Wei, Morgane Evin, Cyril J. F. Kahn & Michel Behr Aix-Marseille Université, CNRS, CRMBM, UMR 7339, Marseille, France Stanislas Rapacchi, Frank Kober, Monique Bernard & Alexis Jacquier Morgane Evin Stanislas Rapacchi Frank Kober Monique Bernard Alexis Jacquier Cyril J. F. Kahn Michel Behr Correspondence to Wei Wei. Additional file 1: Appendix S1. Mesh convergence analysis. The equation of state (EOS) for blood. Stress distribution of structural and FSI simulations. Blood velocity profile and WSS at AA, PDA and DDA. Element axis orientation. Wei, W., Evin, M., Rapacchi, S. et al. Investigating heartbeat-related in-plane motion and stress levels induced at the aortic root. BioMed Eng OnLine 18, 19 (2019). 
https://doi.org/10.1186/s12938-019-0632-7 Aortic root motion Magnetic resonance imaging Aortic stress Finite element Fluid–structure interaction
CommonCrawl
Passives Test 1 Often the massive structure of a building acts as a heat sink. Many massive buildings feel comfortably cool on hot summer days. During the night, these buildings give up their heat by convection to the cool night air and by radiation to the cold sky, thus recharging their heat-sink capability for the next day. How heat sinks work in buildings The human being is a biological machine that burns food as a fuel and generates heat as a by-product. The basic biological process by which we generate heat F = 9/5C + 32 Celsius to Fahrenheit the most northern line at which the sun is directly overhead at least one day of the year the line of latitude that's halfway between the North and South Poles, and divides the Earth into the Northern and Southern Hemispheres. Tropic of Capricorn the most southern line at which the sun is directly overhead at least one day of the year Arctic circle the most northern line at which there's at least one day that the sun doesn't rise or doesn't set Antarctic Circle the most southern line at which there's one day the sun doesn't rise or doesn't set (F - 32) * 5/9 = C Fahrenheit to Celsius formula Emitted from the sun's surface +/-10,500 degrees Fahrenheit Transmittance The 4 properties of Radiant Heat the situation in which radiation passes through material the situation in which radiation is converted into sensible heat within the material the situation in which radiation is reflected off the surface the situation in which radiation is given off by the surface, thereby reducing the sensible heat content of the object. The interdisciplinary study of systems. A system is a cohesive conglomeration of interrelated and interdependent parts that is either natural or man-made. Passives Building Systems As opposed to "Active" building systems, this does not rely on synthetically-produced energy to shape the environment they create. The design, placement, or materials optimize the use of heat or light directly from the sun. Traps heat What's the advantage of stone in architecture? Spaceship Earth R. Buckminster Fuller relates Earth to a spaceship flying through space. The spaceship has a finite amount of resources and cannot be resupplied. The Ecosphere A planetary closed ecological system. In this global ecosystem, the various forms of energy and matter that constitute a given planet interact on a continual basis Geosphere, Biosphere, Atmosphere, Magnetosphere The four components of the Ecosphere The Whole Earth Catalog Stewart Brand, in 1966, initiated a public campaign to have NASA release the then-rumored satellite photo of the sphere of Earth as seen from space, the first image of the "Whole Earth." Vernacular Architecture Native or peculiar to a particular country or locality. This architecture is concerned with ordinary domestic and functional buildings rather than the essentially monumental Anonymous, Adaptive, Iterative, Evolving, Responsive to Local Conditions, and Efficient in use of locally-available materials, energies, and technologies Characteristics of vernacular architecture Site (geology, hydrology, soils) Topography (slope, accessibility) Climate (temperature, humidity, solar angles) Weather (rain, snow, wind, solar exposure) What are some important factors in the ways buildings are designed in response to the environment around them?
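The temperature-conversion formulas quoted in the cards above (F = 9/5·C + 32 and C = (F − 32)·5/9) are easy to sanity-check in code. A minimal sketch, added here purely for illustration:

```python
def c_to_f(celsius):
    """Celsius to Fahrenheit: F = 9/5*C + 32."""
    return 9.0 / 5.0 * celsius + 32.0

def f_to_c(fahrenheit):
    """Fahrenheit to Celsius: C = (F - 32) * 5/9."""
    return (fahrenheit - 32.0) * 5.0 / 9.0

assert c_to_f(100) == 212.0 and f_to_c(32) == 0.0
```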
Bioclimatic design approach Architectural design methods could take advantage of the climate through the right application of design elements and building technology to save energy as well as to ensure comfortable conditions inside buildings Arctic Circle, Tropic of Cancer, Equator, Tropic of Capricorn, and Antarctic Circle Important lines of latitude caused by a significant ratio of luminance between the task and the source Macroclimate General climate of a region. Temperature extremes, Wind Velocity/Direction, Precipitation, Cloudiness, and Hours and days of sun North Atlantic Current clockwise ocean current that flows between Northern Europe and the Caribbean Sea Thermohaline System Density of Water caused by temperature and salinity environmental conditions within a small area that differ significantly from the climate of the surrounding area Tropical, Dry, Temperate, Continental, Polar 5 major climate zones in the Köppen-Geiger Climate Classification System Windward side of a mountain (San Fran) cool and moist due to expansion (low pressure) Leeward side of a mountain (Reno) dry and warm due to compression (high pressure) 1. Elevation above sea level 2. Form of land 3. Size, shape, and proximity of bodies of water 4. Soil types 5. Vegetation 6. Man-made structures Main factors responsible for making the microclimate deviate from macroclimate. Wrote "Design with Climate" It is the temperature range in which the body doesn't shiver or sweat, but has an idiomatic sense of a place where people feel comfortable, where they can avoid the worries of the world Thermal Lag process by which a body stores heat and releases it at a later time The ability of an object to transfer heat or electricity to another object This is vital for understanding both solar energy and climate change. This effect is due to the fact that the type of interaction that occurs between a material and radiant energy depends on the wavelength of that radiation. Energy from steam or hot water produced from hot or molten underground rocks Latitude, Elevation, and Proximity to major bodies of water Macroclimate is dependent upon what 3 things? Equal to the still-air temperature that would have the same cooling effect on a human being as does the combined effect of the actual temperature and wind speed. A horizontal angle measured clockwise from north or south. Solar Noon Time of day when the Sun reaches its highest point in the sky for a given place on Earth. Plane that contains Earth's orbit around the Sun. Solar Window 9am - 3pm Solar Time. Area of sky between sun paths at summer solstice and winter solstice for a particular location A number that indicates how much of the solar radiation is reflected from a surface. Reflectiveness of the surface measured from 0.0 to 1.0 (proportion of radiosity to irradiance) Solar Envelope Developed by Ralph Knowles. It has potential for not only ensuring high quality solar access, but also generating attractive architecture Distance east or west of the prime meridian, measured in degrees distance north or south of the Equator, measured in degrees Solar Altitude the angle of the sun above the horizon at any given latitude Distance from Sun 92.96 million miles The atmosphere acts like a greenhouse by allowing most of the solar radiation to enter but blocking the long-wave infrared radiation from leaving the planet What's the effect of atmosphere on solar radiation?
Solar hot water heating the conversion of sunlight into heat for water heating, requires access year-round Solar space heating requires access in only the winter (bottom half of the solar window) The random motion of molecules is a form of energy. Can be measured with a thermometer The amount of heat needed to bring about a one degree change in temperature which results in a PHASE CHANGE in a volume of material is -much- higher than the amount needed to change the temperature one degree when no phase change is being caused A change of state is also known as A form of electromagnetic radiation. All bodies facing an air space or a vacuum emit and absorb radiant energy continuously. Hot bodies lose heat because they emit more energy than they absorb A large amount of heat is required to make sweat evaporate from the skin. That is why sweating is such an effective cooling mechanism. Equilibrium Temperature a consequence of both the absorptance and the emittance characteristics of a material. 1. Radiation 2. Conduction 3. Convection The 4 types of Heat Transfer A component that increases the heat flow away from a hot device The amount of heat required to raise the temperature of a material by 1 degree. Thermal Resistance The property for a material to resist the flow of heat through it. Largely a function of the number and size of air spaces the material contains. Thermal Conduction The transfer of heat internal energy by microscopic collisions of particles and movement of electrons within a body Thermal convection The transfer of heat from one place to another by the movement of fluids. Usually the dominant form of heat transfer in liquids and gases. Time Lag the delay of heat flow Affected by the nature of the material with which it interacts and especially the surface of the material U-Value The amount of heat that can be lost from a building. The lower the value of this, the better the heat efficiency. U = 1/RT R-Values measure the thermal resistance of a given material. R = ft^2 × °F/Btu/h (Btu/h = heat flow per hour) Excessive heat loss Insufficient heat loss This factor will determine the rate at which heat is lost or gained to / from the air - mostly by convection. The comfort range for 80% of people extends from 68° in the winter to 78° in the summer. The range of indoor air temperature that most building occupants are comfortable at. We must understand not only the heat dissipation mechanisms of the human body. 1. Air temperature 2. Relative Humidity 3. Air Movement 4. Mean Radiant temperature (MRT) Four environmental conditions that allow the heat to be lost The ratio of the partial pressure of water vapor in the mixture to the equilibrium vapor pressure of water over a flat surface of pure water at a given temperature (an indication of the evaporation rate) Mean Radiant Temperature (MRT) The weighted average of all of the temperatures of all of the surfaces visible from a given position. When it differs greatly from the air temperature, its effect must be considered Energy that is transferred from one body to another as the result of a difference in temperature Air Movement Affects the heat loss rate by both convection and evaporation Thermal Stratification Occurs when two streams of water with different temperatures come into contact. Their temperature difference causes the colder and heavier water to settle at the bottom of the pipe while allowing the warmer and lighter water to float over the colder water.
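The U-value and R-value relationship noted in the cards above (U = 1/RT, where RT is the total thermal resistance of the assembly) can be illustrated with a short sketch. The layer R-values below are hypothetical, purely for demonstration:

```python
def u_value(layer_r_values):
    """Overall U-value of an assembly as the reciprocal of its total R-value (U = 1/RT)."""
    r_total = sum(layer_r_values)
    return 1.0 / r_total

# Hypothetical wall build-up (R-values in ft^2*F*h/Btu): air films, sheathing, insulation, drywall
print(u_value([0.17, 0.45, 13.0, 0.45, 0.68]))  # ~0.068 Btu/(h*ft^2*F)
```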
Dry Bulb Temperatures Read by a thermometer freely exposed to the air, but shielded from radiation and moisture. This is the true thermodynamic temperature and is what is usually thought of as the air temperature. Wet Bulb Temperatures Read by a thermometer covered in water-soaked cloth over which air is passed. At 100% relative humidity, the wet-bulb temperature is equal to the dry-bulb temperature and it is lower at lower humidity. Absolute Humidity Total mass of water vapor present in a given volume or mass of air. Does not consider temperature. When an air sample is cooled sufficiently, its RH increases until it reaches 100 percent, which is also called saturation Once a mass of air is cooled past the Dew Point Temperature it reaches what The change of state from a gas to a liquid Any form of water that falls from clouds and reaches Earth's surface. The total heat content of air. (Sensible + Latent Heat) Psychrometric chart The horizontal axis describes the temperature of the air, the vertical axis describes the actual amount of water vapor in the air, called humidity ratio or specific humidity, and the curved lines describe the relative humidity (RH). Adiabatic Change In evaporative cooling, the increase in Latent Heat equals the decrease in Sensible Heat. This is a change in which the total heat content of the air remains constant. Heat loss Some heat is lost by exhaling warm, moist air from the lungs, but most of the body's heat flow is through the skin The effect of Metabolic Rate To maintain thermal equilibrium, we must lose heat at the same rate at which our metabolic processes produce it clothes, buildings, canopy bed, fire place, drapes, etc... It has potential to cause great discomfort or comfort. In the summer, it is a great asset and in the winter a liability. The effect of Air Movement on comfort The insulating effect of mass If the temperature difference across a massive material fluctuates in certain specific ways, then the massive material will act as if it had high thermal resistance Sky vault an imaginary dome placed over the building site, where lines indicate the sun's path
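The dew-point and relative-humidity relationship described in the cards above can be approximated numerically. The sketch below uses the Magnus approximation, one common empirical formula that is not part of the original study set; the constants shown are typical published values.

```python
import math

# Magnus approximation constants (one common choice; valid roughly for 0-60 degC)
A, B = 17.27, 237.7  # B in degrees Celsius

def dew_point(temp_c, rh_percent):
    """Approximate dew-point temperature (degC) at which the air sample would reach 100% RH."""
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

print(round(dew_point(25.0, 50.0), 1))  # roughly 13.9 degC for 25 degC air at 50% RH
```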
CommonCrawl
This article is about Earth's natural satellite. For moons in general, see Natural satellite. For other uses, see Moon (disambiguation).

Earth's natural satellite
Full moon seen from Earth
Designations: Earth I; Selene (poetic); Cynthia (poetic)
Adjectives: Selenian (poetic); Cynthian (poetic); Moonly (poetic)

Orbital characteristics (Epoch J2000)
Perigee: 356400–370400 km
Semi-major axis: 384399 km (0.00257 AU)[1]
Orbital eccentricity: 0.0549[1]
Orbital period: 27.321661 d (27 d 7 h 43 min 11.5 s)[1]
Synodic period: 29 d 12 h 44 min 2.9 s
Average orbital speed: 1.022 km/s
Inclination: 5.145° to the ecliptic[2][a]
Longitude of ascending node: regressing by one revolution in 18.61 years
Argument of perigee: progressing by one revolution in 8.85 years
Satellite of: Earth[b][3]

Physical characteristics
Mean radius: 0.2727 of Earth's[1][4][5]
Equatorial radius: 0.2725 of Earth's[4]
Circumference: 10921 km (equatorial)
Surface area: 3.793×10^7 km^2 (0.074 of Earth's)
Volume: 2.1958×10^10 km^3 (0.020 of Earth's)[4]
Mass: 7.342×10^22 kg (0.012300 of Earth's)[1][4][6]
Mean density: 3.344 g/cm^3[1][4] (0.606 × Earth)
Surface gravity: 1.62 m/s^2 (0.1654 g)[4]
Moment of inertia factor: 0.3929±0.0009[7]
Escape velocity: 2.38 km/s (8600 km/h; 5300 mph)
Sidereal rotation period: 27.321661 d (synchronous)
Equatorial rotation velocity: 4.627 m/s
Axial tilt: 1.5424° to ecliptic; 6.687° to orbit plane[2]; 24° to Earth's equator[8]
North pole right ascension: 266.86°[9]
North pole declination: 65.64°[9]
Surface temp.: 150 K; 230 K[11]
Apparent magnitude: −2.5 to −12.9[c]; −12.74 (mean full moon)[4]
Angular diameter: 29.3 to 34.1 arcminutes[4][d]

Atmosphere[12]
Surface pressure: 10^−7 Pa (1 picobar) (day); 10^−10 Pa (1 femtobar) (night)[e]

The Moon is Earth's only proper natural satellite. It is one quarter the diameter of Earth (comparable to the width of Australia[13]), making it the largest natural satellite in the Solar System relative to the size of its planet. It is the fifth largest satellite in the Solar System and is larger than any dwarf planet. The Moon orbits Earth at an average lunar distance of 384,400 km (238,900 mi),[14] or 1.28 light-seconds. Its gravitational influence produces Earth's tides and slightly lengthens Earth's day. It is considered a planetary-mass moon and a differentiated rocky body; its surface gravity is about one-sixth of Earth's (0.1654 g) and it lacks any significant atmosphere, hydrosphere, or magnetic field. Jupiter's moon Io is the only satellite in the Solar System known to have a higher surface gravity and density. The Moon's orbit around Earth has a sidereal period of 27.3 days, and a synodic period of 29.5 days. The synodic period drives its lunar phases, which form the basis for the months of a lunar calendar. The Moon is tidally locked to Earth, which means that the length of a full rotation of the Moon on its own axis (a lunar day) is the same as the synodic period, resulting in its same side (the near side) always facing Earth. That said, 59% of the total lunar surface can be seen from Earth through shifts in perspective (its libration).[15] The near side of the Moon is marked by dark volcanic maria ("seas"), which fill the spaces between bright ancient crustal highlands and prominent impact craters. The lunar surface is relatively non-reflective, with a reflectance just slightly brighter than that of worn asphalt. However, because it reflects direct sunlight, is contrasted by the relatively dark sky, and has a large apparent size when viewed from Earth, the Moon is the brightest celestial object in Earth's sky after the Sun.
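Two of the figures quoted above can be cross-checked with a few lines of arithmetic: the roughly 29.5-day synodic month follows from the 27.32-day sidereal month and Earth's roughly 365.25-day orbital period, and the mean density follows from the quoted mass and volume. A small illustrative sketch, not part of the article itself:

```python
# Synodic period from sidereal periods: 1/T_syn = 1/T_sidereal - 1/T_earth_orbit
t_sidereal_moon = 27.321661      # days
t_earth_year = 365.25            # days (approximate)
t_synodic = 1.0 / (1.0 / t_sidereal_moon - 1.0 / t_earth_year)
print(round(t_synodic, 2))       # ~29.53 days

# Mean density from the quoted mass and volume
mass_kg = 7.342e22
volume_m3 = 2.1958e10 * 1e9      # km^3 -> m^3
print(round(mass_kg / volume_m3))  # ~3343 kg/m^3, i.e. ~3.34 g/cm^3
```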
The Moon's apparent size is nearly the same as that of the Sun, allowing it to cover the Sun almost completely during a total solar eclipse. The first manmade object to reach the Moon was the Soviet Union's Luna 2 uncrewed spacecraft in 1959; this was followed by the first successful soft landing by Luna 9 in 1966. The only human lunar missions to date have been those of the United States' NASA Apollo program, which conducted the first manned lunar orbiting mission with Apollo 8 in 1968. Beginning with Apollo 11, six human landings took place between 1969 and 1972. These missions returned lunar rocks which have been used to develop a detailed geological understanding of the Moon's origins, internal structure, and subsequent history; the most widely accepted origin explanation posits that the Moon formed about 4.51 billion years ago, not long after Earth, out of the debris from a giant impact between the planet and a hypothetical Mars-sized body called Theia. Both the Moon's natural prominence in the earthly sky and its regular cycle of phases as seen from Earth have provided cultural references and influences for human societies and cultures throughout history. Such cultural influences can be found in language, calendar systems, art, and mythology.

Name and etymology
See also: List of lunar deities
The Moon, tinted reddish, during a lunar eclipse
During the lunar phases, only portions of the Moon can be observed from Earth.

The usual English proper name for Earth's natural satellite is simply the Moon, with a capital M.[16][17] The noun moon is derived from Old English mōna, which (like all its Germanic cognates) stems from Proto-Germanic *mēnōn,[18] which in turn comes from Proto-Indo-European *mēnsis "month"[19] (from earlier *mēnōt, genitive *mēneses) which may be related to the verb "measure" (of time).[20] Occasionally, the name Luna /ˈluːnə/ is used in scientific writing[21] and especially in science fiction to distinguish the Earth's moon from others, while in poetry "Luna" has been used to denote personification of Earth's moon.[22] Cynthia /ˈsɪnθiə/ is another poetic name, though rare, for the Moon personified as a goddess,[23] while Selene /səˈliːniː/ (literally "Moon") is the Greek goddess of the Moon. The usual English adjective pertaining to the Moon is "lunar", derived from the Latin word for the Moon, lūna.
The adjective selenian /səliːniən/,[24] derived from the Greek word for the Moon, σελήνη selēnē, and used to describe the Moon as a world rather than as an object in the sky, is rare,[25] while its cognate selenic was originally a rare synonym[26] but now nearly always refers to the chemical element selenium.[27] The Greek word for the Moon does however provide us with the prefix seleno-, as in selenography, the study of the physical features of the Moon, as well as the element name selenium.[28][29] The Greek goddess of the wilderness and the hunt, Artemis, equated with the Roman Diana, one of whose symbols was the Moon and who was often regarded as the goddess of the Moon, was also called Cynthia, from her legendary birthplace on Mount Cynthus.[30] These names – Luna, Cynthia and Selene – are reflected in technical terms for lunar orbits such as apolune, pericynthion and selenocentric. Near side of the Moon Far side of the Moon Lunar north pole Lunar south pole Main articles: Origin of the Moon, Giant-impact hypothesis, and Circumplanetary disk The Moon formed 4.51 billion years ago,[f] or even 100 million years earlier, some 50 million years after the origin of the Solar System, as new research suggests.[31] Several forming mechanisms have been proposed,[32] including the fission of the Moon from Earth's crust through centrifugal force[33] (which would require too great an initial rotation rate of Earth),[34] the gravitational capture of a pre-formed Moon[35] (which would require an unfeasibly extended atmosphere of Earth to dissipate the energy of the passing Moon),[34] and the co-formation of Earth and the Moon together in the primordial accretion disk (which does not explain the depletion of metals in the Moon).[34] These hypotheses also cannot account for the high angular momentum of the Earth–Moon system.[36] The evolution of the Moon and a tour of the Moon The prevailing theory is that the Earth–Moon system formed after a giant impact of a Mars-sized body (named Theia) with the proto-Earth. The impact blasted material into Earth's orbit and then the material accreted and formed the Moon.[37][38] This theory best explains the evidence. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: "You have eighteen months. Go back to your Apollo data, go back to your computer, do whatever you have to, but make up your mind. Don't come to our conference unless you have something to say about the Moon's birth." At the 1984 conference at Kona, Hawaii, the giant-impact hypothesis emerged as the most consensual. Before the conference, there were partisans of the three "traditional" theories, plus a few people who were starting to take the giant impact seriously, and there was a huge apathetic middle who didn't think the debate would ever be resolved. Afterward, there were essentially only two groups: the giant impact camp and the agnostics.[39] Giant impacts are thought to have been common in the early Solar System. Computer simulations of giant impacts have produced results that are consistent with the mass of the lunar core and the angular momentum of the Earth–Moon system. 
These simulations also show that most of the Moon derived from the impactor, rather than the proto-Earth.[40] However, more recent simulations suggest a larger fraction of the Moon derived from the proto-Earth.[41][42][43][44] Other bodies of the inner Solar System such as Mars and Vesta have, according to meteorites from them, very different oxygen and tungsten isotopic compositions compared to Earth. However, Earth and the Moon have nearly identical isotopic compositions. The isotopic equalization of the Earth-Moon system might be explained by the post-impact mixing of the vaporized material that formed the two,[45] although this is debated.[46] The impact released a lot of energy and then the released material re-accreted into the Earth–Moon system. This would have melted the outer shell of Earth, and thus formed a magma ocean.[47][48] Similarly, the newly formed Moon would also have been affected and had its own lunar magma ocean; its depth is estimated from about 500 km (300 miles) to 1,737 km (1,079 miles).[47] While the giant-impact theory explains many lines of evidence, some questions are still unresolved, most of which involve the Moon's composition.[49]

Oceanus Procellarum ("Ocean of Storms"); Ancient rift valleys – rectangular structure (visible – topography – GRAIL gravity gradients); Ancient rift valleys – context; Ancient rift valleys – closeup (artist's concept)

In 2001, a team at the Carnegie Institution of Washington reported the most precise measurement of the isotopic signatures of lunar rocks.[50] The rocks from the Apollo program had the same isotopic signature as rocks from Earth, differing from almost all other bodies in the Solar System. This observation was unexpected, because most of the material that formed the Moon was thought to come from Theia and it was announced in 2007 that there was less than a 1% chance that Theia and Earth had identical isotopic signatures.[51] Other Apollo lunar samples showed in 2012 the same titanium isotope composition as Earth,[52] which conflicts with what is expected if the Moon formed far from Earth or is derived from Theia. These discrepancies may be explained by variations of the giant-impact theory. The Moon is a very slightly scalene ellipsoid due to tidal stretching, with its long axis displaced 30° from facing the Earth (due to gravitational anomalies from impact basins). Its shape is more elongated than current tidal forces can account for. This 'fossil bulge' indicates that the Moon solidified when it orbited at half its current distance to the Earth, and that it is now too cold for its shape to adjust to its orbit.[53]

Internal structure
Main article: Internal structure of the Moon

Lunar surface chemical composition[54]
Compound (formula): Maria / Highlands
Silica (SiO2): 45.4% / 45.5%
Alumina (Al2O3): 14.9% / 24.0%
Lime (CaO): 11.8% / 15.9%
Iron(II) oxide (FeO): 14.1% / 5.9%
Magnesia (MgO): 9.2% / 7.5%
Titanium dioxide (TiO2): 3.9% / 0.6%
Sodium oxide (Na2O): 0.6% / 0.6%

The Moon is a differentiated body. It has a geochemically distinct crust, mantle, and core. The Moon has a solid iron-rich inner core with a radius possibly as small as 240 kilometres (150 mi) and a fluid outer core primarily made of liquid iron with a radius of roughly 300 kilometres (190 mi).
Around the core is a partially molten boundary layer with a radius of about 500 kilometres (310 mi).[55][56] This structure is thought to have developed through the fractional crystallization of a global magma ocean shortly after the Moon's formation 4.5 billion years ago.[57] Crystallization of this magma ocean would have created a mafic mantle from the precipitation and sinking of the minerals olivine, clinopyroxene, and orthopyroxene; after about three-quarters of the magma ocean had crystallised, lower-density plagioclase minerals could form and float into a crust atop.[58] The final liquids to crystallise would have been initially sandwiched between the crust and mantle, with a high abundance of incompatible and heat-producing elements.[1] Consistent with this perspective, geochemical mapping made from orbit suggests the crust of mostly anorthosite.[12] The Moon rock samples of the flood lavas that erupted onto the surface from partial melting in the mantle confirm the mafic mantle composition, which is more iron-rich than that of Earth.[1] The crust is on average about 50 kilometres (31 mi) thick.[1] The Moon is the second-densest satellite in the Solar System, after Io.[59] However, the inner core of the Moon is small, with a radius of about 350 kilometres (220 mi) or less,[1] around 20% of the radius of the Moon. Its composition is not well understood, but is probably metallic iron alloyed with a small amount of sulfur and nickel; analyses of the Moon's time-variable rotation suggest that it is at least partly molten.[60] Surface geology Main articles: Topography of the Moon, Geology of the Moon, Moon rock, and List of lunar features The Topographic Globe of the Moon Geological features of the Moon (near side / north pole at left, far side / south pole at right) Topography of the Moon STL 3D model of the Moon with 10× elevation exaggeration rendered with data from the Lunar Orbiter Laser Altimeter of the Lunar Reconnaissance Orbiter The topography of the Moon has been measured with laser altimetry and stereo image analysis.[61] Its most visible topographic feature is the giant far-side South Pole–Aitken basin, some 2,240 km (1,390 mi) in diameter, the largest crater on the Moon and the second-largest confirmed impact crater in the Solar System.[62][63] At 13 km (8.1 mi) deep, its floor is the lowest point on the surface of the Moon.[62][64] The highest elevations of the surface are located directly to the northeast, and it has been suggested might have been thickened by the oblique formation impact of the South Pole–Aitken basin.[65] Other large impact basins such as Imbrium, Serenitatis, Crisium, Smythii, and Orientale also possess regionally low elevations and elevated rims.[62] The far side of the lunar surface is on average about 1.9 km (1.2 mi) higher than that of the near side.[1] The discovery of fault scarp cliffs by the Lunar Reconnaissance Orbiter suggest that the Moon has shrunk within the past billion years, by about 90 metres (300 ft).[66] Similar shrinkage features exist on Mercury. A recent study of over 12000 images from the orbiter has observed that Mare Frigoris near the north pole, a vast basin assumed to be geologically dead, has been cracking and shifting. 
Since the Moon doesn't have tectonic plates, its tectonic activity is slow and cracks develop as it loses heat over the years.[67] Volcanic features Main article: Volcanism on the Moon Lunar nearside with major maria and craters labeled The dark and relatively featureless lunar plains, clearly seen with the naked eye, are called maria (Latin for "seas"; singular mare), as they were once believed to be filled with water;[68] they are now known to be vast solidified pools of ancient basaltic lava. Although similar to terrestrial basalts, lunar basalts have more iron and no minerals altered by water.[69] The majority of these lavas erupted or flowed into the depressions associated with impact basins. Several geologic provinces containing shield volcanoes and volcanic domes are found within the near side "maria".[70] Evidence of young lunar volcanism Almost all maria are on the near side of the Moon, and cover 31% of the surface of the near side,[71] compared with 2% of the far side.[72] This is thought to be due to a concentration of heat-producing elements under the crust on the near side, seen on geochemical maps obtained by Lunar Prospector's gamma-ray spectrometer, which would have caused the underlying mantle to heat up, partially melt, rise to the surface and erupt.[58][73][74] Most of the Moon's mare basalts erupted during the Imbrian period, 3.0–3.5 billion years ago, although some radiometrically dated samples are as old as 4.2 billion years.[75] Until recently, the youngest eruptions, dated by crater counting, appeared to have been only 1.2 billion years ago.[76] In 2006, a study of Ina, a tiny depression in Lacus Felicitatis, found jagged, relatively dust-free features that, because of the lack of erosion by infalling debris, appeared to be only 2 million years old.[77] Moonquakes and releases of gas also indicate some continued lunar activity.[77] In 2014 NASA announced "widespread evidence of young lunar volcanism" at 70 irregular mare patches identified by the Lunar Reconnaissance Orbiter, some less than 50 million years old. This raises the possibility of a much warmer lunar mantle than previously believed, at least on the near side where the deep crust is substantially warmer because of the greater concentration of radioactive elements.[78][79][80][81] Just prior to this, evidence has been presented for 2–10 million years younger basaltic volcanism inside the crater Lowell,[82][83] Orientale basin, located in the transition zone between the near and far sides of the Moon. An initially hotter mantle and/or local enrichment of heat-producing elements in the mantle could be responsible for prolonged activities also on the far side in the Orientale basin.[84][85] The lighter-colored regions of the Moon are called terrae, or more commonly highlands, because they are higher than most maria. 
They have been radiometrically dated to have formed 4.4 billion years ago, and may represent plagioclase cumulates of the lunar magma ocean.[75][76] In contrast to Earth, no major lunar mountains are believed to have formed as a result of tectonic events.[86] The concentration of maria on the near side likely reflects the substantially thicker crust of the highlands of the far side, which may have formed in a slow-velocity impact of a second moon of Earth a few tens of millions of years after the Moon's formation.[87][88]
Impact craters
Further information: List of craters on the Moon
Lunar crater Daedalus on the Moon's far side
The other major geologic process that has affected the Moon's surface is impact cratering,[89] with craters formed when asteroids and comets collide with the lunar surface. There are estimated to be roughly 300,000 craters wider than 1 km (0.6 mi) on the Moon's near side alone.[90] The lunar geologic timescale is based on the most prominent impact events, including Nectaris, Imbrium, and Orientale, structures characterized by multiple rings of uplifted material, between hundreds and thousands of kilometers in diameter and associated with a broad apron of ejecta deposits that form a regional stratigraphic horizon.[91] The lack of an atmosphere, weather and recent geological processes means that many of these craters are well preserved. Although only a few multi-ring basins have been definitively dated, they are useful for assigning relative ages. Because impact craters accumulate at a nearly constant rate, counting the number of craters per unit area can be used to estimate the age of the surface.[91] The radiometric ages of impact-melted rocks collected during the Apollo missions cluster between 3.8 and 4.1 billion years old: this has been used to propose a Late Heavy Bombardment of impacts.[92] Blanketed on top of the Moon's crust is a highly comminuted (broken into ever smaller particles) and impact-gardened surface layer called regolith, formed by impact processes. The finer regolith, the lunar soil of silicon dioxide glass, has a texture resembling snow and a scent resembling spent gunpowder.[93] The regolith of older surfaces is generally thicker than for younger surfaces: it varies in thickness from 10–20 m (33–66 ft) in the highlands and 3–5 m (10–16 ft) in the maria.[94] Beneath the finely comminuted regolith layer is the megaregolith, a layer of highly fractured bedrock many kilometers thick.[95] Comparison of high-resolution images obtained by the Lunar Reconnaissance Orbiter has shown a contemporary crater-production rate significantly higher than previously estimated. A secondary cratering process caused by distal ejecta is thought to churn the top two centimeters of regolith a hundred times more quickly than previous models suggested – on a timescale of 81,000 years.[96][97]
Lunar swirls at Reiner Gamma
Lunar swirls
Main article: Lunar swirls
Lunar swirls are enigmatic features found across the Moon's surface. They are characterized by a high albedo, appear optically immature (i.e. they have the optical characteristics of a relatively young regolith), and often have a sinuous shape. Their shape is often accentuated by low-albedo regions that wind between the bright swirls.
Presence of water
Main article: Lunar water
Liquid water cannot persist on the lunar surface. When exposed to solar radiation, water quickly decomposes through a process known as photodissociation and is lost to space.
However, since the 1960s, scientists have hypothesized that water ice may be deposited by impacting comets or possibly produced by the reaction of oxygen-rich lunar rocks with hydrogen from the solar wind, leaving traces of water which could possibly persist in cold, permanently shadowed craters at either pole on the Moon.[98][99] Computer simulations suggest that up to 14,000 km² (5,400 sq mi) of the surface may be in permanent shadow.[100] The presence of usable quantities of water on the Moon is an important factor in rendering lunar habitation cost-effective; the alternative of transporting water from Earth would be prohibitively expensive.[101] In the years since, signatures of water have been found to exist on the lunar surface.[102] In 1994, the bistatic radar experiment located on the Clementine spacecraft indicated the existence of small, frozen pockets of water close to the surface. However, later radar observations by Arecibo suggested that these findings may rather be rocks ejected from young impact craters.[103] In 1998, the neutron spectrometer on the Lunar Prospector spacecraft showed that high concentrations of hydrogen are present in the first meter of depth in the regolith near the polar regions.[104] Volcanic lava beads, brought back to Earth aboard Apollo 15, showed small amounts of water in their interior.[105] The 2008 Chandrayaan-1 spacecraft has since confirmed the existence of surface water ice, using the on-board Moon Mineralogy Mapper. The spectrometer observed absorption lines common to hydroxyl in reflected sunlight, providing evidence of large quantities of water ice on the lunar surface. The spacecraft showed that concentrations may possibly be as high as 1,000 ppm.[106] Using the mapper's reflectance spectra, indirect lighting of areas in shadow confirmed water ice within 20° latitude of both poles in 2018.[107] In 2009, LCROSS sent a 2,300 kg (5,100 lb) impactor into a permanently shadowed polar crater, and detected at least 100 kg (220 lb) of water in a plume of ejected material.[108][109] Another examination of the LCROSS data showed the amount of detected water to be closer to 155 ± 12 kg (342 ± 26 lb).[110] In May 2011, 615–1410 ppm water was reported in melt inclusions in lunar sample 74220,[111] the famous high-titanium "orange glass soil" of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago. This concentration is comparable with that of magma in Earth's upper mantle. Although of considerable selenological interest, this announcement affords little comfort to would-be lunar colonists – the sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to find them with a state-of-the-art ion microprobe instrument.
Analysis of the findings of the Moon Mineralogy Mapper (M3) revealed in August 2018, for the first time, "definitive evidence" for water ice on the lunar surface.[112][113] The data revealed the distinct reflective signatures of water ice, as opposed to dust and other reflective substances.[114] Ice deposits were found at both the north and south poles, although the ice is more abundant in the south, where water is trapped in permanently shadowed craters and crevices that are shielded from the Sun, allowing it to persist as ice on the surface.[112][114] In October 2020, astronomers reported the detection of molecular water on the sunlit surface of the Moon by several independent spacecraft, including the Stratospheric Observatory for Infrared Astronomy (SOFIA).[115][116][117][118]
Gravitational field
Main article: Gravity of the Moon
GRAIL's gravity map of the Moon
The gravitational field of the Moon has been measured through tracking the Doppler shift of radio signals emitted by orbiting spacecraft. The main lunar gravity features are mascons, large positive gravitational anomalies associated with some of the giant impact basins, partly caused by the dense mare basaltic lava flows that fill those basins.[119][120] The anomalies greatly influence the orbit of spacecraft about the Moon. There are some puzzles: lava flows by themselves cannot explain all of the gravitational signature, and some mascons exist that are not linked to mare volcanism.[121]
Magnetic field
Main article: Magnetic field of the Moon
The Moon has an external magnetic field of generally less than 0.2 nanoteslas,[122] or less than one hundred-thousandth that of Earth. The Moon does not currently have a global dipolar magnetic field and has only crustal magnetization, likely acquired early in its history when a dynamo was still operating.[123][124] However, early in its history, 4 billion years ago, its magnetic field strength was likely close to that of Earth today.[122] This early dynamo field apparently expired by about one billion years ago, after the lunar core had completely crystallized.[122] Theoretically, some of the remnant magnetization may originate from transient magnetic fields generated during large impacts through the expansion of plasma clouds. These clouds are generated during large impacts in an ambient magnetic field. This is supported by the location of the largest crustal magnetizations near the antipodes of the giant impact basins.[125]
Atmosphere
Main article: Atmosphere of the Moon
Sketch by the Apollo 17 astronauts. The lunar atmosphere was later studied by LADEE.[126][127]
The Moon has an atmosphere so tenuous as to be nearly a vacuum, with a total mass of less than 10 tonnes (9.8 long tons; 11 short tons).[128] The surface pressure of this small mass is around 3 × 10⁻¹⁵ atm (0.3 nPa); it varies with the lunar day, and can be reproduced to order of magnitude from the ideal gas law, as sketched below.
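The quoted pressure can be checked with the ideal gas law p = n·k_B·T, using the particle number densities and surface temperatures cited in the notes at the end of this article (roughly 10⁷ particles per cm³ at 390 K by day and 10⁵ particles per cm³ at 100 K by night). The following Python sketch is illustrative only; the inputs are those quoted values, not new measurements.

```python
BOLTZMANN_J_PER_K = 1.380649e-23  # Boltzmann constant k_B

def surface_pressure_pa(number_density_per_cm3, temperature_k):
    """Ideal-gas estimate p = n * k_B * T, with n converted from cm^-3 to m^-3."""
    number_density_per_m3 = number_density_per_cm3 * 1e6
    return number_density_per_m3 * BOLTZMANN_J_PER_K * temperature_k

print(f"daytime:   {surface_pressure_pa(1e7, 390):.1e} Pa")  # ~5e-8 Pa, of order 10^-7 Pa
print(f"nighttime: {surface_pressure_pa(1e5, 100):.1e} Pa")  # ~1e-10 Pa, of order 10^-10 Pa
```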
The atmosphere's sources include outgassing and sputtering, a product of the bombardment of lunar soil by solar wind ions.[12][129] Elements that have been detected include sodium and potassium, produced by sputtering (also found in the atmospheres of Mercury and Io); helium-4 and neon[130] from the solar wind; and argon-40, radon-222, and polonium-210, outgassed after their creation by radioactive decay within the crust and mantle.[131][132] The absence of such neutral species (atoms or molecules) as oxygen, nitrogen, carbon, hydrogen and magnesium, which are present in the regolith, is not understood.[131] Water vapor has been detected by Chandrayaan-1 and found to vary with latitude, with a maximum at ~60–70 degrees; it is possibly generated from the sublimation of water ice in the regolith.[133] These gases either return into the regolith because of the Moon's gravity or are lost to space, either through solar radiation pressure or, if they are ionized, by being swept away by the solar wind's magnetic field.[131]
A permanent, asymmetric Moon dust cloud exists around the Moon, created by small particles from comets. An estimated 5 tons of comet particles strike the Moon's surface every 24 hours. These impacts eject Moon dust above the surface, where it stays for approximately 10 minutes, taking 5 minutes to rise and 5 minutes to fall. On average, 120 kilograms of dust are present above the Moon, rising to 100 kilometers above the surface. The dust measurements were made by LADEE's Lunar Dust EXperiment (LDEX), between 20 and 100 kilometers above the surface, during a six-month period. LDEX detected an average of one 0.3-micrometer Moon dust particle each minute. Dust particle counts peaked during the Geminid, Quadrantid, Northern Taurid, and Omicron Centaurid meteor showers, when the Earth and Moon pass through comet debris. The cloud is asymmetric, denser near the boundary between the Moon's dayside and nightside.[134][135]
Past thicker atmosphere
In October 2017, NASA scientists at the Marshall Space Flight Center and the Lunar and Planetary Institute in Houston announced their finding, based on studies of Moon magma samples retrieved by the Apollo missions, that the Moon had once possessed a relatively thick atmosphere for a period of 70 million years between 3 and 4 billion years ago. This atmosphere, sourced from gases ejected from lunar volcanic eruptions, was twice the thickness of that of present-day Mars. The ancient lunar atmosphere was eventually stripped away by the solar wind and dissipated into space.[136]
Seasons
The Moon's axial tilt with respect to the ecliptic is only 1.5424°,[137] much less than the 23.44° of Earth. Because of this, the Moon's solar illumination varies much less with season, and topographical details play a crucial role in seasonal effects.[138] From images taken by Clementine in 1994, it appears that four mountainous regions on the rim of the crater Peary at the Moon's north pole may remain illuminated for the entire lunar day, creating peaks of eternal light. No such regions exist at the south pole. Similarly, there are places that remain in permanent shadow at the bottoms of many polar craters,[100] and these "craters of eternal darkness" are extremely cold: the Lunar Reconnaissance Orbiter measured the lowest summer temperatures in craters at the southern pole at 35 K (−238 °C; −397 °F)[139] and just 26 K (−247 °C; −413 °F) close to the winter solstice in the north polar crater Hermite.
This is the coldest temperature in the Solar System ever measured by a spacecraft, colder even than the surface of Pluto.[138] Average temperatures of the Moon's surface are reported, but temperatures of different areas will vary greatly depending upon whether they are in sunlight or shadow.[140]
The Moon rotates about its own axis; because of tidal locking, this rotation is synchronous with its orbital period around Earth. The rotation period depends on the frame of reference: there are sidereal rotation periods (or sidereal days, in relation to the stars) and synodic rotation periods (or synodic days, in relation to the Sun). A lunar day is a synodic day. Because of the tidally locked rotation, the sidereal and synodic rotation periods correspond to the sidereal (27.3 Earth days) and synodic (29.5 Earth days) orbital periods.[141]
Earth–Moon system
See also: Satellite system (astronomy) and Other moons of Earth
Scale model of the Earth–Moon system: Sizes and distances are to scale.
Orbit
Main articles: Orbit of the Moon and Lunar theory
Animation of Moon's orbit around Earth from 2018 to 2027
Earth–Moon system (schematic)
DSCOVR satellite sees the Moon passing in front of Earth
The Moon makes a complete orbit around Earth with respect to the fixed stars about once every 27.3 days[g] (its sidereal period). However, because Earth is moving in its orbit around the Sun at the same time, it takes slightly longer for the Moon to show the same phase to Earth, which is about 29.5 days[h] (its synodic period);[71] the arithmetic relating these two periods is sketched in the short example at the end of this passage. Unlike most satellites of other planets, the Moon orbits closer to the ecliptic plane than to the planet's equatorial plane. The Moon's orbit is subtly perturbed by the Sun and Earth in many small, complex and interacting ways. For example, the plane of the Moon's orbit gradually rotates once every 18.61 years,[142] which affects other aspects of lunar motion. These follow-on effects are mathematically described by Cassini's laws.[143]
The Moon is an exceptionally large natural satellite relative to Earth: its diameter is more than a quarter of Earth's and its mass is 1/81 that of Earth.[71] It is the largest moon in the Solar System relative to the size of its planet,[i] though Charon is larger relative to the dwarf planet Pluto, at 1/9 Pluto's mass.[j][144] The Earth–Moon barycentre, their common center of mass, is located 1,700 km (1,100 mi) (about a quarter of Earth's radius) beneath the Earth's surface. The Earth revolves around the Earth–Moon barycentre once a sidereal month, with 1/81 the speed of the Moon, or about 12.5 metres (41 ft) per second. This motion is superimposed on the much larger revolution of the Earth around the Sun at a speed of about 30 kilometres (19 mi) per second. The surface area of the Moon is slightly less than the areas of North and South America combined.
Appearance from Earth
A full moon appears as a half moon during an eclipse
Moonset over the High Desert in California, on the morning of the "trifecta" of full moon, supermoon and lunar eclipse, January 2018
See also: Lunar observation, Lunar phase, Moonlight, and Earthlight (astronomy)
The Moon is in synchronous rotation as it orbits Earth; it rotates about its axis in about the same time it takes to orbit Earth. This results in it always keeping nearly the same face turned towards Earth. However, because of the effect of libration, about 59% of the Moon's surface can actually be seen from Earth. The side of the Moon that faces Earth is called the near side, and the opposite side the far side.
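The sidereal and synodic periods mentioned above are related by a simple composition of angular rates: while the Moon completes one revolution, Earth has moved on around the Sun, so the Moon needs extra time to return to the same phase. A minimal sketch of that relation, using the period values given in the notes and an assumed standard value for Earth's sidereal year:

```python
SIDEREAL_MONTH_DAYS = 27.321661   # fixed star to fixed star, from the notes
SIDEREAL_YEAR_DAYS = 365.256363   # Earth's orbital period (assumed standard value)

# The Sun-Earth line advances as Earth orbits the Sun, so the phase cycle
# is longer than the sidereal revolution:
#   1/P_synodic = 1/P_sidereal - 1/P_year
synodic_month_days = 1.0 / (1.0 / SIDEREAL_MONTH_DAYS - 1.0 / SIDEREAL_YEAR_DAYS)

print(f"synodic month ≈ {synodic_month_days:.4f} days")  # ≈ 29.53 days
```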
The far side is often inaccurately called the "dark side", but it is in fact illuminated as often as the near side: once every 29.5 Earth days. During new moon, the near side is dark.[145]
The Moon had once rotated at a faster rate, but early in its history its rotation slowed and became tidally locked in this orientation as a result of frictional effects associated with tidal deformations caused by Earth.[146] With time, the energy of rotation of the Moon on its axis was dissipated as heat, until there was no rotation of the Moon relative to Earth. In 2016, planetary scientists using data collected on the much earlier NASA Lunar Prospector mission found two hydrogen-rich areas (most likely former water ice) on opposite sides of the Moon. It is speculated that these patches were the poles of the Moon billions of years ago, before it was tidally locked to Earth.[147]
The Moon is prominently featured in Vincent van Gogh's 1889 painting The Starry Night
The Moon has an exceptionally low albedo, giving it a reflectance that is slightly brighter than that of worn asphalt. Despite this, it is the brightest object in the sky after the Sun.[71][k] This is due partly to the brightness enhancement of the opposition surge; the Moon at quarter phase is only one-tenth as bright, rather than half as bright, as at full moon.[148] Additionally, color constancy in the visual system recalibrates the relations between the colors of an object and its surroundings, and because the surrounding sky is comparatively dark, the sunlit Moon is perceived as a bright object. The edges of the full moon seem as bright as the center, without limb darkening, because of the reflective properties of lunar soil, which retroreflects light more towards the Sun than in other directions. The Moon does appear larger when close to the horizon, but this is a purely psychological effect, known as the moon illusion, first described in the 7th century BC.[149] The full Moon's angular diameter is about 0.52° (on average) in the sky, roughly the same apparent size as the Sun (see § Eclipses).
The Moon's highest altitude at culmination varies by its phase and time of year. The full moon is highest in the sky during winter (for each hemisphere). The orientation of the Moon's crescent also depends on the latitude of the viewing location; an observer in the tropics can see a smile-shaped crescent Moon.[150] The Moon is visible for two weeks every 27.3 days at the North and South Poles. Zooplankton in the Arctic use moonlight when the Sun is below the horizon for months on end.[151]
The 14 November 2016 supermoon was 356,511 kilometres (221,526 mi) away[152] from the center of Earth, the closest occurrence since 26 January 1948. It will not be closer until 25 November 2034.[153] The distance between the Moon and Earth varies from around 356,400 km (221,500 mi) to 406,700 km (252,700 mi) at perigee (closest) and apogee (farthest), respectively; the effect of this variation on the Moon's apparent size and brightness is sketched in the short example below.
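The perigee and apogee distances just quoted are enough to reproduce the "supermoon" figures discussed next. A rough sketch, assuming only that angular diameter scales inversely with distance and that the apparent brightness of the disc scales with the square of its angular diameter:

```python
PERIGEE_KM = 356_400   # closest distance quoted above
APOGEE_KM = 406_700    # farthest distance quoted above

# The perigee Moon appears larger by the ratio of the two distances,
# and (to first approximation) brighter by the square of that ratio.
size_ratio = APOGEE_KM / PERIGEE_KM   # ≈ 1.14, i.e. about 14% larger
brightness_ratio = size_ratio ** 2    # ≈ 1.30, i.e. about 30% brighter

print(f"angular size ratio: {size_ratio:.3f}")
print(f"brightness ratio:   {brightness_ratio:.3f}")
```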
On 14 November 2016, it was closer to Earth when at full phase than it has been since 1948, 14% closer than its farthest position in apogee.[154] Reported as a "supermoon", this closest point coincided within an hour of a full moon, and it was 30% more luminous than when at its greatest distance because its angular diameter is 14% greater and 1.14² ≈ 1.30.[155][156][157] At lower levels, the human perception of reduced brightness as a percentage is provided by the following formula:[158][159]
perceived reduction % = 100 × √(actual reduction % / 100)
When the actual reduction is 1.00 / 1.30, or about 0.770, the perceived reduction is about 0.877, or 1.00 / 1.14. This gives a maximum perceived increase of 14% between apogee and perigee moons of the same phase.[160]
There has been historical controversy over whether features on the Moon's surface change over time. Today, many of these claims are thought to be illusory, resulting from observation under different lighting conditions, poor astronomical seeing, or inadequate drawings. However, outgassing does occasionally occur and could be responsible for a minor percentage of the reported lunar transient phenomena. Recently, it has been suggested that a roughly 3 km (1.9 mi) diameter region of the lunar surface was modified by a gas release event about a million years ago.[161][162] The Moon's appearance, like the Sun's, can be affected by Earth's atmosphere. Common optical effects are the 22° halo ring, formed when the Moon's light is refracted through the ice crystals of high cirrostratus clouds, and smaller coronal rings when the Moon is seen through thin clouds.[163]
The monthly changes in the angle between the direction of sunlight and the view from Earth, and the phases of the Moon that result, as viewed from the Northern Hemisphere. The Earth–Moon distance is not to scale.
The illuminated area of the visible sphere (degree of illumination) is given by (1 − cos e)/2 = sin²(e/2), where e is the elongation (i.e., the angle between the Moon, the observer on Earth, and the Sun).
Tidal effects
Main articles: Tidal force, Tidal acceleration, Tide, and Theory of tides
The libration of the Moon over a single lunar month. Also visible is the slight variation in the Moon's visual size from Earth.
The gravitational attraction that masses have for one another decreases inversely with the square of the distance of those masses from each other. As a result, the slightly greater attraction that the Moon has for the side of Earth closest to the Moon, as compared to the part of the Earth opposite the Moon, results in tidal forces. Tidal forces affect both the Earth's crust and oceans.
The most obvious effect of tidal forces is to cause two bulges in the Earth's oceans, one on the side facing the Moon and the other on the side opposite. This results in elevated sea levels called ocean tides.[164] As the Earth rotates on its axis, one of the ocean bulges (high tide) is held in place "under" the Moon, while another such tide is opposite. As a result, there are two high tides, and two low tides, in about 24 hours.[164] Since the Moon orbits the Earth in the same direction as the Earth's rotation, the high tides occur about every 12 hours and 25 minutes; the extra 25 minutes arise because the Moon advances along its orbit in the meantime, so Earth must rotate slightly further to bring the same point back under the Moon.
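The interval of roughly 12 hours and 25 minutes can be recovered from the synodic month: in one solar day the Moon moves on by about 1/29.5 of its orbit, so the "lunar day" governing the tides is correspondingly longer than 24 hours, and there are two tidal bulges per lunar day. A minimal sketch of that arithmetic:

```python
SOLAR_DAY_HOURS = 24.0
SYNODIC_MONTH_DAYS = 29.530589  # mean synodic month, from the notes

# In one solar day the Moon advances by 1/29.53 of a revolution, so Earth must
# rotate a little longer to bring the same point back "under" the Moon:
lunar_day_hours = SOLAR_DAY_HOURS / (1.0 - 1.0 / SYNODIC_MONTH_DAYS)
high_tide_interval_hours = lunar_day_hours / 2.0  # two tidal bulges per lunar day

extra_minutes = (high_tide_interval_hours - 12.0) * 60.0
print(f"interval between high tides ≈ 12 h {extra_minutes:.0f} min")  # ≈ 12 h 25 min
```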
The Sun has the same kind of tidal effect on the Earth, but its tidal force is only about 40% that of the Moon's; the interplay of the Sun's and Moon's tides is responsible for spring and neap tides.[164] If the Earth were a water world (one with no continents), it would produce a tide of only one meter, and that tide would be very predictable, but the ocean tides are greatly modified by other effects: the frictional coupling of water to Earth's rotation through the ocean floors, the inertia of water's movement, ocean basins that grow shallower near land, and the sloshing of water between different ocean basins.[165] As a result, the timing of the tides at most points on the Earth is in practice determined from observation and only explained after the fact by theory.
While gravitation causes acceleration and movement of the Earth's fluid oceans, gravitational coupling between the Moon and Earth's solid body is mostly elastic and plastic. The result is a further tidal effect of the Moon on the Earth that causes a bulge of the solid portion of the Earth nearest the Moon that acts as a torque in opposition to the Earth's rotation. This "drains" angular momentum and rotational kinetic energy from Earth's rotation, slowing the Earth's rotation.[164][166] That angular momentum, lost from the Earth, is transferred to the Moon in a process (confusingly known as tidal acceleration), which lifts the Moon into a higher orbit and results in its lower orbital speed about the Earth. Thus the distance between Earth and Moon is increasing, and the Earth's rotation is slowing in reaction.[166] Measurements from laser reflectors left during the Apollo missions (lunar ranging experiments) have found that the Moon's distance increases by 38 mm (1.5 in) per year[167] (roughly the rate at which human fingernails grow).[168] Atomic clocks also show that Earth's day lengthens by about 15 microseconds every year,[169] slowly increasing the rate at which UTC is adjusted by leap seconds. Left to run its course, this tidal drag would continue until the rotation of Earth and the orbital period of the Moon matched, creating mutual tidal locking between the two. As a result, the Moon would be suspended in the sky over one meridian, as is already currently the case with Pluto and its moon Charon. However, the Sun will become a red giant engulfing the Earth–Moon system long before this occurrence.[170][171]
In a like manner, the lunar surface experiences tides of around 10 cm (4 in) amplitude over 27 days, with two components: a fixed one due to Earth, because the two bodies are in synchronous rotation, and a varying component from the Sun.[166] The Earth-induced component arises from libration, a result of the Moon's orbital eccentricity (if the Moon's orbit were perfectly circular, there would only be solar tides).[166] Libration also changes the angle from which the Moon is seen, allowing a total of about 59% of its surface to be seen from Earth over time.[71] The cumulative effects of stress built up by these tidal forces produce moonquakes. Moonquakes are much less common and weaker than earthquakes, although moonquakes can last for up to an hour – significantly longer than terrestrial quakes – because of the absence of water to damp out the seismic vibrations.
The existence of moonquakes was an unexpected discovery from seismometers placed on the Moon by Apollo astronauts from 1969 through 1972.[172]
Recent research suggests that the Moon's influence on the Earth may contribute to maintaining Earth's magnetic field.[173]
Eclipses
Main articles: Solar eclipse, Lunar eclipse, and Eclipse cycle
From Earth, the Moon and the Sun appear the same size, as seen in the 1999 solar eclipse (left), whereas from the STEREO-B spacecraft in an Earth-trailing orbit, the Moon appears much smaller than the Sun (right).[174]
Eclipses only occur when the Sun, Earth, and Moon are all in a straight line (termed "syzygy"). Solar eclipses occur at new moon, when the Moon is between the Sun and Earth. In contrast, lunar eclipses occur at full moon, when Earth is between the Sun and Moon. The apparent size of the Moon is roughly the same as that of the Sun, with both being viewed at close to one-half of a degree wide; a small-angle estimate of these two apparent sizes is sketched at the end of this passage. The Sun is much larger than the Moon, but it is the vastly greater distance that gives it the same apparent size as the much closer and much smaller Moon from the perspective of Earth. The variations in apparent size, due to the non-circular orbits, are nearly the same as well, though occurring in different cycles. This makes possible both total (with the Moon appearing larger than the Sun) and annular (with the Moon appearing smaller than the Sun) solar eclipses.[175] In a total eclipse, the Moon completely covers the disc of the Sun and the solar corona becomes visible to the naked eye. Because the distance between the Moon and Earth is very slowly increasing over time,[164] the angular diameter of the Moon is decreasing. Also, as it evolves toward becoming a red giant, the size of the Sun, and its apparent diameter in the sky, are slowly increasing.[l] The combination of these two changes means that hundreds of millions of years ago, the Moon would always completely cover the Sun during solar eclipses, and no annular eclipses were possible. Likewise, hundreds of millions of years in the future, the Moon will no longer cover the Sun completely, and total solar eclipses will not occur.[176]
Because the Moon's orbit around Earth is inclined by about 5.145° (5° 9') to the orbit of Earth around the Sun, eclipses do not occur at every full and new moon. For an eclipse to occur, the Moon must be near the intersection of the two orbital planes.[177] The periodicity and recurrence of eclipses of the Sun by the Moon, and of the Moon by Earth, is described by the saros, which has a period of approximately 18 years.[178]
Because the Moon continuously blocks the view of a half-degree-wide circular area of the sky,[m][179] the related phenomenon of occultation occurs when a bright star or planet passes behind the Moon and is occulted: hidden from view. In this way, a solar eclipse is an occultation of the Sun. Because the Moon is comparatively close to Earth, occultations of individual stars are not visible everywhere on the planet, nor at the same time.
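That the Moon and the Sun present nearly equal apparent sizes of about half a degree can be checked with a small-angle estimate. The round-number diameters and mean distances used below are assumed standard values chosen for illustration, not figures taken from the cited sources:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent angular diameter of a sphere seen from the given distance."""
    return math.degrees(2.0 * math.atan((diameter_km / 2.0) / distance_km))

moon_deg = angular_diameter_deg(3_474, 384_400)         # Moon: ≈ 0.52 degrees
sun_deg = angular_diameter_deg(1_391_400, 149_600_000)  # Sun:  ≈ 0.53 degrees

print(f"Moon: {moon_deg:.2f} deg, Sun: {sun_deg:.2f} deg")
```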
Because of the precession of the lunar orbit, each year different stars are occulted.[180] Observation and exploration Main articles: Exploration of the Moon, List of spacecraft that orbited the Moon, List of missions to the Moon, and List of lunar probes See also: Timeline of Solar System exploration Before spaceflight Main article: Exploration of the Moon: Before spaceflight Map of the Moon by Johannes Hevelius from his Selenographia (1647), the first map to include the libration zones A study of the Moon in Robert Hooke's Micrographia, 1665 One of the earliest-discovered possible depictions of the Moon is a 5000-year-old rock carving Orthostat 47 at Knowth, Ireland.[181][182] Understanding of the Moon's cycles was an early development of astronomy: by the 5th century BC, Babylonian astronomers had recorded the 18-year Saros cycle of lunar eclipses,[183] and Indian astronomers had described the Moon's monthly elongation.[184] The Chinese astronomer Shi Shen (fl. 4th century BC) gave instructions for predicting solar and lunar eclipses.[185](p411) Later, the physical form of the Moon and the cause of moonlight became understood. The ancient Greek philosopher Anaxagoras (d. 428 BC) reasoned that the Sun and Moon were both giant spherical rocks, and that the latter reflected the light of the former.[186][185](p227) Although the Chinese of the Han Dynasty believed the Moon to be energy equated to qi, their 'radiating influence' theory also recognized that the light of the Moon was merely a reflection of the Sun, and Jing Fang (78–37 BC) noted the sphericity of the Moon.[185](pp413–414) In the 2nd century AD, Lucian wrote the novel A True Story, in which the heroes travel to the Moon and meet its inhabitants. In 499 AD, the Indian astronomer Aryabhata mentioned in his Aryabhatiya that reflected sunlight is the cause of the shining of the Moon.[187] The astronomer and physicist Alhazen (965–1039) found that sunlight was not reflected from the Moon like a mirror, but that light was emitted from every part of the Moon's sunlit surface in all directions.[188] Shen Kuo (1031–1095) of the Song dynasty created an allegory equating the waxing and waning of the Moon to a round ball of reflective silver that, when doused with white powder and viewed from the side, would appear to be a crescent.[185](pp415–416) Galileo's sketches of the Moon from Sidereus Nuncius In Aristotle's (384–322 BC) description of the universe, the Moon marked the boundary between the spheres of the mutable elements (earth, water, air and fire), and the imperishable stars of aether, an influential philosophy that would dominate for centuries.[189] However, in the 2nd century BC, Seleucus of Seleucia correctly theorized that tides were due to the attraction of the Moon, and that their height depends on the Moon's position relative to the Sun.[190] In the same century, Aristarchus computed the size and distance of the Moon from Earth, obtaining a value of about twenty times the radius of Earth for the distance. 
These figures were greatly improved by Ptolemy (90–168 AD): his values of a mean distance of 59 times Earth's radius and a diameter of 0.292 Earth diameters were close to the correct values of about 60 and 0.273 respectively.[191] Archimedes (287–212 BC) designed a planetarium that could calculate the motions of the Moon and other objects in the Solar System.[192] During the Middle Ages, before the invention of the telescope, the Moon was increasingly recognised as a sphere, though many believed that it was "perfectly smooth".[193]
In 1609, Galileo Galilei drew one of the first telescopic drawings of the Moon in his book Sidereus Nuncius and noted that it was not smooth but had mountains and craters. Thomas Harriot had made, but not published, such drawings a few months earlier. Telescopic mapping of the Moon followed: later in the 17th century, the efforts of Giovanni Battista Riccioli and Francesco Maria Grimaldi led to the system of naming of lunar features in use today. The more exact 1834–36 Mappa Selenographica of Wilhelm Beer and Johann Heinrich Mädler, and their associated 1837 book Der Mond, the first trigonometrically accurate study of lunar features, included the heights of more than a thousand mountains, and introduced the study of the Moon at accuracies possible in earthly geography.[194] Lunar craters, first noted by Galileo, were thought to be volcanic until the 1870s proposal of Richard Proctor that they were formed by collisions.[71] This view gained support in 1892 from the experimentation of geologist Grove Karl Gilbert, and from comparative studies from 1920 to the 1940s,[195] leading to the development of lunar stratigraphy, which by the 1950s was becoming a new and growing branch of astrogeology.[71]
1959–1970s
See also: Space Race and Moon landing
From the first uncrewed lunar missions of the robotic Soviet Luna program, beginning in 1958, to the last of the crewed U.S. Apollo landings and the final Luna mission in 1976, the Cold War-inspired Space Race between the Soviet Union and the U.S. led to an acceleration of interest in exploration of the Moon. Once launchers had the necessary capabilities, these nations sent uncrewed probes on both flyby and impact/lander missions.
Soviet missions
Main articles: Luna program and Lunokhod programme
First view in history of the far side of the Moon, taken by Luna 3, 7 October 1959
A model of the Soviet Moon rover Lunokhod 1
Spacecraft from the Soviet Union's Luna program were the first to accomplish a number of goals: following three unnamed, failed missions in 1958,[196] the first human-made object to escape Earth's gravity and pass near the Moon was Luna 1; the first human-made object to impact the lunar surface was Luna 2; and the first photographs of the normally occluded far side of the Moon were made by Luna 3, all in 1959.
Stamp with a drawing of the first soft-landed probe, Luna 9, next to the first view of the lunar surface photographed by the probe
The first spacecraft to perform a successful lunar soft landing was Luna 9, and the first uncrewed vehicle to orbit the Moon was Luna 10, both in 1966.[71] Rock and soil samples were brought back to Earth by three Luna sample return missions (Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976), which returned 0.3 kg in total.[197] Two pioneering robotic rovers landed on the Moon in 1970 and 1973 as part of the Soviet Lunokhod programme. Luna 24 was the last Soviet mission to the Moon.
United States missions
Main articles: Apollo program and Moon landing
Earthrise (Apollo 8, 1968, taken by William Anders)
Moon rock (Apollo 17, 1972)
During the late 1950s, at the height of the Cold War, the United States Army conducted a classified feasibility study that proposed the construction of a staffed military outpost on the Moon called Project Horizon, with the potential to conduct a wide range of missions from scientific research to nuclear Earth bombardment. The study included the possibility of conducting a lunar-based nuclear test.[198][199] The Air Force, which at the time was in competition with the Army for a leading role in the space program, developed its own similar plan called Lunex.[200][201][198] However, both these proposals were ultimately passed over as the space program was largely transferred from the military to the civilian agency NASA.[201]
Following President John F. Kennedy's 1961 commitment to a human Moon landing before the end of the decade, the United States, under NASA leadership, launched a series of uncrewed probes to develop an understanding of the lunar surface in preparation for human missions: the Jet Propulsion Laboratory's Ranger program produced the first close-up pictures; the Lunar Orbiter program produced maps of the entire Moon; the Surveyor program landed its first spacecraft four months after Luna 9. The crewed Apollo program was developed in parallel; after a series of uncrewed and crewed tests of the Apollo spacecraft in Earth orbit, and spurred on by a potential Soviet lunar human landing, in 1968 Apollo 8 made the first human mission to lunar orbit. The subsequent landing of the first humans on the Moon in 1969 is seen by many as the culmination of the Space Race.[202]
Neil Armstrong working at the Lunar Module Eagle during Apollo 11 (1969)
"That's one small step ..."
Neil Armstrong, commander of the American mission Apollo 11, became the first person to walk on the Moon, setting foot on the surface at 02:56 UTC on 21 July 1969.[203] An estimated 500 million people worldwide watched the transmission by the Apollo TV camera, the largest television audience for a live broadcast at that time.[204][205] The Apollo missions 11 to 17 (except Apollo 13, which aborted its planned lunar landing) returned 380.05 kilograms (837.87 lb) of lunar rock and soil to Earth in 2,196 separate samples.[206] The American Moon landing and return was enabled by considerable technological advances in the early 1960s, in domains such as ablation chemistry, software engineering, and atmospheric re-entry technology, and by highly competent management of the enormous technical undertaking.[207][208]
Scientific instrument packages were installed on the lunar surface during all the Apollo landings. Long-lived instrument stations, including heat flow probes, seismometers, and magnetometers, were installed at the Apollo 12, 14, 15, 16, and 17 landing sites. Direct transmission of data to Earth concluded in late 1977 because of budgetary considerations,[209][210] but as the stations' lunar laser ranging corner-cube retroreflector arrays are passive instruments, they are still being used.
Ranging to the stations is routinely performed from Earth-based stations with an accuracy of a few centimeters, and data from this experiment are being used to place constraints on the size of the lunar core.[211]
1970s – present
An artificially colored mosaic constructed from a series of 53 images taken through three spectral filters by Galileo's imaging system as the spacecraft flew over the northern regions of the Moon on 7 December 1992.
After the Moon race, the focus of astronautic exploration shifted in the 1970s towards the outer Solar System, with probes such as Pioneer 10 and the Voyager program. Years of near lunar quietude followed, broken only by a beginning internationalization of space and of the Moon, for example through the negotiation of the Moon treaty. Since the 1990s, many more countries have become involved in direct exploration of the Moon. In 1990, Japan became the third country to place a spacecraft into lunar orbit with its Hiten spacecraft. The spacecraft released a smaller probe, Hagoromo, in lunar orbit, but the transmitter failed, preventing further scientific use of the mission.[212] In 1994, the U.S. sent the joint Defense Department/NASA spacecraft Clementine to lunar orbit. This mission obtained the first near-global topographic map of the Moon, and the first global multispectral images of the lunar surface.[213] This was followed in 1998 by the Lunar Prospector mission, whose instruments indicated the presence of excess hydrogen at the lunar poles, which is likely to have been caused by the presence of water ice in the upper few meters of the regolith within permanently shadowed craters.[214]
Water-rich minerals (light blue), discovered for the first time by Chandrayaan-1's NASA Moon Mineralogy Mapper, shown around a small crater from which they were ejected (right)
The European spacecraft SMART-1, the second ion-propelled spacecraft, was in lunar orbit from 15 November 2004 until its lunar impact on 3 September 2006, and made the first detailed survey of chemical elements on the lunar surface.[215]
The ambitious Chinese Lunar Exploration Program began with Chang'e 1, which successfully orbited the Moon from 5 November 2007 until its controlled lunar impact on 1 March 2009.[216] It obtained a full image map of the Moon. Chang'e 2, beginning in October 2010, reached the Moon more quickly, mapped the Moon at a higher resolution over an eight-month period, then left lunar orbit for an extended stay at the Earth–Sun L2 Lagrangian point, before finally performing a flyby of asteroid 4179 Toutatis on 13 December 2012, and then heading off into deep space. On 14 December 2013, Chang'e 3 landed a lunar lander onto the Moon's surface, which in turn deployed a lunar rover, named Yutu (Chinese: 玉兔; literally "Jade Rabbit"). This was the first lunar soft landing since Luna 24 in 1976, and the first lunar rover mission since Lunokhod 2 in 1973. Another rover mission (Chang'e 4) was launched in December 2018, and in January 2019 it became the first spacecraft ever to land on the Moon's far side.
China intends to follow this up with a sample return mission (Chang'e 5) in 2020.[217]
Between 4 October 2007 and 10 June 2009, the Japan Aerospace Exploration Agency's Kaguya (Selene) mission, a lunar orbiter fitted with a high-definition video camera and two small radio-transmitter satellites, obtained lunar geophysics data and took the first high-definition movies from beyond Earth orbit.[218][219]
India's first lunar mission, Chandrayaan-1, orbited from 8 November 2008 until loss of contact on 27 August 2009, creating a high-resolution chemical, mineralogical and photo-geological map of the lunar surface, and confirming the presence of water molecules in lunar soil.[220] The Indian Space Research Organisation had planned to launch Chandrayaan-2 in 2013, which would have included a Russian robotic lunar rover.[221][222] However, the failure of Russia's Fobos-Grunt mission delayed this project, and Chandrayaan-2 was eventually launched on 22 July 2019. The lander Vikram attempted to land in the lunar south pole region on 6 September, but the signal was lost at an altitude of 2.1 km (1.3 mi); what happened after that is unknown.
Copernicus's central peaks as observed by the LRO, 2012
The Ina formation, 2009
The U.S. co-launched the Lunar Reconnaissance Orbiter (LRO) and the LCROSS impactor and follow-up observation orbiter on 18 June 2009; LCROSS completed its mission by making a planned and widely observed impact in the crater Cabeus on 9 October 2009,[223] whereas LRO is currently in operation, obtaining precise lunar altimetry and high-resolution imagery. In November 2011, the LRO passed over the large and bright crater Aristarchus. NASA released photos of the crater on 25 December 2011.[224]
Two NASA GRAIL spacecraft began orbiting the Moon around 1 January 2012,[225] on a mission to learn more about the Moon's internal structure. NASA's LADEE probe, designed to study the lunar exosphere, achieved orbit on 6 October 2013.
See also: List of proposed missions to the Moon
Upcoming lunar missions include Russia's Luna-Glob: an uncrewed lander with a set of seismometers, and an orbiter based on its failed Martian Fobos-Grunt mission.[226] Privately funded lunar exploration has been promoted by the Google Lunar X Prize, announced 13 September 2007, which offers US$20 million to anyone who can land a robotic rover on the Moon and meet other specified criteria.[227] Shackleton Energy Company is building a program to establish operations on the south pole of the Moon to harvest water and supply their propellant depots.[228]
NASA began to plan to resume human missions following the call by U.S. President George W. Bush on 14 January 2004 for a human mission to the Moon by 2019 and the construction of a lunar base by 2024.[229] The Constellation program was funded, and construction and testing begun on a crewed spacecraft and launch vehicle,[230] and design studies for a lunar base.[231] However, that program was canceled in favor of a human asteroid landing by 2025 and a human Mars orbit by 2035.[232] India has also expressed its hope to send people to the Moon by 2020.[233] On 28 February 2018, SpaceX, Vodafone, Nokia and Audi announced a collaboration to install a 4G wireless communication network on the Moon, with the aim of streaming live footage from the surface to Earth.[234] Recent reports also indicate NASA's intent to send a woman astronaut to the Moon in its planned mid-2020s mission.[235]
Planned commercial missions
In 2007, the X Prize Foundation together with Google launched the Google Lunar X Prize to encourage commercial endeavors to the Moon. A prize of $20 million was to be awarded to the first private venture to get to the Moon with a robotic lander by the end of March 2018, with additional prizes worth $10 million for further milestones.[236][237] As of August 2016, 16 teams were reportedly participating in the competition.[238] In January 2018, the foundation announced that the prize would go unclaimed as none of the finalist teams would be able to make a launch attempt by the deadline.[239]
In August 2016, the US government granted permission to US-based start-up Moon Express to land on the Moon.[240] This marked the first time that a private enterprise was given the right to do so. The decision is regarded as a precedent helping to define regulatory standards for deep-space commercial activity in the future, as thus far companies' operations had been restricted to on or around Earth.[240]
On 29 November 2018, NASA announced that nine commercial companies would compete to win a contract to send small payloads to the Moon in what is known as Commercial Lunar Payload Services. According to NASA administrator Jim Bridenstine, "We are building a domestic American capability to get back and forth to the surface of the moon."[241]
Human presence
See also: Human presence in space
See also: List of artificial objects on the Moon, Space art § Art in space, and Planetary protection § Category V
Remains of human activity, Apollo 17's Lunar Surface Experiments Package
Besides the traces of human activity on the Moon, there have been some intended permanent installations like the Moon Museum art piece, the Apollo 11 goodwill messages, the Lunar plaque, the Fallen Astronaut memorial, and other artifacts.
Fallen Astronaut
Main article: Moonbase
See also: Space infrastructure, Tourism on the Moon, and Colonization of the Moon
Long-term missions that remain active include orbiters such as the Lunar Reconnaissance Orbiter, launched in 2009, which surveys the Moon for future missions, as well as landers such as Chang'e 3, launched in 2013, whose Lunar Ultraviolet Telescope is still operational.[242] Several missions by different agencies and companies are planned to establish a long-term human presence on the Moon, with the Lunar Gateway, part of the Artemis program, currently the most advanced project.
Concept art of the Lunar Gateway of the Artemis program in 2024, serving as a communication hub, science laboratory, short-term habitation and holding area for rovers in lunar orbit.[243]
Astronomy from the Moon
A false-color image of Earth in ultraviolet light taken from the surface of the Moon on the Apollo 16 mission. The day-side reflects a large amount of UV light from the Sun, but the night-side shows faint bands of UV emission from the aurora caused by charged particles.[244]
For many years, the Moon has been recognized as an excellent site for telescopes.[245] It is relatively nearby; astronomical seeing is not a concern; certain craters near the poles are permanently dark and cold, and thus especially useful for infrared telescopes; and radio telescopes on the far side would be shielded from the radio chatter of Earth.[246] The lunar soil, although it poses a problem for any moving parts of telescopes, can be mixed with carbon nanotubes and epoxies and employed in the construction of mirrors up to 50 meters in diameter.[247] A lunar zenith telescope can be made cheaply with an ionic liquid.[248]
In April 1972, the Apollo 16 mission recorded various astronomical photos and spectra in ultraviolet with the Far Ultraviolet Camera/Spectrograph.[249]
Humans have stayed on the Moon for up to a few days at a time. One particular challenge for astronauts' daily life during their stay on the surface is the lunar dust sticking to their suits and being carried into their quarters. The astronauts subsequently tasted and smelled the dust, calling it the "Apollo aroma".[250] This contamination poses a danger, since the fine lunar dust can cause health issues.[250] In 2019, at least one plant seed sprouted in an experiment carried, along with other small life from Earth, on the Chang'e 4 lander in its Lunar Micro Ecosystem.[251]
Legal status
Main article: Space law
Although Luna landers scattered pennants of the Soviet Union on the Moon, and U.S. flags were symbolically planted at their landing sites by the Apollo astronauts, no nation claims ownership of any part of the Moon's surface.[252] Russia, China, India, and the U.S. are party to the 1967 Outer Space Treaty,[253] which defines the Moon and all outer space as the "province of all mankind".[252] This treaty also restricts the use of the Moon to peaceful purposes, explicitly banning military installations and weapons of mass destruction.[254] The 1979 Moon Agreement was created to restrict the exploitation of the Moon's resources by any single nation, but as of January 2020, it has been signed and ratified by only 18 nations,[255] none of which engages in self-launched human space exploration. Although several individuals have made claims to the Moon in whole or in part, none of these are considered credible.[256][257][258]
In 2020, U.S. President Donald Trump signed an executive order called "Encouraging International Support for the Recovery and Use of Space Resources".
The order emphasizes that "the United States does not view outer space as a 'global commons'" and calls the Moon Agreement "a failed attempt at constraining free enterprise."[259][260]
In culture
Luna, the Moon, from a 1550 edition of Guido Bonatti's Liber astronomiae
See also: Moon in fiction and Tourism on the Moon
Further information: Lunar deity, Selene, Luna (goddess), Man in the Moon, and Crescent
Statue of Chandraprabha (meaning "as charming as the moon"), the eighth Tirthankara in Jainism, with the symbol of a crescent moon below it
Sun and Moon with faces (1493 woodcut)
The contrast between the brighter highlands and the darker maria creates the patterns seen by different cultures as the Man in the Moon, the rabbit and the buffalo, among others. In many prehistoric and ancient cultures, the Moon was personified as a deity or other supernatural phenomenon, and astrological views of the Moon continue to be propagated today.
In Proto-Indo-European religion, the Moon was personified as the male god *Meh₁not.[261] The ancient Sumerians believed that the Moon was the god Nanna,[262][263] who was the father of Inanna, the goddess of the planet Venus,[262][263] and Utu, the god of the sun.[262][263] Nanna was later known as Sîn,[263][262] and was particularly associated with magic and sorcery.[262] In Greco-Roman mythology, the Sun and the Moon are represented as male and female, respectively (Helios/Sol and Selene/Luna);[261] this is a development unique to the eastern Mediterranean,[261] and traces of an earlier male moon god in the Greek tradition are preserved in the figure of Menelaus.[261]
In Mesopotamian iconography, the crescent was the primary symbol of Nanna-Sîn.[263] In ancient Greek art, the Moon goddess Selene was represented wearing a crescent on her headgear in an arrangement reminiscent of horns.[264][265] The star and crescent arrangement also goes back to the Bronze Age, representing either the Sun and Moon, or the Moon and planet Venus, in combination. It came to represent the goddess Artemis or Hecate, and via the patronage of Hecate came to be used as a symbol of Byzantium. An iconographic tradition of representing the Sun and Moon with faces developed in the late medieval period.
The splitting of the moon (Arabic: انشقاق القمر‎) is a miracle attributed to Muhammad.[266] A song titled 'Moon Anthem' was released on the occasion of the landing of India's Chandrayaan-2 on the Moon.[267]
Calendar
Further information: Lunar calendar, Lunisolar calendar, Metonic cycle, Blue moon, and Movable feast
The Moon's regular phases make it a very convenient timepiece, and the periods of its waxing and waning form the basis of many of the oldest calendars. Tally sticks, notched bones dating as far back as 20–30,000 years ago, are believed by some to mark the phases of the Moon.[268][269][270] The ~30-day month is an approximation of the lunar cycle.
The English noun month and its cognates in other Germanic languages stem from Proto-Germanic *mǣnṓth-, which is connected to the above-mentioned Proto-Germanic *mǣnōn, indicating the usage of a lunar calendar among the Germanic peoples (Germanic calendar) prior to the adoption of a solar calendar.[271] The PIE root of moon, *méh₁nōt, derives from the PIE verbal root *meh₁-, "to measure", "indicat[ing] a functional conception of the Moon, i.e. marker of the month" (cf. the English words measure and menstrual),[272][273][274] and echoing the Moon's importance to many ancient cultures in measuring time (see Latin mensis and Ancient Greek μείς (meis) or μήν (mēn), meaning "month").[275][276][277][278] Most historical calendars are lunisolar. The 7th-century Islamic calendar is an exceptional example of a purely lunar calendar. Months are traditionally determined by the visual sighting of the hilal, or earliest crescent moon, over the horizon.[279]
Moonrise, 1884, painting by Stanisław Masłowski (National Museum, Kraków, Gallery of Sukiennice Museum)
Lunar effect
Main article: Lunar effect
The lunar effect is a purported, unproven correlation between specific stages of the roughly 29.5-day lunar cycle and behavior and physiological changes in living beings on Earth, including humans. The Moon has long been particularly associated with insanity and irrationality; the words lunacy and lunatic (popular shortening loony) are derived from the Latin name for the Moon, Luna. The philosophers Aristotle and Pliny the Elder argued that the full moon induced insanity in susceptible individuals, believing that the brain, which is mostly water, must be affected by the Moon and its power over the tides, but the Moon's gravity is too slight to affect any single person.[280] Even today, people who believe in a lunar effect claim that admissions to psychiatric hospitals, traffic accidents, homicides or suicides increase during a full moon, but dozens of studies invalidate these claims.[280][281][282][283][284]
^ Between 18.29° and 28.58° to Earth's equator.[1] ^ There are a number of near-Earth asteroids, including 3753 Cruithne, that are co-orbital with Earth: their orbits bring them close to Earth for periods of time but then alter in the long term (Morais et al., 2002). These are quasi-satellites – they are not moons as they do not orbit Earth. For more information, see Other moons of Earth. ^ The maximum value is given based on scaling of the brightness from the value of −12.74 given for an equator to Moon-centre distance of 378 000 km in the NASA factsheet reference to the minimum Earth–Moon distance given there, after the latter is corrected for Earth's equatorial radius of 6 378 km, giving 350 600 km. The minimum value (for a distant new moon) is based on a similar scaling using the maximum Earth–Moon distance of 407 000 km (given in the factsheet) and by calculating the brightness of the earthshine onto such a new moon. The brightness of the earthshine is [ Earth albedo × (Earth radius / radius of Moon's orbit)² ] relative to the direct solar illumination that occurs for a full moon. (Earth albedo = 0.367; Earth radius = (polar radius × equatorial radius)½ = 6 367 km.) ^ The range of angular size values given are based on simple scaling of the following values given in the fact sheet reference: at an Earth-equator to Moon-centre distance of 378 000 km, the angular size is 1896 arcseconds. The same fact sheet gives extreme Earth–Moon distances of 407 000 km and 357 000 km. For the maximum angular size, the minimum distance has to be corrected for Earth's equatorial radius of 6 378 km, giving 350 600 km. ^ Lucey et al. (2006) give 10⁷ particles cm⁻³ by day and 10⁵ particles cm⁻³ by night. Along with equatorial surface temperatures of 390 K by day and 100 K by night, the ideal gas law yields the pressures given in the infobox (rounded to the nearest order of magnitude): 10⁻⁷ Pa by day and 10⁻¹⁰ Pa by night. ^ This age is calculated from isotope dating of lunar zircons.
^ More accurately, the Moon's mean sidereal period (fixed star to fixed star) is 27.321661 days (27 d 07 h 43 min 11.5 s), and its mean tropical orbital period (from equinox to equinox) is 27.321582 days (27 d 07 h 43 min 04.7 s) (Explanatory Supplement to the Astronomical Ephemeris, 1961, at p.107). ^ More accurately, the Moon's mean synodic period (between mean solar conjunctions) is 29.530589 days (29 d 12 h 44 min 02.9 s) (Explanatory Supplement to the Astronomical Ephemeris, 1961, at p.107). ^ There is no strong correlation between the sizes of planets and the sizes of their satellites. Larger planets tend to have more satellites, both large and small, than smaller planets. ^ With 27% the diameter and 60% the density of Earth, the Moon has 1.23% of the mass of Earth. The moon Charon is larger relative to its primary Pluto, but Pluto is now considered to be a dwarf planet. ^ The Sun's apparent magnitude is −26.7, while the full moon's apparent magnitude is −12.7. ^ See graph in Sun#Life phases. At present, the diameter of the Sun is increasing at a rate of about five percent per billion years. This is very similar to the rate at which the apparent angular diameter of the Moon is decreasing as it recedes from Earth. ^ On average, the Moon covers an area of 0.21078 square degrees on the night sky. ^ a b c d e f g h i j k l Wieczorek, Mark A.; et al. (2006). "The constitution and structure of the lunar interior". Reviews in Mineralogy and Geochemistry. 60 (1): 221–364. Bibcode:2006RvMG...60..221W. doi:10.2138/rmg.2006.60.3. S2CID 130734866. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ a b Lang, Kenneth R. (2011). The Cambridge Guide to the Solar System' (2nd ed.). Cambridge University Press. ISBN 9781139494175. Archived from the original on 1 January 2016. ^ Morais, M.H.M.; Morbidelli, A. (2002). "The Population of Near-Earth Asteroids in Coorbital Motion with the Earth". Icarus. 160 (1): 1–9. Bibcode:2002Icar..160....1M. doi:10.1006/icar.2002.6937. hdl:10316/4391. S2CID 55214551. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ a b c d e f g h i j Williams, Dr. David R. (2 February 2006). "Moon Fact Sheet". NASA/National Space Science Data Center. Archived from the original on 23 March 2010. Retrieved 31 December 2008. ^ Smith, David E.; Zuber, Maria T.; Neumann, Gregory A.; Lemoine, Frank G. (1 January 1997). "Topography of the Moon from the Clementine lidar". Journal of Geophysical Research. 102 (E1): 1601. Bibcode:1997JGR...102.1591S. doi:10.1029/96JE02940. hdl:2060/19980018849. S2CID 17475023. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ Terry, Paul (2013). Top 10 of Everything. Octopus Publishing Group Ltd. p. 226. ISBN 978-0-600-62887-3. ^ Williams, James G.; Newhall, XX; Dickey, Jean O. (1996). "Lunar moments, tides, orientation, and coordinate frames". Planetary and Space Science. 44 (10): 1077–1080. Bibcode:1996P&SS...44.1077W. doi:10.1016/0032-0633(95)00154-9. ^ Makemson, Maud W. (1971). "Determination of selenographic positions". The Moon. 2 (3): 293–308. Bibcode:1971Moon....2..293M. doi:10.1007/BF00561882. S2CID 119603394. ^ a b Archinal, Brent A.; A'Hearn, Michael F.; Bowell, Edward G.; Conrad, Albert R.; Consolmagno, Guy J.; Courtin, Régis; et al. (2010). "Report of the IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009" (PDF). Celestial Mechanics and Dynamical Astronomy. 109 (2): 101–135. Bibcode:2011CeMDA.109..101A. doi:10.1007/s10569-010-9320-4. S2CID 189842666. 
Archived from the original (PDF) on 4 March 2016. Retrieved 24 September 2018. also available "via usgs.gov" (PDF). Archived (PDF) from the original on 27 April 2019. Retrieved 26 September 2018. ^ Matthews, Grant (2008). "Celestial body irradiance determination from an underfilled satellite radiometer: application to albedo and thermal emission measurements of the Moon using CERES". Applied Optics. 47 (27): 4981–4993. Bibcode:2008ApOpt..47.4981M. doi:10.1364/AO.47.004981. PMID 18806861. ^ A.R. Vasavada; D.A. Paige & S.E. Wood (1999). "Near-Surface Temperatures on Mercury and the Moon and the Stability of Polar Ice Deposits". Icarus. 141 (2): 179–193. Bibcode:1999Icar..141..179V. doi:10.1006/icar.1999.6175. S2CID 37706412. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ a b c Lucey, Paul; Korotev, Randy L.; et al. (2006). "Understanding the lunar surface and space-Moon interactions". Reviews in Mineralogy and Geochemistry. 60 (1): 83–219. Bibcode:2006RvMG...60...83L. doi:10.2138/rmg.2006.60.2. ^ Jonti Horner (18 July 2019). "How big is the Moon?". Archived from the original on 7 November 2020. Retrieved 15 November 2020. ^ "By the Numbers | Earth's Moon". NASA Solar System Exploration. NASA. Retrieved 15 December 2020. ^ Stern, David (30 March 2014). "Libration of the Moon". NASA. Archived from the original on 22 May 2020. Retrieved 11 February 2020. ^ "Naming Astronomical Objects: Spelling of Names". International Astronomical Union. Archived from the original on 16 December 2008. Retrieved 6 April 2020. ^ "Gazetteer of Planetary Nomenclature: Planetary Nomenclature FAQ". USGS Astrogeology Research Program. Archived from the original on 27 May 2010. Retrieved 6 April 2020. ^ Orel, Vladimir (2003). A Handbook of Germanic Etymology. Brill. Archived from the original on 17 June 2020. Retrieved 5 March 2020. ^ Fernando López-Menchero, Late Proto-Indo-European Etymological Lexicon Archived 22 May 2020 at the Wayback Machine ^ Barnhart, Robert K. (1995). The Barnhart Concise Dictionary of Etymology. Harper Collins. p. 487. ISBN 978-0-06-270084-1. ^ E.g. James A. Hall III (2016) Moons of the Solar System, Springer International ^ "Luna". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.) ^ "Cynthia". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.) ^ "selenian". Merriam-Webster Dictionary. ^ "selenian". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.) ^ "selenic". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.) ^ "selenic". Merriam-Webster Dictionary. ^ "Oxford English Dictionary: lunar, a. and n." Oxford English Dictionary: Second Edition 1989. Oxford University Press. Archived from the original on 19 August 2020. Retrieved 23 March 2010. ^ σελήνη. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project. ^ Pannen, Imke (2010). When the Bad Bleeds: Mantic Elements in English Renaissance Revenge Tragedy. V&R unipress GmbH. pp. 96–. ISBN 978-3-89971-640-5. Archived from the original on 4 September 2016. ^ "The Moon is older than scientists thought". Universe Today. Archived from the original on 3 August 2019. Retrieved 3 August 2019. 
^ Barboni, M.; Boehnke, P.; Keller, C.B.; Kohl, I.E.; Schoene, B.; Young, E.D.; McKeegan, K.D. (2017). "Early formation of the Moon 4.51 billion years ago". Science Advances. 3 (1): e1602365. Bibcode:2017SciA....3E2365B. doi:10.1126/sciadv.1602365. PMC 5226643. PMID 28097222. ^ Binder, A.B. (1974). "On the origin of the Moon by rotational fission". The Moon. 11 (2): 53–76. Bibcode:1974Moon...11...53B. doi:10.1007/BF01877794. S2CID 122622374. ^ a b c Stroud, Rick (2009). The Book of the Moon. Walken and Company. pp. 24–27. ISBN 978-0-8027-1734-4. Archived from the original on 17 June 2020. Retrieved 11 November 2019. ^ Mitler, H.E. (1975). "Formation of an iron-poor moon by partial capture, or: Yet another exotic theory of lunar origin". Icarus. 24 (2): 256–268. Bibcode:1975Icar...24..256M. doi:10.1016/0019-1035(75)90102-5. ^ Stevenson, D.J. (1987). "Origin of the moon–The collision hypothesis". Annual Review of Earth and Planetary Sciences. 15 (1): 271–315. Bibcode:1987AREPS..15..271S. doi:10.1146/annurev.ea.15.050187.001415. S2CID 53516498. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ Taylor, G. Jeffrey (31 December 1998). "Origin of the Earth and Moon". Planetary Science Research Discoveries. Hawai'i Institute of Geophysics and Planetology. Archived from the original on 10 June 2010. Retrieved 7 April 2010. ^ "Asteroids Bear Scars of Moon's Violent Formation". 16 April 2015. Archived from the original on 8 October 2016. ^ Dana Mackenzie (21 July 2003). The Big Splat, or How Our Moon Came to Be. John Wiley & Sons. pp. 166–168. ISBN 978-0-471-48073-0. Archived from the original on 17 June 2020. Retrieved 11 June 2019. ^ Canup, R.; Asphaug, E. (2001). "Origin of the Moon in a giant impact near the end of Earth's formation". Nature. 412 (6848): 708–712. Bibcode:2001Natur.412..708C. doi:10.1038/35089010. PMID 11507633. S2CID 4413525. ^ "Earth-Asteroid Collision Formed Moon Later Than Thought". National Geographic. 28 October 2010. Archived from the original on 18 April 2009. Retrieved 7 May 2012. ^ Kleine, Thorsten (2008). "2008 Pellas-Ryder Award for Mathieu Touboul" (PDF). Meteoritics and Planetary Science. 43 (S7): A11–A12. Bibcode:2008M&PS...43...11K. doi:10.1111/j.1945-5100.2008.tb00709.x. Archived from the original (PDF) on 27 July 2018. Retrieved 8 April 2020. ^ Touboul, M.; Kleine, T.; Bourdon, B.; Palme, H.; Wieler, R. (2007). "Late formation and prolonged differentiation of the Moon inferred from W isotopes in lunar metals". Nature. 450 (7173): 1206–1209. Bibcode:2007Natur.450.1206T. doi:10.1038/nature06428. PMID 18097403. S2CID 4416259. ^ "Flying Oceans of Magma Help Demystify the Moon's Creation". National Geographic. 8 April 2015. Archived from the original on 9 April 2015. ^ Pahlevan, Kaveh; Stevenson, David J. (2007). "Equilibration in the aftermath of the lunar-forming giant impact". Earth and Planetary Science Letters. 262 (3–4): 438–449. arXiv:1012.5323. Bibcode:2007E&PSL.262..438P. doi:10.1016/j.epsl.2007.07.055. S2CID 53064179. ^ Nield, Ted (2009). "Moonwalk (summary of meeting at Meteoritical Society's 72nd Annual Meeting, Nancy, France)". Geoscientist. Vol. 19. p. 8. Archived from the original on 27 September 2012. ^ a b Warren, P.H. (1985). "The magma ocean concept and lunar evolution". Annual Review of Earth and Planetary Sciences. 13 (1): 201–240. Bibcode:1985AREPS..13..201W. doi:10.1146/annurev.ea.13.050185.001221. ^ Tonks, W. Brian; Melosh, H. Jay (1993). "Magma ocean formation due to giant impacts". Journal of Geophysical Research. 
98 (E3): 5319–5333. Bibcode:1993JGR....98.5319T. doi:10.1029/92JE02726. ^ Daniel Clery (11 October 2013). "Impact Theory Gets Whacked". Science. 342 (6155): 183–185. Bibcode:2013Sci...342..183C. doi:10.1126/science.342.6155.183. PMID 24115419. ^ Wiechert, U.; et al. (October 2001). "Oxygen Isotopes and the Moon-Forming Giant Impact". Science. 294 (12): 345–348. Bibcode:2001Sci...294..345W. doi:10.1126/science.1063037. PMID 11598294. S2CID 29835446. Archived from the original on 20 April 2009. Retrieved 5 July 2009. ^ Pahlevan, Kaveh; Stevenson, David (October 2007). "Equilibration in the Aftermath of the Lunar-forming Giant Impact". Earth and Planetary Science Letters. 262 (3–4): 438–449. arXiv:1012.5323. Bibcode:2007E&PSL.262..438P. doi:10.1016/j.epsl.2007.07.055. S2CID 53064179. ^ "Titanium Paternity Test Says Earth is the Moon's Only Parent (University of Chicago)". Astrobio.net. 5 April 2012. Archived from the original on 8 August 2012. Retrieved 3 October 2013. ^ Garrick-Bethell; et al. (2014). "The tidal-rotational shape of the Moon and evidence for polar wander" (PDF). Nature. 512 (7513): 181–184. Bibcode:2014Natur.512..181G. doi:10.1038/nature13639. PMID 25079322. S2CID 4452886. Archived (PDF) from the original on 4 August 2020. Retrieved 12 April 2020. ^ Taylor, Stuart R. (1975). Lunar Science: a Post-Apollo View. Oxford: Pergamon Press. p. 64. ISBN 978-0-08-018274-2. ^ Brown, D.; Anderson, J. (6 January 2011). "NASA Research Team Reveals Moon Has Earth-Like Core". NASA. NASA. Archived from the original on 11 January 2012. ^ Weber, R.C.; Lin, P.-Y.; Garnero, E.J.; Williams, Q.; Lognonne, P. (21 January 2011). "Seismic Detection of the Lunar Core" (PDF). Science. 331 (6015): 309–312. Bibcode:2011Sci...331..309W. doi:10.1126/science.1199375. PMID 21212323. S2CID 206530647. Archived from the original (PDF) on 15 October 2015. Retrieved 10 April 2017. ^ Nemchin, A.; Timms, N.; Pidgeon, R.; Geisler, T.; Reddy, S.; Meyer, C. (2009). "Timing of crystallization of the lunar magma ocean constrained by the oldest zircon". Nature Geoscience. 2 (2): 133–136. Bibcode:2009NatGe...2..133N. doi:10.1038/ngeo417. hdl:20.500.11937/44375. ^ a b Shearer, Charles K.; et al. (2006). "Thermal and magmatic evolution of the Moon". Reviews in Mineralogy and Geochemistry. 60 (1): 365–518. Bibcode:2006RvMG...60..365S. doi:10.2138/rmg.2006.60.4. S2CID 129184748. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ Schubert, J. (2004). "Interior composition, structure, and dynamics of the Galilean satellites.". In F. Bagenal; et al. (eds.). Jupiter: The Planet, Satellites, and Magnetosphere. Cambridge University Press. pp. 281–306. ISBN 978-0-521-81808-7. ^ Williams, J.G.; Turyshev, S.G.; Boggs, D.H.; Ratcliff, J.T. (2006). "Lunar laser ranging science: Gravitational physics and lunar interior and geodesy". Advances in Space Research. 37 (1): 67–71. arXiv:gr-qc/0412049. Bibcode:2006AdSpR..37...67W. doi:10.1016/j.asr.2005.05.013. S2CID 14801321. ^ Spudis, Paul D.; Cook, A.; Robinson, M.; Bussey, B.; Fessler, B. (January 1998). "Topography of the South Polar Region from Clementine Stereo Imaging". Workshop on New Views of the Moon: Integrated Remotely Sensed, Geophysical, and Sample Datasets: 69. Bibcode:1998nvmi.conf...69S. ^ a b c Spudis, Paul D.; Reisse, Robert A.; Gillis, Jeffrey J. (1994). "Ancient Multiring Basins on the Moon Revealed by Clementine Laser Altimetry". Science. 266 (5192): 1848–1851. Bibcode:1994Sci...266.1848S. doi:10.1126/science.266.5192.1848. PMID 17737079. 
S2CID 41861312. ^ Pieters, C.M.; Tompkins, S.; Head, J.W.; Hess, P.C. (1997). "Mineralogy of the Mafic Anomaly in the South Pole‐Aitken Basin: Implications for excavation of the lunar mantle". Geophysical Research Letters. 24 (15): 1903–1906. Bibcode:1997GeoRL..24.1903P. doi:10.1029/97GL01718. hdl:2060/19980018038. ^ Taylor, G.J. (17 July 1998). "The Biggest Hole in the Solar System". Planetary Science Research Discoveries: 20. Bibcode:1998psrd.reptE..20T. Archived from the original on 20 August 2007. Retrieved 12 April 2007. ^ Schultz, P.H. (March 1997). "Forming the south-pole Aitken basin – The extreme games". Conference Paper, 28th Annual Lunar and Planetary Science Conference. 28: 1259. Bibcode:1997LPI....28.1259S. ^ "NASA's LRO Reveals 'Incredible Shrinking Moon'". NASA. 19 August 2010. Archived from the original on 21 August 2010. ^ Watters, Thomas R.; Weber, Renee C.; Collins, Geoffrey C.; Howley, Ian J.; Schmerr, Nicholas C.; Johnson, Catherine L. (June 2019). "Shallow seismic activity and young thrust faults on the Moon". Nature Geoscience (published 13 May 2019). 12 (6): 411–417. Bibcode:2019NatGe..12..411W. doi:10.1038/s41561-019-0362-2. ISSN 1752-0894. S2CID 182137223. ^ Wlasuk, Peter (2000). Observing the Moon. Springer. p. 19. ISBN 978-1-85233-193-1. ^ Norman, M. (21 April 2004). "The Oldest Moon Rocks". Planetary Science Research Discoveries. Hawai'i Institute of Geophysics and Planetology. Archived from the original on 18 April 2007. Retrieved 12 April 2007. ^ Head, L.W.J.W. (2003). "Lunar Gruithuisen and Mairan domes: Rheology and mode of emplacement". Journal of Geophysical Research. 108 (E2): 5012. Bibcode:2003JGRE..108.5012W. CiteSeerX 10.1.1.654.9619. doi:10.1029/2002JE001909. Archived from the original on 12 March 2007. Retrieved 12 April 2007. ^ a b c d e f g h Spudis, P.D. (2004). "Moon". World Book Online Reference Center, NASA. Archived from the original on 3 July 2013. Retrieved 12 April 2007. ^ Gillis, J.J.; Spudis, P.D. (1996). "The Composition and Geologic Setting of Lunar Far Side Maria". Lunar and Planetary Science. 27: 413. Bibcode:1996LPI....27..413G. ^ Lawrence, D.J., et al. (11 August 1998). "Global Elemental Maps of the Moon: The Lunar Prospector Gamma-Ray Spectrometer". Science. 281 (5382): 1484–1489. Bibcode:1998Sci...281.1484L. doi:10.1126/science.281.5382.1484. PMID 9727970. Archived from the original on 16 May 2009. Retrieved 29 August 2009. ^ Taylor, G.J. (31 August 2000). "A New Moon for the Twenty-First Century". Planetary Science Research Discoveries: 41. Bibcode:2000psrd.reptE..41T. Archived from the original on 1 March 2012. Retrieved 12 April 2007. ^ a b Papike, J.; Ryder, G.; Shearer, C. (1998). "Lunar Samples". Reviews in Mineralogy and Geochemistry. 36: 5.1–5.234. ^ a b Hiesinger, H.; Head, J.W.; Wolf, U.; Jaumanm, R.; Neukum, G. (2003). "Ages and stratigraphy of mare basalts in Oceanus Procellarum, Mare Numbium, Mare Cognitum, and Mare Insularum". Journal of Geophysical Research. 108 (E7): 1029. Bibcode:2003JGRE..108.5065H. doi:10.1029/2002JE001985. S2CID 9570915. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ a b Phil Berardelli (9 November 2006). "Long Live the Moon!". Science. Archived from the original on 18 October 2014. Retrieved 14 October 2014. ^ Jason Major (14 October 2014). "Volcanoes Erupted 'Recently' on the Moon". Discovery News. Archived from the original on 16 October 2014. ^ "NASA Mission Finds Widespread Evidence of Young Lunar Volcanism". NASA. 12 October 2014. 
Archived from the original on 3 January 2015. ^ Eric Hand (12 October 2014). "Recent volcanic eruptions on the moon". Science. Archived from the original on 14 October 2014. ^ Braden, S.E.; Stopar, J.D.; Robinson, M.S.; Lawrence, S.J.; van der Bogert, C.H.; Hiesinger, H. (2014). "Evidence for basaltic volcanism on the Moon within the past 100 million years". Nature Geoscience. 7 (11): 787–791. Bibcode:2014NatGe...7..787B. doi:10.1038/ngeo2252. ^ Srivastava, N.; Gupta, R.P. (2013). "Young viscous flows in the Lowell crater of Orientale basin, Moon: Impact melts or volcanic eruptions?". Planetary and Space Science. 87: 37–45. Bibcode:2013P&SS...87...37S. doi:10.1016/j.pss.2013.09.001. ^ Gupta, R.P.; Srivastava, N.; Tiwari, R.K. (2014). "Evidences of relatively new volcanic flows on the Moon". Current Science. 107 (3): 454–460. ^ Whitten, J.; et al. (2011). "Lunar mare deposits associated with the Orientale impact basin: New insights into mineralogy, history, mode of emplacement, and relation to Orientale Basin evolution from Moon Mineralogy Mapper (M3) data from Chandrayaan-1". Journal of Geophysical Research. 116: E00G09. Bibcode:2011JGRE..116.0G09W. doi:10.1029/2010JE003736. S2CID 7234547. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ Cho, Y.; et al. (2012). "Young mare volcanism in the Orientale region contemporary with the Procellarum KREEP Terrane (PKT) volcanism peak period 2 b.y. ago". Geophysical Research Letters. 39 (11): L11203. Bibcode:2012GeoRL..3911203C. doi:10.1029/2012GL051838. ^ Munsell, K. (4 December 2006). "Majestic Mountains". Solar System Exploration. NASA. Archived from the original on 17 September 2008. Retrieved 12 April 2007. ^ Richard Lovett (2011). "Early Earth may have had two moons : Nature News". Nature. doi:10.1038/news.2011.456. Archived from the original on 3 November 2012. Retrieved 1 November 2012. ^ "Was our two-faced moon in a small collision?". Theconversation.edu.au. Archived from the original on 30 January 2013. Retrieved 1 November 2012. ^ Melosh, H. J. (1989). Impact cratering: A geologic process. Oxford University Press. ISBN 978-0-19-504284-9. ^ "Moon Facts". SMART-1. European Space Agency. 2010. Archived from the original on 17 March 2012. Retrieved 12 May 2010. ^ a b Wilhelms, Don (1987). "Relative Ages" (PDF). Geologic History of the Moon. U.S. Geological Survey. Archived from the original (PDF) on 11 June 2010. Retrieved 4 April 2010. ^ Hartmann, William K.; Quantin, Cathy; Mangold, Nicolas (2007). "Possible long-term decline in impact rates: 2. Lunar impact-melt data regarding impact history". Icarus. 186 (1): 11–23. Bibcode:2007Icar..186...11H. doi:10.1016/j.icarus.2006.09.009. ^ "The Smell of Moondust". NASA. 30 January 2006. Archived from the original on 8 March 2010. Retrieved 15 March 2010. ^ Heiken, G. (1991). Vaniman, D.; French, B. (eds.). Lunar Sourcebook, a user's guide to the Moon. New York: Cambridge University Press. p. 736. ISBN 978-0-521-33444-0. Archived from the original on 17 June 2020. Retrieved 17 December 2019. ^ Rasmussen, K.L.; Warren, P.H. (1985). "Megaregolith thickness, heat flow, and the bulk composition of the Moon". Nature. 313 (5998): 121–124. Bibcode:1985Natur.313..121R. doi:10.1038/313121a0. S2CID 4245137. ^ Boyle, Rebecca. "The moon has hundreds more craters than we thought". Archived from the original on 13 October 2016. ^ Speyerer, Emerson J.; Povilaitis, Reinhold Z.; Robinson, Mark S.; Thomas, Peter C.; Wagner, Robert V. (13 October 2016). 
"Quantifying crater production and regolith overturn on the Moon with temporal imaging". Nature. 538 (7624): 215–218. Bibcode:2016Natur.538..215S. doi:10.1038/nature19829. PMID 27734864. S2CID 4443574. ^ Margot, J.L.; Campbell, D.B.; Jurgens, R.F.; Slade, M.A. (4 June 1999). "Topography of the Lunar Poles from Radar Interferometry: A Survey of Cold Trap Locations" (PDF). Science. 284 (5420): 1658–1660. Bibcode:1999Sci...284.1658M. CiteSeerX 10.1.1.485.312. doi:10.1126/science.284.5420.1658. PMID 10356393. Archived (PDF) from the original on 11 August 2017. Retrieved 25 October 2017. ^ Ward, William R. (1 August 1975). "Past Orientation of the Lunar Spin Axis". Science. 189 (4200): 377–379. Bibcode:1975Sci...189..377W. doi:10.1126/science.189.4200.377. PMID 17840827. S2CID 21185695. ^ a b Martel, L.M.V. (4 June 2003). "The Moon's Dark, Icy Poles". Planetary Science Research Discoveries: 73. Bibcode:2003psrd.reptE..73M. Archived from the original on 1 March 2012. Retrieved 12 April 2007. ^ Seedhouse, Erik (2009). Lunar Outpost: The Challenges of Establishing a Human Settlement on the Moon. Springer-Praxis Books in Space Exploration. Germany: Springer Praxis. p. 136. ISBN 978-0-387-09746-6. Archived from the original on 26 November 2020. Retrieved 22 August 2020. ^ Coulter, Dauna (18 March 2010). "The Multiplying Mystery of Moonwater". NASA. Archived from the original on 13 December 2012. Retrieved 28 March 2010. ^ Spudis, P. (6 November 2006). "Ice on the Moon". The Space Review. Archived from the original on 22 February 2007. Retrieved 12 April 2007. ^ Feldman, W.C.; S. Maurice; A.B. Binder; B.L. Barraclough; R.C. Elphic; D.J. Lawrence (1998). "Fluxes of Fast and Epithermal Neutrons from Lunar Prospector: Evidence for Water Ice at the Lunar Poles" (PDF). Science. 281 (5382): 1496–1500. Bibcode:1998Sci...281.1496F. doi:10.1126/science.281.5382.1496. PMID 9727973. S2CID 9005608. Archived (PDF) from the original on 23 February 2019. Retrieved 12 April 2020. ^ Saal, Alberto E.; Hauri, Erik H.; Cascio, Mauro L.; van Orman, James A.; Rutherford, Malcolm C.; Cooper, Reid F. (2008). "Volatile content of lunar volcanic glasses and the presence of water in the Moon's interior". Nature. 454 (7201): 192–195. Bibcode:2008Natur.454..192S. doi:10.1038/nature07047. PMID 18615079. S2CID 4394004. ^ Pieters, C.M.; Goswami, J.N.; Clark, R.N.; Annadurai, M.; Boardman, J.; Buratti, B.; Combe, J.-P.; Dyar, M.D.; Green, R.; Head, J.W.; Hibbitts, C.; Hicks, M.; Isaacson, P.; Klima, R.; Kramer, G.; Kumar, S.; Livo, E.; Lundeen, S.; Malaret, E.; McCord, T.; Mustard, J.; Nettles, J.; Petro, N.; Runyon, C.; Staid, M.; Sunshine, J.; Taylor, L.A.; Tompkins, S.; Varanasi, P. (2009). "Character and Spatial Distribution of OH/H2O on the Surface of the Moon Seen by M3 on Chandrayaan-1". Science. 326 (5952): 568–572. Bibcode:2009Sci...326..568P. doi:10.1126/science.1178658. PMID 19779151. S2CID 447133. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ Li, Shuai; Lucey, Paul G.; Milliken, Ralph E.; Hayne, Paul O.; Fisher, Elizabeth; Williams, Jean-Pierre; Hurley, Dana M.; Elphic, Richard C. (August 2018). "Direct evidence of surface exposed water ice in the lunar polar regions". Proceedings of the National Academy of Sciences. 115 (36): 8907–8912. Bibcode:2018PNAS..115.8907L. doi:10.1073/pnas.1802345115. PMC 6130389. PMID 30126996. ^ Lakdawalla, Emily (13 November 2009). "LCROSS Lunar Impactor Mission: "Yes, We Found Water!"". The Planetary Society. Archived from the original on 22 January 2010. 
Retrieved 13 April 2010. ^ Colaprete, A.; Ennico, K.; Wooden, D.; Shirley, M.; Heldmann, J.; Marshall, W.; Sollitt, L.; Asphaug, E.; Korycansky, D.; Schultz, P.; Hermalyn, B.; Galal, K.; Bart, G.D.; Goldstein, D.; Summy, D. (1–5 March 2010). "Water and More: An Overview of LCROSS Impact Results". 41st Lunar and Planetary Science Conference. 41 (1533): 2335. Bibcode:2010LPI....41.2335C. ^ Colaprete, Anthony; Schultz, Peter; Heldmann, Jennifer; Wooden, Diane; Shirley, Mark; Ennico, Kimberly; Hermalyn, Brendan; Marshall, William; Ricco, Antonio; Elphic, Richard C.; Goldstein, David; Summy, Dustin; Bart, Gwendolyn D.; Asphaug, Erik; Korycansky, Don; Landis, David; Sollitt, Luke (22 October 2010). "Detection of Water in the LCROSS Ejecta Plume". Science. 330 (6003): 463–468. Bibcode:2010Sci...330..463C. doi:10.1126/science.1186986. PMID 20966242. S2CID 206525375. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ Hauri, Erik; Thomas Weinreich; Albert E. Saal; Malcolm C. Rutherford; James A. Van Orman (26 May 2011). "High Pre-Eruptive Water Contents Preserved in Lunar Melt Inclusions". Science Express. 10 (1126): 213–215. Bibcode:2011Sci...333..213H. doi:10.1126/science.1204626. PMID 21617039. S2CID 44437587. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ a b Rincon, Paul (21 August 2018). "Water ice 'detected on Moon's surface'". BBC News. Archived from the original on 21 August 2018. Retrieved 21 August 2018. ^ David, Leonard. "Beyond the Shadow of a Doubt, Water Ice Exists on the Moon". Scientific American. Archived from the original on 21 August 2018. Retrieved 21 August 2018. ^ a b "Water Ice Confirmed on the Surface of the Moon for the 1st Time!". Space.com. Archived from the original on 21 August 2018. Retrieved 21 August 2018. ^ Honniball, C.I.; et al. (26 October 2020). "Molecular water detected on the sunlit Moon by SOFIA". Nature Astronomy. Bibcode:2020NatAs.tmp..222H. doi:10.1038/s41550-020-01222-x. Archived from the original on 27 October 2020. Retrieved 26 October 2020. ^ Hayne, P.O.; et al. (26 October 2020). "Micro cold traps on the Moon". Nature Astronomy. arXiv:2005.05369. Bibcode:2020NatAs.tmp..221H. doi:10.1038/s41550-020-1198-9. S2CID 218595642. Archived from the original on 27 October 2020. Retrieved 26 October 2020. ^ Guarino, Ben; Achenbach, Joel (26 October 2020). "Pair of studies confirm there is water on the moon - New research confirms what scientists had theorized for years — the moon is wet". The Washington Post. Archived from the original on 26 October 2020. Retrieved 26 October 2020. ^ Chang, Kenneth (26 October 2020). "There's Water and Ice on the Moon, and in More Places Than NASA Once Thought - Future astronauts seeking water on the moon may not need to go into the most treacherous craters in its polar regions to find it". The New York Times. Archived from the original on 26 October 2020. Retrieved 26 October 2020. ^ Muller, P.; Sjogren, W. (1968). "Mascons: lunar mass concentrations". Science. 161 (3842): 680–684. Bibcode:1968Sci...161..680M. doi:10.1126/science.161.3842.680. PMID 17801458. S2CID 40110502. ^ Richard A. Kerr (12 April 2013). "The Mystery of Our Moon's Gravitational Bumps Solved?". Science. 340 (6129): 138–139. doi:10.1126/science.340.6129.138-a. PMID 23580504. ^ Konopliv, A.; Asmar, S.; Carranza, E.; Sjogren, W.; Yuan, D. (2001). "Recent gravity models as a result of the Lunar Prospector mission" (PDF). Icarus. 50 (1): 1–18. Bibcode:2001Icar..150....1K. CiteSeerX 10.1.1.18.1930. 
doi:10.1006/icar.2000.6573. Archived from the original (PDF) on 13 November 2004. ^ a b c Mighani, S.; Wang, H.; Shuster, D.L.; Borlina, C.S.; Nichols, C.I.O.; Weiss, B.P. (2020). "The end of the lunar dynamo". Science Advances. 6 (1): eaax0883. Bibcode:2020SciA....6..883M. doi:10.1126/sciadv.aax0883. PMC 6938704. PMID 31911941. ^ Garrick-Bethell, Ian; Weiss, iBenjamin P.; Shuster, David L.; Buz, Jennifer (2009). "Early Lunar Magnetism". Science. 323 (5912): 356–359. Bibcode:2009Sci...323..356G. doi:10.1126/science.1166804. PMID 19150839. S2CID 23227936. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ "Magnetometer / Electron Reflectometer Results". Lunar Prospector (NASA). 2001. Archived from the original on 27 May 2010. Retrieved 17 March 2010. ^ Hood, L.L.; Huang, Z. (1991). "Formation of magnetic anomalies antipodal to lunar impact basins: Two-dimensional model calculations". Journal of Geophysical Research. 96 (B6): 9837–9846. Bibcode:1991JGR....96.9837H. doi:10.1029/91JB00308. ^ "Moon Storms". NASA. 27 September 2013. Archived from the original on 12 September 2013. Retrieved 3 October 2013. ^ Culler, Jessica (16 June 2015). "LADEE - Lunar Atmosphere Dust and Environment Explorer". Archived from the original on 8 April 2015. ^ Globus, Ruth (1977). "Chapter 5, Appendix J: Impact Upon Lunar Atmosphere". In Richard D. Johnson & Charles Holbrow (ed.). Space Settlements: A Design Study. NASA. Archived from the original on 31 May 2010. Retrieved 17 March 2010. ^ Crotts, Arlin P.S. (2008). "Lunar Outgassing, Transient Phenomena and The Return to The Moon, I: Existing Data" (PDF). The Astrophysical Journal. 687 (1): 692–705. arXiv:0706.3949. Bibcode:2008ApJ...687..692C. doi:10.1086/591634. S2CID 16821394. Archived from the original (PDF) on 20 February 2009. Retrieved 29 September 2009. ^ Steigerwald, William (17 August 2015). "NASA's LADEE Spacecraft Finds Neon in Lunar Atmosphere". NASA. Archived from the original on 19 August 2015. Retrieved 18 August 2015. ^ a b c Stern, S.A. (1999). "The Lunar atmosphere: History, status, current problems, and context". Reviews of Geophysics. 37 (4): 453–491. Bibcode:1999RvGeo..37..453S. CiteSeerX 10.1.1.21.9994. doi:10.1029/1999RG900005. ^ Lawson, S.; Feldman, W.; Lawrence, D.; Moore, K.; Elphic, R.; Belian, R. (2005). "Recent outgassing from the lunar surface: the Lunar Prospector alpha particle spectrometer". Journal of Geophysical Research. 110 (E9): 1029. Bibcode:2005JGRE..11009009L. doi:10.1029/2005JE002433. ^ R. Sridharan; S.M. Ahmed; Tirtha Pratim Dasa; P. Sreelathaa; P. Pradeepkumara; Neha Naika; Gogulapati Supriya (2010). "'Direct' evidence for water (H2O) in the sunlit lunar ambience from CHACE on MIP of Chandrayaan I". Planetary and Space Science. 58 (6): 947–950. Bibcode:2010P&SS...58..947S. doi:10.1016/j.pss.2010.02.013. ^ Drake, Nadia; 17, National Geographic PUBLISHED June (17 June 2015). "Lopsided Cloud of Dust Discovered Around the Moon". National Geographic News. Archived from the original on 19 June 2015. Retrieved 20 June 2015. CS1 maint: numeric names: authors list (link) ^ Horányi, M.; Szalay, J.R.; Kempf, S.; Schmidt, J.; Grün, E.; Srama, R.; Sternovsky, Z. (18 June 2015). "A permanent, asymmetric dust cloud around the Moon". Nature. 522 (7556): 324–326. Bibcode:2015Natur.522..324H. doi:10.1038/nature14479. PMID 26085272. S2CID 4453018. ^ "NASA: The Moon Once Had an Atmosphere That Faded Away". Time. Archived from the original on 14 October 2017. Retrieved 14 October 2017. 
^ Hamilton, Calvin J.; Hamilton, Rosanna L., The Moon, Views of the Solar System Archived 4 February 2016 at the Wayback Machine, 1995–2011. ^ a b Amos, Jonathan (16 December 2009). "'Coldest place' found on the Moon". BBC News. Archived from the original on 11 August 2017. Retrieved 20 March 2010. ^ "Diviner News". UCLA. 17 September 2009. Archived from the original on 7 March 2010. Retrieved 17 March 2010. ^ Rocheleau, Jake (21 May 2012). "Temperature on the Moon – Surface Temperature of the Moon – PlanetFacts.org". Archived from the original on 27 May 2015. ^ Matt Williams (10 July 2017). "How Long is a Day on the Moon?". Retrieved 5 December 2020. ^ Haigh, I. D.; Eliot, M.; Pattiaratchi, C. (2011). "Global influences of the 18.61 year nodal cycle and 8.85 year cycle of lunar perigee on high tidal levels" (PDF). J. Geophys. Res. 116 (C6): C06025. Bibcode:2011JGRC..116.6025H. doi:10.1029/2010JC006645. Archived (PDF) from the original on 12 December 2019. Retrieved 24 September 2019. CS1 maint: uses authors parameter (link) ^ V V Belet︠s︡kiĭ (2001). Essays on the Motion of Celestial Bodies. Birkhäuser. p. 183. ISBN 978-3-7643-5866-2. Archived from the original on 23 March 2018. Retrieved 22 August 2020. ^ "Space Topics: Pluto and Charon". The Planetary Society. Archived from the original on 18 February 2012. Retrieved 6 April 2010. ^ Phil Plait. "Dark Side of the Moon". Bad Astronomy: Misconceptions. Archived from the original on 12 April 2010. Retrieved 15 February 2010. ^ Alexander, M.E. (1973). "The Weak Friction Approximation and Tidal Evolution in Close Binary Systems". Astrophysics and Space Science. 23 (2): 459–508. Bibcode:1973Ap&SS..23..459A. doi:10.1007/BF00645172. S2CID 122918899. ^ "Moon used to spin 'on different axis'". BBC News. BBC. 23 March 2016. Archived from the original on 23 March 2016. Retrieved 23 March 2016. ^ Luciuk, Mike. "How Bright is the Moon?". Amateur Astronomers. Archived from the original on 12 March 2010. Retrieved 16 March 2010. ^ Hershenson, Maurice (1989). The Moon illusion. Routledge. p. 5. ISBN 978-0-8058-0121-7. ^ Spekkens, K. (18 October 2002). "Is the Moon seen as a crescent (and not a "boat") all over the world?". Curious About Astronomy. Archived from the original on 16 October 2015. Retrieved 28 September 2015. ^ "Moonlight helps plankton escape predators during Arctic winters". New Scientist. 16 January 2016. Archived from the original on 30 January 2016. ^ ""Super Moon" exceptional. Brightest moon in the sky of Normandy, Monday, November 14 - The Siver Times". 12 November 2016. Archived from the original on 14 November 2016. ^ "Moongazers Delight – Biggest Supermoon in Decades Looms Large Sunday Night". 10 November 2016. Archived from the original on 14 November 2016. Retrieved 5 March 2017. ^ "Supermoon November 2016". Space.com. 13 November 2016. Archived from the original on 14 November 2016. Retrieved 14 November 2016. ^ Tony Phillips (16 March 2011). "Super Full Moon". NASA. Archived from the original on 7 May 2012. Retrieved 19 March 2011. ^ Richard K. De Atley (18 March 2011). "Full moon tonight is as close as it gets". The Press-Enterprise. Archived from the original on 22 March 2011. Retrieved 19 March 2011. ^ "'Super moon' to reach closest point for almost 20 years". The Guardian. 19 March 2011. Archived from the original on 25 December 2013. Retrieved 19 March 2011. ^ Georgia State University, Dept. of Physics (Astronomy). "Perceived Brightness". Brightnes and Night/Day Sensitivity. Georgia State University. 
Archived from the original on 21 February 2014. Retrieved 25 January 2014. ^ Lutron. "Measured light vs. perceived light" (PDF). From IES Lighting Handbook 2000, 27-4. Lutron. Archived (PDF) from the original on 5 February 2013. Retrieved 25 January 2014. ^ Walker, John (May 1997). "Inconstant Moon". Earth and Moon Viewer. Fourth paragraph of "How Bright the Moonlight": Fourmilab. Archived from the original on 14 December 2013. Retrieved 23 January 2014. 14% [...] due to the logarithmic response of the human eye. ^ Taylor, G.J. (8 November 2006). "Recent Gas Escape from the Moon". Planetary Science Research Discoveries: 110. Bibcode:2006psrd.reptE.110T. Archived from the original on 4 March 2007. Retrieved 4 April 2007. ^ Schultz, P.H.; Staid, M.I.; Pieters, C.M. (2006). "Lunar activity from recent gas release". Nature. 444 (7116): 184–186. Bibcode:2006Natur.444..184S. doi:10.1038/nature05303. PMID 17093445. S2CID 7679109. ^ "22 Degree Halo: a ring of light 22 degrees from the sun or moon". Department of Atmospheric Sciences, University of Illinois at Urbana–Champaign. Archived from the original on 5 April 2010. Retrieved 13 April 2010. ^ a b c d e Lambeck, K. (1977). "Tidal Dissipation in the Oceans: Astronomical, Geophysical and Oceanographic Consequences". Philosophical Transactions of the Royal Society A. 287 (1347): 545–594. Bibcode:1977RSPTA.287..545L. doi:10.1098/rsta.1977.0159. S2CID 122853694. ^ Le Provost, C.; Bennett, A.F.; Cartwright, D.E. (1995). "Ocean Tides for and from TOPEX/POSEIDON". Science. 267 (5198): 639–642. Bibcode:1995Sci...267..639L. doi:10.1126/science.267.5198.639. PMID 17745840. S2CID 13584636. ^ a b c d Touma, Jihad; Wisdom, Jack (1994). "Evolution of the Earth-Moon system". The Astronomical Journal. 108 (5): 1943–1961. Bibcode:1994AJ....108.1943T. doi:10.1086/117209. ^ Chapront, J.; Chapront-Touzé, M.; Francou, G. (2002). "A new determination of lunar orbital parameters, precession constant and tidal acceleration from LLR measurements" (PDF). Astronomy and Astrophysics. 387 (2): 700–709. Bibcode:2002A&A...387..700C. doi:10.1051/0004-6361:20020420. S2CID 55131241. Archived (PDF) from the original on 12 April 2020. Retrieved 12 April 2020. ^ "Why the Moon is getting further away from Earth". BBC News. 1 February 2011. Archived from the original on 25 September 2015. Retrieved 18 September 2015. ^ Ray, R. (15 May 2001). "Ocean Tides and the Earth's Rotation". IERS Special Bureau for Tides. Archived from the original on 27 March 2010. Retrieved 17 March 2010. ^ Murray, C.D.; Dermott, Stanley F. (1999). Solar System Dynamics. Cambridge University Press. p. 184. ISBN 978-0-521-57295-8. ^ Dickinson, Terence (1993). From the Big Bang to Planet X. Camden East, Ontario: Camden House. pp. 79–81. ISBN 978-0-921820-71-0. ^ Latham, Gary; Ewing, Maurice; Dorman, James; Lammlein, David; Press, Frank; Toksőz, Naft; Sutton, George; Duennebier, Fred; Nakamura, Yosio (1972). "Moonquakes and lunar tectonism". Earth, Moon, and Planets. 4 (3–4): 373–382. Bibcode:1972Moon....4..373L. doi:10.1007/BF00562004. S2CID 120692155. ^ Iain Todd (31 March 2018). "Is the Moon maintaining Earth's magnetism?". BBC Sky at Night Magazine. Archived from the original on 22 September 2020. Retrieved 16 November 2020. ^ Phillips, Tony (12 March 2007). "Stereo Eclipse". Science@NASA. Archived from the original on 10 June 2008. Retrieved 17 March 2010. ^ Espenak, F. (2000). "Solar Eclipses for Beginners". MrEclip. Archived from the original on 24 May 2015. Retrieved 17 March 2010. 
^ Walker, John (10 July 2004). "Moon near Perigee, Earth near Aphelion". Fourmilab. Archived from the original on 8 December 2013. Retrieved 25 December 2013. ^ Thieman, J.; Keating, S. (2 May 2006). "Eclipse 99, Frequently Asked Questions". NASA. Archived from the original on 11 February 2007. Retrieved 12 April 2007. ^ Espenak, F. "Saros Cycle". NASA. Archived from the original on 24 May 2012. Retrieved 17 March 2010. ^ Guthrie, D.V. (1947). "The Square Degree as a Unit of Celestial Area". Popular Astronomy. Vol. 55. pp. 200–203. Bibcode:1947PA.....55..200G. ^ "Total Lunar Occultations". Royal Astronomical Society of New Zealand. Archived from the original on 23 February 2010. Retrieved 17 March 2010. ^ "Lunar maps". Archived from the original on 1 June 2019. Retrieved 18 September 2019. ^ "Carved and Drawn Prehistoric Maps of the Cosmos". Space Today. 2006. Archived from the original on 5 March 2012. Retrieved 12 April 2007. ^ Aaboe, A.; Britton, J.P.; Henderson, J.A.; Neugebauer, Otto; Sachs, A.J. (1991). "Saros Cycle Dates and Related Babylonian Astronomical Texts". Transactions of the American Philosophical Society. 81 (6): 1–75. doi:10.2307/1006543. JSTOR 1006543. One comprises what we have called "Saros Cycle Texts", which give the months of eclipse possibilities arranged in consistent cycles of 223 months (or 18 years). ^ Sarma, K.V. (2008). "Astronomy in India". In Helaine Selin (ed.). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures. Encyclopaedia of the History of Science (2 ed.). Springer. pp. 317–321. Bibcode:2008ehst.book.....S. ISBN 978-1-4020-4559-2. ^ a b c d Needham, Joseph (1986). Science and Civilization in China, Volume III: Mathematics and the Sciences of the Heavens and Earth. Taipei: Caves Books. ISBN 978-0-521-05801-8. Archived from the original on 22 June 2019. Retrieved 22 August 2020. ^ O'Connor, J.J.; Robertson, E.F. (February 1999). "Anaxagoras of Clazomenae". University of St Andrews. Archived from the original on 12 January 2012. Retrieved 12 April 2007. ^ Robertson, E.F. (November 2000). "Aryabhata the Elder". Scotland: School of Mathematics and Statistics, University of St Andrews. Archived from the original on 11 July 2015. Retrieved 15 April 2010. ^ A.I. Sabra (2008). "Ibn Al-Haytham, Abū ʿAlī Al-Ḥasan Ibn Al-Ḥasan". Dictionary of Scientific Biography. Detroit: Charles Scribner's Sons. pp. 189–210, at 195. ^ Lewis, C.S. (1964). The Discarded Image. Cambridge: Cambridge University Press. p. 108. ISBN 978-0-521-47735-2. Archived from the original on 17 June 2020. Retrieved 11 November 2019. ^ van der Waerden, Bartel Leendert (1987). "The Heliocentric System in Greek, Persian and Hindu Astronomy". Annals of the New York Academy of Sciences. 500 (1): 1–569. Bibcode:1987NYASA.500....1A. doi:10.1111/j.1749-6632.1987.tb37193.x. PMID 3296915. S2CID 84491987. ^ Evans, James (1998). The History and Practice of Ancient Astronomy. Oxford & New York: Oxford University Press. pp. 71, 386. ISBN 978-0-19-509539-5. ^ "Discovering How Greeks Computed in 100 B.C." The New York Times. 31 July 2008. Archived from the original on 4 December 2013. Retrieved 9 March 2014. ^ Van Helden, A. (1995). "The Moon". Galileo Project. Archived from the original on 23 June 2004. Retrieved 12 April 2007. ^ Consolmagno, Guy J. (1996). "Astronomy, Science Fiction and Popular Culture: 1277 to 2001 (And beyond)". Leonardo. 29 (2): 127–132. doi:10.2307/1576348. JSTOR 1576348. S2CID 41861791. ^ Hall, R. Cargill (1977). 
"Appendix A: Lunar Theory Before 1964". NASA History Series. Lunar Impact: A History of Project Ranger. Washington, DC: Scientific and Technical Information Office, NASA. Archived from the original on 10 April 2010. Retrieved 13 April 2010. ^ Zak, Anatoly (2009). "Russia's unmanned missions toward the Moon". Archived from the original on 14 April 2010. Retrieved 20 April 2010. ^ "Rocks and Soils from the Moon". NASA. Archived from the original on 27 May 2010. Retrieved 6 April 2010. ^ a b "Soldiers, Spies and the Moon: Secret U.S. and Soviet Plans from the 1950s and 1960s". The National Security Archive. National Security Archive. Archived from the original on 19 December 2016. Retrieved 1 May 2017. ^ Brumfield, Ben (25 July 2014). "U.S. reveals secret plans for '60s moon base". CNN. Archived from the original on 27 July 2014. Retrieved 26 July 2014. ^ Teitel, Amy (11 November 2013). "LUNEX: Another way to the Moon". Popular Science. Archived from the original on 16 October 2015. ^ a b Logsdon, John (2010). John F. Kennedy and the Race to the Moon. Palgrave Macmillan. ISBN 978-0-230-11010-6. ^ Coren, M. (26 July 2004). "'Giant leap' opens world of possibility". CNN. Archived from the original on 20 January 2012. Retrieved 16 March 2010. ^ "Record of Lunar Events, 24 July 1969". Apollo 11 30th anniversary. NASA. Archived from the original on 8 April 2010. Retrieved 13 April 2010. ^ "Manned Space Chronology: Apollo_11". Spaceline.org. Archived from the original on 14 February 2008. Retrieved 6 February 2008. ^ "Apollo Anniversary: Moon Landing "Inspired World"". National Geographic. Archived from the original on 9 February 2008. Retrieved 6 February 2008. ^ Orloff, Richard W. (September 2004) [First published 2000]. "Extravehicular Activity". Apollo by the Numbers: A Statistical Reference. NASA History Division, Office of Policy and Plans. The NASA History Series. Washington, DC: NASA. ISBN 978-0-16-050631-4. LCCN 00061677. NASA SP-2000-4029. Archived from the original on 6 June 2013. Retrieved 1 August 2013. ^ Launius, Roger D. (July 1999). "The Legacy of Project Apollo". NASA History Office. Archived from the original on 8 April 2010. Retrieved 13 April 2010. ^ SP-287 What Made Apollo a Success? A series of eight articles reprinted by permission from the March 1970 issue of Astronautics & Aeronautics, a publication of the American Institute of Aeronautics and Astronautics. Washington, DC: Scientific and Technical Information Office, National Aeronautics and Space Administration. 1971. ^ "NASA news release 77-47 page 242" (PDF) (Press release). 1 September 1977. Archived (PDF) from the original on 4 June 2011. Retrieved 16 March 2010. ^ Appleton, James; Radley, Charles; Deans, John; Harvey, Simon; Burt, Paul; Haxell, Michael; Adams, Roy; Spooner N.; Brieske, Wayne (1977). "NASA Turns A Deaf Ear To The Moon". OASI Newsletters Archive. Archived from the original on 10 December 2007. Retrieved 29 August 2007. ^ Dickey, J.; et al. (1994). "Lunar laser ranging: a continuing legacy of the Apollo program". Science. 265 (5171): 482–490. Bibcode:1994Sci...265..482D. doi:10.1126/science.265.5171.482. PMID 17781305. S2CID 10157934. Archived from the original on 19 August 2020. Retrieved 2 December 2019. ^ "Hiten-Hagomoro". NASA. Archived from the original on 14 June 2011. Retrieved 29 March 2010. ^ "Clementine information". NASA. 1994. Archived from the original on 25 September 2010. Retrieved 29 March 2010. ^ "Lunar Prospector: Neutron Spectrometer". NASA. 2001. 
Archived from the original on 27 May 2010. Retrieved 29 March 2010. ^ "SMART-1 factsheet". [¹[European Space Agency]]. 26 February 2007. Archived from the original on 23 March 2010. Retrieved 29 March 2010. ^ "China's first lunar probe ends mission". Xinhua. 1 March 2009. Archived from the original on 4 March 2009. Retrieved 29 March 2010. ^ Leonard David (17 March 2015). "China Outlines New Rockets, Space Station and Moon Plans". Space.com. Archived from the original on 1 July 2016. Retrieved 29 June 2016. ^ "KAGUYA Mission Profile". JAXA. Archived from the original on 28 March 2010. Retrieved 13 April 2010. ^ "KAGUYA (SELENE) World's First Image Taking of the Moon by HDTV". Japan Aerospace Exploration Agency (JAXA) and Japan Broadcasting Corporation (NHK). 7 November 2007. Archived from the original on 16 March 2010. Retrieved 13 April 2010. ^ "Mission Sequence". Indian Space Research Organisation. 17 November 2008. Archived from the original on 6 July 2010. Retrieved 13 April 2010. ^ "Indian Space Research Organisation: Future Program". Indian Space Research Organisation. Archived from the original on 25 November 2010. Retrieved 13 April 2010. ^ "India and Russia Sign an Agreement on Chandrayaan-2". Indian Space Research Organisation. 14 November 2007. Archived from the original on 17 December 2007. Retrieved 13 April 2010. ^ "Lunar CRater Observation and Sensing Satellite (LCROSS): Strategy & Astronomer Observation Campaign". NASA. October 2009. Archived from the original on 1 January 2012. Retrieved 13 April 2010. ^ "Giant moon crater revealed in spectacular up-close photos". NBC News. Space.com. 6 January 2012. Archived from the original on 18 March 2020. Retrieved 22 November 2019. ^ Chang, Alicia (26 December 2011). "Twin probes to circle moon to study gravity field". Phys.org. Associated Press. Archived from the original on 22 July 2018. Retrieved 22 July 2018. ^ Covault, C. (4 June 2006). "Russia Plans Ambitious Robotic Lunar Mission". Aviation Week. Archived from the original on 12 June 2006. Retrieved 12 April 2007. ^ "About the Google Lunar X Prize". X-Prize Foundation. 2010. Archived from the original on 28 February 2010. Retrieved 24 March 2010. ^ Wall, Mike (14 January 2011). "Mining the Moon's Water: Q&A with Shackleton Energy's Bill Stone". Space News. ^ "President Bush Offers New Vision For NASA" (Press release). NASA. 14 December 2004. Archived from the original on 10 May 2007. Retrieved 12 April 2007. ^ "Constellation". NASA. Archived from the original on 12 April 2010. Retrieved 13 April 2010. ^ "NASA Unveils Global Exploration Strategy and Lunar Architecture" (Press release). NASA. 4 December 2006. Archived from the original on 23 August 2007. Retrieved 12 April 2007. ^ NASAtelevision (15 April 2010). "President Obama Pledges Total Commitment to NASA". YouTube. Archived from the original on 28 April 2012. Retrieved 7 May 2012. ^ "India's Space Agency Proposes Manned Spaceflight Program". Space.com. 10 November 2006. Archived from the original on 11 April 2012. Retrieved 23 October 2008. ^ "SpaceX to help Vodafone and Nokia install first 4G signal on the Moon | The Week UK". Archived from the original on 19 August 2020. Retrieved 28 February 2018. ^ "NASA plans to send first woman on Moon by 2024". The Asian Age. 15 May 2019. Archived from the original on 14 April 2020. Retrieved 15 May 2019. ^ Chang, Kenneth (24 January 2017). "For 5 Contest Finalists, a $20 Million Dash to the Moon". The New York Times. ISSN 0362-4331. Archived from the original on 15 July 2017. 
Retrieved 13 July 2017. ^ Mike Wall (16 August 2017), "Deadline for Google Lunar X Prize Moon Race Extended Through March 2018", space.com, archived from the original on 19 September 2017, retrieved 25 September 2017 ^ McCarthy, Ciara (3 August 2016). "US startup Moon Express approved to make 2017 lunar mission". The Guardian. ISSN 0261-3077. Archived from the original on 30 July 2017. Retrieved 13 July 2017. ^ "An Important Update From Google Lunar XPRIZE". Google Lunar XPRIZE. 23 January 2018. Archived from the original on 24 January 2018. Retrieved 12 May 2018. ^ a b "Moon Express Approved for Private Lunar Landing in 2017, a Space First". Space.com. Archived from the original on 12 July 2017. Retrieved 13 July 2017. ^ Chang, Kenneth (29 November 2018). "NASA's Return to the Moon to Start With Private Companies' Spacecraft". The New York Times. The New York Times Company. Archived from the original on 1 December 2018. Retrieved 29 November 2018. ^ Andrew Jones (23 September 2020). "China's Chang'e 3 lunar lander still going strong after 7 years on the moon". Archived from the original on 25 November 2020. Retrieved 16 November 2020. ^ Jackson, Shanessa (11 September 2018). "Competition Seeks University Concepts for Gateway and Deep Space Exploration Capabilities". nasa.gov. NASA. Archived from the original on 17 June 2019. Retrieved 19 September 2018. This article incorporates text from this source, which is in the public domain. ^ "NASA - Ultraviolet Waves". Science.hq.nasa.gov. 27 September 2013. Archived from the original on 17 October 2013. Retrieved 3 October 2013. ^ Takahashi, Yuki (September 1999). "Mission Design for Setting up an Optical Telescope on the Moon". California Institute of Technology. Archived from the original on 6 November 2015. Retrieved 27 March 2011. ^ Chandler, David (15 February 2008). "MIT to lead development of new telescopes on moon". MIT News. Archived from the original on 4 March 2009. Retrieved 27 March 2011. ^ Naeye, Robert (6 April 2008). "NASA Scientists Pioneer Method for Making Giant Lunar Telescopes". Goddard Space Flight Center. Archived from the original on 22 December 2010. Retrieved 27 March 2011. ^ Bell, Trudy (9 October 2008). "Liquid Mirror Telescopes on the Moon". Science News. NASA. Archived from the original on 23 March 2011. Retrieved 27 March 2011. ^ "Far Ultraviolet Camera/Spectrograph". Lpi.usra.edu. Archived from the original on 3 December 2013. Retrieved 3 October 2013. ^ a b Leonard David (21 October 2019). "Moon Dust Could Be a Problem for Future Lunar Explorers". Retrieved 26 November 2020. ^ Zheng, William (15 January 2019). "Chinese lunar lander's cotton seeds spring to life on far side of the moon". South China Morning Post. Retrieved 26 November 2020. ^ a b "Can any State claim a part of outer space as its own?". United Nations Office for Outer Space Affairs. Archived from the original on 21 April 2010. Retrieved 28 March 2010. ^ "How many States have signed and ratified the five international treaties governing outer space?". United Nations Office for Outer Space Affairs. 1 January 2006. Archived from the original on 21 April 2010. Retrieved 28 March 2010. ^ "Do the five international treaties regulate military activities in outer space?". United Nations Office for Outer Space Affairs. Archived from the original on 21 April 2010. Retrieved 28 March 2010. ^ "Agreement Governing the Activities of States on the Moon and Other Celestial Bodies". United Nations Office for Outer Space Affairs. 
CommonCrawl
Why is Linear B important?

The Linear B tablets revealed. When Arthur Evans started digging at Knossos on Crete in 1900, a major aim was to find inscriptions and prove that the ancient Cretans had been literate. He was rewarded almost immediately by the discovery of slabs of baked clay, some rectangular, some leaf-shaped, bearing two types of inscription of hitherto unknown form. Both linear scripts are syllabic, each sign standing for a syllable rather than a letter of an alphabet, and they fall into two classes, Linear A and Linear B. Linear A, the older script, has never been deciphered. The later, Linear B, represented a subsequent intrusion of Mycenaeans from the mainland; evidence for this interchange comes from Cretan writing on tablets of the period between 1450 and 1400 B.C. The oldest Mycenaean writing dates to about 1450 BC. Minoan pottery circulated widely across the Aegean world, a reflection of Crete's strategical position. The Linear B tablets were created and preserved because they contained important economic or financial records.

The Linear B tablets interpreted. Linear B was deciphered as Greek in 1952 by Michael Ventris. Before 1952 no-one knew who the Mycenaeans were; many scholars still doubted the Mycenaeans were Greek, and viewed Homer's Iliad and Odyssey as wholly mythological. Ventris analysed statistically the placing and frequency of different symbols within individual words in order to build up an understanding of grammatical structure, and then, in a speculative leap, he substituted Greek sounds for Linear B symbols. Tragically, Michael Ventris, the man who made this critical discovery, was almost immediately killed in a car crash. The tablets represent the oldest known Greek dialect, elements of which survived in Homer's language as a result of a long oral tradition of epic poetry. This article is an extract from the full article published in World Archaeology Issue 34.

What does a linear relationship tell you? A linear relationship (or linear association) is a statistical term used to describe a straight-line relationship between two variables; some linear relationships between two objects can be called a proportional relationship. Linear relationships are fairly common in daily life; a linear model may show, for example, how demand changes when price changes or how consumption changes with income. In the study of linear algebra, the properties of linear functions are extensively studied and made rigorous. In one variable, a linear function can be written as $f(x) = mx + b$, where $m$ is the slope and $b$ is the y-intercept. The slope is the change in y (the change on the vertical axis) divided by the change in x (the change on the horizontal axis), or "rise over run", and it measures the rate of change in the dependent variable as the independent variable changes: a unit change in x, the independent variable, changes y by the amount of the slope. With positive slope the line moves upward when going from left to right; with negative slope it moves downward. A typical upward-sloping supply curve, for instance, says that supply rises as price rises. Speed is another everyday example: the rate of speed is the distance traveled over time, so with distance on the Y-axis and time on the X-axis, travel at a constant speed traces a straight line out from the point where the two axes meet. To determine whether a series of numbers is a linear sequence, subtract each number from the one after it and check that the difference is constant; learning to detect and predict such sequences is useful in pattern recognition, both by visual inspection and by algorithms.

In econometrics, linear regression is an often-used method of generating linear relationships to explain various phenomena, and it is commonly used in extrapolating events from the past to make forecasts for the future. Linear regression is used to perform regression analysis: we can determine what effect the independent variables (also called exogenous variables) have on a dependent variable, and we should distinguish important variables from unimportant ones before we create a model. Regression analysis helps in understanding the data points and the relationships between them; this understanding helps businesses to analyze trends and patterns, to refine their processes, and to predict the relationship between a predictor and an outcome variable. Some data describe relationships that are curved (such as polynomial relationships), while still other data cannot be parameterized; the residual standard deviation describes how far observed values scatter around the values predicted by a regression. As a worked example, assume that the independent variable is the size of a house (as measured by square footage), which determines the market price of the home (the dependent variable) when it is multiplied by the slope coefficient of 207.65 and then added to the constant term $10,500. If a home's square footage is 1,250, then the market value of the home is (1,250 x 207.65) + $10,500 = $270,062.50.
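To make the slope-intercept form and the house-price arithmetic above concrete, here is a minimal Python sketch. It is not part of the original article: the function names and the small data set used for the least-squares fit are illustrative assumptions; only the slope 207.65, the intercept $10,500 and the 1,250 square-foot example come from the text.

import numpy as np

def linear_value(x, m, b):
    """Evaluate the linear function f(x) = m*x + b."""
    return m * x + b

# Worked example from the text: a 1,250 sq ft home,
# slope 207.65 (dollars per sq ft) and intercept 10,500 dollars.
price = linear_value(1250, m=207.65, b=10_500)
print(f"Estimated market value: ${price:,.2f}")   # $270,062.50

# Recovering a slope and intercept from data with ordinary least squares.
# These five (sqft, value) points are hypothetical.
sqft = np.array([900, 1100, 1250, 1600, 2000])
value = np.array([197_000, 239_000, 270_000, 342_000, 426_000])
m_hat, b_hat = np.polyfit(sqft, value, deg=1)     # degree-1 fit = straight line
print(f"Fitted slope {m_hat:.2f}, intercept {b_hat:.2f}")

# A sequence is linear when consecutive differences are constant.
seq = [3, 7, 11, 15, 19]
diffs = [b - a for a, b in zip(seq, seq[1:])]
print("Linear sequence:", len(set(diffs)) == 1)   # True

np.polyfit is used here simply because it is the shortest way to get an ordinary least-squares line; any statistics package would give the same fit.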
CommonCrawl
Symmetric error estimates for discontinuous Galerkin approximations for an optimal control problem associated to semilinear parabolic PDE's
Konstantinos Chrysafinos and Efthimios N. Karatzas, National Technical University of Athens, School of Applied Mathematical and Physical Sciences, Department of Mathematics, Zografou Campus, 15780, Athens, Greece
Discrete & Continuous Dynamical Systems - B, July 2012, 17(5): 1473-1506. doi: 10.3934/dcdsb.2012.17.1473
Received September 2011; Revised December 2011; Published March 2012
Abstract: A discontinuous Galerkin finite element method for an optimal control problem having states constrained to semilinear parabolic PDE's is examined. The schemes under consideration are discontinuous in time but conforming in space. It is shown that, under suitable assumptions, the error estimates of the corresponding optimality system are of the same order as those for the standard linear (uncontrolled) parabolic problem. These estimates have a symmetric structure and are also applicable to higher-order elements.
Keywords: distributed control, discontinuous Galerkin, symmetric error estimates, semi-linear parabolic PDE's.
Mathematics Subject Classification: Primary: 65M60, 49J2.
Citation: Konstantinos Chrysafinos, Efthimios N. Karatzas. Symmetric error estimates for discontinuous Galerkin approximations for an optimal control problem associated to semilinear parabolic PDE's. Discrete & Continuous Dynamical Systems - B, 2012, 17 (5) : 1473-1506. doi: 10.3934/dcdsb.2012.17.1473
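For orientation, the class of problems the abstract refers to can be written down in a generic form. The tracking-type cost functional, the nonlinearity $\phi$ and the regularization parameter $\alpha$ below are standard textbook choices used only for illustration; they are not claimed to be the exact data used in the paper.

$$\min_{g}\; J(y,g)=\frac{1}{2}\int_{0}^{T}\!\!\int_{\Omega}|y-y_{d}|^{2}\,dx\,dt+\frac{\alpha}{2}\int_{0}^{T}\!\!\int_{\Omega}|g|^{2}\,dx\,dt$$

subject to the semilinear parabolic state equation

$$y_{t}-\Delta y+\phi(y)=f+g \ \text{ in } (0,T)\times\Omega,\qquad y=0 \ \text{ on } (0,T)\times\partial\Omega,\qquad y(0,\cdot)=y_{0} \ \text{ in } \Omega.$$

The first-order optimality conditions for such a problem couple the state equation with a backward-in-time adjoint equation and a relation between the adjoint variable and the control, which is why error estimates of this kind are stated for the whole optimality system rather than for the state alone.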
CommonCrawl
Detection of Quantitative Trait Loci for Growth and Carcass Traits on BTA6 in a Hanwoo Population Lee, Y.-M.;Lee, Y.S.;Han, C.-M.;Lee, J.-H.;Yeo, J.S.;Kim, Jong-Joo 287 The purpose of this study was to detect quantitative trait loci (QTL) for growth and carcass quality traits on BTA6 in a population of Hanwoo cattle. Three hundred and sixty-one steers were produced from 39 sires that were sired by 17 grandsires in the two Hanwoo farming branches of the National Livestock Research Institute of Korea, between Spring 2000 and Fall 2002. DNA samples were collected for all of the steers, sires and grandsires, and the phenotypes for six growth and carcass quality traits were measured at 24 months of age. Twelve microsatellite markers were chosen on BTA6 and a linkage map was constructed by using seven of the twelve markers. Then, a chromosome-wide QTL scan was performed by applying an Animal Model, in which effects of QTL alleles within the grandsires were fitted as a random term. Three QTL were detected at the 5% chromosome-wise level for backfat thickness, average daily gain, and final weight. The most likely positions for the QTL were in the proximal region, i.e. 0 cM, 35 cM, and 63 cM, respectively. Also, another QTL for longissimus dorsi muscle area was detected at the 10% chromosome-wise level at 67 cM. These results were, in general, consistent with our previous report, in which candidate gene analyses showed that a SNP near ILSTS035 flanked by BM4621 (62.5 cM) and BMS2460 (81.3 cM) was associated with final weight, carcass weight, average daily gain, and longissimus dorsi muscle area in the same Hanwoo population. Cloning and Expression of FSHb Gene and the Effect of $FSH{\beta}$ on the mRNA Levels of FSHR in the Local Chicken Zhao, L.H.;Chen, J.L.;Xu, H.;Liu, J.W.;Xu, Ri Fu 292 Follicle-stimulating hormone (FSH) is a pituitary glycoprotein hormone that is encoded by separate alpha- and beta-subunit genes. It plays a key role in stimulating and regulating ovarian follicular development and egg production in the chicken. FSH signal transduction is mediated by the FSH receptor (FSHR), which exclusively interacts with the beta-subunit of FSH, but characterization of prokaryotic expression of the FSHb gene and its effect on the expression of the FSHR gene in local chickens has received very little attention. In the current study, the cDNA fragment of the FSHb gene from Dagu chicken was amplified using reverse transcription polymerase chain reaction (RT-PCR), and inserted into the pET-28a (+) vector to construct the pET-28a-FSHb plasmid. After expression of the plasmid in E. coli BL21 (DE3) under inducing conditions, the recombinant protein, the $FSH{\beta}$ subunit, was purified and injected into the experimental hens, and the effect on the mRNA expression levels of the FSHR gene was investigated. Sequence comparison showed that the coding region of the FSHb gene in the local chicken shared 99%-100% homology with published nucleotide sequences in chickens; only one synonymous nucleotide substitution was detected in the region. The encoded amino acids were completely identical with the reported sequence, which confirmed that the sequences of the chicken FSHb gene and the peptides of the $FSH{\beta}$ subunit are highly conserved. This may be due to the critical role of the normal function of the FSHb gene in hormonal specificity and regulation of reproduction.
The results of gene expression revealed that a recombinant protein with a molecular weight of about 19 kDa was efficiently expressed and it was identified by Western blotting analysis. After administration of the purified $FSH{\beta}$ protein, significantly higher expression levels were demonstrated in uterus, ovary and oviduct samples (p<0.05). These observations suggested that the expressed $FSH{\beta}$ protein possesses biological activity, and has a potential role in regulation of reproductive physiology in chickens. Effects of Maternal Factors on Day-old Chick Body Weight and Its Relationship with Weight at Six Weeks of Age in a Commercial Broiler Line Jahanian, Rahman;Goudarzi, Farshad 302 The present study aimed to investigate the effects of maternal factors on body weight at hatching (day-old) and at six weeks of age in a commercial broiler line. A total of 6,765 records on body weight at day-old (BWTDO) and 115,421 records on body weight at six weeks of age (BWT6W), originated from a commercial broiler line during 14 generations, were used to estimate genetic parameters related to the effects of maternal traits on body weight of chicks immediately after hatch or six weeks thereafter. The data were analyzed using restricted maximum likelihood procedure (REML) and an animal model with DFREML software. Direct heritability ($h^{2}{_a}$), maternal heritability ($h^{2}{_m}$), and maternal environmental variance as the proportions of phenotypic variance ($c^{2}$) for body weight at day-old were estimated to be 0.050, 0.351, and 0.173, respectively. The respective estimated values for body weight at six weeks of age were 0.340, 0.022, and 0.030. The correlation coefficient between direct and maternal genetic effects for six-week-old body weight was found to be -0.335. Covariance components and genetic correlations were estimated using a bivariate analysis based on the best model determined by a univariate analysis. Between weights at hatching and at six week-old, the values of -0.07, 0.53 and 0.47 were found for the direct additive genetic variance, maternal additive genetic variance and permanent maternal environmental variance, respectively. The estimated correlation between direct additive genetic effect influencing weight at hatch and direct additive maternal effect affecting weight at six weeks of age was -0.21, whereas the correlation value of 0.15 was estimated between direct additive maternal effect influencing weight at hatch and direct additive genetic effect affecting weight at six-week-old. From the present findings, it can be concluded that the maternal additive genetic effect observed for weight at six weeks of age might be a factor transferred from genes influencing weight at hatch to weight at six-week-old. Effects of Progestagen and Pmsg on Estrous Synchronization and Fertility in Kivircik Ewes during Natural Breeding Season Koyuncu, M.;Ozis Alticekic, S. 308 An experiment was conducted using indigenous Kivircik ewes to evaluate the effect of intravaginal progestagen sponges, containing 30 mg of fluorogestone acetate (FGA), followed by administration of pregnant mare serum gonadotrophin (PMSG) on inducing synchronized oestrus in the season and fertility. Three times of PMSG administration relative to sponge withdrawal (24 h before (n = 30), at (n = 29) or 24 h after (n= 29)) and two routes of PMSG administration (intramuscular (n = 46) and subcutaneous (n = 42) were compared for estrous response, number of multiple births and fecundity rates. 
There were no significant differences in estrous response due to the time or route of PMSG administration. Lambing percentage, proportion of multiple births and fecundity were 75.6, 51.6 and 114.6%, respectively. The administration had a significant effect on lambing (p<0.05), multiple birth and fecundity rates (p<0.01). The subcutaneous administration of PMSG resulted in a significantly higher lambing rate (p<0.05) and fecundity rate (p<0.01), compared to the intramuscular injection of PMSG. Decreased Complete Oxidation Capacity of Fatty Acid in the Liver of Ketotic Cows Xu, Chuang;Liu, Guo-wen;Li, Xiao-bing;Xia, Cheng;Zhang, Hong-you;Wang, Zhe 312 Complete oxidation of fatty acid in the liver of ketotic cows was investigated. Serum non-esterified fatty acid (NEFA), beta-hydroxybutyric acid (BHBA) and glucose concentrations were measured using biochemical techniques. Carnitine palmitoyl transferase II (CPT II), 3-hydroxy acyl-CoA dehydrogenase (HAD) and oxaloacetic acid (OAA) concentrations in the liver were detected by ELISA. Serum glucose was lower in ketotic cows than controls (p<0.05). Serum BHBA and NEFA concentrations were higher in ketotic cows than controls (p<0.05). OAA, CPT II, and HAD contents in the liver of ketotic cows were lower than in controls (p<0.05). There were negative correlations between serum NEFA concentration and OAA, CPT II and HAD, but no correlation between serum BHBA concentration and capacity for complete oxidation of fatty acid. Overall, the capacity for complete fatty acid oxidation in the liver of ketotic cows might have been decreased. High serum NEFA concentrations may be unfavorable factors for the pathway of complete oxidation of fatty acid in the liver. Effects of Different Concentrations of Glycine and Cysteine on the Freezability of Moghani Ram Spermatozoa Khalili, B.;Jafaroghli, M.;Farshad, Abbas;Paresh-Khiavi, M. 318 Two experiments were designed to evaluate the effects of the amino acids glycine and cysteine on the cryopreservation of ram spermatozoa. After primary evaluation of the collected ejaculates, the semen samples were pooled and diluted 1:4 before cooling (experiment 1) and freezing (experiment 2) with Tris-Citrate-Fructose-Yolk (TCFY) extender supplemented with different concentrations of glycine and cysteine (5, 10, 15 and 20 mM). As the control, semen was diluted and frozen in the extender without amino acids. Motility, viability and membrane integrity were assessed as the parameters for semen quality in the first experiment. In the second experiment, motility, progressive motility, viability, and membrane and acrosome integrity were evaluated after the freezing-thawing process. The results of the first experiment indicated that the addition of 10 and 15 mM cysteine, compared to the control (basic) extender, significantly increased (p<0.01) the motility, viability and membrane integrity of spermatozoa after cooling. However, further increasing these amino acids up to 20 mM had a significant negative effect (p<0.05). Our results showed no significant differences (p>0.05) between 5 mM glycine and 5 mM cysteine or between 20 mM glycine and 20 mM cysteine. The results of experiment 2 showed that the amino acids significantly improved post-thaw motility, progressive motility, viability, and membrane and acrosome integrity of ram spermatozoa. These positive effects were observed at concentrations between 5 and 15 mM of glycine and cysteine, with the best results at 15 mM.
Further increases in amino acid concentration significantly decreased the post-thaw characteristics of spermatozoa, but the results showed that cysteine was better than the glycine and control extenders. The data indicated that addition of glycine or cysteine to the freezing extender can be recommended for cryopreservation of ram spermatozoa. However, further studies are still needed to determine the effect of such addition on fertility in farm animals. Effects of Daidzein on mRNA Expression of Bone Morphogenetic Protein Receptor Type I and II Genes in the Ovine Granulosa Cells Chen, A Qin;Xu, Zi Rong;Yu, Song Dong;Yang, Zhi Gang 326 Daidzein, a natural isoflavonoid phytoestrogen, structurally resembles estradiol (E2) and possesses estrogenic activity. This study was designed to test the hypothesis that daidzein may mimic the effects of E2 on ovine follicle development by regulation of the mRNA expression of bone morphogenetic protein receptor genes and thereby influence the reproductive system. Granulosa cells were cultured in serum-free McCoy's 5A medium with and without supplementation of daidzein. Results showed that daidzein (10-100 ng/ml) significantly increased the proliferation of ovine granulosa cells (p<0.05), but inhibited the growth of granulosa cells at a dose of 1,000 ng/ml (p<0.01). Daidzein inhibited progesterone production in a dose-dependent manner; however, it did not affect estradiol production by granulosa cells. We also investigated the effects of daidzein on BMPRII, BMPRIB and ALK-5 mRNA expression in ovine granulosa cells by quantitative real-time PCR. Treatment of granulosa cells with daidzein significantly increased expression of these genes at 10-100 ng/ml. Thus, these data suggested that a low concentration of daidzein (10-100 ng/ml) had a direct stimulatory effect on ovine granulosa cells while a high concentration was toxic. Modeling Nutrient Supply to Ruminants: Frost-damaged Wheat vs. Normal Wheat Yu, Peiqiang;Racz, V. 333 The objectives of this study were to use the NRC-2001 model and the DVE/OEB system to model potential nutrient supply to ruminants and to compare frost-damaged (also called "frozen") wheat with normal wheat. Quantitative predictions were made in terms of: i) truly absorbed rumen synthesized microbial protein in the small intestine; ii) truly absorbed rumen undegraded feed protein in the small intestine; iii) endogenous protein in the digestive tract; iv) total truly absorbed protein in the small intestine; and v) protein degraded balance. The overall yield losses of the frozen wheat were 24%. Results showed that, using the DVE/OEB system to predict the potential nutrient supply, the frozen wheat had similar truly absorbed rumen synthesized microbial protein (65 vs. 66 g/kg DM; p>0.05), tended to have lower truly absorbed rumen undegraded feed protein (39 vs. 53 g/kg DM; p<0.10) and had higher endogenous protein (14 vs. 9 g/kg DM; p<0.05). Total truly absorbed protein in the small intestine was significantly lower (89 vs. 110 g/kg DM, p<0.05) in the frozen wheat. The protein degraded balance was similar and both were negative (-2 vs. -1 g/kg DM). Using the NRC-2001 model to predict the potential nutrient supply, the frozen wheat also had similar truly absorbed rumen synthesized microbial protein (average 56 g/kg DM; p>0.05), tended to have lower truly absorbed rumen undegraded feed protein (35 vs. 48 g/kg DM; p<0.10) and had similar endogenous protein (average 4 g/kg DM; p>0.05).
Total truly absorbed protein in the small intestine was significantly lower (95 vs. 108 g/kg DM, p<0.05) in the frozen wheat. The protein degraded balance was not significantly different and both were negative (-16 vs. -19 g/kg DM). In conclusion, both models predict lower protein value and negative protein degraded balance in the frozen wheat. The frost damage to the wheat reduced nutrient content and availability and thus reduced nutrient supply to ruminants by around 12 to 19%. Variation in Milk Fatty Acid Composition with Body Condition in Dairy Buffaloes (Bubalus bubalis) Qureshi, Muhammad Subhan;Mushtaq, Anila;Khan, Sarzamin;Habib, Ghulam;Swati, Zahoor Ahmad 340 Buffaloes usually maintain higher body condition and do not produce milk at the cost of their own body reserves under tropical conditions. The mobilization of body reserves for fulfilling the demands of lactation has been extensively studied in dairy cows while limited work is available on this aspect in dairy buffaloes. Therefore, the present study was conducted to examine variations in milk fatty acid profiles with body condition in Nili-Ravi buffaloes. A total of 24 Nili-Ravi buffaloes within 60 days after parturition, were selected from a private dairy farm in the district of Peshawar. All animals consumed the same diet during the experimental period. A total of 576 raw milk samples were collected for laboratory analysis. The study continued up to 6 months during 2008. Body condition score (BCS), milk yield and composition were recorded once a week. Means for milk fatty acid profile were compared for various levels of BCS. The mean milk yield and fat content were 9.28 kg/d and 5.36%, respectively. The total saturated fatty acids (SFA) were 64.22 g/100 g and the unsaturated fatty acids (UFA) were 35.79 g/100 g. Of the SFA the highest amount was recorded for $C_{16:0}$, followed by $C_{18:0}$, and $C_{14:0}$. The total sum of hypercholesterolemic fatty acids (HCFA, $C_{12:0}$, $C_{14:0}$ and $C_{16:0}$) was 43.33 g/100 g. The concentrations of UFA were greater for moderate BCS followed by poor and highest BCS while SFA showed the opposite trend. The correlation analysis showed that milk yield was negatively affected by BCS and milk fat positively affected, though non-significantly. The present study suggests that Nili-Ravi dairy buffaloes produce similar milk to dairy cows regarding availability of cardioprotective fatty acids, with the highest concentration of $C_{18:1\;cis-9}$. Two HCFA ($C_{12:0}$ and $C_{14:0}$) were associated with higher body condition. Buffaloes with moderate body condition yielded milk containing healthier fatty acids. Changes in Nutritive Value and Digestion Kinetics of Canola Seed Due to Microwave Irradiation Ebrahimi, S.R.;Nikkhah, A.;Sadeghi, A.A. 347 This study aimed to evaluate effects of 800 W microwave irradiation for 2, 4 and 6 min on chemical composition, antinutritional factors, ruminal dry matter (DM) and crude protein (CP) degradability, and in vitro CP digestibility of canola seed (CS). Nylon bags of untreated or irradiated CS were suspended in the rumen of three bulls from 0 to 48 h. Protein subfractions of untreated and microwave irradiated CS before and after incubation in the rumen were monitored by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). Microwave irradiation had no effect on chemical composition of CS (p>0.05). There was a linear decrease (p<0.001) in the phytic acid and glucosinolate contents of CS as irradiation time increased. 
Microwave irradiation for 2, 4 and 6 min decreased the phytic acid content of CS by 8.2, 27.6 and 48.6%, respectively. The total glucosinolate contents of CS microwave irradiated for 2, 4 and 6 min decreased by 41.5, 54.7 and 59.0%, respectively, compared to untreated samples. The washout fractions of DM and CP and the degradation rate of the b fraction of CP decreased linearly (p<0.001) as irradiation time increased. Microwave irradiation for 2, 4 and 6 min decreased effective degradability (ED) of CP at a ruminal outflow rate of 0.05 $h^{-1}$ by 4.7, 12.3 and 21.0%, respectively. Microwave irradiation linearly increased (p<0.001) in vitro CP digestibility of ruminally undegraded CS collected after 16 h incubation. Electrophoresis results showed that napin subunits of untreated CS disappeared completely within the zero incubation period, whereas cruciferin subunits were degraded in the middle of the incubation period (16 h incubation period). In 4 and 6 min microwave irradiated CS, napin subunits were degraded after 4 and 16 h incubation periods, respectively, and cruciferin subunits were not degraded until 24 h of incubation. In conclusion, it seems that microwave irradiation not only protected CP of CS from ruminal degradation, but also increased in vitro digestibility of CP. Moreover, microwave irradiation was effective in reducing the glucosinolate and phytic acid contents of CS. Effects of Supplemental Recombinant Bovine Somatotropin (rbST) and Cooling with Misters and Fans on Renal Function in Relation to Regulation of Body Fluids in Different Stages of Lactation in Crossbred Holstein Cattle Boonsanit, D.;Chanpongsang, S.;Chaiyabutr, N. 355 The aim of this study was to investigate the effect of supplemental recombinant bovine somatotropin (rbST) and cooling with misters and fans on renal function in relation to regulation of body fluids in different stages of lactation in crossbred Holstein cattle. Ten 87.5% crossbred Holstein cattle were divided into two groups of 5 animals each, housed in a normal shaded barn (NS) or in a shaded barn with a mister-fans cooling system (MF). The experiment in each group was divided into 3 phases: early (Day 75 postpartum), mid (Day 135 postpartum), and late stage of lactation (Day 195 postpartum). The pre-treatment study was conducted on the starting day of each stage of lactation and the treatment study was performed after the end of the pre-treatment, during which the animal was injected with 500 mg of rbST (POSILAC) every 14 days, three times in total.
No alterations of renal blood flow and glomerular filtration rate were observed in cows receiving rbST, but decreases in urinary excretion and fractional excretion of sodium, potassium and chloride ions appeared to correlate with reduction in the rate of urine flow and osmolar clearance during rbST administration. These results suggest that the effect of rbST supplementation to cows housed either in NS or MF barns on body fluid volume expansion is attributable to changes in the rate of electrolyte excretion by the kidney. The increased availability of renal tubular reabsorption of sodium, potassium and chloride ions during rbST treatment was a major factor in retaining body water through its colligative properties in exerting formation of an osmotic force mechanism. Effects of Volatile Fatty Acids on IGF-I, IGFBP-3, GH, Insulin and Glucagon in Plasma, and IGF-I and IGFBP-3 in Different Tissues of Growing Sheep Nourished by Total Intragastric Infusions Zhao, Guang-Yong;Sun, Ya-Bo 366 Twelve Suffolk${\times}$Small-tail-Han male sheep (body weight 21-26 kg), aged four months, were used to study the effects of volatile fatty acids (VFA) on IGF-I (insulin-like growth factor-I), IGFBP-3 (insulin-like growth factor binding protein-3), GH (growth hormone), insulin and glucagon in plasma, and IGF-I and IGFBP-3 in different tissues. The sheep were randomly divided into four groups with 3 sheep in each group. The sheep were sustained by total intragastric infusions and four levels of mixed VFA (the molar proportion of acetic acid, propionic acid and butyric acid was 65:25:10), which supplied 333, 378, 423 and 468 KJ energy/kg $W^{0.75}$/d, were infused into the rumen as experimental Treatments I, II, III and IV, respectively. The experiment lasted 12 days, of which the first 8 days were for pretreatment and the last 4 days for collection of samples. At the end of the experiment, blood samples were taken and then the sheep were slaughtered and tissue samples from the rumen ventral sac, rumen dorsal sac, liver, duodenum and Longissimus dorsi muscle were obtained. IGF-I, IGFBP-3, GH, insulin and glucagon in plasma and IGF-I and IGFBP-3 in different tissues were analysed. Results showed that the concentration of IGF-I, IGFBP-3, GH, insulin or glucagon in plasma and the content of IGF-I and IGFBP-3 in the rumen dorsal sac, rumen ventral sac, liver or Longissimus dorsi muscle were increased with VFA infusion level (p<0.05). No significant differences were found in duodenum IGF-I between Treatments I and II and in rumen dorsal sac IGFBP-3 between Treatments II and III (p>0.05). It was concluded that IGF-I, IGFBP-3, GH, insulin and glucagon in plasma and IGF-I and IGFBP-3 in rumen dorsal sac, rumen ventral sac, liver and Longissimus dorsi muscle were increased significantly with increasing level of ruminal infusion of mixed VFA. Effects of Propylene Glycol on Milk Production, Serum Metabolites and Reproductive Performance during the Transition Period of Dairy Cows Lien, T.F.;Chang, L.B.;Horng, Y.M.;Wu, Chean-Ping 372 The objective of this study was to investigate the effects of an oral drench of propylene glycol (PG) on milk production, serum metabolites and reproductive performance during the transition period of animals. Twenty-four 2-3 multiparous Holstein cows (average body weight 565 kg, body condition score about 3.6, at the $9^{th}$ month of gestation) were selected, blocked, and then randomly assigned into a PG and a control group. 
The control and the PG group cows were orally drenched with water or 50 ml sugarcane molasses mixed with 500 ml PG from 7 days pre-partum to 30 days post-partum, respectively. Experimental results indicated that the oral drench PG had no effect on dry matter intake (DMI). The milk yield of the PG group was significantly higher than that of the control group (p<0.05), whereas milk fat content, milk protein and somatic cell counts (SCC) were not significantly different between groups. Concentration of plasma glucose in the PG group was significantly higher than that of the control group (p<0.05). Conversely, the concentrations of non-esterified fatty acids (NEFA), and blood urea nitrogen (BUN) in the PG group were lower than those of the control group (p<0.05). Concentrations of insulin and ketone bodies were not significantly difference between groups. Body condition score (BCS) in the PG group was significantly higher than that of the control group (p<0.05). In reproductive performance there was no difference between groups. The experimental results indicate that supplementation of PG during the transition period of dairy cows can supply energy rapidly, resulting in reduced catabolism of body tissue and increased milk yield. Effects of Dietary Betaine on the Secretion of Insulin-like Growth Factor-I and Insulin-like Growth Factor Binding Protein-1 and -3 in Laying Hens Choe, H.S.;Li, H.L.;Park, J.H.;Kang, C.W.;Ryu, Kyeong Seon 379 The principal objective of this experiment was to determine the effects of dietary betaine on IGF-I, IGFBP-3 and IGFBP-1 secretion and IGF-I mRNA gene expression in the serum and liver of laying hens. A total of 72 ISA-Brown laying hens were fed with four different levels of betaine (0, 300, 600, 1,200 ppm) based on a corn-soybean meal diet containing 2,800 kcal/kg of metabolizable energy (ME) and 16% crude protein (CP) for four weeks. The results indicated significantly higher serum and liver IGF-I concentrations in the laying hens fed with 600 and 1,200 ppm betaine (p<0.05) compared to controls. IGF-I gene expression in liver showed a statistically correlated increase in 600 and 1,200 ppm betaine-fed groups as compared to the controls (p<0.05). Serum IGFBP-3 concentrations were elevated significantly in the groups fed 600 ppm of betaine. However, the secretion of IGFBP-1 in the liver of laying hens fed on 600 and 1,200 ppm of betaine was significantly lower than in the controls (p<0.05). The results of this experiment showed that dietary betaine supplementation plays a pivotal role in changes of the IGFs system in laying hens. Effect of Dietary Lysine Supplement on the Performance of Mong Cai Sows and Their Piglets Tu, Pham Khanh;Le Duc, Ngoan;Hendriks, W.H.;van der Peet-Schwering, C.M.C.;Verstegen, M.W.A. 385 The objective of this study was to determine optimal lysine requirement of lactating Mong Cai sows and their piglets. An experiment was conducted using 30 Mong Cai sows in a factorial randomized design with 5 dietary total lysine levels (0.60, 0.70, 0.85, 1.0 and 1.15%) for one-week pre-partum and 5 dietary total lysine levels (0.60, 0.75, 0.90, 1.05 and 1.2%) for lactation diets. Mong Cai sows were about 1 to 2 years old and had an initial body weight of 120 kg (sd = 2.5) after farrowing. Sows were restrictively fed 1.7 kg feed during gestation and were fed ad libitum during lactation. Diets of sows contained about 12% CP during pregnancy and about 14% CP for the lactation period. DE concentration of the diets ranged between 12.5-13.0 MJ of DE. 
Water was supplied at up to 8 liters per sow per day in a basin. The traits studied related to both the sows and their progeny. Sows were weighed at 107 days of gestation, after farrowing and at weaning. Sow back-fat depth was measured at 110 days of gestation, after farrowing, at 21 days of lactation and at weaning. The number of piglets born, at 24 h after birth, at 21 days of age and at weaning was recorded. Piglets were weighed at birth, at 21 days and at weaning. Supplying lysine one week pre-partum had no effect on the number of piglets born or on litter weight at birth (p = 0.776 and p = 0.224). Increasing the dietary lysine level during lactation from 0.60 to 1.20% reduced sow weight loss and increased piglet weight at 21 days and at weaning. The level of lysine that resulted in the lowest sow backfat loss and the highest weaned piglet weight was 1.05%; this may be the optimum level of lysine for the diet of lactating Mong Cai sows. At this lysine level, the number of weaned piglets was also highest.

Effect of Vitamin E on Production Performance and Egg Quality Traits in Indian Native Kadaknath Hen Biswas, Avishek; Mohan, J.; Sastry, K.V.H. 396
This experiment investigated the effects of increasing dietary vitamin E (VE) on production performance and egg quality traits of Indian native Kadaknath (KN) hens. One hundred and eighty (180) day-old female KN chicks were randomly distributed to three dietary treatment groups for a period of 30 weeks. Each treatment comprised three replicates, each containing 20 chicks. The basal diet ($T_1$) contained 15 IU VE/kg and the two experimental diets were supplemented with 150 and 300 IU VE/kg (diets $T_2$ and $T_3$, respectively). DL-α-tocopherol acetate was used as the source of VE. All chicks were provided feed and water ad libitum. Production performance in terms of body weight, egg weight and hatchability did not differ significantly (p>0.05), whereas sexual maturity, egg production and fertility differed significantly (p<0.05) in $T_2$ compared to the other two groups. Egg quality traits in terms of albumen weight, yolk weight, shell thickness, albumen index and yolk index did not differ significantly (p>0.05), whereas the Haugh unit score was significantly higher (p<0.05) in $T_2$ than in the control ($T_1$) and the high-dose treatment group ($T_3$). From this study, it can be concluded that the lower level of dietary VE supplementation may be beneficial for production performance and Haugh unit score but has no effect on the other egg quality traits in KN hens.

Construction of Mammalian Cell Expression Vector for pAcGFP-bFLIP(L) Fusion Protein and Its Expression in Follicular Granulosa Cells Yang, Run Jun; Li, Wu Feng; Li, Jun Ya; Zhang, Lu Pei; Gao, Xue; Chen, Jin Bao; Xu, Shang Zhong 401
FLICE inhibitory protein (FLIP) is one of the important anti-apoptotic proteins in the Fas/FasL apoptotic pathway; it has death effector domains mimicking the pro-domain of procaspase-8. To reveal the intracellular signal transduction molecules involved in the process of follicular development in the bovine ovary, we cloned the c-FLIP(L) gene from bovine ovary tissue by reverse transcription polymerase chain reaction (RT-PCR), deleted the termination codon in its cDNA, directionally cloned the amplified c-FLIP(L) gene into the eukaryotic expression vector pAcGFP-N1, which encodes AcGFP, and successfully constructed the fusion protein recombinant plasmid.
After identification by restriction enzymes BglII/EcoRI and by sequencing, pAcGFP-bFLIP(L) was transfected into follicular granulosa cells using Lipofectamine 2000; the expression of AcGFP was then observed, and the transcription and expression of c-FLIP(L) were detected by RT-PCR and Western blot. The results showed that the bovine c-FLIP(L) was successfully cloned; the pAcGFP-bFLIP(L) fusion protein recombinant plasmid was successfully constructed by introducing a BglII/EcoRI cloning site at the two ends of the c-FLIP(L) open reading frame and inserting a Kozak sequence before the start codon. AcGFP expression was detected as early as 24 h after transfection. The percentage of AcGFP-positive cells reached about 65% after 24 h. A 1,483 bp transcript was amplified by RT-PCR, and an 83 kDa target protein was detected by Western blot. Construction of the pAcGFP-bFLIP(L) recombinant plasmid should be helpful for further understanding the regulatory mechanism of c-FLIP(L) in bovine oocyte formation and development.

Abatement of Methane Production from Ruminants: Trends in the Manipulation of Rumen Fermentation Kobayashi, Yasuo 410 https://doi.org/10.5713/ajas.2010.r.01
Methane emitted from ruminant livestock is regarded as a loss of feed energy and also a contributor to global warming. Methane is synthesized in the rumen as one of the hydrogen-sink products that are unavoidable for the efficient progress of anaerobic microbial fermentation. Various attempts have been made to reduce methane emission, mainly through rumen microbial manipulation, by the use of agents including chemicals, antibiotics and natural products such as oils, fatty acids and plant extracts. A newer approach is the development of vaccines against methanogens. While ionophore antibiotics have been widely used due to their efficacy and affordable prices, the use of alternative natural materials is becoming more attractive because of health concerns regarding antibiotics. An important feature of a natural material that could serve as an alternative methane inhibitor is that it does not reduce feed intake or digestibility but does enhance propionate production, the major hydrogen sink alternative to methane. Some implications of these approaches, as well as an introduction to antibiotic-alternative natural materials and novel approaches, are provided.
Engineering equations for the filamentation collapse distance in lossy, turbulent, nonlinear media
Larry B. Stotts, Joseph R. Peñano, Jason A. Tellez, Jason D. Schmidt, and Vincent J. Urick, Opt. Express 27(18), 25126-25141 (2019). https://doi.org/10.1364/OE.27.025126

The propagation of high peak-power laser beams in real atmospheres has been an active research area for the past two decades. Atmospheric turbulence and loss induce decreases in the filamentation self-focusing collapse distance as the refractive index structure parameter and the volume extinction coefficient, respectively, increase. This paper provides a validated analytical method for predicting the filamentation onset distance in lossy, turbulent, nonlinear media. It is based on a modification of the Petrishchev and Marburger theories: it postulates that the ratio of the peak power to the critical power at range in turbulence is modified by a multiplicative, rather than additive, gain factor. Specifically, the new approach modifies Petrishchev's turbulence equation to create the required multiplicative factor, which is needed to produce the shortened filamentation onset distance that occurs when a laser beam propagates through such a medium. This equation is then used with the Marburger distance and the Karr et al. loss equations to yield the filamentation onset distance estimate in a lossy, turbulent, nonlinear environment. The theory is validated against two independent sets of computer simulation results, one from NRL's HELCAP software and the other from MZA's Wave Train modeling package. The paper also shows that, once the zero-turbulence onset distance is set on the basis of link loss, the addition of turbulence creates essentially the same probability density functions (PDFs) at similar median distances for each loss case, a result not previously reported. This is the first quantitative comparison between closed-form equations and computer simulation results characterizing filament generation in a lossy, turbulent, nonlinear medium.
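The recipe sketched in the abstract (a Marburger collapse distance, a thin-lens transformation, and a multiplicative turbulence factor applied to $P_{peak}/P_{crit}$) can be illustrated numerically. The snippet below is my own sketch, not the authors' code: the wavelength, beam radius, power ratio and the turbulence gain factor G are hypothetical stand-ins, since the full gain expression (Eqs. (6)-(7) in the equation listing further down) involves constants not restated here.

```python
import numpy as np

# Illustrative sketch of the collapse-distance estimate described in the abstract.
# All numerical inputs are made up; G stands in for the multiplicative turbulence
# gain factor that the paper derives from its modified Petrishchev equation.

def marburger_distance(p_over_pcrit, k, a0):
    """Collimated-beam Marburger collapse distance, with z_r = k*a0**2 as in the listing below."""
    zr = k * a0**2
    return 0.367 * zr / np.sqrt((np.sqrt(p_over_pcrit) - 0.852)**2 - 0.0219)

def with_lens(z_sf, f):
    """Thin-lens transformation z'_sf = z_sf*f/(z_sf + f) for a transmitter lens of focal length f."""
    return z_sf * f / (z_sf + f)

k = 2 * np.pi / 1.0e-6      # optical wavenumber for ~1 um light [1/m]
a0 = 0.05                   # beam radius [m]
G = 0.3                     # hypothetical multiplicative turbulence gain on P/P_crit

z_no_turb = marburger_distance(5.0, k, a0)            # zero-turbulence onset distance
z_turb = marburger_distance(5.0 * (1.0 + G), k, a0)   # turbulence-shortened onset distance
print(f"no turbulence: {z_no_turb/1e3:.2f} km, with turbulence: {z_turb/1e3:.2f} km")
```

With these made-up inputs the zero-turbulence distance comes out a little over 4 km, and any positive gain G pulls the onset distance in, which is the qualitative behaviour the abstract describes.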
Fig. 1. Extinction rate versus hour of the day measured on the Kennedy Space Center runway in mid-October 2017 [18]. (Figure published with permission from the originator, Dr. Chris C. Davis, and the Optical Society of America.)
Fig. 2. Refractive index structure parameter versus hour of the day measured on the Kennedy Space Center runway in mid-October 2017 [18]. (Published with the same permission.)
Fig. 3. Fluence contours generated by the HELCAP code for beams with (a) $P/P_{crit} = 1.5$ and (b) $P/P_{crit} = 20$ near the onset of filamentation. Taken from Ref. [22].
Fig. 4. Filamentation onset distance probability density function versus propagation distance for various values of the refractive index structure parameter, for a lossless medium and a zero-turbulence collapse distance of 4 km.
Fig. 5. As Fig. 4, with a volume extinction coefficient of 0.1 km$^{-1}$.
Fig. 6. As Fig. 4, with a volume extinction coefficient of 0.2 km$^{-1}$.
Fig. 7. Comparison of median collapse distances from Eqs. (6), (8) [LHS] and (9) with HELCAP computer simulation results.
Fig. 8. Filamentation onset distance probability density function versus propagation distance for various values of the refractive index structure parameter, with zero extinction and a zero-turbulence collapse distance of 3.6 km.
Fig. 10. Comparison of median collapse distances from Eqs. (6), (8) [LHS] and (9) with HELCAP computer simulation results.
Fig. 11. Comparison of hard-aperture (red) versus super-Gaussian beam (blue) profiles.
Fig. 12. Wave Train component layout for modeling propagation in turbulent, nonlinear atmospheres.
Fig. 13. Filamentation onset distance probability density function versus propagation distance for various ratios of peak power to critical power, for $C_n^2 = 1\times10^{-17}\,\mathrm{m}^{-2/3}$.
Fig. 15. Comparison of average collapse distances from Eqs. (6), (8) [LHS] and (9) with Wave Train computer simulation results.
Fig. 16. Filamentation onset distance probability density function versus propagation distance for various values of the refractive index structure parameter, a volume extinction coefficient of 0.1 km$^{-1}$, a zero-turbulence collapse distance of 4 km, and peak-power-to-critical-power ratios of 5 and 10.
Fig. 17. As Fig. 16, but with a zero-turbulence collapse distance of 1 km and peak-power-to-critical-power ratios of 1.5 and 8.
Fig. 18. Comparison of average collapse distances from Eqs. (6) and (8) [LHS] with Wave Train computer simulation results using the data of Figs. 16 and 17.
Fig. 19. Comparison of average collapse distances from Eqs. (6) and (8) [LHS] with Wave Train computer simulation results using the data of Fig. 17 only.

(1) $\dfrac{\partial A}{\partial z} = \dfrac{ic}{2\omega_0}\nabla_\perp^2 A + \left[\dfrac{i\omega_0}{c}\left(\delta n_T + \delta n_{TB}\right) - \dfrac{1}{2}\left(\alpha_{TB} + \beta\right)\right]A + \sum_j S_j$
(2) $z_{sf} = \dfrac{0.367\,z_r}{\sqrt{\left(\sqrt{P_{peak}/P_{crit}} - 0.852\right)^2 - 0.0219}}$
(3) $z_r = k a_0^2$
(4) $z_{sf}' = \dfrac{z_{sf}\,f}{z_{sf} + f}$
(5) $a^{16}(z) = a_L^{16}(z)\left\{1 - \dfrac{2}{\alpha^2 z_{sf}'^2}\left[1 - (1+\alpha z)\exp(-\alpha z)\right]\right\}$
(6) $P_{peak}^*/P_{crit} = \left(P_{peak}/P_{crit}\right)\left[1 + m_0 k^2 a_0^2 \left(a_0 C/\ell_0\right)^{2/3}\right]$
(7) $C = 4.38\,\ell_0^{-1/3}\,C_n^2(h)\left[1 - \left\{1 + 17.5\left(a_0/\ell_0\right)^2\right\}^{-1/6}\right]$
(8a) $z_{sf}^* = \dfrac{0.367\,z_r}{\sqrt{\left(\sqrt{P_{peak}^*/P_{crit}} - 0.852\right)^2 - 0.0219}}$
(8b) $z_{sf}^* \approx \dfrac{0.367\,z_r}{\sqrt{P_{peak}^*/P_{crit}} - 0.852}$
(9) $z_{sf}^{**} = \dfrac{z_{sf}^*\,f}{z_{sf}^* + f}$
(10) $I(r) = I_{peak}\,e^{-2\left(r^2/W_0^2\right)^N}$
(11) $W_0 = D_{ap}/2^{3/2}$
(12)-(14) restate Eqs. (6), (7) and (8a).
(15) $a^{16}(z) = a_L^{16}(z)\left\{1 - \dfrac{2}{\alpha^2 z_{sf}^{**\,2}}\left[1 - (1+\alpha z)\exp(-\alpha z)\right]\right\}$
(16) $z_{sf}^{**} = \begin{cases} z_{sf}^{*} & \text{without a transmitter lens} \\ z_{sf}^{*}f/\left(z_{sf}^{*} + f\right) & \text{with a transmitter lens.} \end{cases}$

Table 1. Average filamentation self-focusing collapse distances from the plots in Figs. 13 and 14.

For $C_n^2 = 1\times10^{-17}\,\mathrm{m}^{-2/3}$:
| $f = -10$ km ($1/f = -0.1$ km$^{-1}$) | 4.89 km ± 328 m | 4.20 km ± 485 m |
| $f = \infty$ ($1/f = 0$ km$^{-1}$)    | 5.01 km ± 200 m | 4.75 km ± 459 m |
| $f = 10$ km ($1/f = 0.1$ km$^{-1}$)   | 5.00 km ± 98 m  | 4.98 km ± 309 m |

| $C_n^2$ ($\mathrm{m}^{-2/3}$) | $P_{peak}/P_{crit}$ | Collapse distance |
| $1\times10^{-14}$ | 5  | 3510 m ± 593.6 m |
| $1\times10^{-14}$ | 10 | 2607 m ± 405.4 m |
| $1\times10^{-15}$ | 5  | 3968.6 m ± 303.5 m |
| $1\times10^{-15}$ | 10 | 3242.0 m ± 205.0 m |
| $1\times10^{-16}$ | 5  | 4005.8 m ± 85.6 m |
| $1\times10^{-16}$ | 10 | 3661.9 m ± 118.2 m |
|                   | 10 | 3913.4 m ± 56.6 m |
2D peak finding: why is it correct to move to the larger half?

You are given a 2D integer matrix. An element is a peak element if it is greater than or equal to its four neighbors: left, right, top and bottom. For example, the neighbors of A[i][j] are A[i-1][j], A[i+1][j], A[i][j-1] and A[i][j+1]. For corner and edge elements, missing neighbors are treated as negative infinity.

The known solution is to consider the middle column, find its maximum, and, if that element is not a peak, look at its left and right neighbors and recurse into the half whose neighbor is larger. This choice of side is what reduces the complexity to $O(n \log n)$. My doubt is: why is this algorithm correct?

A) GeeksforGeeks: https://www.geeksforgeeks.org/find-peak-element-2d-array/
B) MIT: this is described in MIT slides that cover the 1D peak and then the 2D peak
1) https://courses.csail.mit.edu/6.006/spring11/lectures/lec02.pdf
2) http://courses.csail.mit.edu/6.006/fall11/lectures/lecture1.pdf gives a worked example but not the reasoning
C) Stack Overflow: there is also a discussion at https://stackoverflow.com/questions/23120300/2d-peak-finding-algorithm-in-on-worst-case-time

How can the algorithm decide, based only on the left and right neighbors of the column maximum, that it is safe to go to the bigger half? I have tried it many times and it always works, and I could not construct a counterexample. This is an old question; I started a chat about it, but that did not help, so I am asking here: https://chat.stackoverflow.com/rooms/192196/2d-peak

algorithms arrays
asked by MAG

Comment (Discrete lizard ♦): I understand the algorithmic problem you're trying to solve, but it isn't clear what question you have. Do you want to know how to solve this in $O(\log n)$? (if this is possible)
Comment: There is an elementary $O(n)$ time algorithm for $n \times n$ grids, though.
Comment (Vincenzo): It is not possible to solve the problem in time $O(\log n)$. I think the question is why the algorithm called "Divide and Conquer #1" in link B) is correct. That algorithm takes time $O(n \log n)$ on an $n \times n$ grid. The slides also discuss a faster algorithm taking time $O(n)$, but the OP does not seem to be asking about that.

Answer (Vincenzo): The point is that we are not trying to find the global maximum element of the array, only a peak (that is, a local maximum). Let's say that you look at the middle column and find its global maximum, and this maximum is not a peak because the value to its left is larger. Then you know that the left half of the matrix contains a value that is larger than all of the elements of the middle column. Hence there is always a peak in the left half (for example, the global maximum of the left half, though we do not necessarily search for that). The values in the right half are irrelevant; they could be larger or smaller, but we don't care. We only want to be sure that we never recurse into a half that contains no peaks.

Comment: I understand when you say "the left half of the matrix contains a value that is larger than all of the elements of the middle column." But when you say "hence there is always a peak in the left half", why is this?
Comment (Vincenzo): Because in this case the global maximum of the left half must also be a peak.

Answer (AcId): The algorithm from GeeksforGeeks starts by finding the maximum element in the middle column (not a 1D peak).
Let us denote this element by $p_m$ and the middle column by $c_m$. A peak found on the $\min$-side (the side of the smaller of $p_m$'s horizontal neighbors) is not guaranteed to be a peak of the original matrix; consider the case where such a peak is found in the cell directly adjacent to $p_m$. A peak found on the $\max$-side, however, is always a peak of the original matrix. If that peak is found in the column directly adjacent to $c_m$, it must be the maximum element of its column to have been declared a peak, so it is at least as large as the neighbor of $p_m$ on that side and therefore greater than every element of $c_m$; hence it is a peak of the whole matrix. If the peak is not found in the column adjacent to $c_m$, then all of its neighbors lie inside that half and it is likewise a peak of the whole matrix.
answered by AcId
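To make the divide-and-conquer argument above concrete, here is a short, self-contained sketch of the algorithm being discussed (my own illustrative implementation, not code from the linked slides or answers). It repeatedly takes the maximum of the middle column of the current column window, returns it if it is at least as large as its left and right neighbours, and otherwise halves the window toward the larger neighbour, giving O(rows · log cols) time.

```python
def find_2d_peak(matrix):
    """Return (row, col) of a 2D peak: an entry >= each of its existing neighbours."""
    lo, hi = 0, len(matrix[0]) - 1
    while True:
        mid = (lo + hi) // 2
        # Row index of the maximum element in the middle column of the window.
        r = max(range(len(matrix)), key=lambda i: matrix[i][mid])
        left = matrix[r][mid - 1] if mid > lo else float("-inf")
        right = matrix[r][mid + 1] if mid < hi else float("-inf")
        if matrix[r][mid] >= left and matrix[r][mid] >= right:
            return r, mid
        if left > right:
            hi = mid - 1   # some element in the left window beats the whole middle column
        else:
            lo = mid + 1   # symmetric argument for the right window

grid = [[10,  8, 10, 10],
        [14, 13, 12, 11],
        [15,  9, 11, 21],
        [16, 17, 19, 20]]
print(find_2d_peak(grid))   # (2, 3): 21 is >= its neighbours 11 (left), 11 (above) and 20 (below)
```

Treating columns outside the current window as negative infinity is safe because of the invariant discussed in the answers: whenever the window shrinks, its new boundary column already contains an element larger than everything in the excluded divider column next to it.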
PowerPedia:Solar cell

A solar cell (or photovoltaic cell) is a device with a p-n junction that converts the radiant energy of light, such as photons from the sun, directly into electrical energy. In general, a solar cell that includes the capacity to capture both solar and nonsolar sources of light (such as photons from incandescent bulbs) is termed a photovoltaic cell. Fundamentally, the device needs to fulfill only two functions: photogeneration of charge carriers (electrons and holes) in a light-absorbing material, and separation of the charge carriers to a conductive contact that will transmit the electricity. This conversion is called the photovoltaic effect, and the field of research related to solar cells is known as photovoltaics.

Solar cells have many applications. Historically, solar cells have been used in situations where electrical power from the grid is unavailable, such as in remote-area power systems, Earth-orbiting satellites or space probes, consumer systems (e.g. handheld calculators or wrist watches), remote radiotelephones and water-pumping applications. More recently, solar cells are particularly used in assemblies of solar modules (photovoltaic arrays) connected to the electricity grid through an inverter, often in combination with a net metering arrangement. Solar cells are regarded as one of the key technologies towards a sustainable energy supply.

Three generations of development

First-generation photovoltaics (also called wafer solar cells) consist of a large-area, single-layer p-n junction diode capable of generating usable electrical energy from light sources with the wavelengths of solar light. These cells are typically made using a silicon wafer. First-generation photovoltaic cells (also known as silicon wafer-based solar cells) are the dominant technology in the commercial production of solar cells, accounting for more than 86% of the solar cell market.

Second-generation photovoltaic materials are based on the use of thin-film deposits of semiconductors. These devices were initially designed to be high-efficiency, multiple-junction photovoltaic cells. Later, the advantage of using a thin film of material was noted, reducing the mass of material required for cell design. This contributed to a prediction of greatly reduced costs for thin-film solar cells. Currently (2007) there are different technologies and semiconductor materials under investigation or in mass production, such as amorphous silicon, poly-crystalline silicon, micro-crystalline silicon, cadmium telluride, and copper indium selenide/sulfide. Typically, the efficiencies of thin-film solar cells are lower than those of bulk silicon (wafer-based) solar cells, but manufacturing costs are also lower, so that a lower price in terms of $/watt of electrical output can be achieved. Another advantage of the reduced mass is that less support is needed when placing panels on rooftops, and it allows fitting panels on light or flexible materials, even textiles.
Third-generation photovoltaics (also called advanced thin-film photovoltaics) are very different from the other two generations, broadly defined as semiconductor devices that do not rely on a traditional p-n junction to separate photogenerated charge carriers. These new devices include photoelectrochemical cells, polymer solar cells, and nanocrystal solar cells.

The term "photovoltaic" comes from the Greek phos, meaning "light", and the name of the Italian physicist Alessandro Volta, after whom the volt (and consequently voltage) is named. It means literally "of light and electricity". The photovoltaic effect was first recognised in 1839. However, it was not until 1883 that the first solar cell was built, by Charles Fritts, who coated the semiconductor selenium with an extremely thin layer of gold to form the junctions. The device was only around 1% efficient. Russell Ohl patented the modern solar cell in 1946 (US2402662, "Light sensitive device"). Sven Ason Berglund had a prior patent concerning methods of increasing the capacity of photosensitive cells. The modern age of solar power technology arrived in 1954 when Bell Laboratories, experimenting with semiconductors, accidentally found that silicon doped with certain impurities was very sensitive to light. This resulted in the production of the first practical solar cells with a sunlight energy conversion efficiency of around 6 percent. This milestone created interest in using solar cells to power communications satellites by providing a viable power supply. Russia launched the first artificial satellite in 1957, and the United States' first artificial satellite was launched in 1958. The Russian Sputnik 3 ("Satellite-3"), launched on May 15, 1958, was the first satellite to use solar arrays. This was a crucial development which diverted funding from several governments into research for improved solar cells.

The history of solar cells begins in the 1800s, when it was observed that the presence of sunlight is capable of generating usable electrical energy. Solar cells have gone on to be used in many applications. They have historically been used in situations where electrical power from the grid is unavailable.

1839 - Alexandre-Edmond Becquerel observes the photovoltaic effect via an electrode in a conductive solution exposed to light.
1873 - Willoughby Smith finds that selenium is photoconductive.
1876 - W.G. Adams and R.E. Day observe the photovoltaic effect in solid selenium and publish a paper on the selenium cell, 'The action of light on selenium', Proceedings of the Royal Society, A25, 113.
1883 - Charles Fritts develops a solar cell using selenium on a thin layer of gold, forming a device giving less than 1% efficiency.
1887 - Heinrich Hertz investigates ultraviolet-light photoconductivity.
1888 - Edward Weston receives patents US389124, "Solar cell", and US389125, "Solar cell".
1894 - Melvin Severy receives patents US527377, "Solar cell", and US527379, "Solar cell".
1897 - Harry Reagan receives patent US588177, "Solar cell".
1901 - Nikola Tesla receives patents US685957, "Apparatus for the Utilization of Radiant Energy", and US685958, "Method of Utilizing Radiant Energy".
1902 - Philipp von Lenard observes the variation in electron energy with light frequency.
1904 - Wilhelm Hallwachs makes a semiconductor-junction solar cell (copper and copper oxide).
1905 - Albert Einstein publishes a paper on the photoelectric effect.
1913 - William Coblentz receives patent US1077219, "Solar cell".
1914 - Sven Ason Berglund patents "methods of increasing the capacity of photosensitive cells".
1916 - Robert Millikan conducts experiments and proves the photoelectric effect.
1918 - Jan Czochralski, a Polish scientist, produces a method to grow single-crystal silicon.
1932 - Audobert and Stora discover the photovoltaic effect in cadmium sulfide (CdS), a photovoltaic material still used today.
1946 - Russell Ohl receives patent US2402662, "Light sensitive device".
1950s - Bell Laboratories produce solar cells for space activities.
1953 - Gerald Pearson begins research into lithium-silicon photovoltaic cells.
1954 - Bell Laboratories exhibits the first practical silicon solar cells. Shortly afterwards, AT&T demonstrates them publicly, and the press forecasts that solar cells will eventually lead to a source of "limitless energy of the sun".
1955 - Western Electric licences commercial solar cell technologies. Hoffman Electronics' Semiconductor Division creates a 2% efficient commercial solar cell for $25/cell, or $1,785/watt.
1957 - Hoffman Electronics creates an 8% efficient solar cell.
1958 - T. Mandelkorn, U.S. Signal Corps Laboratories, creates n-on-p silicon solar cells, which are more resistant to radiation damage and are better suited for space. Hoffman Electronics creates 9% efficient solar cells. Vanguard I, the first solar-powered satellite, is launched with a 0.1 W, 100 cm² solar panel.
1959 - Hoffman Electronics creates a 10% efficient commercial solar cell and introduces the use of a grid contact, reducing the cell's resistance.
1960 - Hoffman Electronics creates a 14% efficient solar cell.
1961 - A "Solar Energy in the Developing World" conference is held by the United Nations.
1962 - The Telstar communications satellite is powered by solar cells.
1963 - Sharp Corporation produces a viable photovoltaic module of silicon solar cells.
1967 - Soyuz 1 is the first manned spacecraft to be powered by solar cells.
1975 - The Florida Solar Energy Center begins operation (http://www.fsec.ucf.edu/).
1976 - David Carlson and Christopher Wronski of RCA Laboratories create the first amorphous silicon PV cells, which have an efficiency of 1.1%.
1977 - The Solar Energy Research Institute is established at Golden, Colorado. World production of solar cells exceeds 500 kW.
1980 - The Institute of Energy Conversion at the University of Delaware develops the first thin-film solar cell exceeding 10% efficiency, using Cu2S/CdS technology.
1983 - Worldwide photovoltaic production exceeds 21.3 megawatts, and sales exceed $250 million.
1985 - 20% efficient silicon cells are created by the Centre for Photovoltaic Engineering at the University of New South Wales.
Reflective solar concentrators are first used with solar cells, and a church in East Germany installs solar cells on its roof, the first such installation on a church there.
1991 - Efficient photoelectrochemical cells are developed; the dye-sensitized solar cell is invented.
1991 - President George H. W. Bush directs the U.S. Department of Energy to establish the National Renewable Energy Laboratory (transferring the existing Solar Energy Research Institute).
1993 - The National Renewable Energy Laboratory's Solar Energy Research Facility is established.
1994 - NREL develops a GaInP/GaAs two-terminal concentrator cell (180 suns) which becomes the first solar cell to exceed 30% conversion efficiency.
1996 - The National Center for Photovoltaics is established. Grätzel, at EPFL, Lausanne, Switzerland, achieves 11% efficient energy conversion with dye-sensitized cells that use a photoelectrochemical effect.
1999 - Total worldwide installed photovoltaic power reaches 1000 megawatts.

2000-Today

Solar cells in modules can now convert around 17% of incident visible radiant energy to electrical energy. Estimated yearly solar cell production reached 1,868 megawatts, and worldwide polysilicon production is projected to grow from 31,000 tons in 2005 to 36,000 tons in 2006.

There are currently many research groups active in the field of photovoltaics at universities and research institutions around the world. This research can be divided into three areas: making current-technology solar cells cheaper and/or more efficient so that they can effectively compete with other energy sources; developing new technologies based on new solar cell architectural designs; and developing new materials to serve as light absorbers and charge carriers.

Solar power satellites are proposed satellites to be built in high Earth orbit that would use microwave power transmission to beam solar power to a very large antenna on Earth, where it would be used in place of conventional power sources. Tellurium has potential applications in cadmium telluride solar cells. Some of the highest efficiencies for solar cell electric power generation have been obtained using this material, but previous applications have not yet caused demand to increase significantly.

Silicon processing

One way of reducing cost is to develop cheaper methods of obtaining silicon that is sufficiently pure. Silicon is a very common element, but it is normally bound in silica or silicate minerals. Processing silica (SiO2) to produce silicon is a very high-energy process, and more energy-efficient methods of synthesis would benefit not only the solar industry but also the industries surrounding silicon technology as a whole. The current industrial production of silicon is via the reaction between carbon (charcoal) and silica at a temperature around 1700 °C. In this process, known as carbothermic reduction, each tonne of silicon (metallurgical grade, about 98% pure) is produced with the emission of about 1.5 tonnes of carbon dioxide.
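As a rough sanity check on that figure (my own back-of-envelope arithmetic, not from the article), the simplified overall reaction SiO2 + C → Si + CO2 already accounts for roughly 1.5 tonnes of CO2 per tonne of silicon, before any process energy is counted:

```python
# Back-of-envelope check of the ~1.5 t CO2 per tonne of silicon figure for
# carbothermic reduction, using the simplified overall reaction SiO2 + C -> Si + CO2.
M_SI = 28.09    # molar mass of silicon, g/mol
M_CO2 = 44.01   # molar mass of carbon dioxide, g/mol

co2_per_tonne_si = M_CO2 / M_SI          # tonnes of CO2 per tonne of pure silicon
purity = 0.98                            # metallurgical-grade product is ~98% Si
co2_per_tonne_product = co2_per_tonne_si * purity

print(f"~{co2_per_tonne_si:.2f} t CO2 per t of pure Si, "
      f"~{co2_per_tonne_product:.2f} t per t of metallurgical-grade product")
```

The stoichiometry alone lands close to the quoted figure; in practice, emissions from the process energy come on top of it.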
Solid silica can be directly converted (reduced) to pure silicon by electrolysis in a molten salt bath at a fairly mild temperature (800 to 900 degrees Celsius) (T. Nohira et al., 'Pinpoint and bulk electrochemical reduction of insulating silicon dioxide to silicon', Nat. Mater. 2 (2003) 397; X. B. Jin et al., 'Electrochemical preparation of silicon and its alloys from solid oxides in molten calcium chloride', Angew. Chem. Int. Ed. 43 (2004) 733). While this new process is in principle the same as the FFC Cambridge process, which was first discovered in late 1996, the interesting laboratory finding is that such electrolytic silicon is in the form of porous silicon which turns readily into a fine powder (with a particle size of a few micrometres), and may therefore offer new opportunities for the development of solar cell technologies.

Another approach is to reduce the amount of silicon used, and thus the cost, as done by Origin Energy in the production of their "Sliver" cells (http://solar.anu.edu.au/level_1/research/sliver.php), by micromachining wafers into very thin, virtually transparent layers that could be used as transparent architectural coverings. Using this technique, two silicon wafers are enough to build a 140 watt panel, compared to about 60 wafers needed for conventional modules of the same power output. Yet another way to achieve cost improvements is to reduce waste during crystal formation through improved modelling of the process, as done by a university spin-off company. Another novel approach, employed by Evergreen Solar, is to grow silicon ribbons from specialized 'string puller' furnaces. They claim to be able to produce thinner cells without machining waste, and the resulting cells are naturally rectangular in shape.

Thin-film processing

Thin-film solar cells use less than 1% of the raw material (silicon or other light absorbers) compared to wafer-based solar cells, leading to a significant price drop per kWh. There are many research groups around the world actively researching different thin-film approaches and materials; however, it remains to be seen whether these solutions can achieve the same space efficiency as traditional silicon processing. One particularly promising technology is crystalline silicon thin films on glass substrates. This technology makes use of the advantages of crystalline silicon as a solar cell material, with the cost savings of using a thin-film approach. Another interesting aspect of thin-film solar cells is the possibility of depositing the cells on all kinds of materials, including flexible substrates (PET, for example), which opens a new dimension for new applications.
The invention of conductive polymers (for which Alan Heeger, Alan MacDiarmid and Hideki Shirakawa were awarded a Nobel Prize in Chemistry) may lead to the development of much cheaper cells that are based on inexpensive plastics. However, all organic solar cells made to date suffer from degradation upon exposure to ultraviolet light, and hence have lifetimes which are far too short to be viable. The conjugated double-bond systems in the polymers, which carry the charge, are always susceptible to breaking up when irradiated with shorter wavelengths. Additionally, most conductive polymers, being highly unsaturated and reactive, are highly sensitive to atmospheric moisture and oxidation, making commercial applications difficult.

Nanoparticle processing

Experimental non-silicon solar panels can be made of quantum heterostructures, e.g. carbon nanotubes or quantum dots, embedded in conductive polymers or mesoporous metal oxides. In addition, thin films of many of these materials on conventional silicon solar cells can increase the optical coupling efficiency into the silicon cell, thus boosting the overall efficiency. By varying the size of the quantum dots, the cells can be tuned to absorb different wavelengths. Although the research is still in its infancy, quantum dot-modified photovoltaics may be able to achieve up to 42 percent energy conversion efficiency due to multiple exciton generation.

Transparent conductors

Many new solar cells use transparent thin films that are also conductors of electrical charge. The dominant conductive thin films used in research now are transparent conductive oxides (abbreviated "TCO"), and include fluorine-doped tin oxide (SnO2:F, or "FTO"), doped zinc oxide (e.g. ZnO:Al), and indium tin oxide (abbreviated "ITO"). These conductive films are also used in the LCD industry for flat panel displays. The dual function of a TCO allows light to pass through a substrate window to the active light-absorbing material beneath, and also serves as an ohmic contact to transport photogenerated charge carriers away from that light-absorbing material. The present TCO materials are effective for research, but are perhaps not yet optimized for large-scale photovoltaic production. They require very special deposition conditions at high vacuum, they can sometimes suffer from poor mechanical strength, and most have poor transmittance in the infrared portion of the spectrum (e.g. ITO thin films can also be used as infrared filters in airplane windows). These factors make large-scale manufacturing more costly.

A relatively new area has emerged using carbon nanotube networks as a transparent conductor for organic solar cells. Nanotube networks are flexible and can be deposited on surfaces in a variety of ways. With some treatment, nanotube films can be highly transparent in the infrared, possibly enabling efficient low-bandgap solar cells. Nanotube networks are p-type conductors, whereas traditional transparent conductors are exclusively n-type. The availability of a p-type transparent conductor could lead to new cell designs that simplify manufacturing and improve efficiency.

Silicon wafer-based solar cells

Despite the numerous attempts at making better solar cells by using new and exotic materials, the reality is that the photovoltaics market is still dominated by silicon wafer-based solar cells (first-generation solar cells). This means that most solar cell manufacturers are equipped to produce this type of solar cell.
Therefore, a large body of research is being carried out around the world to create silicon wafer-based solar cells that achieve higher conversion efficiency without an exorbitant increase in production cost. The aim of the research is to achieve the lowest $/watt solar cell design that is suitable for commercial production.

Silicon solar cell device manufacture

Because solar cells are semiconductor devices, they share many of the same processing and manufacturing techniques as other semiconductor devices such as computer and memory chips. However, the stringent requirements for cleanliness and quality control of semiconductor fabrication are somewhat more relaxed for solar cells. Most large-scale commercial solar cell factories today make screen-printed poly-crystalline silicon solar cells. Single-crystalline wafers of the kind used in the semiconductor industry can be made into excellent high-efficiency solar cells, but they are generally considered to be too expensive for large-scale mass production.

Poly-crystalline silicon wafers are made by wire-sawing block-cast silicon ingots into very thin (180 to 350 micrometer) slices or wafers. The wafers are usually lightly p-type doped. To make a solar cell from the wafer, a surface diffusion of n-type dopants is performed on the front side of the wafer. This forms a p-n junction a few hundred nanometers below the surface. Antireflection coatings, which increase the amount of light coupled into the solar cell, are typically applied next. Over the past decade, silicon nitride has gradually replaced titanium dioxide as the antireflection coating of choice because of its excellent surface passivation qualities (i.e., it prevents carrier recombination at the surface of the solar cell). It is typically applied in a layer several hundred nanometers thick using plasma-enhanced chemical vapor deposition (PECVD). Some solar cells have textured front surfaces that, like antireflection coatings, serve to increase the amount of light coupled into the cell. Such surfaces can usually only be formed on single-crystal silicon, though in recent years methods of forming them on multicrystalline silicon have been developed.

The wafer is then metallized: a full-area metal contact is made on the back surface, and a grid-like metal contact made up of fine "fingers" and larger "busbars" is screen-printed onto the front surface using a silver paste. The rear contact is also formed by screen-printing a metal paste, typically aluminium. Usually this contact covers the entire rear side of the cell, though in some cell designs it is printed in a grid pattern. The metal electrodes then require some kind of heat treatment or "sintering" to make ohmic contact with the silicon. After the metal contacts are made, the solar cells are interconnected in series (and/or parallel) by flat wires or metal ribbons and assembled into modules or "solar panels". Solar panels have a sheet of tempered glass on the front and a polymer encapsulation on the back. Tempered glass cannot be used with amorphous silicon cells because of the high temperatures during the deposition process.

Applications and implementations

Solar cells are often electrically connected and encapsulated as a module.
PV modules often have a sheet of glass on the front (sun-facing) side with a resin barrier behind it, allowing light to pass while protecting the semiconductor wafers from the elements. Solar cells are usually connected in series in modules, creating an additive voltage, and an output rating in watts (or kilowatt-hours per day) is often quoted, the latter of which accounts for changes in insolation.

Photon to Electron Conversion

Simple explanation
# Photons in sunlight hit the solar panel and are absorbed by semiconducting materials, such as silicon.
# Electrons (negatively charged) are knocked loose from their atoms and flow through the material to produce electricity; the complementary positive charges, called holes, flow in the direction opposite to the electrons in a silicon solar panel.
# An array of solar panels converts solar energy into a usable amount of direct current (DC) electricity.
Optionally:
# The DC current enters an inverter.
# The inverter turns DC electricity into 120 or 240-volt AC (alternating current) electricity needed for home appliances.
# The AC power enters the utility panel in the house.
# The electricity is then distributed to appliances or lights in the house.
# The electricity that is not used can be fed back into the utility grid.

Photogeneration of charge carriers

When a photon hits a piece of silicon, one of three things can happen:
# the photon can pass straight through the silicon - this (generally) happens for lower energy photons,
# the photon can reflect off the surface,
# the photon can be absorbed by the silicon, which either generates heat or generates an electron-hole pair, if the photon energy is higher than the silicon band gap value.
Note that if a photon has an integer multiple of the band gap energy, it can create more than one electron-hole pair. However, this effect is usually not significant in solar cells. The "integer multiple" part is a result of quantum mechanics and the quantization of energy.

When a photon is absorbed, its energy is given to an electron in the crystal lattice. Usually this electron is in the valence band, and is tightly bound in covalent bonds between neighboring atoms, and hence unable to move far. The energy given to it by the photon "excites" it into the conduction band, where it is free to move around within the semiconductor. The covalent bond that the electron was previously a part of now has one less electron - this is known as a hole. The presence of a missing covalent bond allows the bonded electrons of neighboring atoms to move into the "hole", leaving another hole behind, and in this way a hole can move through the lattice. Thus, it can be said that photons absorbed in the semiconductor create mobile electron-hole pairs.

A photon need only have greater energy than that of the band gap in order to excite an electron from the valence band into the conduction band. However, the solar spectrum approximates a blackbody spectrum at about 6000 K, and as such, much of the solar radiation reaching the Earth is composed of photons with energies greater than the band gap of silicon.
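A quick numerical illustration of this threshold (my own sketch, not from the article, using the textbook room-temperature silicon band gap of about 1.12 eV):

```python
# Which solar photons can create an electron-hole pair in silicon, and how much
# of their energy exceeds the band gap. Illustrative only; wavelengths are arbitrary.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt
E_GAP_SI = 1.12    # silicon band gap at room temperature, eV

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

for wl in (400, 700, 1000, 1200):          # nm, from visible to near-infrared
    e = photon_energy_ev(wl)
    if e >= E_GAP_SI:
        print(f"{wl} nm: {e:.2f} eV absorbed; {e - E_GAP_SI:.2f} eV above the gap")
    else:
        print(f"{wl} nm: {e:.2f} eV -- below the band gap, mostly passes through")
```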
These higher energy photons will be absorbed by the solar cell, but the difference in energy between these photons and the silicon band gap is converted into heat (via lattice vibrations, called phonons) rather than into usable electrical energy.

Charge carrier separation
There are two main modes for charge carrier separation in a solar cell:
1. drift of carriers, driven by an electrostatic field established across the device,
2. diffusion of carriers from zones of high carrier concentration to zones of low carrier concentration (following a gradient of electrochemical potential).
In the widely used p-n junction solar cells, the dominant mode of charge carrier separation is by drift. However, in non-p-n-junction solar cells (typical of the third generation of solar cell research, such as dye and polymer thin-film solar cells), a general electrostatic field has been confirmed to be absent, and the dominant mode of separation is via charge carrier diffusion.

The p-n junction
The most commonly known solar cell is configured as a large-area p-n junction made from silicon. As a simplification, one can imagine bringing a layer of n-type silicon into direct contact with a layer of p-type silicon. In practice, p-n junctions of silicon solar cells are not made in this way, but rather by diffusing an n-type dopant into one side of a p-type wafer (or vice versa). If a piece of p-type silicon is placed in intimate contact with a piece of n-type silicon, charge carriers diffuse across the interface, and the resulting junction allows current to flow in only one direction. Electrons may pass from the n-type side into the p-type side, and holes may pass from the p-type side to the n-type side. The region where electrons have diffused across the junction is called the depletion region because it no longer contains any mobile charge carriers. It is also known as the "space charge region".

Connection to an external load
Ohmic metal-semiconductor contacts are made to both the n-type and p-type sides of the solar cell, and the electrodes are connected to an external load. Electrons that are created on the n-type side, or have been "collected" by the junction and swept onto the n-type side, may travel through the wire, power the load, and continue through the wire until they reach the p-type semiconductor-metal contact. Here, they recombine with a hole that was either created as an electron-hole pair on the p-type side of the solar cell, or swept across the junction from the n-type side after being created there.

Equivalent circuit of a solar cell
To understand the electronic behaviour of a solar cell, it is useful to create a model which is electrically equivalent and is based on discrete electrical components whose behaviour is well known. An ideal solar cell may be modelled by a current source in parallel with a diode. In practice no solar cell is ideal, so a shunt resistance and a series resistance component are added to the model. The result is the "equivalent circuit of a solar cell" shown on the left. Also shown, on the right, is the schematic representation of a solar cell for use in circuit diagrams.
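A minimal sketch of that equivalent circuit is given below: a photogenerated current source in parallel with a diode, plus series and shunt resistances. All parameter values are assumed, illustrative numbers (not measurements), and the implicit diode equation is solved by a simple damped fixed-point iteration rather than any particular library routine.

```python
import math

# Illustrative single-diode model parameters (assumed, not measured)
I_PH = 3.0       # photogenerated current, A
I_0  = 1e-9      # diode saturation current, A
N    = 1.3       # diode ideality factor
V_T  = 0.02585   # thermal voltage kT/q at ~300 K, V
R_S  = 0.02      # series resistance, ohm
R_SH = 50.0      # shunt resistance, ohm

def cell_current(v, iterations=200):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(N*Vt)) - 1) - (V + I*Rs)/Rsh
    by damped fixed-point iteration."""
    i = 0.0
    for _ in range(iterations):
        i_new = I_PH - I_0 * (math.exp((v + i * R_S) / (N * V_T)) - 1.0) \
                     - (v + i * R_S) / R_SH
        i = 0.5 * i + 0.5 * i_new   # damping keeps the iteration stable
    return i

# Trace the I-V curve and report the maximum power point of this model cell
best_v, best_p = 0.0, 0.0
v = 0.0
while v <= 0.7:
    i = cell_current(v)
    if i < 0.0:
        break
    p = v * i
    if p > best_p:
        best_v, best_p = v, p
    v += 0.005
print(f"estimated maximum power ~{best_p:.2f} W at ~{best_v:.2f} V")
```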
Solar Cell Anatomy
All solar cells require a light-absorbing material contained within the cell structure to absorb photons and generate electrons via the photovoltaic effect. The materials used in solar cells tend to have the property of preferentially absorbing the wavelengths of solar light that reach the Earth's surface; however, some solar cells are optimized for light absorption beyond Earth's atmosphere as well. Light-absorbing materials can often be used in multiple physical configurations to take advantage of different light absorption and charge separation mechanisms (listed in alphabetical order). Many currently available solar cells are configured as bulk materials that are subsequently cut into wafers and treated in a "top-down" method of synthesis (silicon being the most prevalent bulk material). Other materials are configured as thin films (inorganic layers, organic dyes, and organic polymers) that are deposited on supporting substrates, while a third group is used as quantum dots (electron-confined nanoparticles) embedded in a supporting matrix in a "bottom-up" approach. Silicon remains the only material that is well-researched in both bulk and thin-film configurations. These bulk technologies are often referred to as wafer-based manufacturing. In other words, in each of these approaches, self-supporting wafers between 180 and 240 micrometers thick are processed and then soldered together to form a solar cell module. A general description of silicon wafer processing is provided in Manufacture and Devices. By far the most prevalent bulk material for solar cells is crystalline silicon (abbreviated as a group as c-Si), also known as "solar grade silicon". Bulk silicon is separated into multiple categories according to crystallinity and crystal size in the resulting ingot, ribbon, or wafer.
1. Monocrystalline silicon (c-Si): often made using the Czochralski process. Single-crystal wafer cells tend to be expensive, and because they are cut from cylindrical ingots, do not completely cover a square solar cell module without a substantial waste of refined silicon. Hence most c-Si panels have uncovered gaps at the corners of four cells.
2. Poly- or multicrystalline silicon (poly-Si or mc-Si): made from cast square ingots, large blocks of molten silicon carefully cooled and solidified. These cells are less expensive to produce than single crystal cells but are less efficient.
3. Ribbon silicon: formed by drawing flat thin films from molten silicon, resulting in a multicrystalline structure. These cells have lower efficiencies than poly-Si, but save on production costs due to a great reduction in silicon waste, as this approach does not require sawing from ingots.
4. New structures: special arrangements of silicon that can dramatically improve efficiency.
The various thin-film technologies currently being developed reduce the amount (or mass) of light-absorbing material required in creating a solar cell. This can lead to reduced processing costs compared with bulk materials (in the case of silicon thin films), but also tends to reduce energy conversion efficiency, although many multi-layer thin films have efficiencies above those of bulk silicon wafers.
Cadmium telluride (CdTe) is an efficient light-absorbing material for thin-film solar cells. Compared to other thin-film materials, CdTe is easier to deposit and more suitable for large-scale production. Despite much discussion of the toxicity of CdTe-based solar cells, this is the only technology (apart from amorphous silicon) that can be delivered on a large scale, as shown by First Solar and Antec Solar. There is a 40 megawatt plant in Ohio (USA) and a 10 megawatt plant in Germany. First Solar is scaling up to a 100 MW plant in Germany. The perception of the toxicity of CdTe is based on the toxicity of elemental cadmium. However, it is possible for toxic elements to combine to form a harmless compound, as in the example of sodium chloride, better known as common salt, which consists of the highly reactive metal sodium and the corrosive and toxic gas chlorine. Scientific work, particularly by researchers of the National Renewable Energy Laboratories (NREL) in the USA, has shown that the release of cadmium to the atmosphere is lower with CdTe-based solar cells than with silicon photovoltaics and other thin-film solar cell technologies.

CIGS cells are multi-layered thin-film composites. The abbreviation stands for copper indium gallium diselenide. Unlike the basic silicon solar cell, which can be modelled as a simple p-n junction (see above), these cells are best described by a more complex heterojunction model. The best efficiency of a thin-film solar cell as of December 2005 was 19.5% with CIGS. Higher efficiencies (around 30%) can be obtained by using optics to concentrate the incident light. The use of gallium increases the band gap of the CIGS layer compared to CIS, thus increasing the voltage. From another point of view, gallium is added to replace as much indium as possible because of gallium's greater availability relative to indium. Approximately 70% of the indium currently produced is used by the flat-screen monitor industry. Some investors in solar technology worry that production of CIGS cells will be limited by the availability of indium. Producing 2 GW of CIGS cells (roughly the amount of silicon cells produced in 2006) would use about 10% of the indium produced in 2004. For comparison, silicon solar cells used up 33% of the world's electronic-grade silicon production in 2006. Nanosolar claims to waste only 5% of the indium it uses, and suggests that Daystar's vacuum sputtering technology may tend to waste about 60% of the indium. As of 2006, the best conversion efficiency for flexible CIGS cells on polyimide is 14.1%, by Tiwari et al. at ETH, Switzerland. That being said, indium can easily be recycled from decommissioned PV modules. The recycling program in Germany would be one good example to follow. It also highlights the new regenerative industrial paradigm: "from cradle to cradle". Selenium allows for better uniformity across the layer, so the number of recombination sites in the film is reduced, which benefits the quantum efficiency and thus the conversion efficiency.
$$\begin{pmatrix}\mathrm{Cu}\\ \mathrm{Ag}\\ \mathrm{Au}\end{pmatrix}\begin{pmatrix}\mathrm{Al}\\ \mathrm{Ga}\\ \mathrm{In}\end{pmatrix}\begin{pmatrix}\mathrm{S}\\ \mathrm{Se}\\ \mathrm{Te}\end{pmatrix}_2$$

Possible combinations of group I, III and VI elements in the periodic table that exhibit the photovoltaic effect. The materials based on CuInSe2 that are of interest for photovoltaic applications include several elements from groups I, III and VI in the periodic table. These semiconductors are especially attractive for thin-film solar cell applications because of their high optical absorption coefficients and versatile optical and electrical characteristics, which can in principle be manipulated and tuned for a specific need in a given device. CIS is an abbreviation for general chalcopyrite films of copper indium selenide (CuInSe2); CIGS, mentioned above, is a variation of CIS. While these films can achieve 13.5% efficiency, their manufacturing costs at present are high compared to silicon solar cells, but continuing work is leading to more cost-effective production processes. A manufacturing plant was built in Germany by Würth Solar. It was inaugurated in October 2006, and full production is expected by the end of 2006. There are further plans by AVANCIS and Shell, in a joint effort, to build another plant in Germany with a capacity of 20 MW. Honda in Japan has finished its pilot-plant testing and is launching its commercial production. In North America, Global Solar has been producing pliable CIS solar cells on a smaller scale since 2001. Apart from Daystar Technologies and Nanosolar, mentioned under CIGS, there are other potential manufacturers coming on line, such as Miasole, using a vacuum sputtering method, and also a Canadian initiative, CIS Solar, attempting to make solar cells by a low-cost electroplating process.

Gallium arsenide (GaAs) multijunction
High-efficiency cells have been developed for special applications such as satellites and space exploration. These multijunction cells consist of multiple thin films produced using epitaxial growth techniques. A triple-junction cell, for example, may consist of the semiconductors GaAs, Ge, and GaInP2. Each type of semiconductor will have a characteristic band gap energy which, loosely speaking, causes it to absorb light most efficiently at a certain color, or more precisely, to absorb light over a portion of the spectrum. The semiconductors are carefully chosen to absorb nearly all of the solar spectrum, thus generating electricity from as much of the solar energy as possible. GaAs multijunction devices are the most efficient solar cells to date, reaching as high as 39% efficiency. They are also some of the most expensive cells per unit area (up to US$40/cm²).

Light absorbing dyes
Typically a ruthenium metalorganic dye (Ru-centered) is used as a monolayer of light-absorbing material. The dye-sensitized solar cell depends on a mesoporous layer of nanoparticulate titanium dioxide to greatly amplify the surface area (200-300 m²/gram TiO2, as compared to approximately 10 m²/gram of flat single crystal). The photogenerated electrons from the light-absorbing dye are passed on to the n-type TiO2, and the holes are passed to an electrolyte on the other side of the dye. The circuit is completed by a redox couple in the electrolyte, which can be liquid or solid.
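The element matrix above implies 3 x 3 x 3 = 27 candidate I-III-VI2 chalcopyrite compounds. The short sketch below simply enumerates them; the compound names are formed by concatenating element symbols and are only a combinatorial illustration, not a claim that every member is a practical absorber.

```python
from itertools import product

group_I   = ["Cu", "Ag", "Au"]
group_III = ["Al", "Ga", "In"]
group_VI  = ["S", "Se", "Te"]

# Enumerate every I-III-VI2 combination from the matrix of elements
compounds = [f"{a}{b}{c}2" for a, b, c in product(group_I, group_III, group_VI)]
print(len(compounds), "possible I-III-VI2 chalcopyrites, e.g.:")
print(", ".join(compounds[:9]))

# CuInSe2 (CIS) is the member discussed in the text; CIGS is its Ga-substituted variant
assert "CuInSe2" in compounds
```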
This type of cell allows a more flexible use of materials and is typically manufactured by screen printing, with the potential for lower processing costs than those used for bulk solar cells. However, the dyes in these cells also suffer from degradation under heat and UV light, and the cell casing is difficult to seal due to the solvents used in assembly. In spite of the above, this is a popular emerging technology with some commercial impact forecast within this decade.

Organic/polymer solar cells
Organic solar cells and polymer solar cells are built from thin films of organic semiconductors. Energy conversion efficiencies achieved to date using conductive polymers are low, at 4-5% for the best cells to date. However, these cells could be beneficial for some applications where mechanical flexibility and disposability are important.

Silicon thin films are mainly deposited by chemical vapor deposition (typically plasma-enhanced, PE-CVD) from silane gas and hydrogen gas. Depending on the deposition parameters, this can yield:
1. amorphous silicon (a-Si or a-Si:H),
2. protocrystalline silicon, or
3. nanocrystalline silicon (nc-Si or nc-Si:H).
These types of silicon present dangling and twisted bonds, which result in deep defects (energy levels in the band gap) as well as deformation of the valence and conduction bands (band tails). The solar cells made from these materials tend to have lower energy conversion efficiency than bulk silicon, but are also less expensive to produce. The quantum efficiency of thin-film solar cells is also lower due to the reduced number of collected charge carriers per incident photon. Amorphous silicon has a higher band gap (1.7 eV) than crystalline silicon (c-Si) (1.1 eV), which means it absorbs the visible part of the solar spectrum more strongly than the infrared portion of the spectrum. As nc-Si has about the same band gap as c-Si, the two materials can be combined in thin layers, creating a layered cell called a tandem cell. The top cell in a-Si absorbs the visible light and leaves the infrared part of the spectrum for the bottom cell in nanocrystalline Si. Recently, solutions to overcome the limitations of thin-film crystalline silicon have been developed. Light-trapping schemes, where the incoming light is obliquely coupled into the silicon and the light traverses the film several times, enhance the absorption of sunlight in the films. Thermal processing techniques enhance the crystallinity of the silicon and passivate electronic defects. The result is a new technology: thin-film Crystalline Silicon on Glass (CSG) (http://www.csgsolar.com/downloads/CSG_Press_PVSC31Jan2005.pdf). CSG solar devices represent a balance between the low cost of thin films and the high efficiency of bulk silicon. A silicon thin-film technology is being developed for building-integrated photovoltaics (BIPV) in the form of semi-transparent solar cells which can be applied as window glazing. These cells function as window tinting while generating electricity.
Nanocrystalline solar cells
These structures make use of some of the same thin-film light-absorbing materials but are overlain as an extremely thin absorber on a supporting matrix of conductive polymer or mesoporous metal oxide having a very high surface area to increase internal reflections (and hence increase the probability of light absorption).

Concentrating photovoltaics (CPV)
Concentrating photovoltaic systems use a large area of lenses or mirrors to focus sunlight on a small area of photovoltaic cells. If these systems use single- or dual-axis tracking to improve performance, they may be referred to as Heliostat Concentrator Photovoltaics (HCPV). The primary attraction of CPV systems is their reduced usage of semiconducting material, which is expensive and currently in short supply. Additionally, increasing the concentration ratio improves the performance of general photovoltaic materials and also allows for the use of high-performance materials such as gallium arsenide. Despite the advantages of CPV technologies, their application has been limited by the costs of focusing, tracking and cooling equipment. In 2006, Victoria, Australia announced it would construct a solar plant using this technology, to come online in 2008 and be completed by 2013. This plant, at 154 MW, would be ten times larger than the largest current photovoltaic plant in the world.

Energy Loss

Solar-powering
A photovoltaic array is a linked collection of photovoltaic modules, one of which is shown in the picture to the right. Each photovoltaic (PV) module is made of multiple interconnected solar cells. The cells convert solar energy into direct-current electricity. PV modules distinguish themselves from solar cells in that they are conveniently sized and packaged in weather-resistant housings for easy installation and deployment in residential, commercial, and industrial applications. The application and study of photovoltaic devices is known as photovoltaics. Solar cells work because of the photovoltaic effect: certain materials are able to convert sunlight into electricity. They absorb some of the energy of the incident light and cause current to flow between two oppositely charged layers. Individual solar cells provide a relatively small amount of power, but electrical output can be significant when cells are connected together in a PV module. The cells, modules, and arrays can be connected in series or parallel, or typically a combination, to create a desired peak voltage output. The photovoltaic effect was first observed in 1839, and for a long time photovoltaic cells were used mainly for the purposes of measuring light. Just under one hundred years later, Albert Einstein received the Nobel Prize in Physics in 1921 for explaining the photoelectric effect, which allowed practical use of photo cells to be put into use. In 1941, Russell Ohl invented the silicon solar cell, following his discovery of the p-n junction. Solar photovoltaic panels are frequently applied in remote-area and satellite power systems.
However, costs of production have been reduced in recent years, allowing more widespread use through production and technological advances. For example, single-crystal silicon solar cells have largely been replaced by less expensive multicrystalline silicon solar cells, and thin-film silicon solar cells have recently been developed at still lower production costs (see Solar cell). Although they are reduced in energy conversion efficiency from single-crystalline Si wafers, they are also much easier to produce at comparably lower costs. Together with a storage battery, photovoltaic systems have become commonplace for certain low-power applications, such as signal buoys or devices in remote areas or simply where connection to the electricity mains would be impractical. In experimental form they have even been used to power automobiles in races such as the World Solar Challenge across Australia. Many yachts and land vehicles use them to charge on-board batteries.

PV performance
At high noon on a cloudless day at the equator, the power of the sun is about 1 kW/m² at the Earth's surface, on a plane perpendicular to the sun's rays. As such, PV arrays can track the sun through each day to greatly enhance energy collection. However, tracking devices add cost and require maintenance, so it is more common for PV arrays to have fixed mounts that tilt the array and face due south in the Northern Hemisphere. In the Southern Hemisphere, they should point due north. The tilt angle, from horizontal, can be varied for season, but if fixed, should be set to give optimal array output during the peak electrical demand portion of a typical year. Other factors affect PV performance. Accounting for clouds, the fact that most of the world is not on the equator, and that the sun sets in the evening, the correct measure of solar power is insolation: the average number of kilowatt-hours per square meter per day. For the weather and latitudes of the United States and Europe, typical insolation ranges from 4 kWh/m²/day in northern climes to 6.5 kWh/m²/day in the sunniest regions. Typical solar panels have an average efficiency of 12%, with the best commercially available panels at 20%. Thus, a photovoltaic installation in the southern latitudes of Europe or the United States may expect to produce 1 kWh/m²/day. A typical "150 watt" solar panel is about a square meter in size: such a panel may be expected to produce 1 kWh every day, on average, after taking account of the weather and the latitude (see http://www.colorado.gov/oemc/presentations/060125-manure.pdf). Photovoltaic cells' electrical output is extremely sensitive to shading. When even a small portion of a cell, module, or array is shaded, while the remainder is in sunlight, the output falls dramatically due to internal 'short-circuiting' (the electrons reversing course through the shaded portion of the p-n junction). Therefore it is extremely important that a PV installation is not shaded at all by trees, architectural features, flag poles, or other obstructions. Module output and life are also degraded by increased temperature.
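The insolation and efficiency figures above combine into a daily energy estimate as sketched below; the panel area and efficiency values simply restate the numbers quoted in the text, and the result is of the same order as the ~1 kWh/day figure mentioned for a "150 watt" panel.

```python
def daily_energy_kwh(insolation_kwh_m2_day, efficiency, area_m2):
    """Average daily electrical output of a PV panel.

    insolation_kwh_m2_day : average incident solar energy, kWh per m^2 per day
    efficiency            : fraction of incident energy converted to electricity
    area_m2               : panel area in square metres
    """
    return insolation_kwh_m2_day * efficiency * area_m2

# Figures quoted in the text: ~4 to 6.5 kWh/m^2/day insolation, ~12% typical
# efficiency, and a roughly one-square-metre "150 watt" panel.
for insolation in (4.0, 5.5, 6.5):
    e = daily_energy_kwh(insolation, 0.12, 1.0)
    print(f"insolation {insolation:.1f} kWh/m2/day -> about {e:.2f} kWh/day per m2")
```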
Allowing ambient air to flow over, and if possible behind, PV modules reduces this problem. However, effective module lives are typically 25 years or more (http://www.agr.gc.ca/pfra/water/solardugout_e.htm), so replacement costs should be considered as well.

Solar photovoltaic panels on spacecraft
Solar panels can be used on spacecraft to supply electrical power. Because of these efforts to maximize electric production, and the fact that the Sun is usually the only available source of energy, the cost of the solar cells can be one of the highest costs of a spacecraft. When journeying to the outer parts of the solar system (or beyond), nuclear reactors or radioisotope thermoelectric generators are preferred, as the Sun's rays are too weak at such great distances to power a spacecraft. In addition, solar power is being considered for use as a propulsion mechanism in lieu of conventional rocket propulsion.

Theory and construction
Crystalline silicon and gallium arsenide are typical choices of semiconductor materials for solar cells. Gallium arsenide crystals are grown especially for photovoltaic use, while silicon crystals are available in less-expensive standard ingots. These ingots are produced mainly for consumption in the microelectronics industry. Polycrystalline silicon has lower conversion efficiency but also lower cost. During the manufacturing process, crystalline silicon ingots are sliced into thin wafers and etched to remove slicing damage, and electrical contacts are deposited onto each surface: a thin grid on the sun-facing side and usually a flat sheet on the other. Solar panels are constructed of these cells cut into appropriate shapes, protected from radiation and handling damage on the front surface by bonding on a cover glass, and cemented onto a substrate (either a rigid panel or a flexible blanket). Electrical connections are made in series to achieve a desired output voltage and/or in parallel to provide a desired amount of current source capability. The cement and the substrate must be thermally conductive, because the cells heat up from absorbing solar energy that is not converted to electricity. Since cell heating reduces the operating efficiency, it is desirable to minimize the heating. The resulting assemblies are called solar panels or solar arrays.

Solar-power Issues
Maximum-power point
A solar cell may operate over a wide range of voltages (V) and currents (I). By increasing the resistive load on an irradiated cell from zero (a short circuit) to a very high value (an open circuit) one can determine the maximum-power point, that is, the maximum output electrical power that the cell can deliver at that level of irradiation.
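The load-sweep idea can be illustrated numerically: given a set of operating points on the I-V curve, the maximum-power point is simply the point that maximizes P = V x I, and the corresponding optimal resistive load is R = Vm/Im. The current-voltage values below are hypothetical example measurements, not data from the text.

```python
# Hypothetical current-voltage measurements for an irradiated cell
# (volts, amperes); values are illustrative only.
iv_data = [
    (0.00, 3.00), (0.10, 2.99), (0.20, 2.98), (0.30, 2.96),
    (0.40, 2.92), (0.45, 2.85), (0.50, 2.70), (0.55, 2.35),
    (0.58, 1.90), (0.60, 1.40), (0.62, 0.70), (0.64, 0.00),
]

# Power delivered to the load at each operating point
powers = [(v * i, v, i) for v, i in iv_data]
p_max, v_m, i_m = max(powers)

print(f"maximum power point: Pm = {p_max:.2f} W at Vm = {v_m:.2f} V, Im = {i_m:.2f} A")
if i_m > 0:
    print(f"optimal resistive load: R = Vm/Im = {v_m / i_m:.2f} ohm")
```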
Vm × Im = Pm, in watts.

Energy conversion efficiency
A solar cell's energy conversion efficiency ($\eta$, "eta") is the percentage of power converted (from absorbed light to electrical energy) and collected when a solar cell is connected to an electrical circuit. This term is calculated using the ratio of Pm divided by the input light irradiance under "standard" test conditions (E, in W/m²) and the surface area of the solar cell (Ac, in m²):

$$\eta = \frac{P_{m}}{E \times A_c}$$

At solar noon on a clear March or September equinox day, the solar radiation at the equator is about 1000 W/m². Hence, the "standard" solar radiation (known as the "air mass 1.5 spectrum") has a power density of 1000 watts per square meter. Thus, a 12% efficiency solar cell having 1 m² of surface area in full sunlight at solar noon at the equator during either the March or September equinox will produce approximately 120 watts of peak power. Another defining term in the overall behavior of a solar cell is the fill factor (FF). This is the ratio of the maximum power point divided by the open-circuit voltage (Voc) and the short-circuit current (Isc):

$$FF = \frac{P_{m}}{V_{oc} \times I_{sc}} = \frac{\eta \times A_c \times E}{V_{oc} \times I_{sc}}$$

Quantum efficiency
Quantum efficiency refers to the percentage of absorbed photons that produce electron-hole pairs (or charge carriers). This is a term intrinsic to the light-absorbing material, and not the cell as a whole (which becomes more relevant for thin-film solar cells). This term should not be confused with energy conversion efficiency, as it does not convey information about the power collected from the solar cell.

Comparison of energy conversion efficiencies
Silicon solar cell efficiencies vary from 6% for amorphous silicon-based solar cells to 40.7% for multiple-junction research lab cells. Solar cell energy conversion efficiencies for commercially available mc-Si solar cells are around 14-16%. The highest efficiency cells have not always been the most economical: for example, a 30% efficient multijunction cell based on exotic materials such as gallium arsenide or indium selenide and produced in low volume might well cost one hundred times as much as an 8% efficient amorphous silicon cell in mass production, while only delivering about four times the electrical power. To make practical use of the solar-generated energy, the electricity is most often fed into the electricity grid using inverters (grid-connected PV systems); in stand-alone systems, batteries are used to store the energy that is not needed immediately. A common method used to express economic costs of electricity-generating systems is to calculate a price per delivered kilowatt-hour (kWh). The solar cell efficiency in combination with the available irradiation has a major influence on the costs, but generally speaking the overall system efficiency is important. Using commercially available solar cells (as of 2006) and system technology leads to system efficiencies between 5 and 19%. As of 2005, photovoltaic electricity generation costs ranged from ~50 eurocents/kWh (0.60 US$/kWh) in central Europe down to ~25 eurocents/kWh (0.30 US$/kWh) in regions of high solar irradiation.
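The two definitions above are easy to evaluate directly. The sketch below reproduces the worked example from the text (120 W of peak power from a 1 m² device under 1000 W/m² gives 12% efficiency); the open-circuit voltage and short-circuit current are illustrative module-level values, not figures from the text.

```python
def conversion_efficiency(p_max_w, irradiance_w_m2, area_m2):
    """eta = Pm / (E * Ac): fraction of incident light power converted."""
    return p_max_w / (irradiance_w_m2 * area_m2)

def fill_factor(p_max_w, v_oc_v, i_sc_a):
    """FF = Pm / (Voc * Isc)."""
    return p_max_w / (v_oc_v * i_sc_a)

# Worked example from the text: 120 W peak power, 1 m^2 area, 1000 W/m^2 irradiance.
# Voc and Isc below are assumed illustrative values for such a module.
p_max, irradiance, area = 120.0, 1000.0, 1.0
v_oc, i_sc = 21.0, 7.5

print(f"energy conversion efficiency eta = {conversion_efficiency(p_max, irradiance, area):.1%}")
print(f"fill factor FF = {fill_factor(p_max, v_oc, i_sc):.2f}")
```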
This electricity is generally fed into the electrical grid on the customer's side of the meter. The cost can be compared to prevailing retail electric pricing (as of 2005), which varied between 0.04 and 0.50 US$/kWh worldwide. (Note: in addition to solar irradiance profiles, these cost/kWh calculations will vary depending on assumptions for the years of useful life of a system. Most c-Si panels are warrantied for 25 years and should see 35+ years of useful life.) The chart at the right illustrates the various commercial large-area module energy conversion efficiencies and the best laboratory efficiencies obtained for various materials and technologies.

Peak watt (or watt peak)
Since solar cell output power depends on multiple factors, such as the intensity and spectrum of the incident light and the cell temperature, for comparison purposes between different cells and panels the peak watt (Wp) is used. It is the output power under these conditions:
1. irradiance 1000 W/m²
2. solar spectrum AM (air mass) 1.5
3. cell temperature 25 °C

Solar cells and energy payback
There is a common conception that solar cells never produce more energy than it takes to make them. While the expected working lifetime is around 40 years, the energy payback time of a solar panel is anywhere from 1 to 20 years (usually under five) depending on the type and where it is used (see net energy gain). This means solar cells can be net energy producers and can "reproduce" themselves (from just over once to more than 30 times) over their lifetime. This is disputed, however, by some researchers who object that such analysis doesn't take into account waste, inefficiency, and related energy costs that would come with a real-world solar cell.
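The payback arithmetic above reduces to a single ratio, sketched below using only the lifetime and payback figures quoted in the text.

```python
def lifetime_energy_multiple(lifetime_years, payback_years):
    """How many times a panel 'reproduces' its embodied energy over its working life."""
    return lifetime_years / payback_years

# Figures quoted in the text: ~40 year working lifetime and an energy payback
# time anywhere from 1 to 20 years (usually under five).
for payback in (1, 5, 20):
    m = lifetime_energy_multiple(40, payback)
    print(f"payback {payback:>2} yr -> produces ~{m:.0f}x its embodied energy over 40 yr")
```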
CommonCrawl
Shape-directed rotation of homogeneous micromotors via catalytic self-electrophoresis

Allan M. Brooks, Mykola Tasinkevych, Syeda Sabrina, Darrell Velegol, Ayusman Sen & Kyle J. M. Bishop (ORCID: orcid.org/0000-0002-7467-3668)

Nature Communications volume 10, Article number: 495 (2019)

The pursuit of chemically-powered colloidal machines requires individual components that perform different motions within a common environment. Such motions can be tailored by controlling the shape and/or composition of catalytic microparticles; however, the ability to design particle motions remains limited by incomplete understanding of the relevant propulsion mechanism(s). Here, we demonstrate that platinum microparticles move spontaneously in solutions of hydrogen peroxide and that their motions can be rationally designed by controlling particle shape. Nanofabricated particles with n-fold rotational symmetry rotate steadily with speed and direction specified by the type and extent of shape asymmetry. The observed relationships between particle shape and motion provide evidence for a self-electrophoretic propulsion mechanism, whereby anodic oxidation and cathodic reduction occur at different rates at different locations on the particle surface. We develop a mathematical model that explains how particle shape impacts the relevant electrocatalytic reactions and the resulting electrokinetic flows that drive particle motion.

Catalytic nanomotors1 harness chemical energy from their environment to power fluid flows and particle motions with emerging opportunities in microrobotics2,3 and the study of active matter4,5. In these contexts, it is often desirable to direct independently the motions of individual motors within heterogeneous ensembles—for example, to guide self-organization in mixtures of counter-rotating spinners6,7. Such control cannot be achieved using global fields, which act on all motors simultaneously, but instead requires guidance at the level of the individual particles. For a broad class of self-phoretic motors8,9, different types of motion can be achieved by controlling the particle symmetry through variations in surface composition and/or geometric shape. For example, axially symmetric motors with broken fore-aft symmetry, such as bimetallic nanorods1 or platinum Janus spheres10, enable linear self-propulsion; other motions such as circular11 and helical trajectories are possible by breaking additional symmetries.
While particle symmetry dictates the types of motion allowed (e.g., particle rotation), it says nothing about the quantitative details of such motions (e.g., the speed and direction). The ability to rationally design or program the desired dynamics requires control over particle shape and/or composition as well as knowledge of how such asymmetries direct particle motions. These capabilities could enable the realization of active colloidal ecosystems in which mixtures of different particle species exhibit distinct dynamical behaviors within a common environment to perform collective functions12,13,14. Particle shape provides a particularly attractive medium in which to encode the desired motions of active colloidal particles. In contrast to strategies based on surface patchiness, shape-directed motions can be realized in particles of homogeneous composition, thereby facilitating particle synthesis or fabrication15. Moreover, particle shape can be used to control both the type and the extent of asymmetry, thereby enabling the continuous tuning of dynamical behaviors16. Perhaps most importantly, the use of shape to direct motion is almost universally applicable to the wide variety of propulsion mechanisms used to power active colloids. Recent examples of shape-directed motions make use of acoustic17,18, thermocapillary19, electrokinetic16,20,21, and vibrational22,23 energy inputs to propel homogeneous material particles. Similar asymmetries underlie the propulsion of catalytic microrockets24,25,26,27, which direct the release of chemically generated gas bubbles to perform linear and helical motions. By contrast, motors based on the platinum-catalyzed decomposition of hydrogen peroxide—arguably the most studied system—are propelled by self-phoretic flows due to reaction-induced gradients at the particle surface28,29. Previous efforts to direct their motion have relied on compositional asymmetries patterned onto particles of isotropic10 or anisotropic30,31,32 shapes. However, it remains unclear if and how shape alone might induce and direct the motions of homogeneous platinum particles through peroxide solutions. Here, we demonstrate that catalytic micromotors made of solid platinum rotate autonomously in solutions of hydrogen peroxide as directed by asymmetries in their parametric shapes (Fig. 1). Particles are fabricated using projection lithography to form thin plates shaped like twisted stars with n-fold rotational symmetry (Cnh point group). As anticipated by symmetry considerations, the particles orient parallel to the substrate and rotate steadily about their symmetry axis. Importantly, the direction and speed of rotation is prescribed quantitatively by the type and extent of shape asymmetry; particles can be made to spin clockwise (CW) or counterclockwise (CCW) at a specified rate. Such rotary motions provide a basis for the experimental study of active matter comprised of self-rotating units6,7,33,34 and may be useful as components of colloidal machines13,14,35. Shape-directed particle rotation. a Schematic illustration of two platinum disks (a = 5.77 μm, δ = 100 nm) of opposite handedness rotating above a glass substrate in a solution of hydrogen peroxide. b The parametric particle shape depends on the number of fins n and the fin asymmetry c. c Optical microscope image of particles with n = 3 and c = 2 rotating clockwise at a speed of Ω = 1.5 ± 0.2 rad s−1 in a 10 wt% solution of hydrogen peroxide (see Supplementary Movie 1). 
d Optical microscope image of multiple types of particles rotating in different directions and at different rates in a common environment of 10 wt% hydrogen peroxide (see Supplementary Movie 2). Scale bars are 20 μm To explain the relationship between particle shape and motion, we describe additional experiments and simulations that reveal the likely mechanism underlying catalytic propulsion. Consistent with previous studies of platinum Janus spheres28,29, we observe that rotation speed decreases monotonically with increasing salt concentration and reverses direction upon addition of a cationic surfactant. These observations support a self-electrophoretic propulsion mechanism analogous to that of bimetallic nanorods36,37,38, whereby the anodic oxidation and cathodic reduction of H2O2 occur at different rates at different locations on the Pt surface28,29. The emergence of anodic and cathodic regions on an otherwise homogeneous platinum surface is explained by a kinetic mechanism for the electrocatalytic decomposition of hydrogen peroxide39 on platinum in the limit of transport-controlled reaction rates. With no free parameters, we show that this mechanism is consistent with our experimental observations regarding the direction and speed of rotation as a function of particle shape, peroxide concentration, ionic strength, and particle zeta potential. Looking forward, we discuss how insights from the model can be used to improve the performance of catalytic nanomotors and design shape-directed particle motions and fluid flows with increased functionality. Shape-directed particle motions We fabricated plate-like Pt particles (δ = 100 nm thick) with twisted-star shapes using projection photolithography followed by electron beam evaporation (Fig. 1; see Methods). The shape of the particle perimeter was prescribed by the following parametric equations for a plane curve in polar coordinates, $$r(s) = a[1 + b\,{\mathrm{cos}}(ns)]\,{\mathrm{and}}\,\theta (s) = s + \frac{c}{n}{\mathrm{cos}}(ns)\,$$ for 0 < s < 2π where r denotes the radius of the particle at angle θ, and s is the parameter. These equations define a twisted star with n asymmetric arms (or 'fins') and an average radius of a = 5.77 μm (Fig. 1b). The dimensionless parameter b controls the length of each fin, while c determines the degree of fin asymmetry. In our experiments, the fin asymmetry was varied from c = 0 to ±2; the fin length was held constant at b = 0.3. The platinum particles were dispersed in aqueous solutions of hydrogen peroxide and sedimented onto glass slides for imaging by an optical microscope. The particles settled at the surface in a preferred orientation (Fig. 1c) due to a slight warping of the plates introduced during the fabrication process (Supplementary Fig. 1). These observations suggest that the particles were actually chiral (Cn point group) with two distinct stereoisomers denoted R and S by analogy to the chemical convention. In their preferred orientations, R particles correspond to c > 0 when viewed from above; S particles to c < 0. At the surface, particles rotated steadily at angular speeds of up to 1.5 rad s−1 in 10 wt% hydrogen peroxide (Fig. 1c; see also Supplementary Movie 1 and Supplementary Fig. 2). Within heterogeneous mixtures, particles of different shapes rotated independently in different directions and at different rates within a common environment (Fig. 1d; see also Supplementary Movie 2). 
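The contour defined by the parametric equations above is straightforward to generate numerically. The sketch below directly implements r(s) and theta(s) with the parameters used in the experiments (a = 5.77 um, b = 0.3); the values of c and n passed in the example call are just one of the shapes studied.

```python
import math

def twisted_star(a=5.77, b=0.3, c=2.0, n=3, points=720):
    """Return (x, y) points on the particle contour
    r(s) = a*(1 + b*cos(n*s)),  theta(s) = s + (c/n)*cos(n*s),  0 <= s < 2*pi.
    Lengths follow the text (micrometres); c > 0 and c < 0 give the two mirror forms.
    """
    contour = []
    for k in range(points):
        s = 2.0 * math.pi * k / points
        r = a * (1.0 + b * math.cos(n * s))
        theta = s + (c / n) * math.cos(n * s)
        contour.append((r * math.cos(theta), r * math.sin(theta)))
    return contour

xy = twisted_star(c=2.0, n=3)
print(f"{len(xy)} contour points; first point (x, y) = ({xy[0][0]:.2f}, {xy[0][1]:.2f}) um")
```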
The speed and direction of rotation were determined by particle shape as parameterized by the fin asymmetry c and the number of fins n (Fig. 2). As viewed from above, R-particles rotated clockwise (Ω < 0) while S-particles rotated counterclockwise (Ω > 0); achiral particles (c = 0) showed no systematic rotation. The angular speed increased monotonically with increasing fin asymmetry as observed previously for acoustically-powered spinners of similar shapes18. Interestingly, catalytic spinners rotated in the opposite direction from that of acoustic spinners, highlighting the important role of the propulsion mechanism on shaped-directed particle motions. By contrast to the fin asymmetry, the number of fins n did not significantly influence the speed or direction of particle rotation (Fig. 2, inset). Effect of shape asymmetry. The angular velocity Ω decreases with increasing shape asymmetry c for particles with different numbers of fins n. The inset shows the dependence on n for a constant asymmetry of c = 2. All experiments were performed in 10 wt% hydrogen peroxide. Error bars denote 95% confidence intervals for the mean velocity based on the analysis of at least 10 particles (see Methods). Dashed lines are only to guide the eye Propulsion mechanism The propulsion of motors based on the platinum-catalyzed decomposition of hydrogen peroxide has been the subject of considerable debate9,28, which provides useful context for interpreting the present experiments. The motion of platinum Janus spheres was originally attributed to self-diffusiophoresis10, whereby asymmetric consumption and production of neutral solutes (namely, H2O2 and dissolved O2) create local concentration gradients that drive interfacial flows due to interactions between the solute(s) and the particle surface40. Modeling studies have since predicted that self-diffusiophoresis could propel the motion of catalytic particles with homogeneous surface chemistry and asymmetric shape41,42. Alternatively, the motion of Janus spheres was attributed to the asymmetric formation and release of oxygen bubbles43, which are known to power the motion of larger tubular motors known as microrockets27,44. These mechanisms were called into doubt by experiments on platinum Janus motors in electrolyte solutions, which revealed a striking decrease in the propulsion velocity with increasing salt concentration28,29. This and other observations supported a different mechanism based on self-electrophoresis28,29,36 due to spatial variations in the anodic oxidation and cathodic reduction of hydrogen peroxide at the platinum surface, $${\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}} \to {\mathrm{O}}_{\mathrm{2}} + {\mathrm{2H}}^ + + {\mathrm{2e}}^ - \quad {\mathrm{(anode)}}$$ $${\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}} + {\mathrm{2H}}^ + + {\mathrm{2e}}^ - \to {\mathrm{2H}}_{\mathrm{2}}{\mathrm{O}}\quad {\mathrm{(cathode)}}$$ In this scenario, the electric current through the electrolyte from anodic to cathodic surface regions is accompanied by an electric field that drives electrokinetic flows and propels particle motions. The addition of salt increases the conductivity of the electrolyte, thereby reducing the strength of the chemically generated electric field and the speed of particle motions. While the origins of these currents are uncertain, one plausible explanation attributes variations in catalytic activity to gradients in the nanoscale thickness of the platinum layer coating one half of the Janus spheres29. 
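The salt effect invoked above follows from a simple scaling: for a fixed reaction-driven current density j, the self-generated field scales as E ~ j/sigma, and the electrolyte conductivity sigma rises with added salt. The sketch below uses standard limiting molar conductivities for K+ and Cl-; the "background" conductivity attributed to peroxide dissociation is an assumed illustrative number, not a value from the paper.

```python
# Why added salt suppresses self-electrophoretic motion (order-of-magnitude sketch)
LAMBDA_K, LAMBDA_CL = 73.5, 76.3   # limiting molar conductivities, S*cm^2/mol
BACKGROUND_SIGMA = 2e-6            # assumed background conductivity without salt, S/cm

def conductivity_s_per_cm(kcl_molar):
    """Dilute-solution estimate: sigma = background + c * (lambda_K + lambda_Cl)."""
    c = kcl_molar / 1000.0          # mol/L -> mol/cm^3
    return BACKGROUND_SIGMA + c * (LAMBDA_K + LAMBDA_CL)

sigma0 = conductivity_s_per_cm(0.0)
for kcl_mM in (0.0, 0.1, 1.0, 5.0):
    sigma = conductivity_s_per_cm(kcl_mM / 1000.0)
    # For fixed current density, field (and hence speed) scales as 1/sigma
    print(f"{kcl_mM:4.1f} mM KCl: sigma = {sigma:.2e} S/cm, "
          f"field reduced ~{sigma / sigma0:.0f}-fold vs. no added salt")
```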
To investigate the mechanism of our homogeneous platinum spinners, we first measured how the particle velocity varied with the concentration of hydrogen peroxide fuel (Fig. 3a) and the ionic strength of the surrounding fluid (Fig. 3b). The angular velocity increased monotonically with increasing peroxide concentration in a manner similar to that observed for Pt-coated Janus swimmers (Fig. 3a)10. Moreover, the characteristic linear velocity aΩ ~ 10 μm s−1 was similar in magnitude to that of other catalytic motors powered by hydrogen peroxide1,10. The addition of monovalent salts such as NaCl, KBr, and KNO3 caused a reduction in the rotation velocity with increasing concentration (Fig. 3b). Beyond a few millimolar of added salt, particle rotation was no longer observed. As previously argued28,29, the influence of added salt is not consistent with mechanisms based on neutral self-diffusiophoresis10 or bubble propulsion43, which should be largely insensitive to the salt concentration. Experiments supporting a self-electrophoretic propulsion mechanism. a Angular velocity as a function of hydrogen peroxide concentration for particles with shape asymmetry c = 2 and different numbers of fins n. Error bars represent 95% confidence intervals for the mean velocity. b Angular velocity as a function of salt concentration for particles with n = 3 fins and shape asymmetry c = 2 rotating in 10 wt% H2O2. The inset shows the rotation reversal with addition of 0.33 mM of the cationic surfactant CTAB (see Supplementary Movie 3); scale bar is 20 μm Further support for an electrokinetic propulsion mechanism is provided by experiments involving the addition of a cationic surfactant, cetyl triethylammonium bromide (CTAB; Fig. 3b). At low concentrations (less that ca. 0.1 mM), CTAB behaves like a simple salt and reduces the angular velocity of the particles. At higher concentrations, however, the direction of rotation reverses such that R-particles rotate in the counterclockwise direction (Fig. 3b; see also Supplementary Movie 3). Such rotation reversal is attributed to the adsorption of CTAB at the particle surface, which changes the zeta potential from negative to positive thereby reversing the direction of electrokinetic flows. Model of shape-directed self-electrophoresis The putative mechanism outlined above posits the existence of anodic and cathodic regions on the particle surface; however, it does not explain why these regions exist and how they are influenced by particle shape. Here, we develop a quantitative mathematical model of shape-directed propulsion that provides a plausible explanation for the experimental observations. Our approach combines the standard electrokinetic model45,46 with the reported kinetic mechanism for the electrocatalytic decomposition of H2O2 over platinum39. The full formulation of the model is detailed in the Methods and the Supplementary Information; here, we focus on the underlying physics, the relevant assumptions, and the key results. The catalytic decomposition of hydrogen peroxide over platinum occurs by several different kinetic mechanisms, which proceed simultaneously at different rates depending on the reaction conditions47. For the present conditions (room temperature, neutral pH, 1 M H2O2), the dominant mechanism for decomposition is not electrochemical in nature and leads to gradients in neutral species (namely, H2O2 and O2)47. 
However, these gradients do not contribute significantly to particle propulsion as evidenced by the absence of particle motions at high salt concentrations (>1 mM). Instead, we argue that the dominant mechanism for propulsion relies on the electrocatalytic decomposition of hydrogen peroxide, which contributes only negligibly to the overall rate of peroxide consumption. A similar picture was suggested for the self-electrophoresis of gold-platinum nanorods in hydrogen peroxide, for which the rate of electrocatalytic decomposition was estimated to be only 0.1% of the overall rate48. The electrochemical kinetics of anodic oxidation (2) and cathodic reduction (3) of H2O2 over platinum have been investigated previously to determine the apparent rate laws and possible reaction mechanisms (Fig. 4a)39. At high peroxide concentrations \(( {C_{{\mathrm{H}}_2{\mathrm{O}}_2} > 1\,{\mathrm{mM}}} )\), these reactions proceed with the same rate at the so-called mixed potential, which lies between the reversible (equilibrium) potentials for reactions (2) and (3)49. Consequently, both the anodic oxidation and the cathodic reduction of hydrogen peroxide proceed with large overpotentials as approximated by the Tafel equation of electrochemical kinetics39,50. This scenario is supported by the measured current densities for peroxide oxidation and reduction as a function of H2O2 concentration, potential, and pH51. Based on this data, Vetter39 proposed the following expressions for the partial current densities, ia and ic, at the platinum surface for the anodic and cathodic reactions $$i_{\mathrm{a}} = k_{\mathrm{a}}C_{{\mathrm{HO}}_{\mathrm{2}}^ - }{\mathrm{exp}}\left( {\frac{{\alpha _{\mathrm{a}}e}}{{k_{\mathrm{B}}T}}\left( {{\mathrm{\Phi }}_{\mathrm{p}} - {\mathrm{\Phi }}_{\mathrm{s}}} \right)} \right),$$ $$i_{\mathrm{c}} = - k_{\mathrm{c}}C_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}}{\mathrm{exp}}\left( { - \frac{{(1 - \alpha _{\mathrm{c}})e}}{{k_{\mathrm{B}}T}}\left( {{\mathrm{\Phi }}_{\mathrm{p}} - {\mathrm{\Phi }}_{\mathrm{s}}} \right)} \right).$$ Here, ka and kc are the respective rate constants, Ci denotes the concentration of species i at the particle surface, αa and αc are charge transfer coefficients, e is the elementary charge, kB is the Boltzmann constant, and T is the temperature. The Stern layer voltage, Φp − Φs, corresponds to the difference between the particle potential Φp and the surface potential Φs at the outer Helmholtz plane50. At steady-state, the potential of the particle Φp—the mixed potential—is determined by the condition that total current to/from the particle be zero, $${\int\!\!\!}_{\cal S} \left( {i_{\mathrm{a}} + i_{\mathrm{c}}} \right){\mathrm{d}}{\cal S} = 0,$$ where \({\cal S}\) denotes the particle surface. Results of the self-electrophoretic propulsion model. a Kinetic mechanism for the electrocatalytic decomposition of H2O2 over platinum. b Computed ion concentrations Ci (left) and fluxes ji (right) as a function of distance x from a planar Pt surface. Concentration is scaled by the bulk concentration \(C_ \pm ^{\infty} = ( {K_{\mathrm{p}}C_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}}^{\mathrm{0}}} )^{1/2}\), distance by the reaction-diffusion length \(\lambda = ( {D_ \pm /k_{ - {\mathrm{p}}}C_ \pm ^{\infty} } )^{1/2}\), and flux by the \(j_ \pm ^\infty = D_ \pm C_ \pm ^\infty {\mathrm{/}}\lambda\). c Computed current density i (left) and fluid velocity u (right) around a circular Pt disk of radius a = 41λ and thickness δ = 0.72λ centered at the origin. 
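For a uniform surface, the zero-net-current condition above reduces to i_a + i_c = 0, which fixes the mixed potential. The sketch below solves this balance for the Tafel-type rate laws (4) and (5); all rate prefactors, transfer coefficients, and surface concentrations are assumed illustrative values, not the parameters used in the paper's calculations.

```python
import math

KT_OVER_E = 0.02585        # thermal voltage kT/e at ~300 K, volts

# Assumed, illustrative kinetic parameters (not the paper's fitted values)
K_A, K_C = 1.0, 1e-4       # anodic / cathodic rate prefactors (consistent arbitrary units)
ALPHA_A, ALPHA_C = 0.5, 0.5
C_HO2 = 1e-6               # HO2- concentration near the surface, M (assumed)
C_H2O2 = 3.0               # H2O2 concentration, M (~10 wt%)

def i_anodic(dphi):
    return K_A * C_HO2 * math.exp(ALPHA_A * dphi / KT_OVER_E)

def i_cathodic(dphi):
    return -K_C * C_H2O2 * math.exp(-(1.0 - ALPHA_C) * dphi / KT_OVER_E)

# Mixed potential: Stern-layer voltage at which the partial currents cancel.
lo, hi = -1.0, 1.0
for _ in range(100):                      # bisection on i_a + i_c = 0
    mid = 0.5 * (lo + hi)
    if i_anodic(mid) + i_cathodic(mid) > 0.0:
        hi = mid
    else:
        lo = mid
dphi_mixed = 0.5 * (lo + hi)

# Closed form for these particular rate laws, as a consistency check
dphi_exact = KT_OVER_E * math.log(K_C * C_H2O2 / (K_A * C_HO2)) / (ALPHA_A + 1.0 - ALPHA_C)
print(f"mixed potential (bisection):   {dphi_mixed:+.3f} V")
print(f"mixed potential (closed form): {dphi_exact:+.3f} V")
```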
The insets show magnified views of the current and flow near the edge of the disk. The maximum flow velocity is \(u_{\mathrm{max}} = 0.0082C_ \pm ^{\infty} k_{\mathrm{B}}T\lambda {\mathrm{/}}\eta\), which corresponds to 8.6 μm s−1 for the experimental conditions. d Computed angular velocity Ω as a function of the asymmetry parameter c for spinners with n = 2–6 fins and b = 0.3. Velocities are scaled by \({\mathrm{\Omega }}_{\mathrm{0}} = C_ \pm ^{\infty} k_{\mathrm{B}}Th{\mathrm{/}}\eta a\), which is estimated to be 240 s−1 for the experimental conditions. The inset shows the conformal mapping from disk to twisted star used in the calculation The rate law for anodic oxidation (4) suggests that reaction (2) can be divided into two steps: the dissociation of H2O2 in solution to form \({\mathrm{HO}}_2^ -\) and its subsequent consumption at the platinum surface $${\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}} \mathop{\rightleftharpoons}\limits^{{k_{\mathrm{p}}}}_{k_{ - {\mathrm{p}}}} {\mathrm{H}}^ + + {\mathrm{HO}}_2^ -$$ $${\mathrm{HO}}_2^ - \mathop{\longrightarrow}\limits^{{k_{\mathrm{a}}}}{\mathrm{O}}_2 + {\mathrm{H}}^ + + 2{\mathrm{e}}^ -$$ Vetter suggests a series of elementary steps by which reaction (8) may occur; however, these details are not relevant here39. Similarly, the rate law for the cathodic reaction (5) is consistent with a two step reduction of peroxide at the platinum surface followed by the association of H+ and OH− in solution $${\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}} + {\mathrm{e}}^ - \mathop{\longrightarrow}\limits^{{k_{\mathrm{c}}}}{\mathrm{OH}} + {\mathrm{OH}}^ -$$ $${\mathrm{OH}} + {\mathrm{e}}^ - \to {\mathrm{OH}}^ -$$ $$2\left( {{\mathrm{H}}^ + + {\mathrm{OH}}^ - \mathop{\rightleftharpoons}\limits^{ {k_{ - {\mathrm{w}}}}}_ {k_{\mathrm{w}}} {\mathrm{H}}_2{\mathrm{O}}} \right)$$ where reaction (9) is the rate-limiting step39. These mechanisms are not firmly established; however, the rate laws (4) and (5) are largely consistent with experimental measurements under conditions relevant to the present experiments—specifically, 0.1 M H2O2 at neutral to acidic pH values39,51. Using the rate laws (4) and (5), we modeled the steady-state concentration profiles for the relevant species (H+, OH−, \({\mathrm{HO}}_2^ -\)) as a function of distance from a planar platinum surface at the mixed potential (Fig. 4b; see also Methods, Supplementary Note 1, Supplementary Table 1). Importantly, the computed current densities agree with previous experimental measurements39,51 in the limit as ka → ∞, such that the reaction rate is limited by transport of \({\mathrm{HO}}_2^ -\) from solution where it is generated by H2O2 dissociation (Supplementary Note 2). Under these conditions, the concentration of the rate-limiting species \({\mathrm{HO}}_2^ -\) increases from zero at the Pt surface to its bulk value in solution over a characteristic length scale λ (Fig. 4b)52. Analysis of the model reveals that the bulk \({\mathrm{HO}}_2^ -\) concentration increases with increasing H2O2 concentration as \(C_ \pm ^\infty = ( {K_{\mathrm{p}}C_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}}^0} )^{1/2}\), where Kp = kp/k−p is the equilibrium constant for peroxide dissociation (see Methods and Supplementary Note 3). 
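The scales defined above can be estimated directly. The sketch below evaluates the bulk ion concentration and the reaction-diffusion length using textbook-level numbers for the H2O2 dissociation constant, the H+/HO2- recombination rate, and the ion diffusivities; these are assumptions rather than the values in the paper's Supplementary Table 1, so the result should be read as an order-of-magnitude check against the quoted 139 nm.

```python
import math

# Assumed parameter values (order-of-magnitude, not from the paper)
K_P = 2.5e-12        # H2O2 acid dissociation constant, M (pKa ~ 11.6)
C_H2O2 = 2.9         # bulk H2O2 concentration for ~10 wt%, M
K_MINUS_P = 4e10     # H+ + HO2- recombination rate constant, 1/(M*s), near diffusion limit
D_H = 9.3e-9         # H+ diffusivity, m^2/s
D_HO2 = 1.0e-9       # HO2- diffusivity, m^2/s (assumed)

c_inf = math.sqrt(K_P * C_H2O2)                  # bulk ion concentration, M
d_pm = 0.5 * (D_H + D_HO2)                       # average ion diffusivity
lam = math.sqrt(d_pm / (K_MINUS_P * c_inf))      # reaction-diffusion length, m

print(f"C_inf  ~ {c_inf:.1e} M")
print(f"lambda ~ {lam * 1e9:.0f} nm  (paper's estimate: ~139 nm)")
```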
The length scale λ is determined by a competition between the generation of \({\mathrm{HO}}_2^ -\) in solution and its transport to the surface such that \(\lambda = ( {D_ \pm {\mathrm{/}}k_{ - {\mathrm{p}}}C_ \pm ^\infty } )^{1/2}\), where \(D_ \pm = {\textstyle{1 \over 2}}( {D_{{\mathrm{H}}^ + } + D_{{\mathrm{HO}}_2^ - }} )\) is the average diffusivity of the dominant ions. In experiment, the reaction-diffusion length is estimated to be λ = 139 nm, which is comparable to the particle thickness (δ = 100 nm) and to the Debye screening length (λD = 185 nm; Table S1). A key feature of the model is that it predicts the emergence of cathodic and anodic regions on the surface of an otherwise homogeneous platinum particle as a function of particle shape. For this to happen, the anodic and cathodic currents must respond differently to the geometric features of the particle. In the model, the anodic current (4) is transport-limited while the cathodic current (5) is reaction-limited; the former responds to changes in particle shape while the latter does not. To show this, we modeled the steady-state concentration profiles and ionic currents surrounding a thin platinum disk (radius, a = 41λ; thickness, δ = 0.72λ) immersed in a solution of 10 wt% hydrogen peroxide (Fig. 4c, left; see also Methods, Supplementary Figs. 3–5). The high curvature along the particle edge enhances transport of \({\mathrm{HO}}_2^ -\) to those locations relative to the particle face, resulting in spatial variations of the anodic current density. By contrast, the cathodic current density is approximately constant as reaction-induced variations in the peroxide concentration are negligible compared to the bulk concentration. Differences between the anodic and cathodic rates give rise to steady currents through the surrounding electrolyte from the high curvature edges to the low curvature faces. Similar currents are expected for any platinum particle with geometric features comparable to or smaller than the reaction-diffusion length λ. Having established a plausible mechanism for the formation of anodic and cathodic regions on the particle surface, the resulting fluid flows and particle motions are anticipated by previous studies of self-electrophoresis9. We computed the steady-state fluid flows around a thin platinum disk due to electric forces induced by the electrocatalytic reactions described above (see Methods). The electric field associated with the ionic currents acts on ions within the double layer to drive electrokinetic flows around the particle (Fig. 4c, right). For particles with negative surface potentials, these flows are directed radially inward from the particle edges, at which the electric force density and the fluid velocity are largest. Assuming a surface potential of Φs = −40 mV (that of Pt-Au nanorods53), the peak flow velocity in 10 wt% peroxide is estimated to be ca. 10 μm s−1. The steady streaming flows induced by the catalytic reactions are qualitatively similar to those driven by external acoustic18 and electric16 fields, which are known to drive motions of plate-like particles with asymmetric contours. The rotation of twisted-star particles can be understood by balancing the net torque on the fluid due to electric forces Le with the viscous torque due to particle rotation Lη such that Le = Lη. For thin particles rotating above a planar substrate, the viscous torque can be approximated as Lη = πηa4Ω/2h where \(h \ll a\) is the (unknown) surface separation. 
We will assume that the separation is comparable to the Debye screening length (i.e., h = λD), which determines the range of electrostatic interactions between the particle and the substrate (Supplementary Note 4). The full computation of the electric torque requires solving a three dimensional, nonlinear transport model on lengths spanning <100 nm (particle thickness) to >10 μm (particle diameter) to determined the ion currents and the electric field. In light of these difficulties, we adopt a heuristic procedure that uses the axisymmetric solution for the circular disk illustrated in Fig. 4c to roughly approximate the electric torque Le and thereby the rotation velocity Ω (see Methods and Supplementary Note 5). Briefly, we map the stress acting at the surface of the disk onto that of a star-shaped particle of equal radius and integrate to obtain the net torque (Fig. 4d, inset). The results of this heuristic procedure agree qualitatively with the experimental measurements (cf. Figs. 2 and 4d). Star-shaped disks are predicted to spin at rates of up to Ω = 1 rad s−1 with R spinners rotating CW and S spinners rotating CCW as in experiment. Moreover, the velocity increases with increasing asymmetry c but is largely independent of the number of fins n (Fig. 4d). The model also helps to explain the dependence of the rotation velocity on the peroxide concentration, the ionic strength, and the particle zeta potential. The predicted rotation speed increases slower than linearly with the concentration of hydrogen peroxide (Supplementary Fig. 6), which is consistent with the experimental results of Fig. 3a. In the model, changes in the peroxide concentration have several competing effects on the chemically driven particle motions. In the absence of added salt, the concentration of the rate-limiting species \({\mathrm{HO}}_2^ -\) as well as the pH and ionic strength of the solution are determined by the equilibrium dissociation of H2O2. These quantities are expected to depend sensitively on the presence of additional dissolved species such as carbon dioxide (not considered in the model), which may alter the pH and/or the ionic strength of the solution. As noted above, the addition of salt increases the electrolyte conductivity, thereby reducing the reaction induced electric fields that drive the electrokinetic flows around the particle (Supplementary Fig. 7). The direction of these flows depends on the sign of the particle surface potential Φs, which is assumed equal to the particle zeta potential (Supplementary Fig. 8). For particles with positive zeta potential, predicted fluid flows are directed radially outward from the cathodic face of the particle to its anodic edges, thereby reversing the direction of rotation. Notably, particles with positive surface potentials are predicted to induce faster fluid flow due to enhanced transport of the negatively charged rate limiting species \({\mathrm{HO}}_2^ -\) to the particle surface. Finally, we observe that the transport-limited kinetic model can provide an alternative explanation for the motion of platinum Janus particles in hydrogen peroxide solutions (Supplementary Fig. 9)10,28,29. Considering the geometry of the Pt patch, the transport of the rate-limiting reactant \({\mathrm{HO}}_2^ -\) is maximal at the edge of the platinum layer along the Janus equator. As a result, this equatorial region becomes more anodic relative to the rest of the Janus cap as previously hypothesized28,29. 
Additionally, the present mechanism can offer an explanation for the reported increase in the particle velocity upon addition of a strong base NaOH at low concentrations28. The base acts to decrease the proton concentration, which shifts the dissociation equilibrium of reaction (8) to increase the concentration of the rate-limiting species \({\mathrm{HO}}_2^ -\). Further study is required to quantify these and other details of the proposed model. Such studies will require numerical methods that fully describe how the three-dimensional particle shape can direct species transport, electrokinetic flows, and particle motions. We have shown that solid platinum particles rotate spontaneously in solutions of hydrogen peroxide as directed by their geometric shape alone. Particles with rotational symmetry spin about their axis at a constant rate that depends on the type and extent of fin asymmetry. Perhaps counterintuitively, the angular velocity of the particles was largely insensitive to the number of fins. The non-trivial relationship between particle shape and motion is reproduced and explained by a model of self-electrophoresis that accounts for transport-limitations in the electrocatalytic decomposition of hydrogen peroxide at the platinum surface. Despite uncertainties in the reaction mechanism, the proposed model has no free parameters and provides quantitative predictions that agree with the measured rates39,51 of peroxide oxidation and reduction at the mixed potential (Supplementary Note 2). This mechanism can also explain the motion of platinum-capped Janus particles (Supplementary Fig. 9). Particle rotation derives from shape asymmetries on two distinct length scales. First, the sharp edges of the plate-like particles serve to enhance transport of the rate-limiting species \({\mathrm{HO}}_2^ -\) and thereby accelerate the anodic half-reaction at those locations. To maintain charge neutrality, ionic charge flows from the anodic edges to the cathodic faces, while the associated electric fields drive fluid flow in the same direction. The electrokinetic flows are guided by a second asymmetry—that of the curved fins—causing the particles to rotate about their axis. We note that these motors are not particularly efficient, since the majority of H2O2 consumed does not contribute to the ionic currents and electric fields that drive fluid flow and particle motion. The predicted rate of electrocatalytic decomposition is ca. 400 times smaller than the overall rate of H2O2 decomposition measured by Brown and Poon28. This observation highlights the need for more selective electrocatalysts that could operate more effectively at low fuel concentrations. The design principles developed here for rotating particles should be helpful in guiding the design of other shape-directed swimmers capable of translational, circular, and helical motions. Rotational motions could be useful for microscale mixing at low Reynolds numbers, while the addition of translation along the rotation axis would aid the design of self-powered microtools25,54. By fixing platinum patches of specified geometry to a substrate, one could program microfluidic pumps55 with asymmetric flow profiles. Perhaps the most exciting (and speculative) opportunities involve motors of different shapes and sizes moving in concert via hydrodynamic, chemical, electrical, and/or steric interactions. 
Shape-based programming of particle dynamics within such mixtures could provide a basis for multicomponent colloidal machines that assemble to perform dynamic functions. Fabrication of platinum motors Platinum motors were fabricated in the Materials Research Institute of the Pennsylvania State University. Silicon wafers (100 mm, Virginia Semiconductor) were first coated with a 70 nm sacrificial layer of silver using electron beam evaporation in a Semicore E-Gun Thermal Evaporator. A 1 μm layer of LOR5A photoresist was spin-coated onto the wafer, followed by a 0.5 μm layer of Megaposit SPR 3012 photoresist. The motor shapes were designed according to the parametric Eq. (1) using Wolfram Mathematica 11.1 and Tanner L-Edit 16.2. Arrays of such shapes were patterned into the photoresist by projection photolithography using a GCA 8500 Wafer Stepper and developed with Microposit MF CD-26 Developer. After developing, wafers were coated with 100 nm of platinum by electron beam evaporation. The wafers were submerged in acetone for 10 min at room temperature followed by Nano Remover PG at 80 °C for 5 min to strip the photoresist, leaving an array of platinum particles on a layer of silver. The sacrificial silver layer was dissolved in 8 M nitric acid, and the wafer was sonicated to separate the motors. Prior to use, motors were sonicated in 15% hydrogen peroxide (VWR) for 1 h and left in hydrogen peroxide overnight before dispersing in deionized water. Motors were characterized by scanning electron microscopy using the FEI Nova NanoSEM 630 FESEM in Penn State's Materials Characterization Lab (Supplementary Fig. 1). Motor experiments Motors were mixed with hydrogen peroxide (VWR) diluted to appropriate concentrations and deposited onto glass slides (PTFE slides from Electron Microscopy Sciences). Prior to each experiment, the slide was rinsed in ethanol followed by deionized water. Approximately 90 μL of the particle dispersion was pipetted into a circular well made from a silicone isolation chamber (Electron Microscopy Sciences). Particles in the well were observed using an inverted optical microscope by Carl Zeiss. Videos were captured at 60 frames per second using a Flycam Flea3 digital camera. Particle motions were tracked manually using Tracker 4.11 (physlets.org). For each condition, at least 10 particles were tracked for 5 s at 6 frames per second. Each particle was tracked at two points fixed to the particle. From the two points, the time evolution of particle orientation was determined. The average angular velocity of each particle was found from the slope of a linear least-squares fit of the particle's orientation versus time (Supplementary Fig. 2). The mean of these slopes along with the 95% confidence intervals were calculated from these data for each experimental condition. Governing equations and boundary conditions We consider a single platinum particle immersed in a solution of hydrogen peroxide in water with no added salt. The particle moves as a rigid body with linear velocity U and angular velocity Ω through an unbounded quiescent fluid. The distribution of reactive species (namely, H2O2, H+, OH−, \({\mathrm{HO}}_2^ -\)) surrounding the particle is governed by conservation equations that account for transport due to convection, diffusion, and migration in the local electric field. 
At steady-state, the concentration of species i is governed by $$\nabla \cdot {\bf{j}}_i = \mathop {\sum}\limits_j {\kern 1pt} \nu _{ij}R_j\quad {\mathrm{with}}\quad {\bf{j}}_i = {\bf{u}}C_i - D_i\nabla C_i + \frac{{z_ieD_i}}{{k_{\mathrm{B}}T}}C_i{\bf{E}},$$ where the species flux ji includes contributions due to convection with fluid velocity u, diffusion with diffusivity Di, and migration in the electric field E = −∇ϕ. The rates of the two dissociation reactions (11) and (7) are given by $$R_{\mathrm{w}} = k_{\mathrm{w}}{\mathrm{C}}_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}} - k_{ - {\mathrm{w}}}C_{{\mathrm{H}}^{\mathrm{ + }}}{\mathrm{C}}_{{\mathrm{OH}}^ - },$$ $$R_{\mathrm{p}} = k_{\mathrm{p}}C_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}} - k_{ - {\mathrm{p}}}C_{{\mathrm{H}}^{\mathrm{ + }}}C_{{\mathrm{HO}}_{\mathrm{2}}^ - },$$ which result in the production or consumption of species i in accordance with stoichiometric coefficients νij. The electric potential ϕ is governed by the Poisson equation $$- \varepsilon \nabla ^2\phi = \rho _e = e\mathop {\sum}\limits_i {\kern 1pt} z_iC_i,$$ where ρe is the charge density, ε is the permittivity of the fluid, and zi denotes valence of species i. At low Reynolds numbers, the fluid velocity and pressure are well approximated by the Stokes equations, $$\nabla \cdot {\bf{u}} = 0,$$ $$0 = - \nabla p + \eta \nabla ^2{\bf{u}} + {\bf{f}}_e,$$ which include a volumetric force density, fe = ρeE, due to the action of the field on the local charge density. In the absence of the dissociation reactions, these equations are often referred to as the standard electrokinetic model45,46. Far from the particle, the species concentrations, electric potential, fluid velocity, and pressure approach their bulk values $$C_i({\bf{x}}) = C_i^\infty ,\quad \phi ({\bf{x}}) = 0,\quad {\bf{u}}({\bf{x}}) = 0,\quad p({\bf{x}}) = 0\quad {\mathrm{for}}\ \left| {\bf{x}} \right| \rightarrow \infty.$$ At the particle surface \({\cal S}\), the outward flux of reactive species normal to the interface is balanced by their generation due to the electrochemical reactions detailed above $$i_{\mathrm{a}} = - 2e( {{\bf{n}} \cdot {\bf{j}}_{{\mathrm{HO}}_{\mathrm{2}}^ - }} ) = 2e( {{\bf{n}} \cdot {\bf{j}}_{{\mathrm{H}}^ + }} )\quad {\mathrm{for}}\,{\bf{x}} \in {\cal S},$$ $$i_{\mathrm{c}} = 2e( {{\bf{n}} \cdot {\bf{j}}_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}}} ) = - e( {{\bf{n}} \cdot {\bf{j}}_{{\mathrm{OH}}^ - }} )\quad {\mathrm{for}}\,{\bf{x}} \in {\cal S}.$$ Here, n is the unit normal vector directed out from the particle surface. For transport-limited reactions, the flux of \({\mathrm{HO}}_2^ -\) is determined implicitly by the additional condition that its concentration approaches zero at the particle surface, $$C_{{\mathrm{HO}}_2^ - }({\bf{x}}) = 0\quad {\mathrm{for}}\,{\bf{x}} \in {\cal S}.$$ The rate of the cathodic reaction is determined by the particle potential Φp, which must be computed self-consistently from the zero current condition of Eq. (6). 
The fluid velocity at the particle surface is governed by the no-slip condition $${\bf{u}}({\bf{x}}) = {\bf{U}} + {\bf{\Omega }} \times {\bf{x}}\quad {\mathrm{for}}\,{\bf{x}} \in {\cal S}.$$ Finally, the velocities U and Ω are determined by the conditions that there be no net force or torque on the particle surface, $${\int\!\!\!}_{\cal S} \left( {{\boldsymbol{\sigma }} + {\boldsymbol{\sigma }}_e} \right) \cdot {\bf{n}}{\mathrm{d}}{\cal S} = 0,$$ $${\int\!\!\!}_{\cal S} {\bf{x}} \times \left( {{\boldsymbol{\sigma }} + {\boldsymbol{\sigma }}_e} \right) \cdot {\bf{n}}{\mathrm{d}}{\cal S} = 0,$$ where σ = −pδ + η(∇u + ∇uT) is the hydrodynamic stress tensor and \({\boldsymbol{\sigma }}_e = \varepsilon \left( {{\bf{EE}} - {\textstyle{1 \over 2}}E^2{\boldsymbol{\delta }}} \right)\) is the Maxwell stress tensor, which is related to the electric force as fe = ∇ · σe. In Supplementary Note 1, we non-dimensionalize this model in order to identify relevant dimensionless parameters and guide further simplifications; estimates for all model parameters are summarized in Supplementary Table 1. Simplifying assumptions The analysis of the model above is facilitated by a series of simplifying assumptions summarized here and detailed in the Supplementary Information. First, we focus our attention on large peroxide concentrations, \(C_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}}^{\mathrm{o}} \gg K_{\mathrm{w}} C_{{\mathrm{H}}_2{\mathrm{O}}}^{\mathrm{o}} / K_{\mathrm{p}} = 4.2\,{\mathrm{mM}}\), for which H+ and \({\mathrm{HO}}_2^ -\) are the dominant ionic species within the bulk electrolyte. Here, Kw = kw/k−w = 1.82 × 10−16 M and Kp = kp/k−p = 2.40 × 10−12 M are the equilibrium dissociation constants for water and peroxide, respectively, at 25 °C. Under these conditions, the equilibrium ion concentrations are $$C_{{\mathrm{H}}^ + }^\infty = C_{{\mathrm{HO}}_2^ - }^{\mathrm{o}} = \sqrt {K_{\mathrm{p}}C_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}}^\infty } + {\cal O}(\omega )\,{\mathrm{and}}\,C_{{\mathrm{OH}} - }^\infty = 0 + {\cal O}(\omega ),$$ where \(\omega = K_{\mathrm{w}}{\mathrm{C}}_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}}^{\mathrm{o}}{\mathrm{/}}K_{\mathrm{p}}C_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}}^{\mathrm{o}} \ll 1\) (Supplementary Note 3). We focus on the limit as ω → 0, such that the bulk hydroxide concentration is identically zero. In this regime, the characteristic ion concentration increases as the square root of the peroxide concentration. Second, we assume that the peroxide concentration is everywhere constant and equal to its analytical value, \(C_{{\mathrm{H}}_{\mathrm{2}}{\mathrm{O}}_{\mathrm{2}}} = C_{{\mathrm{H}}_2{\mathrm{O}}_2}^o\). This simplification reflects that fact that the overall rate of peroxide consumption at the particle surface is much slower than its rate of delivery by diffusion. This assumption is supported both by previous experiments on platinum Janus particles28 and by numerical estimates based on the present model (Supplementary Note 2). Third, we neglect the contribution of fluid convection to the species flux, such that the concentrations Ci and the potential ϕ can be solved independently of the fluid velocity u. This assumption is valid when the Péclet number is small—that is, when \({\mathrm{Pe}} = a^2{\mathrm{\Omega /}}D_ \pm \ll 1\). For the experimental conditions (a = 5.77 μm, Ω = 1 rad s−1), the Péclet number is \({\mathrm{Pe}} = 0.005 \ll 1\). 
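For orientation, the characteristic scales quoted in this section can be reproduced with a few lines of arithmetic. The sketch below uses the dissociation constant, the association rate constant and the ion diffusivities quoted in this and the following Methods subsections (and summarized in Supplementary Table 1); the only extra assumption is the conversion of 10 wt% H2O2 to an approximate molarity of 3 M.

```python
# Back-of-the-envelope check of the characteristic scales quoted in the text.
import numpy as np

Kp     = 2.40e-12   # M, equilibrium dissociation constant of H2O2 (25 C)
C_h2o2 = 3.0        # M, approximate molarity of 10 wt% H2O2 (assumed conversion)
k_mp   = 1.0e11     # 1/(M s), association rate constant k_-p (quoted in Methods)
D_H    = 9.31e-9    # m^2/s, H+ diffusivity
D_HO2  = 1.43e-9    # m^2/s, HO2- diffusivity (taken equal to that of H2O2)
D_pm   = 0.5 * (D_H + D_HO2)          # average diffusivity of the dominant ions

C_inf    = np.sqrt(Kp * C_h2o2)       # bulk ion concentration, M
C_inf_SI = C_inf * 1e3                # mol/m^3
k_mp_SI  = k_mp / 1e3                 # m^3/(mol s)

lam = np.sqrt(D_pm / (k_mp_SI * C_inf_SI))   # reaction-diffusion length, m
a, Omega = 5.77e-6, 1.0                      # particle radius (m), rotation rate (rad/s)
Pe = a**2 * Omega / D_pm                     # Peclet number

print(f"C_inf  ~ {C_inf*1e6:.1f} uM")        # ~2.7 uM
print(f"lambda ~ {lam*1e9:.0f} nm")          # ~140 nm (139 nm quoted in the text)
print(f"Pe     ~ {Pe:.3f}")                  # ~0.006 (0.005 quoted in the text)
```

The small differences from the quoted values come from rounding and from the assumed molarity of the 10 wt% peroxide solution.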
The separation of ion transport and fluid motion is useful in adapting results obtained for axially symmetric particles to compute the motions of star-shaped particles used in experiment. Finally, we assume that the association rate constants k−w and k−p are diffusion-limited. The association rate constant for water is reported to be k−w = (1.4 ± 0.2) × 1011 M−1 s−156; the rate constant for peroxide is expected to be similar. To show this, we consider the Debye-Smolucholwski model for diffusion-limited reaction rates between ions57 $$k_{ - {\mathrm{w}}} = 4\pi d\left( {D_ + + D_ - } \right)\left[ {\frac{{\lambda _{\mathrm{B}}{\mathrm{/}}d}}{{1 - {\mathrm{exp}}\left( { - \lambda _{\mathrm{B}}{\mathrm{/}}d} \right)}}} \right],$$ where d is an ion separation at which the reaction is assumed to occur instantaneously, and D± are the diffusion coefficients of the respective ions. The term in brackets describes the acceleration in the collision rate due to Coulombic interactions between the oppositely charged ions where λB = e2/4πεkBT is the Bjerrum length. For water at 25 °C, the ion diffusivities are D+ = 9.31 × 10−9 m2/s and D− = 5.27 × 10−9 m2/s for H+ and OH− ions, respectively; the Bjerrum length is λB = 0.71 nm, which is considerably larger than the ions themselves. We make the approximation that d ≈ λB, which implies that ions approaching closer than λB are attracted to one another and ultimately react. With the choice that d = 1.3λB, the prediction of Eq. (26) agrees with the measured value quoted above. In the same way, the association rate for peroxide is estimated to be k−p = 1.0 × 1011 M−1 assuming that the diffusivity of \({\mathrm{HO}}_2^ -\) is equal to that of hydrogen peroxide, \(D_{{\mathrm{HO}}_2^ - } = 1.43 \times 10^{ - 9}\,{\mathrm{m}}^2{\mathrm{/s}}\)58. Numerical solution With the above assumptions, the model was non-dimensionalized and solved numerically using a commercial finite element solver (COMSOL 5.2a) for the specific case of a circular disk of radius a and thickness δ. Owing to the axial symmetry of the particle, we used a two dimensional, cylindrical coordinate system (r, z). The ion concentrations, electric potential, and fluid velocity were solved on a large but finite domain: z > 0, r > 0, and r2 + z2 < 100a2. The resulting ion concentrations, electric potential, fluid velocity, and ionic currents are shown graphically in Supplementary Figs. 3–5. Conformal mapping To approximate the torque Le on the twisted star particle, we first compute the stress ez · σ on the top face of a thin circular disk due to the reaction-induced fluid flows (Fig. 4c). The axial symmetry of the problem implies that the stress is oriented along the radial direction and depends only on the distance r from the center of the particle face—that is, ez · σ = σzr(r)er. For the twisted star particles, we make the heuristic approximation that the stress has an analogous form, ez · σ = σzρ(ρ)eρ, where ρ is a generalized radial coordinate that ranges from ρ = 0 at the particle center to ρ = 1 at the particle perimeter. The unit vector eρ in the ρ-direction is oriented perpendicular to the particle perimeter and becomes equivalent to er in the limit of weak shape-asymmetry (b → 0 and c → 0). 
As detailed in Supplementary Note 5, the new coordinate is computed as ρ = exp(v(r)), where v(r) satisfies the Poisson equation on a two-dimensional domain bounded by the particle contour \({\cal C}\) with a point sink at the origin, $$\nabla ^2v = 2\pi \delta ({\bf{r}})\quad {\mathrm{with}}\quad v({\bf{r}}) = 0\,{\mathrm{for}}\,{\bf{r}} \in {\cal C}$$ To estimate the torque, we map the stress on the circular disk onto the twisted star as σzρ(ρ) = σzr(r/a) and numerically integrate the torque density r × σzρeρ over the particle face (Fig. 4d). In Supplementary Note 5, we discuss how both the particle shape and the stress distribution σzρ(ρ) can influence the speed and direction of particle rotation. The data that support the findings of this study are available from the corresponding authors upon reasonable request. Paxton, W. F. et al. Catalytic nanomotors: autonomous movement of striped nanorods. J. Am. Chem. Soc. 126, 13424–13431 (2004). Han, K., Shields, C. W. & Velev, O. D. Engineering of self-propelling microbots and microdevices powered by magnetic and electric fields. Adv. Funct. Mater. 28, 1705953 (2018). Palagi, S. & Fischer, P. Bioinspired microrobots. Nat. Rev. Mater. 3, 113–124 (2018). Article CAS ADS Google Scholar Bechinger, C. et al. Active particles in complex and crowded environments. Rev. Mod. Phys. 88, 045006 (2016). Article MathSciNet ADS Google Scholar Illien, P., Golestanian, R. & Sen, A. 'Fuelled' motion: phoretic motility and collective behaviour of active colloids. Chem. Soc. Rev. 46, 5508–5518 (2017). Nguyen, N. H. P., Klotsa, D., Engel, M. & Glotzer, S. C. Emergent collective phenomena in a mixture of hard shapes through active rotation. Phys. Rev. Lett. 112, 075701 (2014). Scholz, C., Engel, M. & Pöschel, T. Rotating robots move collectively and self-organize. Nat. Commun. 9, 931 (2018). Golestanian, R., Liverpool, T. B. & Ajdari, A. Designing phoretic micro- and nano-swimmers. New J. Phys. 9, 126 (2007). Moran, J. L. & Posner, J. D. Phoretic self-Propulsion. Ann. Rev. Fluid Mech. 49, 511–540 (2017). Howse, J. R. et al. Self-motile colloidal particles: from directed propulsion to random walk. Phys. Rev. Lett. 99, 048102 (2007). Qin, L., Banholzer, M. J., Xu, X., Huang, L. & Mirkin, C. A. Rational design and synthesis of catalytically driven nanorotors. J. Am. Chem. Soc. 129, 14870–14871 (2007). Soto, R. & Golestanian, R. Self-assembly of active colloidal molecules with dynamic function. Phys. Rev. E 91, 052304 (2015). Spellings, M. et al. Shape control and compartmentalization in active colloidal cells. Proc. Natl Acad. Sci. USA. 112, E4642–E4650 (2015). Aubret, A., Youssef, M., Sacanna, S. & Palacci, J. Targeted assembly and synchronization of self-spinning microgears. Nat. Phys. 14, 1114–1118 (2018). Sánchez, S. et al. Chemically Powered Micro- and Nanomotors. Angew. Chem. Int. Ed. 54, 1414–1444 (2015). Brooks, A. M., Sabrina, S. & Bishop, K. J. M. Shape-directed dynamics of active colloids powered by induced-charge electrophoresis. Proc. Natl Acad. Sci. USA 115, E1090–E1099 (2018). Ahmed, S. et al. Density and shape effects in the acoustic propulsion of bimetallic nanorod motors. ACS Nano 10, 4763–4769 (2016). Sabrina, S. et al. Shape-Directed Microspinners Powered by Ultrasound. ACS Nano 12, 2939–2947 (2018). Maggi, C., Saglimbeni, F., Dipalo, M., De Angelis, F. & Di Leonardo, R. Micromotors with asymmetric shape that efficiently convert light into work by thermocapillary effects. Nat. Commun. 6, 7855 (2015). Ma, F., Wang, S., Wu, D. T. & Wu, N. 
Electric-field–induced assembly and propulsion of chiral colloidal clusters. Proc. Natl Acad. Sci. USA 112, 6307–6312 (2015). Shields IV, C. W. et al. Supercolloidal spinners: complex active particles for electrically powered and switchable rotation. Adv. Funct. Mater. 28, 1803465 (2018). Klotsa, D., Baldwin, K. A., Hill, R. J. A., Bowley, R. M. & Swift, M. R. Propulsion of a two-sphere swimmer. Phys. Rev. Lett. 115, 248102 (2015). Jones, S. K., Bhalla, A. P. S., Katsikis, G., Griffith, B. E. & Klotsa, D. Transition in motility mechanism due to inertia in a model self-propelled two-sphere swimmer, arXiv:1801.03974 (2018). Solovev, A. A., Mei, Y., Ureña, E. B., Huang, G. & Schmidt, O. G. Catalytic microtubular jet engines self-propelled by accumulated gas bubbles. Small 5, 1688–1692 (2009). Solovev, A. A. et al. Self-propelled nanotools. ACS Nano 6, 1751–1756 (2012). Magdanz, V., Stoychev, G., Ionov, L., Sanchez, S. & Schmidt, O. G. Stimuli-responsive microjets with reconfigurable shape. Angew. Chem. - Int. Ed. 53, 2673–2677 (2014). Li, J., Rozen, I. & Wang, J. Rocket Science at the Nanoscale. ACS Nano 10, 5619–5634 (2016). Brown, A. & Poon, W. Ionic effects in self-propelled Pt-coated Janus swimmers. Soft Matter 10, 4016–4027 (2014). Ebbens, S. et al. Electrokinetic effects in catalytic platinum-insulator Janus swimmers. Europhys. Lett. 106, 58003 (2014). Catchmark, J. M., Subramanian, S. & Sen, A. Directed rotational motion of microscale objects using interfacial tension gradients continually generated via catalytic reactions. Small 1, 202–206 (2005). Valadares, L. F. et al. Catalytic nanomotors: Self-propelled sphere dimers. Small 6, 565–572 (2010). Gibbs, J. G., Kothari, S., Saintillan, D. & Zhao, Y. P. Geometrically designing the kinematic behavior of catalytic nanomotors. Nano. Lett. 11, 2543–2550 (2011). Yeo, K., Lushi, E. & Vlahovska, P. M. Collective Dynamics in a Binary Mixture of Hydrodynamically Coupled Microrotors. Phys. Rev. Lett. 114, 188301 (2015). Sabrina, S., Spellings, M., Glotzer, S. C. & Bishop, K. J. M. Coarsening dynamics of binary liquids with active rotation. Soft Matter 11, 8409–8416 (2015). Snezhko, A. & Aranson, I. S. Magnetic manipulation of self-assembled colloidal asters. Nat. Mater. 10, 698–703 (2011). Wang, Y. et al. Bipolar electrochemical mechanism for the propulsion of catalytic nanomotors in hydrogen peroxide solutions. Langmuir 22, 10451–10456 (2006). Moran, J. L. & Posner, J. D. Electrokinetic locomotion due to reaction-induced charge autoelectrophoresis. J. Fluid. Mech. 680, 31–66 (2011). Article MathSciNet CAS ADS Google Scholar Moran, J. L. & Posner, J. D. Role of solution conductivity in reaction induced charge autoelectrophoresis. Phys. Fluids 26, 042001 (2014). Vetter, K. J. Electrochemical Kinetics pp. 635–642. (Academic Press, New York, 1967). Anderson, J. L. Colloid transport by interfacial forces. Annu. Rev. Fluid. Mech. 21, 61–99 (1989). Michelin, S. & Lauga, E. Autophoretic locomotion from geometric asymmetry. Eur. Phys. J. E 38, 7 (2015). Yang, M., Ripoll, M. & Chen, K. Catalytic microrotor driven by geometrical asymmetry. J. Chem. Phys. 142, 054902 (2015). Gibbs, J. G. & Zhao, Y. P. Autonomously motile catalytic nanomotors by bubble propulsion. Appl. Phys. Lett. 94, 163104 (2009). Gallino, G., Gallaire, F., Lauga, E. & Michelin, S. Physics of Bubble-Propelled Microrockets. Adv. Funct. Mater. 1800686 (2018). O'Brien, R. W. & White, L. R. Electrophoretic Mobility of a Spherical Colloidal Particle. J. Chem. Soc. Faraday Trans. 
2 74, 1607–1626 (1978). DeLacey, E. H. & White, L. R. Dielectric response and conductivity of dilute solutions of colloidal particles. J. Chem. Soc. Faraday Trans. 2 77, 2007–2039 (1981). Serra-Maia, R. et al. Mechanism and Kinetics of Hydrogen Peroxide Decomposition on Platinum Nanocatalysts. ACS Appl. Mater. Interfaces 10, 21224–21234 (2018). Wang, W., Chiang, T. Y., Velegol, D. & Mallouk, T. E. Understanding the efficiency of autonomous nano- and microscale motors. J. Am. Chem. Soc. 135, 10557–10565 (2013). Urbach, H. B. & Bowen, R. J. Behaviour of the oxygen-peroxide couple on platinum. Electrochim. Acta 14, 927–940 (1969). Bockris, J. O. & Reddy, A. K. Modern electrochemistry 2B: electrodics in chemistry, engineering, biology and environmental science (Kluwer Academic/Plenum Publishers, New York, 1998). Gerischer, R. & Gerischer, H. Über die katalytische Zersetzung von Wasserstoffsuperoxyd an metallischem Platin. Z. für Phys. Chem. 6, 178–200 (1956). Brown, A. T., Poon, W. C., Holm, C. & De Graaf, J. Ionic screening and dissociation are crucial for understanding chemical self-propulsion in polar solvents. Soft Matter 13, 1200–1222 (2017). Paxton, W. F. et al. Catalytically induced electrokinetics for motors and micropumps. J. Am. Chem. Soc. 128, 14881–14888 (2006). Schamel, D. et al. Nanopropellers and their actuation in complex viscoelastic media. ACS Nano 8, 8794–8801 (2014). Esplandiu, M. J., Zhang, K., Fraxedas, J., Sepulveda, B. & Reguera, D. Unraveling the operational mechanisms of chemically propelled motors with micropumps. Acc. Chem. Res. 51, 1921–1930 (2018). Stillinger, F. H. Proton transfer reactions and kinetics in water. Theor. Chem. Adv. 3, 177–234 (1978). Debye, P. Reaction Rates in Ionic Solutions. Trans. Electrochem. Soc. 82, 265 (1942). Van Stroe-Biezen, S. A. M., Everaerts, F. M., Janssen, L. J. J. & Tacken, R. A. Diffusion coefficients of oxygen, hydrogen peroxide and glucose in a hydrogel. Anal. Chim. Acta 273, 553–560 (1993). This work was supported in part by the Center for Bio-Inspired Energy Science, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award DE-SC0000989. Additional support was provided by the Penn State MRSEC, funded by the National Science Foundation (NSF, DMR-1420620). A.M.B. was supported in part by the National Science Foundation Graduate Research Fellowship Program under Grant DGE1255832. M.T. acknowledges financial support from the Portuguese Foundation for Science and Technology (FCT) under Contract No. IF/00322/2015. We thank Alan West for helpful discussions on electrochemical kinetics. Department of Chemical Engineering, Pennsylvania State University, University Park, PA, 16802, USA Allan M. Brooks, Syeda Sabrina & Darrell Velegol Centro de Fisica Teórica e Computacional, Departamento de Fisica, Faculdade de Ciências, Universidade de Lisboa, Campo Grande P-1749-016, Lisboa, Portugal Mykola Tasinkevych Department of Chemistry, Pennsylvania State University, University Park, PA, 16802, USA Ayusman Sen Department of Chemical Engineering, Columbia University, New York, NY, 10027, USA Kyle J. M. Bishop Allan M. Brooks Syeda Sabrina Darrell Velegol A.M.B., S.S., A.S., and K.J.M.B conceived the project and designed the experiments; A.M.B. acquired the data; A.M.B., M.T., S.S., D.V., A.S., and K.J.M.B contributed to the analysis and interpretation of data; A.M.B., M.T., and K.J.M.B. developed the mathematical model and performed the simulations; A.M.B. and K.J.M.B. 
wrote the paper with input from all authors. Correspondence to Ayusman Sen or Kyle J. M. Bishop. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Brooks, A.M., Tasinkevych, M., Sabrina, S. et al. Shape-directed rotation of homogeneous micromotors via catalytic self-electrophoresis. Nat Commun 10, 495 (2019). https://doi.org/10.1038/s41467-019-08423-7
Quantum Computing Stack Exchange

Clarification on Watrous' proof of Alberti's theorem on the fidelity function

Question (adabb): I am reading John Watrous' quantum information theory book. In the proof of Theorem 3.19 (practically Alberti's theorem on the characterization of the fidelity function) he claims the following fact: given two Hermitian operators $Y_0, Y_1$ on a complex Euclidean space $\mathcal{X}$, the operator $$ \begin{pmatrix} Y_0 & -\mathbb{1} \\ -\mathbb{1} & Y_1 \end{pmatrix} $$ on $\mathcal{X} \oplus \mathcal{X}$ is positive semidefinite if and only if both $Y_0$ and $Y_1$ are positive definite and satisfy $Y_1 \geq Y_0^{-1}$. I do not immediately see how this can be proven. He says that Lemma 3.18 can be used for this purpose, but I still do not get how to link these two results. Is someone willing to help? For convenience I write here the statement of Lemma 3.18. It says the following: given two positive semidefinite operators $P,Q$ and a linear operator $X$ on $\mathcal{X}$, the operator $$ \begin{pmatrix} P & X \\ X^* & Q \end{pmatrix} $$ on $\mathcal{X} \oplus \mathcal{X}$ is positive semidefinite if and only if there exists an operator $K$ such that $X = \sqrt{P} K \sqrt{Q}$ and $||K|| \leq 1$. Here $^*$ stands for the conjugate transposition and $|| \cdot ||$ is the spectral norm.

Comment (Rammus): The initial statement follows from the Schur complement characterization of block PSD matrices, see the final part of the wiki page.

Comment (adabb): Perfect, thank you, I didn't know about this result. I suppose it can be posted as an answer! However, I still wonder how this fact follows from Lemma 3.18 cited above.

Answer (DaftWullie): Firstly, observe that if $Y_0$ were not positive semi-definite, the whole thing cannot be positive semi-definite, because if $|\lambda\rangle$ were an eigenvector of $Y_0$ with negative eigenvalue, then $\left(\begin{array}{c} |\lambda\rangle\\0\end{array}\right)$ would have a negative expectation on the overall operator, and hence there would be a negative eigenvalue, and the operator would not be positive semi-definite. So, that means $P=Y_0$ and $Q=Y_1$ are both positive semi-definite with $X=-\mathbb{1}$, so the Lemma applies. It requires $$ -\mathbb{1}=\sqrt{Y_0}K\sqrt{Y_1}. $$ In other words, $$ -\mathbb{1}\leq K=-\sqrt{Y_0}^{-1}\sqrt{Y_1}^{-1}\leq \mathbb{1}. $$ So, let's rearrange the first inequality: $$ \sqrt{Y_1}\geq\sqrt{Y_0}^{-1}. $$ Since both $Y_0$ and $Y_1$ are positive semi-definite, I can square this, with no ambiguity on the direction of the inequality: $$ Y_1\geq Y_0^{-1}. $$ You probably want to go through this carefully to make sure that the "if and only if" conclusion is valid, but this should give a sense of how you could use the stated lemma.

Comment (adabb): Thank you for the insight in the first paragraph! I just don't get why you say that $K \geq -\mathbb{1}$. I suppose it comes from the requirement $||K|| \leq 1$, but how?

Comment (adabb): Alternatively, we can prove that $||K|| \leq 1$ is equivalent to $Y_0^{-1} \leq Y_1$ in the following way. If $Y_0^{-1} \leq Y_1$ then $||K|| = ||\sqrt{Y_0^{-1}}\sqrt{Y_1^{-1}}|| \leq ||\mathbb{1}|| = 1$. Vice versa, if $||K|| \leq 1$ then $K^* K \leq \mathbb{1}$, which is equivalent to $Y_0^{-1} \leq Y_1$.

Comment (DaftWullie): I guess you've pretty much answered your own question there. The reasoning I was using was that $\|K\|\leq 1$ basically means that the absolute value of all eigenvalues is $\leq 1$, i.e. all eigenvalues are bounded between $\pm 1$. You can equivalently write this as $-1\leq K\leq 1$.

Comment (adabb): Right, I was missing the fact that our $K$ is Hermitian.

Comment (John Watrous): This answer is on the right track, but there are two problems. First, $K$ may not be Hermitian, and second, squaring is not operator monotone (i.e., $P\leq Q$ does not necessarily imply $P^2\leq Q^2$ for $P,Q\geq 0$). These problems can be fixed at the same time, though: starting from $Y_0^{-1/2} = - K Y_1^{1/2}$, left multiply each side to its adjoint to obtain $Y_0^{-1} = Y_1^{1/2} K^{\ast} K Y_1^{1/2} \leq Y_1$ (using $K^{\ast} K\leq \mathbb{1}$, as adabb has noted in a comment).

Answer (adabb): Thanks to John Watrous himself, in the comments the following answer emerged. I will call $B$ the block operator defined in the question in terms of $Y_0, Y_1$. Suppose $B \geq 0$. Then, as DaftWullie pointed out in another answer, it must be true that $Y_0 \geq 0$. In fact, if there exists a vector $u \in \mathcal{X}$ such that $u^* Y_0 u < 0$, then we can construct the vector $v = \begin{bmatrix} u \\ 0 \end{bmatrix} \in \mathcal{X} \oplus \mathcal{X}$ and find out that $v^* B v = u^* Y_0 u < 0$, in contradiction with the hypothesis $B \geq 0$. The same reasoning can be applied to say that $Y_1 \geq 0$ as well. By means of the Lemma, there must exist an operator $K$ such that $Y_0^{1/2} K Y_1^{1/2} = -\mathbb{1}$ and $\lVert K \rVert \leq 1$. Note that $\lVert K \rVert \leq 1$ implies $\lVert K^* K \rVert \leq 1$, and since $K^* K$ is Hermitian this means that its largest eigenvalue has modulus bounded by one, i.e. $K^* K \leq \mathbb{1}$. We can now see the following facts. First of all, $Y_0 > 0$ and $Y_1 > 0$, since if they were singular $Y_0^{1/2} K Y_1^{1/2}$ would have been singular too, which is false because it is equal to the nonsingular operator $-\mathbb{1}$. Moreover, if we write $Y_0^{-1/2} = -KY_1^{1/2}$ we can left multiply each side by its adjoint to obtain $Y_0^{-1} = Y_1^{1/2} K^* K Y_1^{1/2} \leq Y_1$. Conversely, suppose $Y_0 > 0$, $Y_1 > 0$, and $Y_0^{-1} \leq Y_1$. Let us define $K = -Y_0^{-1/2} Y_1^{-1/2}$. By definition, we have that $Y_0^{1/2} K Y_1^{1/2} = -\mathbb{1}$. Moreover $$ K^* K = Y_1^{-1/2} Y_0^{-1} Y_1^{-1/2} \leq Y_1^{-1/2} Y_1 Y_1^{-1/2} = \mathbb{1} \, . $$ Since $K^* K$ is Hermitian, this implies that $\lVert K^* K \rVert \leq 1$, hence $\lVert K \rVert \leq 1$. By means of the Lemma, we conclude that $B \geq 0$. Alternatively, one can invoke the Löwner-Heinz theorem, according to which the square root is a monotone operation with respect to the Löwner order, that is $Y_0^{-1} \leq Y_1 \Rightarrow Y_0^{-1/2} \leq Y_1^{1/2}$. Since the spectral norm is monotone, we have then $$ \lVert K \rVert = \lVert Y_0^{-1/2} Y_1^{-1/2} \rVert \leq \lVert Y_1^{1/2} Y_1^{-1/2} \rVert = \lVert \mathbb{1} \rVert = 1 \, . $$
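For anyone who wants a quick numerical sanity check of the equivalence (just an illustration, not part of the proof): the snippet below builds a random Hermitian $Y_0 > 0$, pairs it once with a $Y_1 \geq Y_0^{-1}$ and once with that condition violated, and inspects the smallest eigenvalue of the block operator $B$.

```python
# Numerical sanity check of:  B = [[Y0, -I], [-I, Y1]] >= 0  iff  Y0, Y1 > 0 and Y1 >= Y0^{-1}
import numpy as np

rng = np.random.default_rng(1)

def rand_herm_psd(n):
    """Random Hermitian positive semidefinite matrix."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A @ A.conj().T

n = 4
Y0 = rand_herm_psd(n) + 0.5 * np.eye(n)        # strictly positive definite
I  = np.eye(n)

def block(Y0, Y1):
    return np.block([[Y0, -I], [-I, Y1]])

def min_eig(M):
    return np.linalg.eigvalsh(M).min()

Y1_good = np.linalg.inv(Y0) + rand_herm_psd(n)  # satisfies Y1 >= Y0^{-1}
Y1_bad  = np.linalg.inv(Y0) - 0.2 * np.eye(n)   # violates  Y1 >= Y0^{-1}

print(min_eig(block(Y0, Y1_good)))   # nonnegative, as the theorem predicts
print(min_eig(block(Y0, Y1_bad)))    # strictly negative
```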
First Order ODE/y' - f (y) phi' (x) over f' (y) = phi (x) phi' (x) over f' (y)

Let $f \left({y}\right)$ and $\phi \left({x}\right)$ be known real functions of $y$ and $x$ respectively. The general solution of:

$(1): \quad \dfrac {\mathrm d y} {\mathrm d x} - \dfrac {f \left({y}\right)} {f' \left({y}\right)} \phi' \left({x}\right) = \dfrac {\phi \left({x}\right) \phi' \left({x}\right)} {f' \left({y}\right)}$

is:

$\displaystyle f \left({y}\right) = C e^{\phi \left({x}\right)} - \phi \left({x}\right) - 1$

Proof:

Let $u = f \left({y}\right)$. Then by the Chain Rule:

$\dfrac {\mathrm d u} {\mathrm d x} = f' \left({y}\right) \dfrac {\mathrm d y} {\mathrm d x}$

Multiplying $(1)$ by $f' \left({y}\right)$:

$f' \left({y}\right) \dfrac {\mathrm d y} {\mathrm d x} - f \left({y}\right) \phi' \left({x}\right) = \phi \left({x}\right) \phi' \left({x}\right)$

and substituting $u$ for $f \left({y}\right)$:

$\dfrac {\mathrm d u} {\mathrm d x} - u \, \phi' \left({x}\right) = \phi \left({x}\right) \phi' \left({x}\right)$

This is a linear first order ordinary differential equation in the form:

$\dfrac {\mathrm d u} {\mathrm d x} + P \left({x}\right) u = Q \left({x}\right)$

whose general solution from Solution to Linear First Order Ordinary Differential Equation is:

$\displaystyle u e^{\int P \, \mathrm d x} = \int Q e^{\int P \, \mathrm d x} \, \mathrm d x + C$

In this instance:

$P \left({x}\right) = -\phi' \left({x}\right)$ and $Q \left({x}\right) = \phi \left({x}\right) \phi' \left({x}\right)$

so that:

$\displaystyle \int P \, \mathrm d x = -\int \phi' \left({x}\right) \, \mathrm d x = -\phi \left({x}\right) + A$

This gives:

$\displaystyle u e^{-\phi \left({x}\right) + A} = \int \phi \left({x}\right) \phi' \left({x}\right) e^{-\phi \left({x}\right) + A} \, \mathrm d x + C$

Let $v = \phi \left({x}\right)$. Then by Integration by Substitution:

$\displaystyle \int \phi \left({x}\right) \phi' \left({x}\right) e^{-\phi \left({x}\right) + A} \, \mathrm d x = \int v e^{-v + A} \, \mathrm d v = e^A \int v e^{-v} \, \mathrm d v$

By Primitive of $x e^{a x}$ with $a = -1$:

$\displaystyle e^A \int v e^{-v} \, \mathrm d v = -e^A e^{-v} \left({v + 1}\right) + C'$

This leaves us with:

$\displaystyle u e^{-\phi \left({x}\right) + A} = -e^A e^{-\phi \left({x}\right)} \left({\phi \left({x}\right) + 1}\right) + C$

subsuming $C'$ into $C$. Substituting back $u = f \left({y}\right)$ and multiplying both sides by $e^{\phi \left({x}\right) - A}$:

$\displaystyle f \left({y}\right) = -\left({\phi \left({x}\right) + 1}\right) + C e^{\phi \left({x}\right) - A}$

The constant factor $e^{-A}$ can be absorbed into the arbitrary constant $C$, giving:

$\displaystyle f \left({y}\right) = C e^{\phi \left({x}\right)} - \phi \left({x}\right) - 1$

Hence the result. $\blacksquare$

Sources: 1896: Joseph Edwards: Integral Calculus for Beginners with an Introduction to Differential Equations: Exercise $18 \ \text{(d)}$
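One can also check the stated general solution symbolically; the short sympy sketch below (an illustration, not part of the source exercise) confirms that $u = C e^{\phi \left({x}\right)} - \phi \left({x}\right) - 1$ satisfies the reduced linear equation $\dfrac {\mathrm d u} {\mathrm d x} - u \, \phi' \left({x}\right) = \phi \left({x}\right) \phi' \left({x}\right)$.

```python
# Symbolic verification that u = C*exp(phi(x)) - phi(x) - 1 solves
#   du/dx - u*phi'(x) = phi(x)*phi'(x)
import sympy as sp

x, C = sp.symbols('x C')
phi = sp.Function('phi')(x)

u = C * sp.exp(phi) - phi - 1
residual = sp.diff(u, x) - u * sp.diff(phi, x) - phi * sp.diff(phi, x)

print(sp.simplify(residual))   # prints 0
```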
Bioresources and Bioprocessing Valorization of porcine by-products: a combined process for protein hydrolysates and hydroxyapatite production Sandra Borges ORCID: orcid.org/0000-0003-4665-10981, Clara Piccirillo2, Francesca Scalera2, Rui Martins3, Ana Rosa3, José António Couto1, André Almeida3 & Manuela Pintado1 Bioresources and Bioprocessing volume 9, Article number: 30 (2022) Cite this article The meat industry generates large amounts of by-products that are costly to be treated and discarded ecologically; moreover, they could be used to extract high added-value compounds. In this work, we present an innovative combined process which allowed the parallel extraction of both organic and mineral compounds; more specifically protein hydrolysates and single-phase hydroxyapatite were obtained. The protein hydrolysates, extracted through an enzymatic hydrolysis with alcalase, showed a degree of hydrolysis of 53.3 ± 5.1%; moreover, they had a high protein content with peptides with molecular weight lower than 1.2 kDa. Their antioxidant activities, measured with ABTS and ORAC tests, were 21.1 ± 0.5 mg ascorbic acid equivalent/g of dry extract and 87.7 ± 6.3 mg Trolox equivalent/g of dry extract, respectively. Single-phase hydroxyapatite, obtained with a simple calcination at 700 °C on the residues of the hydrolysis process, showed a Ca/P ratio close to the stoichiometric one (1.65 vs. 1.67) and presented a nanometric structure. This study reports a simple and feasible process for the valorization of porcine by-products in a large-scale up generating products with potential applications for environment remediation, biomedicine, nutrition and catalysis/bioenergy. Graphic Abstract The agri-food industry generates massive quantities of by-products, which can be an environmental issue and should be properly addressed. By-products from slaughter and processing of pigs represent approximately 44% of the total live weight of the animal. These by-products are commonly used as animal feed, fertilizers and also in the production of biogas; these applications, however, have a relatively low economic value (Lapeña et al. 2018). Nevertheless, this residual raw material has a high nutritional value containing large amounts of protein, lipids and minerals, which have potential to generate high value-added ingredients. Therefore, a better exploitation of these meat by-products is crucial for sustainability and the circular economy. Such valorization, as an alternative to a simple reuse of the by-products, could also provide novel ingredients and products that innovate the food industry (Fu et al. 2018). Several studies show that meat residues such as trimmings and bones contain high quantities of proteins, particularly collagen (Toldrá et al. 2016), whose potential is well known; indeed, collagen, is used in various fields, including biomedicine and cosmetics (Ferraro et al. 2017). In addition to this, collagen can also be used as a source of smaller bioactive molecules, which can be obtained with a process of hydrolysis (Ahmed et al. 2020), as proteins are broken down into smaller and more water-soluble peptides and free amino acids. With the hydrolysis, there is an increase in protein recovery; moreover, valuable compounds such as protein hydrolysates are produced. Protein hydrolysis can be a suitable method to extract proteins from meat residues; the process can be made more efficient if performed with appropriate enzymes (Toldrá et al. 2016). 
Enzymatic hydrolysis can be performed using endogenous enzymes (digestive enzymes) or exogenous enzymes (commercially available) (Aspevik et al. 2017). The process, however, is more specific and reproducible with exogenous enzymes; hence, this represents a good option to produce food-grade and well-defined protein hydrolysates (PH). Despite the additional costs of commercial enzymes, the process is still economically viable, since the products have potential to achieve higher-paying markets compared for example with products based on rendering (Aspevik et al. 2017). Enzymatic hydrolysis has been used to obtain antioxidant compounds from various animal by-products including duck (Li et al. 2020), goat (de Queiroz et al. 2017) and bovine (Zou et al. 2019). Previous studies also showed that porcine peptides could be an antioxidant source, namely peptides from porcine hemoglobin (Chang et al. 2007; Álvarez et al. 2012), skin (Li et al. 2007), myofibrillar protein (Saiga et al. 2003) and other porcine tissues (colon, appendix, rectum, pancreas, heart, liver, and lung) (Damgaard et al. 2014). These bioactive peptides derived from pork by-products with potential health-promoting effects have a wide range of promising applications, such as nutraceuticals for pets and humans, as well as in cosmetic and pharmaceutical formulations (Aspevik et al. 2017). Animal by-products, besides being a valuable protein source, can also be an important basis to extract calcium phosphates (CaP), particularly hydroxyapatite (HAp). HAp, whose formula is Ca10(PO4)6(OH)2, is the major inorganic component of hard tissues (Lü et al. 2007); more specifically, in animal bones its content is over 60%. HAp is widely used in the biomedical area for bone regeneration due to its excellent properties such as biocompatibility, bioactivity, osteoconductivity and also noninflammatory and nonimmunogenicity behaviors (Barakat et al. 2009). In addition to this, HAp has other applications; in fact, it can also be used for environment remediation, as it can remove bivalent heavy metals from contaminated wastewaters and soils (Khan et al. 2020; Nie et al. 2020; Safavi et al. 2020). The synthetic HAp involves a chemical reaction between calcium and phosphorus in appropriate conditions; this approach, however, is not sustainable in the long term, due to the increasing demand of phosphorus for agriculture (Santos et al. 2019). It is therefore important to consider innovative and sustainable sources of HAp; indeed HAp extraction from food by-products has been explored, for instance from bovine bones (Barakat et al. 2009) and porcine bones and teeth (Lü et al. 2007). In other cases, for instance from fish bones (Piccirillo et al. 2013) a mixture of HAp and other CaP compounds was obtained; this was because the ratio between Ca and P in the bones was smaller than the stoichiometric one (1.67). Literature data showed that natural HAp and/or CaP are suitable for biomedical applications (i.e., bone substitutes, grafting, etc.). Currently, some bone substitutes of animal origin are commercially available, as is the case of Apatos® which is derived from a cortical porcine bone in the form of particles. As mentioned above, processes to recover/extract protein hydrolysates or CaP have been considered; literature, however, does not report on a combined process to extract both compounds from meat by-products. Such a process would be important to have a more complete valorization of these by-products. 
This work explores for the first time a combined process for the valorization of porcine by-products (bone residues and meat trimmings), which includes a bioprocess (enzymatic hydrolysis) followed by a thermal treatment. This approach allows the simultaneous extraction of organic and mineral fraction as added-value compounds (PH and CaP). The obtained products (PH and CaP) were characterized by several analytical techniques, to evaluate their composition and to explore their potential to food/feed, medical and environmental applications. This work shows it is possible to perform a simultaneous extraction of several high added-value compounds from the same by-products of pigs slaughter and processing, usually discarded. Materials and reagents Porcine by-products (meat and bones) were obtained by ETSA (Loures, Portugal), a company specialized in the collection of animal by-products from conventional centers including slaughterhouses. All reagents were purchased from Sigma-Aldrich (USA) unless mentioned otherwise. Combined process of protein and hydroxyapatite extraction from porcine by-products A scheme of the process employed to extract proteins and CaP is shown in Fig. 1. Scheme of the combined process to extract proteins and CaP A mixture of porcine bones and meat trimmings were used; they were ground at room temperature, to obtain a pulp-like meat paste, which was then submitted to hydrolysis in order to extract the PH and CaP. To extract the proteins by hydrolysis, water was added to the meat/bone paste in a ratio of 1:1. Prior to the enzymatic hydrolysis, the pH was adjusted to 8.0 with 1 M NaOH. The substrate was hydrolyzed by alcalase at a ratio of enzyme:substrate of 1% (v/w) at 50 °C for 6 h. During the hydrolysis, the pH of the reaction mixture was kept constant by continuous addition of NaOH. Enzymatic reaction occurred at the optimal pH and temperature conditions described for alcalase (Borrajo et al. 2020b; Sousa et al. 2020). The mixture was then submitted to centrifugation (5000g, 5 min) (Gyrozen 1248, Korea) in order to separate and obtain three phases: the fat in the upper phase, the intermediate water phase containing the soluble protein, and the lower phase containing the mineral part. The upper phase containing the fat was discarded and the protein fraction was collected and stored at −20 °C for further analysis. The mineral fraction or inorganic fraction was washed in water, dried at 120 °C in an oven, and ground in a coffee mill, obtaining a white bone powder. Then, to remove the residual organic fraction and obtain pure minerals, the powder was calcined at 700 °C. The heating ramp was 5 °C/min and the annealing time was 1 h (Piccirillo et al. 2013). Characterization of porcine protein hydrolysates Determination of degree of hydrolysis The hydrolysis efficiency was determined through the degree of hydrolysis (DH), which was assessed by measuring the free amino groups by reaction of 2,4,6-trinitrobenzenesulfonic acid solution (TNBS) (Sousa et al. 2020). Briefly, a reaction mixture with 50 μL of PH extract, 125 μL of 200 mM sodium phosphate buffer (pH 8.2) and 50 μL of TNBS at 0.025% were placed in a 96-well microplate (Sarstedt, Germany). The microplate was incubated at 45 °C for 1 h and the absorbance was measured at 340 nm using a Multiskan GO plate reader (Thermo Scientific, USA). L-leucine (0.078–2.5 mM) was used to generate a standard curve. Three replicates were recorded. 
The DH was determined by the following formula: $$DH \left(\%\right)=100*\frac{{L}_{t}-{L}_{0}}{{L}_{max}-{L}_{0}},$$ where Lt is the amount of amino groups released after a hydrolysis time equal to t, L0 is the amount of amino groups in the sample at the initial hydrolysis time (blank) and Lmax is the maximum amount of amino groups existing in the porcine by-products. The Lmax was obtained by acid hydrolysis of porcine by-products with 6 M HCl at 105 °C for 24 h. Then, the acid-hydrolyzed sample was filtered and the supernatant was neutralized with 6 M NaOH before the amino group assessment. Composition analysis The composition analysis was performed according to the Association of Official Analytical Chemists procedures (AOAC 1995). The moisture was determined at 105 °C for 24 h. The ash content was determined at 550 °C for 5 h. The protein content was measured using the Kjeldahl method and the nitrogen to protein conversion factor used was 6.25. The protein content was expressed on a dry weight basis. All measurements were performed in triplicate. Molecular weight distribution The molecular weight (MW) distribution of the porcine PH extract was determined by fast protein liquid chromatography (FPLC) (Sousa et al. 2020). An aliquot (100 µL) of filtered samples was injected into an AKTA pure 25 L system, from GE Healthcare Life Sciences (Freiburg, Germany), coupled with two gel filtration columns: Superdex 200 Increase 10/300 GL and Superdex Peptide 10/300 GL. The eluent used was 0.025 M phosphate buffer (pH 7.0), 0.15 M sodium chloride and 0.2 g/L of sodium azide. The flow rate was 0.5 mL/min and elution was monitored at 280 nm. A MW standard curve was established using thyroglobulin (669 kDa), ferritin (440 kDa), aldolase (158 kDa), conalbumin (75 kDa), ovalbumin (44 kDa), carbonic anhydrase (29 kDa), ribonuclease A (13.7 kDa) and a whey peptide (1.2 kDa). The analysis was performed in duplicate and the results were expressed in milli Absorbance Units (mAU) per eluted volume (mL). The software used to evaluate the results was UNICORN 7.0. Analysis of antioxidant activity ABTS scavenging assay The free radical-scavenging ability of the porcine PH extract was evaluated through the 2,2-azino-bis-3-ethylbenzothiazoline-6-sulphonic acid (ABTS) radical decolourization assay (Re et al. 1999). The radical cation was formed by reacting ABTS with potassium persulfate. Then, 1 mL of ABTS solution was reacted with the sample for 6 min and the absorbance was measured at 734 nm. A calibration curve was prepared with ascorbic acid in the range of 0.063–0.250 mg/mL and all determinations were performed in triplicate. Results were expressed as mg ascorbic acid equivalent/g of dry extract. ORAC assay The measurement of the oxygen radical absorbance capacity (ORAC-FL) was performed (Ou et al. 2001). The porcine PH samples were dissolved in 75 mM phosphate buffer (pH 7.4) and the solution was placed in a black 96-well microplate (Nunc, Denmark), mixed with 120 μL of fluorescein (70 nM) and incubated at 40 °C for 10 min. Then, 60 μL of 2,2'-azobis(2-amidinopropane) dihydrochloride (AAPH) solution (14 mM) was added to the mixture, and the fluorescence was recorded using a microplate reader (Synergy H1, USA) at excitation and emission wavelengths of 485 and 528 nm, respectively, for 140 min at intervals of 1 min. The area under the curve (AUC) was calculated for each sample by integrating the relative fluorescence curve.
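The integration of the relative fluorescence curve can be carried out numerically, for example with the trapezoidal rule. The sketch below assumes that the readings are first normalised to the initial fluorescence, which is how a relative fluorescence curve is commonly obtained; the time points and fluorescence values are invented for illustration (a real run records one point per minute for 140 min).

```python
def auc_trapezoid(times_min, fluorescence):
    """Area under the relative fluorescence decay curve (trapezoidal rule)."""
    f0 = fluorescence[0]
    rel = [f / f0 for f in fluorescence]          # normalise to the initial reading
    area = 0.0
    for i in range(1, len(times_min)):
        area += 0.5 * (rel[i] + rel[i - 1]) * (times_min[i] - times_min[i - 1])
    return area

# Invented readings, for illustration only:
times = [0, 35, 70, 105, 140]
fluor = [1000, 700, 350, 120, 40]
print(round(auc_trapezoid(times, fluor), 1))
```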
Trolox (9.98 × 10−4–7.99 × 10−3 μmol/mL) was used as the standard and regression equations for Trolox and samples were calculated. The ORAC values were determined by the ratio of sample slope to the trolox slope obtained in the same assay. Final ORAC values were expressed as mg Trolox equivalent/g of dry extract. Characterization of the inorganic fraction—CaP The inorganic fraction, separated and calcined as described above, was characterized with the following techniques. Thermal analysis (TGA) The thermogravimetric analysis of the inorganic residues (prior to calcination) was performed using SDT Q600 (TA Instruments) TGA equipment, with an air flow rate of 100 ml/min and a heating ramp of 5 °C/min. Determination of Ca and P Powders were dissolved in HNO3 (Merck, Germany) to determine calcium and phosphorus concentrations. Calcium content was measured by flame atomic absorption spectrometry (Solaar 969 AA Spectrometer, Unicam, UK). A La solution (Spectrosol, England; 4 g/L) was added to the samples acid solution to prevent ionization interference. A calibration curve of Ca (0.5–2.0 mg/mL) was prepared by dilution of the respective atomic absorption standard solution (Spectrosol, England). Phosphorus concentration was measured by a spectrophotometric method, using a Spectroquant phosphorus reagent kit (Merck, Germany). A calibration curve of standard K2HPO4 was used and all measurements were performed at 400 nm. The assays were performed in duplicate. The results were expressed in % (g of Ca or P/100 g of sample); Ca/P molar ratio was also calculated. Phase analysis Phase analysis of the inorganic residues and of the calcined powder was determined by X-ray diffraction (XRD). A X'Pert PRO MRD diffractometer was used, with CuKα radiation; the diffraction patterns were acquired with a step size of 0.005° and a count time of 100 s; an interval between 20 and 60° was considered. The registered patterns were compared with the JCPDF standard file 01-072-1234 for HAp. Samples were also analyzed by Fourier transformed infrared spectroscopy (FTIR) in a spectrum series Perkin Elmer spectrometer (ABB, Switzerland) equipped with an attenuated total reflectance (ATR) sampling accessory (PIKE technologies, USA) and a diamond/ZnSe crystal. All spectra were acquired between 500 and 4000 cm−1. Sample morphology The morphology of the samples was analyzed with the scanning electron microscopy (SEM) technique, using a Carl Zeiss Merlin instrument, equipped with a Gemini II column and an integrated high efficiency In-lens for secondary electrons. Before the analysis, the samples were sputtered with gold to prevent charge accumulation. Yield of the process Figure 1 shows the scheme of the process, as well as the yield for each step. It can be seen that, starting from 1 kg of material, the amount of PH extracts is about 135 g–13.5%; the inorganic residues, on the other hand, are about 205 g–20.5%. Porcine protein hydrolysates Characterization of porcine PH Proteins of animal origin are known for their nutritional properties as a crucial source of amino acids; in fact, these are released upon digestion or industrial processing from the parent protein. Meat is one of the most studied sources for the production of bioactive peptides due to the presence of high-quality proteins (Albenzio et al. 2017). Some industrial food-grade proteinases, namely alcalase, flavourzyme, bromelain and papain, have been used for the generation of hydrolysates of porcine proteins (Chang et al. 2007; Liu et al. 2010; Wang et al. 
2008; López-Pedrouso et al. 2020). Alcalase is particularly noteworthy from an industrial standpoint because of its activity and stability at alkaline pH values, which gives it a wide range of applications. Alcalase has been used as an additive in detergent formulations, and it can be employed in meat tenderizing, dehairing and bating of leather, cheese flavor improvement, baked goods manufacture, and enhancing the digestibility of animal feeds. The protein hydrolysis reaction catalyzed by alcalase has a strong tendency to yield a hydrolysate with many peptides of small size, due to the extensive range of amino acids that this enzyme can recognize. Therefore, the broad enzyme selectivity and specificity allow the use of alcalase on a variety of protein substrates, yielding a high degree of protein hydrolysis (Tacias-Pascacio et al. 2020). Moreover, there is growing evidence that alcalase on its own shows a higher ability for hydrolysis in comparison with other commercial enzymes. As demonstrated in the work of Ahmadifard et al. (2016), the enzymatic hydrolysis of rice bran protein concentrate and soybean protein showed that alcalase presented a capability for hydrolysis approximately 10 times higher than the other enzymes. Based on these data, the hydrolysis of porcine by-products was accomplished by alcalase, a serine endopeptidase from Bacillus licheniformis. A complete characterization of the PH extract was performed; the results are reported in Table 1. Table 1 Composition of the PH extracts The DH is an indicator widely used to compare hydrolysis efficiency among different protein hydrolysates. The DH of PH extracts from these food by-products was 53.3 ± 5.1%. This value is higher than those achieved for other pork tissue hydrolysates also produced by alcalase. Liu and colleagues (Liu et al. 2010) hydrolyzed porcine plasma protein with 2% (w/w) alcalase for 5 h, showing a DH of 17.6%. Chang and collaborators (Chang et al. 2007) performed the proteolytic reaction of porcine hemoglobin with 2.0% alcalase, and after 6 h obtained hydrolysates with a DH of less than 10%. Verma and collaborators (Verma et al. 2017) carried out the proteolytic reaction of porcine liver with 1% (w/w) alcalase over 6 h, and the hydrolysates had a DH of 23.56%. Regarding the composition, the dry matter of the PH extracts was 10.3 ± 0.0%; this is in agreement with the literature, which reports that the dry matter content in porcine hydrolysates can vary between 5.9 and 13.8%, depending on the pork tissue types used for hydrolysis (Damgaard et al. 2014). As expected, this fraction is protein-rich, showing a content of 70.4 ± 2.4% (w/w dry basis), which is within the values described for porcine hydrolysates; for instance, hydrolyzed swine mucus protein has approximately 59% crude protein and hydrolyzed swine liver has ca. 78% crude protein (dos Santos Cardoso et al. 2020). Enzymatic hydrolysis is able to produce peptides which are more water-soluble than the intact proteins, so a high protein recovery was possible. PH extracts showed an ash content of 13.9 ± 0.4% (w/w dry basis), which indicates a large amount of minerals. Minerals are essential for human and animal health because they are important for several functions, such as building strong bones, transmitting nerve impulses, producing different hormones and regulating the heartbeat (Gharibzahedi et al. 2017). The proportion of the remaining components was calculated by difference; it is likely that some lipids are present in the extracts.
Considering these results, this porcine-derived PH extract showed a high nutritive quality. Peptide profile analysis Besides DH measurement, an extremely important parameter used for characterization of PH extracts is the molecular weight distribution of the peptides. This is a direct analysis of the peptides and protein content, unlike the DH which is a measure relative to the raw material. Enzymatic hydrolysis decreases the molecular weight of intrinsic protein and increases the number of ionizable groups, resulting in novel peptides. Porcine PH extracts were analyzed by gel filtration chromatography to determine the peptide length distribution (Fig. 2). The chromatogram revealed that alcalase produced hydrolysates with small peptides. According to the calibration curve of standards, PH contained peptides with molecular weight lower than 13.7 kDa, with a high contribution of peptides with molecular weight smaller than 1.2 kDa. This result confirmed that proteins in porcine by-products were degraded by alcalase into low MW peptides or free amino acids. Fu and co-authors (Fu et al. 2019) also showed that porcine hemoglobin and whole blood treated with different proteases, such as alcalase, generated peptide fractions with low MW, most of them below 1 kDa. Size-exclusion FPLC profile of porcine protein hydrolysates obtained upon hydrolyzing with alcalase. Molecular weight markers of 13.7 kDa and 1.2 kDa are indicated This is expected to be beneficial for the antioxidant activity, since small peptides have been revealed to have higher activity than peptides with high MW (Ajibola et al. 2011; Irshad et al. 2015). Antioxidant activity of porcine protein hydrolysates The search for bioactive protein hydrolysates from meat by-products has been instigated by the growing interest in the development of functional foods, along with the control of food lipid oxidation. These valuable compounds could revalue the by-products, while mitigating the environmental and economic issues caused by the meat industry. Furthermore, the antioxidant properties related with these compounds offers an alternative to synthetic additives, which are linked with adverse effects on human health (Borrajo et al. 2020a). Some animal proteins have been used as substrates for alcalase hydrolysis, such as sheep visceral protein, which produced a PH with an antioxidant activity of 68% (Meshginfar et al. 2014). In vivo and in vitro antioxidant capacity of porcine splenic hydrolysate produced using alcalase, suggested that porcine splenic hydrolysates improve the antioxidant status in rats by increasing hepatic catalase and glutathione peroxidase activities (Han et al. 2014). Thus, the antioxidant capacity of porcine protein hydrolysates prepared with alcalase was evaluated by ABTS and ORAC assays; results are reported in Table 2. Table 2 Antioxidant activity of the protein hydrolysates The ABTS method evaluates the ability of an antioxidant compound to transfer electrons or donate hydrogen atoms to a preformed ABTS radical cation, whose change of color causes a decrease in absorbance (Re et al. 1999). The value of the radical scavenging activity of porcine PH extract was 21.1 ± 0.5 mg ascorbic acid equivalent/g of dry extract (55.16 ± 1.34 in terms of % radical scavenging activity—%RSA). This antioxidant activity is in agreement with the values observed for other porcine hydrolysates extracted with alcalase. Porcine liver protein hydrolysates showed a RSA ranged from 38.43 to 74.62% for 0–6 h reaction time (Verma et al. 2017). 
Damgaard et al. (2015) tested the antioxidant capacity of different porcine tissue hydrolysates (heart, colon and neck) using a mixture of alcalase and protamex; they registered values of RSA between 37.9 and 49.6%. The ability of hydrolysates to scavenge ABTS+ radicals is affected by several factors, such as the enzymes used, the DH, the solubility of the hydrolysates and the MW of the peptides. The ORAC-FL method evaluates the scavenging capacity due to a hydrogen-atom transfer mechanism. The antioxidant compound is exposed to a peroxyl radical generator (AAPH) and the oxidative degradation of fluorescein is measured (Ou et al. 2001). This assay uses a biological radical source, so it is considered the most relevant method from a biological point of view, integrating the degree and time of the antioxidant reaction (López-Pedrouso et al. 2020). The value of the peroxyl radical scavenging activity of the porcine PH extracts was 87.7 ± 6.3 mg Trolox equivalent/g of dry extract. Indeed, porcine liver protein hydrolysates from enzymatic hydrolysis with several enzymes such as alcalase, bromelain, flavourzyme and papain have shown antioxidant capacity using the ORAC-FL method (López-Pedrouso et al. 2020; Borrajo et al. 2020b). Our results corroborated this, showing that antioxidant peptides from porcine by-products can protect cells from oxidative damage. Thus, these antioxidant peptides can be employed to maintain human health and also food safety and quality, by mitigating oxidative stress and lipid peroxidation triggered by free radicals produced during oxidation reactions in the human body and in food products. As a result, antioxidant peptides have received noteworthy attention in the food industry as functional ingredients and food additives. The use of synthetic antioxidant agents in the food industry is under strict regulation due to their side-effects on human health, namely the induction of DNA damage and toxicity. Consequently, substituting synthetic antioxidants with natural antioxidants has become increasingly necessary (Tadesse and Emire, 2020). Inorganic fraction: CaP-based compounds From inorganic residues to CaP: thermal treatment To extract CaP phases from natural sources, a thermal treatment (calcination) is generally performed; this is done to remove possible organic fragments still present in the residues, as well as to increase the crystallinity of the obtained CaP (Piccirillo et al. 2014). To understand the changes taking place during the heating and, therefore, to choose the best temperature for the calcination, a thermogravimetric analysis was carried out on the mineral residues; Fig. 3a shows the results, while Fig. 3b reports the first derivative of the curve, to better visualize the different steps. a Thermogravimetric analysis (TGA) of the inorganic residues; b first derivative of the curve The first weight loss (about 8%, T < 200 °C) is due to the removal of water, either adsorbed on the surface or included in the structure of the powder. A slightly larger loss (about 10%) is observed for 200 < T < 600 °C; losses in this temperature interval are associated with the burning of the residual organic fragments present in the material. This weight loss is much smaller than that previously observed for porcine bones, about 30% (Figueiredo et al. 2010); this difference confirms that a significant amount of organic matter was removed from the powder during the enzymatic hydrolysis.
For higher temperatures, a further decrease in weight can be observed, although it is small (< 3%); this could be due to the removal of the carbonate present in the material (Figueiredo et al. 2010). CaP characterization Based on these results, it was decided to perform the calcination of the material at 700 °C; this value was chosen as the organic fragments were already removed but some carbonate ions were still present in the material—indeed, literature reports that the presence of such ions can be beneficial for bone-like cellular growth and bioactivity (Nakamura et al. 2016). Calcination at this temperature led to a yield of about 170 g, i.e., 17%. Elemental analysis was performed to determine the content of calcium and phosphorus, as well as the Ca/P molar ratio—see Table 3. Table 3 Calcium and phosphorus content (wt %) and Ca/P molar ratio for the non-calcined powder and CaP sample It can be seen that, although both calcium and phosphorus show higher relative content after the calcination, the increase is different for the two elements; in fact, the Ca/P ratio decreases slightly—from 1.74 to 1.65. This indicates that a small quantity of calcium is lost during the calcination; this behavior was previously observed for CaP derived from natural sources (Aydin et al. 2020). The Ca/P ratio for the CaP powder is very close to the stoichiometric one, that is 1.67. Figure 4 shows the XRD patterns for the inorganic powder, prior the calcination and after (CaP). It can be seen that in both cases the only phase present is HAp; no other phosphate compounds, for instance β-tricalcium phosphate (β-TCP), are present. This was expected, the Ca/P ratio being not statistically different from the stoichiometric one; literature reports the formation of β-TCP for smaller Ca/P ratios, i.e., values close to 1.5 (Piccirillo et al. 2014). It can also be observed that CaP is much more crystalline than the starting non-calcined powder (sharper peaks). XRD data for the inorganic residue (non-calcined powder) and the CaP sample. The patterns are compared to the 01-072-1234 standard for HAp FTIR spectra of the samples are shown in Fig. 5. It can be seen that the signals are much sharper and more resolved for the calcined CaP sample, due to its higher crystallinity. Peaks belonging to the HAp phosphate ions can be observed at 1090, 1040, 960, 603, 568 cm−1 (Piccirillo et al. 2013); it is interesting to note that no peak is present at 1122 cm−1. This signal corresponds to the β-TCP (Piccirillo et al. 2013); its absence confirms that this phase is not formed in the CaP sample, in agreement with XRD data (Fig. 4). Other weaker peaks present in the spectrum correspond to the OH ions at 3570 and 634 cm−1; moreover, signals at 1412 and 1450 cm−1 belong to the carbonate group (Figueiredo et al. 2010). These signals confirm that, after a treatment at 700 °C carbonate ions are still present in the HAp lattice. FTIR of the inorganic residues (i.e., non-calcined powder) and of the calcined CaP sample Figure 6 shows a SEM micrograph of the CaP sample. It can be seen that the powder has a nanometric structure; indeed particles with average size of about 50 nm can be observed. Literature data report that HAp from natural sources can be nanometric or not, depending on the sources (Barakat et al. 2009; Santana et al. 2019). The use of HAp in the form of nanoparticles has recently gained increasing attention, as it can show enhanced sintering properties, if compared to the micrometric powder (György et al. 
2019), due to a higher specific surface area and consequently a higher powder reactivity. Moreover, nano-HAp has also been preferred for the preparation of composites with other compounds (i.e., biopolymers) (Turon et al. 2017). Compared with micrometric HAp, the nanoscale form induces better cellular functions, such as osteoblast responses (adhesion, proliferation, and differentiation) (Li et al. 2013). Also, for its application as a heavy metal remover, nano-HAp showed enhanced performance (Kowthaman and Varadappan 2019). SEM micrograph of the calcined powder CaP Based on these results, it can be stated that HAp with interesting properties and potential for application in biomedicine (for instance, as a bone substitute) and as a heavy metal remover was successfully extracted in this combined process. The use of natural HAp has been explored as an alternative to synthetic HAp, because it has comparable metabolic activity and preserves the chemical composition and structure of the precursor material (Boutinguiza et al. 2012). In addition, there is a growing concern to develop clean, non-toxic and environmentally friendly procedures for HAp synthesis, with a lower impact on the environment. It is necessary to reuse waste not only because waste materials are accumulating, but also because natural raw materials are being exhausted. Evaluation of the process A full environmental and economic assessment of the process is beyond the scope of this work; some general comments, however, can already be made. As reported in Sect. 3.1, the overall yield is about 30%; this value is well below the 100% associated with a full valorization. It is worth highlighting, however, that without performing this combined process, only PH or CaP would be extracted, with an overall lower yield and a less effective valorization of the by-products. Moreover, the residues also contain other organic fractions, such as lipids; these phases should also be considered to achieve a full valorization. The process presented here represents the first step to extract different parts from the residues; other step(s) for other fraction(s) could be added. The extraction of PH was performed with an enzymatic bioprocess, without employing toxic solvents, according to the principles of green chemistry, as an aqueous solution was used. For CaP, on the other hand, no solvent was necessary, but a simple thermal process was performed. Although energy is used for this treatment, with an associated impact on the environment, it has to be highlighted that the conventional routes for CaP production are likely to have a greater environmental impact. With these, in fact, chemical reactions between Ca and P are performed; in addition to the energy cost linked to this, the use of Ca- and P-containing reagents, derived from non-renewable sources, has to be considered. Both elements, in fact, are obtained from mining activities, whose impact is known to be quite high (Dubsok and Kittipongvieses 2016; Petrov and Danilov 2020). Considering this, by-product valorization surely offers a more sustainable solution. From the economic point of view, both PH and CaP have a high market value. For CaP, a good quality powder with high purity and a high level of crystallinity for biomedical applications can cost up to tens of euros per gram. PH, for feed applications, is about five times more valuable than a non-hydrolyzed protein. This makes the process profitable.
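The yield figures discussed in this section can be reproduced with simple mass-balance bookkeeping. The sketch below uses the masses quoted earlier (about 135 g of PH and about 170 g of calcined CaP per kg of by-products) purely as an arithmetic check of the overall 30% figure.

```python
def step_yield(mass_out_g, mass_in_g):
    """Yield of a process step as a percentage of the input mass."""
    return 100.0 * mass_out_g / mass_in_g

feed = 1000.0                  # g of ground meat/bone paste
ph_extract = 135.0             # g of protein hydrolysate, as reported
cap_calcined = 170.0           # g of CaP powder after calcination, as reported

print(f"PH yield  : {step_yield(ph_extract, feed):.1f} %")
print(f"CaP yield : {step_yield(cap_calcined, feed):.1f} %")
print(f"Overall   : {step_yield(ph_extract + cap_calcined, feed):.1f} %")
```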
Porcine by-products (meat and bones) are a valuable source to produce natural value-added compounds for different markets. The combined process described in this work shows that it is possible to extract different compounds, both organics and minerals—indeed protein hydrolysates and hydroxyapatite were obtained. Overall, about 30% of the by-products were converted into valuable compounds—13.5% and 17% of protein hydrolysates and hydroxyapatite, respectively. Protein hydrolysates were rich in low MW peptides and showed significant antioxidant properties. Hydroxyapatite, on the other hand, was shown to be single-phase and with a nanometric structure. The proposed combined process is quite simple, cheap and easily scalable; all these features make it applicable at industrial scale, to achieve a more complete valorization of porcine by-products. Moreover, in principle the process could be applied also to by-products of other meat or fish industries. As future work, other steps could be added to the process, for the extraction of other phases (i.e., lipids) and to achieve a more complete valorization. All data supporting this article's conclusion are available. AAPH: 2,2'-Azobis(2-amidinopropane) dihydrochloride 2,2-Azino-bis-3-ethylbenzothiazoline-6-sulphonic acid ATR: Attenuated total reflectance Calcium phosphates DH: Degree of hydrolysis FPLC: Fast protein liquid chromatography FTIR: Fourier transformed infrared spectroscopy HAp: Hydroxyapatite mAU: Milli absorbance units MW: ORAC-FL: Oxygen radical absorbance capacity Protein hydrolysates Radical scavenging activity SEM: Scanning electron microscopy TGA: TNBS: Trinitrobenzenesulfonic acid solution XRD: Ahmadifard N, Murueta JHC, Abedian-Kenari A, Motamedzadegan A, Jamali H (2016) Comparison the effect of three commercial enzymes for enzymatic hydrolysis of two substrates (rice bran protein concentrate and soy-been protein) with SDS-PAGE. J Food Sci Technol 53:1279–1284 Ahmed M, Verma AK, Patel R (2020) Collagen extraction and recent biological activities of collagen peptides derived from sea-food waste: a review. Sustain Chem Pharm 18:100315 Ajibola CF, Fashakin JB, Fagbemi TN, Aluko RE (2011) Effect of peptide size on antioxidant properties of African yam bean seed (Sphenostylis stenocarpa) protein hydrolysate fractions. Int J Mol Sci 12:6685–6702 Albenzio M, Santillo A, Caroprese M, Della Malva A, Marino R (2017) Bioactive peptides in animal food products. Foods 6:35 Álvarez C, Rendueles M, Díaz M (2012) Production of porcine hemoglobin peptides at moderate temperature and medium pressure under a nitrogen stream. Functional and antioxidant properties. J Agric Food Chem 60:5636–5643 AOAC (1995) Official methods of analysis 16th Ed. Association of official analytical chemists. Washington DC, USA Aspevik T, Oterhals Å, Rønning SB, Altintzoglou T, Wubshet SG, Gildberg A, Afseth NK, Whitaker RD, Lindberg D (2017) Valorization of proteins from co-and by-products from the fish and meat industry. Chem Chem Technol Waste Valoriz 1:123–150 Aydin G, Terzioğlu P, Öğüt H, Kalemtas A (2020) Production, characterization, and cytotoxicity of calcium phosphate ceramics derived from the bone of meagre fish, Argyrosomus regius. J Austr Ceram Soc 1:1–10 Barakat NAM, Khil MS, Omran AM, Sheikh FA, Kim HY (2009) Extraction of pure natural hydroxyapatite from the bovine bones bio waste by three different methods. 
J Mater Process Technol 209:3408–3415 Borrajo P, López-Pedrouso M, Franco D, Pateiro M, Lorenzo JM (2020a) Antioxidant and antimicrobial activity of porcine liver hydrolysates using flavourzyme. Appl Sci 10:3950 Borrajo P, Pateiro M, Gagaoua M, Franco D, Zhang W, Lorenzo JM (2020b) Evaluation of the antioxidant and antimicrobial activities of porcine liver protein hydrolysates obtained using alcalase, bromelain, and papain. Appl Sci 10:2290 Boutinguiza M, Pou J, Comesaña R, Lusquiños F, De Carlos A, León B (2012) Biological hydroxyapatite obtained from fish bones. Mater Sci Eng, C 32:478–486 Chang C-Y, Wu K-C, Chiang S-H (2007) Antioxidant properties and protein compositions of porcine haemoglobin hydrolysates. Food Chem 100:1537–1543 Damgaard TD, Otte JAH, Meinert L, Jensen K, Lametsch R (2014) Antioxidant capacity of hydrolyzed porcine tissues. Food Sci Nutr 2:282–288 Damgaard T, Lametsch R, Otte J (2015) Antioxidant capacity of hydrolyzed animal by-products and relation to amino acid composition and peptide size distribution. J Food Sci Technol 52:6511–6519 de Queiroz ALM, Bezerra TKA, de Freitas PS, da Silva MEC, de Almeida GCA, Gadelha TS, Pacheco MTB, Madruga MS (2017) Functional protein hydrolysate from goat by-products: Optimization and characterization studies. Food Biosci 20:19–27 dos Santos Cardoso M, Godoy AC, Oxford JH, Rodrigues R, dos Santos CM, Bittencourt F, Signor A, Boscolo WR, Feiden A (2020) Apparent digestibility of protein hydrolysates from chicken and swine slaughter residues for Nile tilapia. Aquaculture 530:735720 Dubsok A, Kittipongvieses S (2016) Estimated greenhouse gases emissions from mobile and stationary sources in the limestone and basalt rock mining in Thailand. Am J Environ Sci 12:334–340 Ferraro V, Gaillard-Martinie B, Sayd T, Chambon C, Anton M, Santé-Lhoutellier V (2017) Collagen type I from bovine bone. Effect of animal age, bone anatomy and drying methodology on extraction yield, self-assembly, thermal behaviour and electrokinetic potential. Int J Biol Macromol 97:55–66 Figueiredo M, Fernando A, Martins G, Freitas J, Judas F, Figueiredo H (2010) Effect of the calcination temperature on the composition and microstructure of hydroxyapatite derived from human and animal bone. Ceram Int 36:2383–2393 Fu Y, Liu J, Hansen ET, Bredie WLP, Lametsch R (2018) Structural characteristics of low bitter and high umami protein hydrolysates prepared from bovine muscle and porcine plasma. Food Chem 257:163–171 Fu Y, Bak KH, Liu J, De Gobba C, Tøstesen M, Hansen ET, Petersen MA, Ruiz-Carrascal J, Bredie WLP, Lametsch R (2019) Protein hydrolysates of porcine hemoglobin and blood: Peptide characteristics in relation to taste attributes and formation of volatile compounds. Food Res Int 121:28–38 Gharibzahedi SMT, Jafari SM (2017) The importance of minerals in human nutrition: Bioavailability, food fortification, processing effects and nanoencapsulation. Trends Food Sci Technol 62:119–132 György S, Károly Z, Fazekas P, Németh P, Bódis E, Menyhárd A, Kótai L, Klébert S (2019) Effect of the reaction temperature on the morphology of nanosized HAp. J Therm Anal Calorim 138:145–151 Han K-H, Shimada K, Hayakawa T, Yoon TJ, Fukushima M (2014) Porcine Splenic Hydrolysate has Antioxidant Activity in vivo and in vitro. Korean J Food Sci Anim Resour 34:325 Irshad I, Kanekanian A, Peters A, Masud T (2015) Antioxidant activity of bioactive peptides derived from bovine casein hydrolysate fractions. 
J Food Sci Technol 52:231–239 Khan HM, Iqbal T, Ali CH, Yasin S, Jamil F (2020) Waste quail beaks as renewable source for synthesizing novel catalysts for biodiesel production. Renew Energy 154:1035–1043 Kowthaman CN, Varadappan AMS (2019) Synthesis, characterization, and optimization of Schizochytrium biodiesel production using Na+-doped nanohydroxyapatite. Int J Energy Res 43:3182–3200 Lapeña D, Vuoristo KS, Kosa G, Horn SJ, Eijsink VGH (2018) Comparative assessment of enzymatic hydrolysis for valorization of different protein-rich industrial byproducts. J Agric Food Chem 66:9738–9749 Li B, Chen F, Wang X, Ji B, Wu Y (2007) Isolation and identification of antioxidative peptides from porcine collagen hydrolysate by consecutive chromatography and electrospray ionization–mass spectrometry. Food Chem 102:1135–1143 Li X, Wang L, Fan Y, Feng Q, Cui FZ, Watari F (2013) Nanostructured scaffolds for bone tissue engineering. J Biomed Mater Res, Part A 101:2424–2435 Li T, Shi C, Zhou C, Sun X, Ang Y, Dong X, Huang M, and Zhou G (2020) Purification and characterization of novel antioxidant peptides from duck breast protein hydrolysates. LWT: 109215. Liu Q, Kong B, Xiong YL, Xia X (2010) Antioxidant activity and functional properties of porcine plasma protein hydrolysate as influenced by the degree of hydrolysis. Food Chem 118:403–410 López-Pedrouso M, Borrajo P, Pateiro M, Lorenzo JM, Franco D (2020) Antioxidant activity and peptidomic analysis of porcine liver hydrolysates using alcalase, bromelain, flavourzyme and papain enzymes. Food Res Int 137:109389 Lü X Y, Fan Y B, Gu D, and Cui W(2007) Preparation and characterization of natural hydroxyapatite from animal hard tissues. In, 213–16. Trans Tech Publ Meshginfar N, Sadeghi-Mahoonak A, Ziaiifar AM, Ghorbani M, Kashaninejad M (2014) Study of antioxidant activity of sheep visceral protein hydrolysate: Optimization using response surface methodology. ARYA Atherosclerosis 10:179 Nakamura M, Hiratai R, Hentunen T, Salonen J, Yamashita K (2016) Hydroxyapatite with high carbonate substitutions promotes osteoclast resorption through osteocyte-like cells. ACS Biomater Sci Eng 2:259–267 Nie Y, Hou Q, Bai C, Qian H, Bai X, Ju M (2020) Transformation of carbohydrates to 5-hydroxymethylfurfural with high efficiency by tandem catalysis. J Clean Prod 274:123023 Ou B, Hampsch-Woodill M, Prior RL (2001) Development and validation of an improved oxygen radical absorbance capacity assay using fluorescein as the fluorescent probe. J Agric Food Chem 49:4619–4626 Petrov DS, Danilov AS (2020) Analysis and assessment of the hydrochemical conditions of flooded phosphate rock quarries. Water Ecol 25:63–69 Piccirillo C, Silva MF, Pullar RC, Da Cruz IB, Jorge R, Pintado MME, Castro PML (2013) Extraction and characterisation of apatite-and tricalcium phosphate-based materials from cod fish bones. Mater Sci Eng, C 33:103–110 Piccirillo C, Pullar RC, Tobaldi DM, Castro PML, Pintado MME (2014) Hydroxyapatite and chloroapatite derived from sardine by-products. Ceram Int 40:13231–13240 Re R, Pellegrini N, Proteggente A, Pannala A, Yang M, Rice-Evans C (1999) Antioxidant activity applying an improved ABTS radical cation decolorization assay. Free Radical Biol Med 26:1231–1237 Safavi A, Mohammadi A, Sorouri M (2020) Cobalt-nickel wrapped hydroxyapatite carbon nanotubes as a new catalyst in oxygen evolution reaction in alkaline media. 
Electrocatalysis 11:226–233 Saiga AI, Tanabe S, Nishimura T (2003) Antioxidant activity of peptides obtained from porcine myofibrillar proteins by protease treatment. J Agric Food Chem 51:3661–3667 Santana CA, Piccirillo C, Pereira SIA, Pullar RC, Lima SM, Castro PML (2019) Employment of phosphate solubilising bacteria on fish scales–Turning food waste into an available phosphorus source. J Environ Chem Eng 7:103403 Santos AF, Arim AL, Lopes DV, Gando-Ferreira LM, Quina MJ (2019) Recovery of phosphate from aqueous solutions using calcined eggshell as an eco-friendly adsorbent. J Environ Manage 238:451–459 Sousa P, Borges S, Pintado M (2020) Enzymatic hydrolysis of insect Alphitobius diaperinus towards the development of bioactive peptide hydrolysates. Food Funct 11:3539–3548 Tacias-Pascacio V G, Morellon-Sterling R, Siar E-H, Tavano O, Berenguer-Murcia Á, and Fernandez-Lafuente R (2020) Use of Alcalase in the production of bioactive peptides: A review. Int J Biol Macromol Tadesse SA, Emire SA (2020) Production and processing of antioxidant bioactive peptides: a driving force for the functional food market. Heliyon 6:e04765 Toldrá F, Mora L, Reig M (2016) New insights into meat by-product utilization. Meat Sci 120:54–59 Turon P, Del Valle LJ, Alemán C, Puiggalí J (2017) Biodegradable and biocompatible systems based on hydroxyapatite nanoparticles. Appl Sci 7:60 Verma AK, Chatli MK, Kumar P, Mehta N (2017) Antioxidant and antimicrobial activity of protein hydrolysate extracted from porcine liver. Indian J Anim Sci 87:711–717 Wang JZ, Zhang HAO, Zhang M, Yao WT, Mao XY, Ren FZ (2008) Antioxidant activity of hydrolysates and peptide fractions of porcine plasma albumin and globulin. J Food Biochem 32:693–707 Zou Z, Wei M, Fang J, Dai W, Sun T, Liu Q, Gong G, Liu Y, Song S, Ma F (2019) Preparation of chondroitin sulfates with different molecular weights from bovine nasal cartilage and their antioxidant activities. Int J Biol Macromol 152:1047–1055 This work was supported by National Funds from project MOREPEP (POCI-01–0247-FEDER-017638) funded by Fundo Europeu de Desenvolvimento Regional (FEDER), under Programa Operacional Competitividade e Internacionalização (POCI) and from FCT – Fundação para a Ciência e a Tecnologia through project UIDB/50016/2020. Clara Piccirillo and Francesca Scalera thank Fondazione con il Sud for funding the HApECOrk project (Grant Number 2015–0243). Universidade Católica Portuguesa, CBQF - Centro de Biotecnologia e Química Fina – Laboratório Associado, Escola Superior de Biotecnologia, Rua Diogo Botelho 1327, 4169-005, Porto, Portugal Sandra Borges, José António Couto & Manuela Pintado Institute of Nanotechnology/NANOTEC, National Research Council, Lecce, Italy Clara Piccirillo & Francesca Scalera ETSA, Empresa Transformadora de Subprodutos, Loures, Portugal Rui Martins, Ana Rosa & André Almeida Sandra Borges Clara Piccirillo Francesca Scalera Rui Martins Ana Rosa José António Couto Manuela Pintado SB: methodology, investigation, writing—original draft, preparation. CP: investigation, writing- reviewing and editing. FS: investigation, visualization and data curation. RM: investigation. AR: investigation. JAC: writing—reviewing and editing. AA: conceptualization, resources. MP: conceptualization, project administration, writing—reviewing and editing. All authors read and approved the final manuscript. Correspondence to Sandra Borges. Borges, S., Piccirillo, C., Scalera, F. et al. 
Valorization of porcine by-products: a combined process for protein hydrolysates and hydroxyapatite production. Bioresour. Bioprocess. 9, 30 (2022). https://doi.org/10.1186/s40643-022-00522-6 Porcine by-products Bioactive peptides Enzymatic hydrolysis Natural hydroxyapatite
An improved approach to infer protein-protein interaction based on a hierarchical vector space model Methodology article Jiongmin Zhang1, Ke Jia1, Jinmeng Jia2 & Ying Qian ORCID: orcid.org/0000-0003-4961-58421 Comparing and classifying functions of gene products are important in today's biomedical research. The semantic similarity derived from the Gene Ontology (GO) annotation has been regarded as one of the most widely used indicators for protein interaction. Among the various approaches proposed, those based on the vector space model are relatively simple, but their effectiveness is far from satisfying. We propose a Hierarchical Vector Space Model (HVSM) for computing semantic similarity between different genes or their products, which enhances the basic vector space model by introducing the relation between GO terms. Besides the directly annotated terms, HVSM also takes their ancestors and descendants related by "is_a" and "part_of" relations into account. Moreover, HVSM introduces the concept of a Certainty Factor to calibrate the semantic similarity based on the number of terms annotated to genes. To assess the performance of our method, we applied HVSM to Homo sapiens and Saccharomyces cerevisiae protein-protein interaction datasets. Compared with TCSS, Resnik, and other classic similarity measures, HVSM achieved significant improvement for distinguishing positive from negative protein interactions. We also tested its correlation with sequence, EC, and Pfam similarity using online tool CESSM. HVSM showed an improvement of up to 4% compared to TCSS, 8% compared to IntelliGO, 12% compared to basic VSM, 6% compared to Resnik, 8% compared to Lin, 11% compared to Jiang, 8% compared to Schlicker, and 11% compared to SimGIC using AUC scores. CESSM test showed HVSM was comparable to SimGIC, and superior to all other similarity measures in CESSM as well as TCSS. Supplementary information and the software are available at https://github.com/kejia1215/HVSM. The Gene Ontology (GO) [1] is a widely used vocabulary system in bioinformatics, which systematically describes the functional relations between different genes or their products. The GO consists of three independent ontologies: biological process (BP), cellular component (CC), and molecular function (MF). Each ontology is structured as a Directed Acyclic Graph (DAG), in which GO terms form the nodes, and the relations between the GO terms form the edges. In the DAG, GO terms are connected by different hierarchical relations (mostly is_a and part_of relations). The is_a relation describes the fact that a child term is a specialization of a parent term, while the part_of relation denotes the fact that a child term is a component of a parent term. The term at the lower level (e.g., leaf term) has more specific information than the term at the upper level (e.g., root term). Recently, GO has been widely used in protein function prediction, validation [2, 3] and classification of protein-protein interactions [4, 5], gene expression studies [6] and pathway analysis [7]. Gene products are usually annotated with a set of GO terms. The functional relations between gene products are quantified by using the shared GO terms of gene products [8–10] or explicitly using semantic similarity measures [11]. The semantic similarity measures have been widely used, which generate numerical values describing the likeness between two terms [12]. 
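To make the graph structure described above concrete, the following minimal sketch stores a toy fragment of an ontology as a mapping from each term to its parents, tagged with the relation type, and retrieves ancestors by breadth-first traversal. The term identifiers are invented for illustration; real GO identifiers look like GO:0005737.

```python
from collections import deque

# Toy ontology fragment: each term maps to its parents, tagged with the relation type.
PARENTS = {
    "GO:X3": [("GO:X1", "is_a"), ("GO:X2", "part_of")],
    "GO:X2": [("GO:X1", "is_a")],
    "GO:X1": [],
}

def ancestors(term, relations=("is_a", "part_of")):
    """All ancestors of `term` reachable through the given relation types."""
    seen, queue = set(), deque([term])
    while queue:
        for parent, rel in PARENTS.get(queue.popleft(), []):
            if rel in relations and parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(ancestors("GO:X3"))             # GO:X1 and GO:X2
print(ancestors("GO:X3", ("is_a",)))  # GO:X1 only
```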
In this paper we presented a new method to calculate semantic similarity, the Hierarchical Vector Space Model (HVSM), which enhanced the basic vector space model (VSM) by explicitly introducing the relations between GO terms. When constructing the vector for a gene, in addition to the terms annotated to the gene, HVSM takes their ancestors and descendants into consideration as well. Besides, HVSM considers both "is_a" and "part_of" relations. The introduction of the Certainty Factor to calibrate the similarity value based on the number of annotated terms improves the effectiveness of HVSM further. The simplicity of the algorithm makes it very efficient. We tested HVSM on Homo sapiens and Saccharomyces cerevisiae protein-protein interaction datasets and compared the results with two other vector-based measures, IntelliGO [13] and basic VSM, and the six other popular measures, including TCSS [14], Resnik [15], Lin [16], Jiang [17], Schlicker [18], and SimGIC [19]. The results showed that HVSM outperformed the other eight measures in most cases. HVSM achieved an improvement of up to 4% compared to TCSS, 8% compared to IntelliGO, 12% compared to VSM, 6% compared to Resnik, 8% compared to Lin, 11% compared to Jiang, 8% compared to Schlicker, and 11% compared to SimGIC. The correlation coefficients with protein sequence, EC, and Pfam similarity also showed that HVSM was comparable to SimGIC, and outperformed all other similarity measures in the CESSM test. Different approaches have been proposed to calculate the semantic similarity, such as the vector-based approach, the term-based approach, the set-based approach, and the graph-based approach. The vector-based approach transforms a gene product into a vector, and functional similarity is measured by the similarity of corresponding vectors. The term-based approach calculates semantic similarities from term similarities using various combination strategies. The set-based approach views the set of terms as bags of words. Two gene products are similar if there is a large overlap between the two corresponding sets of terms. The graph-based approach uses graph matching techniques to compute the similarity. In vector-based approaches, the dimension of the vector is equal to the total number of terms in GO. Each dimension corresponds to a term in GO. Each vector component is either 1 or 0, denoting the presence or absence of a term in the set of annotations of a given gene product. The alternative way is to have each dimension represent a certain property of a term (e.g., IC value) [20]. The most common method of measuring similarity between vectors is the cosine similarity: $$\begin{array}{@{}rcl@{}} S_{v}\left(G_{1},G_{2}\right)=\frac{v_{1}\cdot v_{2}}{\left|v_{1}\right|\left|v_{2}\right|} \end{array} $$ where v i represents the vector of the gene product G i , v1·v2 corresponds to the dot product between the two vectors, and |v i | denotes the magnitude of vector v i . Suppose G1 and G2 are two given genes or gene products annotated by two sets of GO terms {t11,t12,⋯,t1n} and {t21,t22,⋯,t2m}. IntelliGO [13], a vector-based method, represented each gene as a vector \(g=\sum _{i}\alpha _{i}e_{i}\), where α i =w(g,t i )IFA(t i ), w(g,t i ) representing the weight assigned to the evidence code between g and t i , IFA(t i ) being the inverse annotation frequency of the term t i , and e i being the i-th basis vector corresponding to the annotation term t i . 
The dot product between two gene vectors was defined as: $$\begin{array}{@{}rcl@{}} g_{1}*g_{2}=\sum_{ij}\alpha_{i}*\beta_{i}*e_{i}*e_{j} \end{array} $$ $$\begin{array}{@{}rcl@{}} e_{i}*e_{j}=\frac{2Depth(LCA)}{MinSPL\left(t_{1i},t_{2j}\right)+2Depth(LCA)} \end{array} $$ where Depth(LCA) was the depth of the lowest common ancestor (LCA) for t1i and t2j, and MinSPL(t1i,t2j) was the length of the shortest path between t1i and t2j, which passed through LCA. The similarity measure for the two genes vectors g1 and g2 was then defined using the cosine formula: $$\begin{array}{@{}rcl@{}} {SIM}_{IntelliGO}\left(g_{1},g_{2}\right)=\frac{g_{1} \cdot g_{2}}{\sqrt{g_{1}*g_{1}}\sqrt{g_{2}*g_{2}}} \end{array} $$ The basic vector-based methods ignore the intrinsic relationship between different terms and treat different terms as independent components, which may lead to the inaccuracy of the semantic similarity. Term-based approaches can be classified into two groups: path-based and IC-based. Path-based approaches, also called edge-based approaches [2, 21–26], use the number of edges or the distance between two terms to quantify the semantic similarity. When more than one path exist between two terms, the shortest path or the average of all paths is usually used. Similar approaches were adapted to the biomedical field [27]. Path-based methods are based on two assumptions: (1) edges and nodes are uniformly distributed [28], and (2) edges at the same level in the ontology correspond to the same semantic distance between terms. However, both of the above assumptions are rarely true. IC-based approaches [14–19, 29–32] use the Information Content (IC) to measure how specific and informative a term is. IC can be quantified by negative log likelihood, − logp(c), where p(c) is the occurrence probability of the term c in a specific corpus, such as the UniProt Knowledge base [12]. The TCSS [14] measure defined a different way to calculate IC, which depended upon the specificity of the term in the graph, shown as: $$\begin{array}{@{}rcl@{}} ICT(t)=-ln\left(\frac{\left|{N(t)}\right|}{\left|{O}\right|}\right) \end{array} $$ where t was a term in the ontology O, |N(t)| was the number of children terms of t, and |O| was the total number of terms in O. The IC value of a term was dependent on its children, and its parents were not considered [15]. Many of the term-based methods are hybrid. They involve both ideas of the path-based and IC-based approaches, so the distinction between the two groups is not clear. Three combination approaches are commonly used in term-based approaches to obtain semantic similarities of gene pairs from term similarities: maximum (MAX), average (AVG) and best-match average (BMA) [18]. Let GO(A) and GO(B) denote the term sets annotated to two proteins A and B. The MAX and the AVG approach are given by the maximum and the average of the similarity between each term in GO(A) and each term in GO(B). The BMA is given by the average similarity between each term in GO(A) and its most similar term in GO(B), averaged with its reciprocal [33]. 
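The following sketch gathers the building blocks introduced so far, the cosine similarity of Eq. (1), the IntelliGO basis-term similarity of Eq. (3) and the TCSS-style information content of Eq. (5), as small stand-alone functions. The vectors, depths, path lengths and term counts used in the example are invented for illustration.

```python
import math

def cosine_similarity(vec_a, vec_b):
    """Eq. (1): cosine similarity between two sparse annotation vectors (dicts term -> weight)."""
    dot = sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def intelligo_basis_similarity(depth_lca, min_spl):
    """Eq. (3): e_i * e_j = 2*Depth(LCA) / (MinSPL(t_i, t_j) + 2*Depth(LCA))."""
    return (2 * depth_lca) / (min_spl + 2 * depth_lca)

def ic_tcss(num_children, total_terms):
    """Eq. (5): ICT(t) = -ln(|N(t)| / |O|)."""
    return -math.log(num_children / total_terms)

# Invented numbers, for illustration only:
g1 = {"GO:A": 1, "GO:B": 1, "GO:C": 1}               # binary vectors of the basic VSM
g2 = {"GO:B": 1, "GO:C": 1, "GO:D": 1}
print(round(cosine_similarity(g1, g2), 3))           # 0.667
print(round(intelligo_basis_similarity(4, 2), 2))    # LCA at depth 4, path length 2 -> 0.8
print(round(ic_tcss(5, 4200), 2))                    # a term with 5 children out of 4200 terms
```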
Set-based approaches use the Tversky ratio model of similarity [34] (a general model of distance) to calculate the similarity between gene products, which is defined as: $$\begin{array}{@{}rcl@{}} \frac{f\left(G_{1}\cap G_{2}\right)}{f\left(G_{1}\cap G_{2}\right)+\alpha*f\left(G_{1}-G_{2}\right)+\beta*f(G_{2}-G_{1})} \end{array} $$ where G1 and G2 are sets of terms annotated to two different gene products from the same ontology and f is an additive function on sets. When α=β=1, we get the Jaccard distance between two sets: $$\begin{array}{@{}rcl@{}} S_{Jaccard}=\frac{f\left(G_{1}\cap G_{2}\right)}{f\left(G_{1}\cup G_{2}\right)} \end{array} $$ When \(\alpha =\beta =\frac {1}{2}\), we have the Dice distance between two sets: $$\begin{array}{@{}rcl@{}} S_{Dice}=\frac{2*f\left(G_{1}\cap G_{2}\right)}{f\left(G_{1}\right)+f\left(G_{2}\right)} \end{array} $$ Set-based approaches assume that the terms are independent of each other. The similarity and dissimilarity of genes are modeled by two sets and their interactions. From Eqs. (7) and (8), we can conclude that the Jaccard and Dice distance return a similarity of 0 if two sets have no shared terms. However, these terms may have a certain relationship in the GO hierarchy. Graph-based approaches make use of graph matching and graph similarity to calculate the similarity between gene products. A gene is modeled by the sets of nodes and edges associated with a sub-graph. The similarity is calculated by quantifying the difference between two sub-graphs. Graph-based methods have three disadvantages: (1) a few measures only takes into account the shared terms in the sub-graphs, ignoring the edge type [35–38]; (2) graph matching have a weak correlation with similarity between terms [39]; (3) graph matching is an NP-complete problem [40]. Mazandu et al. [11] compared fourteen semantic similarity tools based on GO, classified in the context of IC models, term similarity approaches and functional similarity measures. The features and challenges of each approach were analyzed, including the use scope and limitations. Mazandu et al. also described two key reasons for the difficulty in comparison: the dataset issue, where different tools use different version of GO or annotation datasets, and the scaling issue, which results from tools making different assumption regarding normalization methods. The effects of the shared information for the semantic similarity calculation were discussed in [41]. The shared information of a term pair is the common inheritance relations extracted from the structure of the GO graph. Experiments of three different methods calculating the term similarity, each with five shared information methods, were done on three ontologies across six benchmarks. Among the choice of shared information, term similarity algorithm, and ontology type, the choice of ontology type most strongly influenced the performance, and shared information type had the least influence [41]. More and more hybrid approaches were proposed in recent years, such as the algorithm described in [42], which utilized both the topological features of the GO graph and the information contents of the GO terms. Based on the topological structure of the GO graph, the measure [42] identified a number of GO terms as cluster centers according to a specific threshold, and then a membership was calculated for each cluster center and term pair. Semantic similarity scores were obtained by combining the relevant memberships and shared information contents. 
The threshold and the width of the Gaussian membership function were determined for different ontologies and datasets respectively to achieve the best AUC scores, while most of the other methods, including TCSS, used fixed parameter values. Moreover, the normalization method used in [42] depended on the ontology considered. Therefore, the method showed relatively good performance. Machine learning approaches are also emerging for the study of semantic similarity, such as support vector machines (SVM) [43], random forests [44], and the AdaBoost strategy [45]. Among the machine learning techniques, random forest and support vector machine (SVM) are found to achieve the best performance [43]. Methods involving natural language processing have also been reported. w2vGO [46] utilized the Word2vec model to compare the definitions of two GO terms, which did not rely on the GO graph. The results showed that w2vGO was comparable to Resnik [15]. The semantic similarity measure was also extended to gene network analysis. GFD-Net [47] combined the concept of semantic similarity with the use of gene network topology to analyze the functional dissimilarity of gene networks based on GO. It was used in gene network validation to prove its effectiveness. We propose the HVSM algorithm, which is based on the Vector Space Model, to calculate the semantic similarity between genes. Similar to basic VSM approaches, HVSM maps each gene into a vector, and the semantic similarity between two genes is obtained by calculating the similarity between the two corresponding vectors. The key improvement of HVSM over basic VSM lies in the refinement of the vector generation. When transforming the set of terms annotated to a gene into a vector, HVSM considers the relations between terms in the hierarchical structure of the GO graph. HVSM takes into account not only each directly annotated GO term, but also their ancestors and descendants, which are related by "is_a" and "part_of" relations. Thus, vectors in HVSM represent the attributes of genes more comprehensively and accurately, compared with basic VSM. Figure 1 shows the main procedure of HVSM, which consists of four stages: (1) initialize the vectors, with each vector component binary valued, 1 representing the presence of the GO term in the gene's annotation and 0 representing its absence; (2) find out the parents and children of the directly annotated terms via "is_a" relations and then modify the vector accordingly; (3) find out the parents and children of the directly annotated terms via "part_of" relations and then modify the vector accordingly; (4) calculate the similarity between vectors, enhanced with the certainty factor. The main process of HVSM In stage 1, each gene has a set of directly annotated terms and each element in the set denotes a functional aspect of the gene. The dimension of the vectors generated by vector-based methods, including HVSM, equals the total number of terms in GO, with each dimension corresponding to a specific term in GO. Each component value of the vector represents the relative degree of the contribution of the corresponding term. Thus, the vector generated for a gene represents the function distribution of the gene. Let n be the dimension of the vector. The vector g for a given gene G can be denoted as \(g=\left (t_{1}^{G},t_{2}^{G},\cdots,t_{n}^{G}\right)\), where \(t_{i}^{G}\) has a value between 0 and 1, which reflects the relevance of term i to gene G. The main steps of stage 2 of HVSM are described in detail as follows: Deal with parents.
For the directly annotated terms, their parents are considered individually. For each parent, if the value of the component corresponding to a parent is 0, we add the value w parent ∗wis_a to it, where w parent and \(w_{is\_a}\) are the semantic contribution factors for parent terms and "is_a" relation, respectively. If the value of the component corresponding to a parent is equal to 1, the value remains unchanged. If it is between 0 and 1, we add w incre ∗wis_a to it, where w incre is the increment factor for shared nodes. The modified value of the component should not be larger than 1. Let \(t_{i}^{G^{\prime }}\) be the value of ith component corresponding to a parent. The value after modification, \(t_{i}^{G}\), is expressed as: $$ {}t_{i}^{G}= \left\{ \begin{array}{ll} w_{parent}*w_{is\_a}& {t_{i}^{G^{\prime}}=0;}\\ min\left(1,{t_{i}^{G^{\prime}}}+{w_{incre}*w_{is\_a}}\right)& {t_{i}^{G^{\prime}}\neq0;} \end{array}\right. $$ Deal with grandparents. The grandparent terms are considered with a similar strategy to that used in step i for the parent terms. We introduce \(w_{r\_g}\), which is the ratio of contribution factor for grandparents. The value of ith component corresponding to a grandparent after modification, \(t_{i}^{G}\), is expressed as: $$ {}t_{i}^{G}= \left\{ \begin{array}{ll} w_{r\_g}*w_{parent}*w_{is\_a}& {t_{i}^{G^{\prime}}=0;}\\ min\left(1,t_{i}^{G^{\prime}}+w_{r\_g}*w_{incre}*w_{is\_a}\right)& {t_{i}^{G^{\prime}}\neq0;} \end{array}\right. $$ The more distant from the directly annotated terms, the less relevant the terms are. Therefore, HVSM only considers parent and grandparent terms upward. Deal with children. Only common descendant terms of two or more directly annotated terms are considered, because descendants are less relevant than parents. We use a similar strategy as in step i, replacing parameter w parent with w child , which corresponds to the semantic contribution factor for child terms. The value of ith component corresponding to a child after modification, \(t_{i}^{G}\), is expressed as: $$ {} t_{i}^{G}\,=\, \left\{ \begin{array}{ll} w_{child}*w_{is\_a}& {t_{i}^{G^{\prime}}=0;}\\ min\left(1,t_{i}^{G^{\prime}}+w_{child}*w_{incre}*w_{is\_a}\right)& {t_{i}^{G^{\prime}}\neq0;} \end{array}\right. $$ Deal with grandchildren. A similar strategy as in step iii is used to process the grandchildren. The value of ith component corresponding to a grandchild after modification, \(t_{i}^{G}\), is expressed as: $$ t_{i}^{G}= \left\{\begin{array}{ll} w_{r\_g}*w_{child}*w_{is\_a}& {t_{i}^{G^{\prime}}=0;}\\ min\left(1,t_{i}^{G^{\prime}}+w_{r\_g}*w_{child}*w_{incre}*w_{is\_a}\right)& {t_{i}^{G^{\prime}}\neq0;} \end{array}\right. $$ Stage 3 is similar to stage 2, while the "part_of" relation is considered and \(w_{is\_a}\) is replaced by \(w_{part\_of}\), where \(w_{part\_of}\) corresponds to the semantic contribution factor for the "part_of" relation. There are no "part_of" relations existing in the molecular function ontology. The semantic contribution of the "part_of" relation is lower than the "is_a" relation [26]. From an intuitive point of view, the parent terms of the directly annotated terms are more relevant than the children terms, parents are more relevant than grandparents, and children are more relevant than grandchildren. This is why \(w_{is\_a}>w_{part\_of}\), w parent >w child , and \(w_{r\_g}<1\). It is quite complicated to find the optimal combination of all coefficients, for all ontologies and datasets. 
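A minimal sketch of the stage 2 update rules of Eqs. (9)-(12) is given below. The coefficient values are placeholders chosen only to respect the constraints stated above (w_is_a > w_part_of, w_parent > w_child, w_r_g < 1); the values actually used are listed in Table 1, and the routine that collects parents, grandparents and shared descendants from the GO graph is assumed to be available elsewhere. Stage 3 would reuse the same function with the "part_of" relation weight.

```python
# Placeholder contribution factors (the values actually used are listed in Table 1):
W_IS_A, W_PART_OF = 0.8, 0.6      # relation weights, with w_is_a > w_part_of
W_PARENT, W_CHILD = 0.5, 0.3      # w_parent > w_child
W_INCRE, W_RG = 0.2, 0.5          # increment factor and grandparent/grandchild ratio (< 1)

def bump(vector, term, fresh_value, increment):
    """Eqs. (9)-(12): set an empty component to fresh_value, otherwise add increment; cap at 1."""
    current = vector.get(term, 0.0)
    vector[term] = min(1.0, fresh_value if current == 0.0 else current + increment)

def apply_stage(vector, parents, grandparents, shared_children, shared_grandchildren, w_rel):
    """Apply one relation type (w_rel = W_IS_A for stage 2, W_PART_OF for stage 3)."""
    for t in parents:                                                   # Eq. (9)
        bump(vector, t, W_PARENT * w_rel, W_INCRE * w_rel)
    for t in grandparents:                                              # Eq. (10)
        bump(vector, t, W_RG * W_PARENT * w_rel, W_RG * W_INCRE * w_rel)
    for t in shared_children:                                           # Eq. (11)
        bump(vector, t, W_CHILD * w_rel, W_CHILD * W_INCRE * w_rel)
    for t in shared_grandchildren:                                      # Eq. (12)
        bump(vector, t, W_RG * W_CHILD * w_rel, W_RG * W_CHILD * W_INCRE * w_rel)

# Stage 1 sets the directly annotated terms to 1 (term IDs invented):
vec = {"GO:X9": 1.0}
apply_stage(vec, parents=["GO:X5"], grandparents=["GO:X2"],
            shared_children=[], shared_grandchildren=[], w_rel=W_IS_A)
print(vec)   # {'GO:X9': 1.0, 'GO:X5': 0.4, 'GO:X2': 0.2}
```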
It is quite complicated to find the optimal combination of all coefficients for all ontologies and datasets. In particular, the optimal parameters for one ontology may not be optimal for another ontology. We performed a series of experiments with different coefficient values on the H. sapiens and S. cerevisiae PPI datasets. One of the experiments was done with different values of \(w_{parent}\). The results are shown in Fig. 2. \(w_{parent}=0.5\) was found to give the most consistent AUC scores across the three ontologies. The set of parameters used in HVSM is the result of trade-offs over all PPI experiments, as shown in Table 1. AUC scores for three ontologies over \(w_{parent}\) Table 1 Parameters The similarity measure calculated by VSM is relatively small. Thus, we introduce the concept of a Certainty Factor to calibrate the similarity based on the number of terms annotated to genes. The certainty factor is defined as: $$\begin{array}{@{}rcl@{}} \lambda=\ln(S_{1}+S_{2}) \end{array} $$ where \(S_{i}\) represents the total number of terms annotated to gene \(G_{i}\). Finally, the similarity between two vectors is defined as: $$\begin{array}{@{}rcl@{}} S_{v}(G_{1},G_{2})=\frac{\lambda(v_{1}\cdot{v_{2}})}{\left|v_{1}\right|\left|v_{2}\right|} \end{array} $$ Because the number of terms associated with genes is very limited, the vectors generated by HVSM are usually quite sparse. When calculating the similarity between two vectors, we remove all the common zero dimensions of the two vectors to improve the execution performance of the algorithm. A simple example is provided to illustrate the computation process of HVSM, as shown in Fig. 3. The part of the GO topology from the CC ontology relevant to the example is shown in Fig. 4. An example illustrating the algorithm. Gene1: S000000313 is annotated to 5 terms and Gene2: S000000825 is annotated to 1 term, which are written in black. Parent and grandparent terms via "is_a" relations are in red and orange, respectively. The terms in green are related via the "part_of" relation. Gene1 and Gene2 happen to have no common descendants. The steps in stage 3 are not shown because they are almost the same as those in stage 2. The vector components on a green background are the ones changed in the steps. The similarity by VSM can be calculated directly from the two vectors in stage 1, which is 0. However, the gene pair is labeled as positive in the yeast dataset. The similarity obtained by HVSM is 0.23. The partial GO topology relevant to the example. Solid lines indicate the "is_a" relation, and dotted lines indicate the "part_of" relation. The term annotated to Gene2 is on a blue background. It is known that comparing the performance of semantic similarity analysis in GO is difficult, because most of the measures use different datasets and different versions of the ontologies [11, 48]. We used Homo sapiens and Saccharomyces cerevisiae positive and negative protein interaction sets to evaluate HVSM as a classifier to distinguish positive and negative interactions. We also used the Collaborative Evaluation of GO-based Semantic Similarity Measures (CESSM) online tool to compare HVSM to existing measures based on their correlation with sequence, Pfam, and Enzyme Classification similarity. We adopted the same Homo sapiens and Saccharomyces cerevisiae PPI datasets and GO annotation file used in Jain et al. [14]. The ontology data used in our experiments were downloaded from the Gene Ontology database (released in September 2016). The GO contains 29969 BP terms, 4200 CC terms and 11295 MF terms. Gene annotations for GO terms were downloaded from the Gene Ontology database for H. sapiens (dated August 2010) [49] and S.
cerevisiae (dated February 2010) [50]. The positive and negative protein-protein interaction datasets for H. sapiens and S. cerevisiae were created as follows. Homo sapiens: 2077 unique pairwise PPIs (with three or more publications) for Homo sapiens were retrieved from the core set of Database of Interacting Proteins (DIP) (dated June 2010) [51]. The DIP core database records data derived from both small-scale and large-scale experiments that have been validated by the occurrence of the interaction between paralogous proteins in different species [14]. The positive dataset for CC, BP, and MF ontologies comprised interactions with both proteins annotated to terms (other than root) in their respective ontologies. The negative interaction dataset contained an equal number of randomly selected interactions from a pool of all possible interactions in human except for those known to be positive in a set of all known (43,935) human PPIs from iRefWeb [52]. iRefWeb was a meta-database containing the ten largest primary PPI databases [52]. Saccharomyces cerevisiae: 4598 unique pairwise Saccharomyces cerevisiae PPIs were retrieved from DIP (dated December 2009). The positive dataset for CC, BP, and MF ontologies comprised interactions with both proteins annotated to terms (other than root) in their respective ontologies. The negative dataset with the same number of PPIs as the positive set was generated by randomly selecting proteins from genes in the GO annotation files that are not known to be positive in a set of all known (45,448) yeast PPIs from iRefWeb. When calculating the similarity on the dataset chosen above at the IntelliGO website (http://plateforme-mbi.loria.fr/intelligo/), we encountered two problems: (1) the corresponding geneid of certain genes from the dataset can not be found in NCBI; (2) a few errors were reported for some gene pairs. To compare the methods fairly, we tested all measures on two sets of data: Use the complete PPI dataset provided in [14]. When the two problems described above occurred, we adopted the processing method used in the HRSS algorithm [53]. When the first problem occurred, the similarity of the gene pair under consideration was set to − 1. When the second problem occurred, the similarity was set to − 2. Use the partial dataset, which means removing the potentially problematic gene data. The negative and positive data distributions of the dataset including or excluding potentially problematic genes are shown in Table 2. The ratio of potentially problematic genes is shown in Tables 3 and 4. Table 2 Negative and positive data distribution before and after the removal Table 3 Ratio of removed data in the H. sapiens dataset Table 4 Ratio of removed data in the S. cerevisiae dataset Note that more than half of the negative S. cerevisiae data have problems. When conducting experiments on the complete dataset, we set the similarity of the gene pairs with problems to either − 1 or − 2. Therefore, the experiment results on the complete S. cerevisiae dataset may be unreliable. We used the ROC (Receiver Operating Characteristic) curve to evaluate the classification effects of HVSM and other measures for PPI experiments. The ROC curve illustrates the diagnostic ability of a classifier system. The ROC curves are created by plotting TPR (true positive rate) against FPR (false positive rate). 
TPR and FPR are defined as: $$\begin{array}{@{}rcl@{}} TPR=\frac{TP}{(TP+FN)} \end{array} $$ $$\begin{array}{@{}rcl@{}} FPR=\frac{FP}{(FP+TN)} \end{array} $$ where TP, TN, FP, and FN are the numbers of True Positives, True Negatives, False Positives, and False Negatives, respectively. The ideal ROC curve is close to the upper left corner: the closer the ROC curve is to the upper left corner, the more accurate the classifier is. Ideally, the area under the ROC curve (AUC) is equal to 1. Therefore, the larger the AUC, the better the classifier. The AUC is computed with the trapezoidal rule: $$\begin{array}{@{}rcl@{}} AUC=\frac{1}{2}\sum_{k=1}^{n}\left(X_{k}-X_{k-1}\right)\left(Y_{k}+Y_{k-1}\right) \end{array} $$ where \(X_{k}\) is the FPR and \(Y_{k}\) is the TPR at the kth threshold.
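Putting the scoring and the evaluation together, a small self-contained sketch could look like the following (Python; illustrative only and not the authors' implementation; ties between scores are handled naively, and the positive and negative sets are assumed to be non-empty).

import math

def hvsm_similarity(v1, v2, s1, s2):
    # lambda-scaled cosine similarity between two gene vectors (the two equations above);
    # s1 and s2 are the numbers of terms annotated to the two genes.
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return math.log(s1 + s2) * dot / norm if norm else 0.0

def roc_auc(scores, labels):
    # Sweep the decision threshold over the scores (highest first) and accumulate
    # the area under the TPR-vs-FPR curve with the trapezoidal rule.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    pos = sum(labels)                  # labels: 1 for a positive interaction, 0 otherwise
    neg = len(labels) - pos
    tp = fp = 0
    auc = prev_fpr = prev_tpr = 0.0
    for i in order:
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        tpr, fpr = tp / pos, fp / neg
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2.0
        prev_fpr, prev_tpr = fpr, tpr
    return auc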
To test how our method performs in another application scenario, we tested its correlation using the Collaborative Evaluation of GO-based Semantic Similarity Measures (CESSM). CESSM is an online tool [54] that provides a convenient way to compare a specific measure against 11 previously published measures based on their correlation with sequence, Pfam, and Enzyme Classification (EC) similarity. A dataset of 13,430 protein pairs was used, involving 1039 unique proteins from various species. Protein pairs (from multiple species), GO (dated August 2010), and UniProt GO annotations (dated August 2008) were downloaded from CESSM. The similarities for the 13,430 protein pairs were calculated with HVSM and returned to CESSM for evaluation. PPI tests We compared HVSM with the other popular semantic similarity measures, including TCSS [14], IntelliGO [13], basic VSM, Resnik [15], Lin [16], Jiang [17], Schlicker [18], and SimGIC [19], focusing on TCSS. TCSS is widely used and has been proven to be effective [14], and Resnik is a classic measure. IntelliGO and basic VSM are both vector-based, the same as HVSM. The results for the H. sapiens and S. cerevisiae PPI datasets are shown in Tables 5 and 6. The experimental results show that the performance of HVSM is improved by up to 12% compared to VSM, 8% compared to IntelliGO, 4% compared to TCSS, 6% compared to Resnik, 8% compared to Lin, 11% compared to Jiang, 8% compared to Schlicker, and 11% compared to SimGIC. Note that the percentage numbers in red in Table 6 were obtained on the unreliable dataset, as mentioned previously. Table 5 Improvement of HVSM compared with VSM, IntelliGO, TCSS, Resnik, Lin, Jiang, Schlicker, and SimGIC on the H. sapiens PPI datasets Table 6 Improvement of HVSM compared with VSM, IntelliGO, TCSS, Resnik, Lin, Jiang, Schlicker, and SimGIC on the S. cerevisiae PPI datasets Homo sapiens PPI test We evaluated the ability of HVSM, TCSS, IntelliGO, VSM, and the other five methods to distinguish between negative and positive interactions using the H. sapiens positive and negative protein interaction sets. Both BMA and MAX approaches were applied to compare with the other measures in [14], and MAX was found to have better performance. Therefore, we only compared HVSM with the TCSS MAX approach. TCSS focused on manually annotated GO annotations ("without" annotations with IEA evidence codes (IEA-)), but it was also tested with all annotations, including electronic annotations ("with" annotations with IEA evidence codes (IEA+)). TCSS worked better with (IEA+) than with (IEA-). Therefore, we only present comparison results with (IEA+). Tests were done for the CC, BP, and MF ontologies. The AUC scores for the three ontologies are shown in Table 7. HVSM outperforms all other measures in all cases. HVSM performs best for the MF ontology on the partial dataset, with an improvement of 4% compared to TCSS, 8% compared to IntelliGO, 9% compared to VSM, 6% compared to Resnik, 5% compared to Lin, 5% compared to Jiang, 8% compared to Schlicker, and 8% compared to SimGIC. No significant performance difference between the complete dataset and the partial dataset is observed for the nine measures. The ROC curves are shown in Figs. 5 and 6. ROC curve on the H. sapiens PPI dataset (Complete dataset). a Cellular Component, b Biological Process, c Molecular Function ROC curve on the H. sapiens PPI dataset (Partial dataset). a Cellular Component, b Biological Process, c Molecular Function Table 7 Area under ROC curves (AUCs) on the H. sapiens PPI dataset Saccharomyces cerevisiae PPI test We applied all nine methods to the Saccharomyces cerevisiae PPI datasets. The AUC scores for the three ontologies are shown in Table 8. Note that only IntelliGO is sensitive to the problematic dataset: its performance on the complete dataset is much better than on the partial dataset, as shown in Table 8. If we exclude the unreliable IntelliGO results (numbers in red), HVSM performs best for the CC and BP ontologies. For the MF ontology, HVSM performs only 1% lower than TCSS, is on par with Resnik, and is better than VSM and the other five measures. The ROC curves are shown in Figs. 7 and 8. ROC curve on the S. cerevisiae PPI dataset (Complete dataset). a Cellular Component, b Biological Process, c Molecular Function ROC curve on the S. cerevisiae PPI dataset (Partial dataset). a Cellular Component, b Biological Process, c Molecular Function Table 8 Area under ROC curves (AUCs) on the S. cerevisiae PPI dataset CESSM test The HVSM measure was used to calculate similarities for the benchmark set of protein pairs downloaded from the CESSM website [54]. The benchmark set represents three different types of similarity, based on sequence similarity, Enzyme Classification (EC), and protein domains (Pfam). We compared HVSM with TCSS, our main point of comparison, and four other measures provided by CESSM: Resnik, Lin, Jiang, and SimGIC. The MAX approach was selected for Resnik, Lin, and Jiang. The results obtained (correlation coefficients) are presented in Table 9. HVSM is superior to all measures except SimGIC. The HVSM correlation coefficient for the EC dataset is higher than that of all other methods. For the Pfam dataset, HVSM is comparable to SimGIC. For the sequence dataset, the value obtained with HVSM is beaten by SimGIC but better than all other measures. One cause for this could be that SimGIC rewards gene products with shared annotation terms; since gene products annotated to the same terms are more likely to belong to the same gene family, SimGIC tends to show high sequence similarity [14]. HVSM performs better for the CC and BP ontologies than for the MF ontology. For CESSM, we fine-tuned the parameters based on the values in Table 1 by adjusting \(w_{child}\) to 0.05. Table 9 Results obtained with CESSM Our experiments showed that the results with the confidence factor were significantly better than those without it. It can be proved that the relative values of the similarities of pairs of genes are not affected by the base of the logarithm in equation (13), as long as the base is greater than 1. In other words, the base of the logarithm does not change the order of the similarity ranking. Hence, the base of the logarithm of the confidence factor does not affect the ROC analysis results.
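That base-invariance argument follows directly from the change-of-base identity. Writing \(S_v^{(b)}\) for the similarity computed with a logarithm of base \(b\) (this notation is ours), we have $$ \log_b(S_1+S_2)=\frac{\ln(S_1+S_2)}{\ln b} \quad\Rightarrow\quad S_v^{(b)}(G_1,G_2)=\frac{1}{\ln b}\,S_v(G_1,G_2), $$ so for any base \(b>1\) every pairwise similarity is rescaled by the same positive constant \(1/\ln b\), leaving the ranking, and hence the ROC curve, unchanged.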
Since the multiplication of the confidence factor may cause the similarity values calculated by HVSM to be greater than 1, a single similarity value could not be used directly. This problem does not affect the effectiveness of HVSM as a classifier to distinguish positive and negative interactions. In any case, a proper normalization method needs to be investigated in the future. The coefficients used in HVSM, such as \(w_{is\_a} w_{part\_of} w_{parent}\) and w child , were decided by the intuitive speculation and the experiments on the H. sapiens and S. cerevisiae PPI datasets. We have tried to look for the optimal combination of the five coefficients for all datasets and ontologies. Right now they are the results of approximate trade-offs and may not be the best answer. More experiments and datasets should be tested. The alternative way is to find different combinations for different ontologies or datasets. We presented a new method to calculate semantic similarity, the Hierarchical Vector Space Model, which enhanced the basic vector space model by introducing the relations between GO terms. When constructing the gene vector, we took into account the terms related by two types of relations: "is_a" and "part_of". Moreover, HVSM introduced the concept of the Certainty Factor to calibrate the similarity based on the number of annotated terms. To assess the effectiveness of HVSM, we performed experiments using H. sapiens and S. cerevisiae protein-protein interaction datasets, and compared the results with TCSS, IntelliGO, basic VSM, Resnik, Lin, Jiang, Schlicker, and SimGIC measures. The results showed that HVSM outperformed the other eight measures in most cases. HVSM achieved an improvement of up to 4% compared to TCSS, 8% compared to IntelliGO and 12% compared to VSM, 6% compared to Resnik, 8% compared to Lin, 11% compared to Jiang, 8% compared to Schlicker, and 11% compared to SimGIC. We also tested the correlation between multiple semantic similarity scoring methods with sequence, EC, and Pfam similarity with CESSM. The results showed that HVSM was a comparable measure relative to SimGIC, and outperformed all other similarity measures in CESSM as well as TCSS. CESSM: Collaborative Evaluation of GO-based Semantic Similarity Measures Directed Acyclic Graph EC: Enzyme Classification FN: False Negative FP: FPR: False Positive Rate HVSM: Hierarchical Vector Space Model Information Content LCA: Lowest Common Ancestor TN: True Negative TP: True Positive TPR: True Positive Rate VSM: Vector Space Model Michael A, Catherine AB, Judith AB, David B, Heather B, J. Michael C, Allan PD, Kara D, Selina SD, Janan TE, Midori AH, David PH, Laurie IT, Andrew K, Suzanna L, John CM, Joel ER, Martin R, Gerald MR, Gavin S. Gene ontology: tool for the unification of biology. Nat Genet. 2000; 25:25–9. Wu X, Zhu L, Guo J, Zhang DK, Lin K. Prediction of yeast protein-protein interaction network: insights from the gene ontology and annotations. Nucleic Acids Res. 2006; 34:2137–50. Stelzl U, Worm U, Lalowski M, Haenig C, Brembeck FH, Goehler H, Stroedicke M, Zenkner M, Schoenherr A, Koeppen S, Timm J, Mintzlaff S, Abraham C, Bock N, Kietzmann S, Goedde A, Toksöz E., Droege A, Krobitsch S, Korn B, Birchmeier W, Lehrach H, Wanker EE. A human protein-protein interaction network: a resource for annotating the proteome. Cell. 2005; 122:957–68. Yu J, Yang H. A draft sequence of the rice genome (oryza sativa l. ssp. indica). Science. 2002; 296:1937–42. Sequencing C, Consortium A. 
Initial sequence of the chimpanzee genome and comparison with the human genome. Nature. 2005; 437:69–87. Khatri P, Drăghici S. Ontological analysis of gene expression data: current tools, limitations, and open problems. Bioinformatics. 2005; 21:3587–95. Shen R, Chinnaiyan AM, Ghosh D. Pathway analysis reveals functional convergence of gene expression profiles in breast cancer. BMC Med Genomics. 2008; 1:28. Jansen R, Yu H, Greenbaum D, Kluger Y, Krogan NJ, Chung S, Emili A, Snyder M, Greenblatt JF, Gerstein M. A bayesian networks approach for predicting protein-protein interactions from genomic data. Science. 2003; 302:449–53. David M, Christine B, Elisabeth R, Pierre M, Denis T, Bernard J. Gotoolbox: functional analysis of gene datasets based on gene ontology. Genome Biol. 2004; 5:101. Rhodes DR, Tomlins SA, Varambally S, Mahavisno V, Barrette T, Kalyana-Sundaram S, Ghosh D, Pandey A, Chinnaiyan AM. Probabilistic model of the human protein-protein interaction network. Nat Biotechnol. 2005; 23:951–9. Mazandu GK, Chimusa ER, Mulder NJ. Gene ontology semantic similarity tools: survey on features and challenges for biological knowledge discovery.Brief Bioinform. 2016; 18:1–16. Catia P, Daniel F, Andre FO, Phillip L, Francisco MC. Semantic similarity in biomedical ontologies. Plos Comput Biol. 2009; 5:1000443. Sidahmed B, Malika ST, Olivier P, Amedeo N, Marie-Dominique D. Intelligo: a new vector-based semantic similarity measure including annotation origin. BMC Bioinformatics. 2010; 11:588. Jain S, Bader GD. An improved method for scoring protein-protein interactions using semantic similarity within the gene ontology. BMC Bioinformatics. 2010; 11:562. Resnik P. Using information content to evaluate semantic similarity in a taxonomy. In: Proceedings of the 14th International Joint Conference on Artificial Intelligence San Francisco. CA, USA: Morgan Kaufmann Publishers Inc: 1995. p. 448–453. Lin D. An information-theoretic definition of similarity. In: Proceedings of the 15th International Conference on Machine Learning Morgan Kaufmann. Morgan Kaufmann: 1998. p. 296–304. Jiang JJ, Conrath DW. Semantic similarity based on corpus statistics and lexical taxonomy. Int Conf Res Comput Linguist (ROCLING X). 1999;9008. Schlicker A, Domingues FS, Rahnenführer J., Lengauer T. A new measure for functional similarity of gene products based on gene ontology. BMC Bioinformatics. 2006; 7:302. Pesquita C, Faria D, Bastos H, Falco A, Couto FM. Evaluating go-based semantic similarity measures. Ismb/eccb Sig Meet Program Mater Iscb. 2007; 37:37–40. Chabalier J, Mosser J, Burgun A. A transversal approach to predict gene product networks from ontology-based similarity. BMC Bioinformatics. 2007; 1:235. Pozo AD, Pazos F, Valencia A. Defining functional distances over gene ontology. BMC Bioinformatics. 2008; 9:50. Wu H, Su Z, Mao F, Olman V, Xu Y. Prediction of functional modules based on comparative genome analysis and gene ontology application. Nucleic Acids Res. 2005; 33:2822–37. Cheng J, Cline M, Martin J, Finkelstein D, Awad T, Kulp D, Siani-Rose MA. A knowledge based clustering algorithm driven by gene ontology. J Biopharm Stat. 2004; 14:687–700. Yu H, Gao L, Tu K, Guo Z. Broadly predicting specific gene functions with expression similarity and taxonomy similarity. Gene. 2005; 352:75–81. Pekar V, Staab S. Taxonomy Learning: Factoring the Structure of a Taxonomy Into a Semantic Classification Decision. In: Proceedings of the 19th international conference on Computational linguistics. 
Morristown: Association for Computational Linguistics: 2002. Wang JZ, Du Z, Payattakool R, Yu PS, Chen CF. A new method to measure the semantic similarity of go terms. Bioinformatics. 2007; 23:1274–81. Batet M, Sánchez D., Valls A. An ontology-based measure to compute semantic similarity in biomedicine. J Biomed Inform. 2011; 44(1):118–25. Budanitsky A. Lexical semantic relatedness and its application in natural language processing. 1999. http://www.cs.toronto.edu/pub/gh/Budanitsky-99.pdf. Seco N, Veale T, Hayes J. An intrinsic information content metric for semantic similarity in wordnet. In: Eureopean Conference on Artificial Intelligence, Ecai'2004, Including Prestigious Applicants of Intelligent Systems, Pais 2004, Valencia, Spain, August. Amsterdam: IOS Press: 2004. p. 1089–90. Couto FM, Coutinho PM. Semantic similarity over the gene ontology: family correlation and selecting disjunctive ancestors. In: ACM CIKM International Conference on Information and Knowledge Management. New York: ACM: 2005. p. 343–344. Budanitsky A, Hirst G. Semantic distance in wordnet: An experimental, application-oriented evaluation of five measures. In: The Workshop on Wordnet & Other Lexical Resources: 2001. Ehsani R, Drabløs F.Topoicsim: a new semantic similarity measure based on gene ontology. BMC Bioinformatics. 2016; 17:296. Pesquita C, Faria D, Bastos H, Ferreira A, Falcão AO, Couto FM. Metrics for go based protein semantic similarity: a systematic evaluation. BMC Bioinformatics. 2008; 9(5):4. Cross V. Tversky's parameterized similarity ratio model: A basis for semantic relatedness. In: Fuzzy Information Processing Society, 2006. Nafips 2006 Meeting of the North American. IEEE: 2006. p. 541–546. Lee HK, Hsu AK, Sajdak J, Qin J, Pavlidis P. Coexpression analysis of human genes across many microarray data sets. Genome Res. 2004; 14:1085–94. Mistry M, Pavlidis P. Gene ontology term overlap as a measure of gene functional similarity. BMC Bioinformatics. 2008; 9:327. Gentleman R. Visualizing and distances using go. 2010. https://www.bioconductor.org/packages/devel/bioc/vignettes/GOstats/inst/doc/GOvis.pdf. Sheehan B, Quigley A, Gaudin B, Dobson S. A relation based measure of semantic similarity for gene ontology annotations. 2008; 9:468. Torsello A, Hidovic D, Pelillo M. Four metrics for efficiently comparing attributed trees. 2004; 2:467–70. Bible PW, Sun HW, Morasso MI, Loganantharaj R, Wei L. The effects of shared information on semantic calculations in the gene ontology. Comput Struct Biotechnol J. 2017; 15:195. Dutta P, Basu S, Kundu M. Assessment of semantic similarity between proteins using information content and topological properties of the gene ontology graph. IEEE/ACM Trans Comput Biol Bioinforma. 2017. Zhang SB, Tang QR. Protein-protein interaction inference based on semantic similarity of gene ontology terms. J Theor Biol. 2016; 401:30–7. Huang Q, You Z, Zhang X, Yong Z. Prediction of protein protein interactions with clustered amino acids and weighted sparse representation. Int J Mol Sci. 2015; 16(5):10855–69. Mei S, Zhu H. Adaboost based multi-instance transfer learning for predicting proteome-wide interactions between salmonella and human proteins. PLoS One. 2014; 9:110488. Duong D, Eskin E, Li J. A novel word2vec based tool to estimate semantic similarity of genes by using gene ontology terms. bioRxiv. 2017. Diaz-Montana JJ, Diaz-Diaz N, Gomez-Vela F. Gfd-net a novel semantic similarity methodology for the analysis of gene networks. J Biomed Inform. 2017; 68:71–82. 
Guzzi PH, Mina M, Guerra C, Cannataro M. Semantic similarity analysis of protein data: assessment with biological features and issues. Brief Bioinformatics. 2012; 13:569–85. Consortium U. The universal protein resource (uniprot) in 2010. Nucleic Acids Res. 2010; 38 Database:142–8. Saccharomyces Genome Database. http://downloads.yeastgenome.org. Xenarios I, Rice D, Salwinski L, Baron M, Marcotte E, Eisenberg D. Dip: the database of interacting proteins. Nucleic Acids Res. 2000; 28:289. Razick S, Magklaras G, Donaldson I. irefindex: a consolidated protein interaction database with provenance. BMC bioinformatics. 2008; 9:405. Wu X, Pang E, Lin K, Pei ZM. Improving the measurement of semantic similarity between gene ontology terms and gene products: insights from an edge- and ic-based hybrid method. Plos One. 2013; 8:66745. The Collaborative Evaluation of Semantic Similarity Measures tool. http://xldb.di.fc.ul.pt/tools/cessm/. Accessed 30 Jan 2018. The authors thank [14] for sharing the PPI dataset. This work is supported by the China Human Proteome Project (grant no. 2014DFB30030). Project name: Hierarchical Vector Space Model (HVSM) Home page: https://github.com/kejia1215/HVSM Operating systems: Unix/Linux/Windows Programming language: Java Other requirements: Maven (3.3.9 or later), JDK (1.8.0 or later) Any restrictions to use by non-academics: no The datasets analyzed during the current study are available in the GitHub repository, https://github.com/kejia1215/HVSM/tree/master/datasets. Department of Computer Science & Technology, East China Normal University, North Zhongshan Road, Shanghai, 200062, China Jiongmin Zhang, Ke Jia & Ying Qian School of life science, East China Normal University, Dongchuan Road, Shanghai, 200241, China Jinmeng Jia Jiongmin Zhang Ke Jia Ying Qian JMZ and YQ supervised and provided input on all aspects of the study. KJ designed the method and carried out all programming work. JMJ provided helpful information from the perspective of biology. JMZ,YQ and KJ discussed the results and wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Ying Qian. Zhang, J., Jia, K., Jia, J. et al. An improved approach to infer protein-protein interaction based on a hierarchical vector space model. BMC Bioinformatics 19, 161 (2018). https://doi.org/10.1186/s12859-018-2152-z Received: 15 February 2017 Accepted: 09 April 2018 DOI: https://doi.org/10.1186/s12859-018-2152-z Protein-protein interaction Functional similarity
brisbanetimes.com.au/nation … 4z8s3.html
Uh-oh - it's too late to leave the party, well, it is when you are so drunk you can't walk.
The housing downturn is likely to worsen if the banking royal commission unleashes a fresh round regulation on the banking sector, SQM Research's Louis Christopher says.
"It's hard to see Sydney and Melbourne recovering, at least not for the next year," he said.
"The real risk is further regulation, which is acute now. What further regulation means is higher costs for banks to meet compliance and more restrictions on lending.
"The cost base of banks will go up and they could pass it on to borrowers."
afr.com/real-estate/auction … 925-h15tq0
apparently regulation is 'acute' -
The Melbourne auction scene is dismal with the general number of bidders now down to "one or none", buyer's agent Morrell and Koren's Emma Bloom says.
"A lot of auctions have been failing," Ms Bloom said. "What agents are now doing to protect the value of the home is to use an expressions-of-interest private treaty selling method."
Ms Bloom said they did this so they could pitch potential buyers against others sometimes "phantom buyers" in a less transparent sale process to an auction.
The only exception this past weekend was the sale of the five-bedroom family home at 8 Avenel Road in Kooyong at just over $3 million. It had a "surprising" four bidders, Ms Bloom said.
"I nearly fainted, it wasn't even that good," she said.
Ms Bloom said there was plenty of room to negotiate on properties of less than $5 million in Melbourne.
afr.com/real-estate/melbour … 812-h13v0u
more often than not, none of them took any steps to verify the applicant's outgoings.
Verifying outgoings was considered "too hard".
"But what was meant by verifying outgoings being 'too hard' was that the benefit to the bank of doing this work was not worth the bank's cost of doing it," Mr Hayne said in his interim report yesterday.
amp.9news.com.au/article/b99fbe … e8e02e0f23
Australian Bureau of Statistics figures show Australia's July building approvals figures are down -5.6 per cent in seasonally adjusted figures, with apartments the worst hit, down -6.2 per cent.
Mr Oliver said the worst is yet to come for Australia's tradies, in a view supported by BIS Economics's Angie Zigomanis.
"The fact that approvals are going down again points to a downswing at the end of the year going into next year," Mr Oliver said.
Mr Zigomanis went further, predicting the decline would continue "at least another 12 months or possibly two years."
Building approvals are important as they point to what comes next in the construction cycle.
In recent years building approvals have steadily risen as the east coast housing boom went into full swing. Levels grew to almost 30 per cent higher than the long term average, and the national residential crane count — 528 cranes — is significantly greater than the USA (300) and Canada (123) combined.
That's despite Australia's entire population being only 6 per cent of the combined populations of the United States and Canada.
construction accounted for 1.19 million, or almost 9.5 per cent, of all jobs in Australia.
That's not to mention the additional 228,000 who work in rental, hiring and real estate. That makes construction the third largest employment sector behind retail and healthcare.
A lot of people have gone into construction over the last decade, as the good times keep getting better.
Some of these people have never experienced a decline. But the drop in building approvals will soon mean there's less work for nearly 10 per cent of our entire work force.
And in what could be seen as a sign of the times, job ads for construction and real estate/property are all down from last year, almost in line with the concurrent falls in building approvals.
This reflects a growing trend that's been seen in Seek ads since June earlier this year.
news.com.au/finance/economy … 1223ae1743
While the overall average is much less, some individual postcode areas have dropped by around 15 - 27 or so %
House prices in Hawksburn and Toorak (3142) have fallen 18 per cent over the past year to June 30, 2018 and the prices of units in the two suburbs have fallen by 15 per cent.
Postcode 3182 St Kilda, St Kilda West -27.3%
smh.com.au/money/investing/ … 506od.html
Postcode 2007 – Broadway, Ultimo –18.8%
smh.com.au/money/investing/ … 506b5.html
Check out the cool tool on realestate.com.au
Lets look at St Kilda on it…
realestate.com.au/neighbour … 2-vic?cid=
Scroll down to the "Median property price" heading, look at the trend for the last few months - since June prices have been below 1 million and as of September 2018 the median is $865,000.
Change the chart from monthly to annual - the latest number is December 2017 - I reckon the chart must need a full year to auto expand another year which is why on the monthly we get the month just gone etc. Anyway the Median price as of December 2017 was $1,367,500
Eye watering with plenty of more room to drop.
In 2013 house prices were $590,000 - which is where it dropped to (from highs of $727,750) after the last temporary drop in prices with were ramped back up by various things that just wont help this time.
9news.com.au/national/2018/ … -sold-land
Repo fun begins
Epicurus 2018-10-08 12:02:41 UTC #2546
Where's me Nama Sport!
dolanbaker 2018-10-08 17:43:45 UTC #2547
https://www.9news.com.au/national/2018/10/08/19/14/property-dispute-melbourne-family-camp-on-sold-land
Site repo'd in 2014, new owner started work on it a couple of weeks ago and displaced owner then decided to "occupy" the land!
Why didn't make a stand four years ago?
NegativeEquity 2018-10-08 18:56:35 UTC #2548
dolanbaker:
Ben Gilroy prob going on holidays to Oz in the near future.
More than 1 million households in mortgage stress
macrobusiness.com.au/2018/1 … ouseholds/
and…people are starting to wonder about deposit safety
abc.net.au/news/2018-10-16/e … s/10378570
The National Debt Helpline — a federal government-run financial counselling service — said it's on track to receive a record number of cases through its call centres this year — many from older Australians who can't meet their mortgage or rent payments.
"The phones just never stop now," financial counsellor Greg said.
"They're just going day after day, after day.
"You put the phone down, you pick the phone up again."
Greg has been with the call centre for 14 years.
He said it's never been so busy, now with record numbers of older Australians calling in, unable to pay rent, or make good on their mortgage repayments.
AMP predicting a 20% peak to trough decline in Sydney/Melbourne. Back to 2015 levels they reckon. To sustain a bounce at those levels you would need what? Maybe if rental yields get to levels where they exceed the cost of capital? I think that would require a big discount even if interest rates drop. Even then, value investors would be looking for a decent risk premium before taking the plunge. Lenders are likely to get a whole lot more selective in who they lend to.
afr.com/real-estate/amp-cap … 018-h16sdp
There are two things that can happen to an asset price bubble. It can burst dramatically, or deflate slowly.
Which brings us to the Australian housing market.
Prices in Sydney and Melbourne continue to decline at a modest rate. The CoreLogic index has Sydney prices down 6.1% for the last 12 months, and Melbourne down 3.4%.
This barely dents the huge increases in previous years and is consistent with "deflate slowly" scenario.
What could cause a pop?
Interest-only loans could do it
As I have said here before, one of the worries is interest-only loans.
theconversation.com/vital-signs … how-105137
abc.net.au/news/2018-10-26/ … u/10435148
One Belt, One Road: Victoria signs MOU to join China's controversial global trade initiative
Senior national security figures have often warned of serious "strategic" consequences if Australia formally signs up, although various investment projects on Australian soil seem to have had some form of involvement.
the_dude 2018-10-29 21:24:13 UTC #2555
https://www.abc.net.au/news/2018-10-26/victoria-and-china-belt-and-road-signing-mou/10435148
Has any country other than China ever ended up on the winning side of one of these deals ?
yoganmahew:
Are they on about future lending?
Can I assume they have packaged all there existing (billions/trillions) loans to third parties i.e. you and me and pension funds?
news.com.au/finance/busines … dd171fc571
ANZ has blamed "strong headwinds" in the housing market and the fallout from the banking royal commission for a 5 per cent drop in full-year cash profit to $6.487 billion.
Sydney's annual price change has been revised down heavily, from -6.3% to -7.4%
macrobusiness.com.au/2018/1 … downwards/
abcmedia.akamaized.net/radio/lo … gomery.mp3
Have a listen in to Roger Montgomery on the on ABC yesterday here. Fast forward to Sally at 24 minutes who is a realtor managing some large settling apartment blocks in South Bank, Melbourne. Her story of Chinese buyers left high and dry should give everyone pause for what's coming down the pipe in terms of defaulting developers and forced stock sales:
macrobusiness.com.au/2018/1 … partments/
another interesting sign
forums.whirlpool.net.au/forum-r … ?t=2762040
Really interesting perspective about retirement develops at the 40 minute mark, I hadn't thought about the inability in a falling market to raise the downpayment that developers would need to begin.
I reckon the Irish Fair Deal scheme could be adapted further to bridge that gap. It would actually allow people stay out of nursing homes longer if they can transition to a semi independent living developments.
Constructing numbers from basic arithmetic on digits I was tooling around over on stackoverflow and happened upon this question. To summarise, given the set of digits $\{1,2,3,4,5,6,7,8,9\}$ and a set of basic arithmetic (binary) operators $\{+,-,\times,/\}$, what is the least number of operations you need to construct a given integer? For example $239 = 8\times6\times5-1$, requires 3 operations. My conjecture is that division doesn't help you. There is no number that can be constructed using division, that can't be constructed without division using the same number operations (or fewer). Can anyone prove or disprove this? combinatorics number-theory elementary-number-theory wxffleswxffles $\begingroup$ I have a feeling that you do need division sometimes. For instance, take numbers like 1+9+81+729+...+9^k. These are easy to make with division: (9*9*9*...*9-1)/8. But I'm not sure how to prove this works; that is, that at least some of these numbers are "rough" enough that there's no more efficient way to make them. $\endgroup$ – Lopsy Feb 10 '12 at 2:48 $\begingroup$ Lopsy: You can calculate with a computer the smallest number of operations required with or without division. That wouldn't constitute a proof for general $n$, but will at least answer the OP's question. $\endgroup$ – Yuval Filmus Feb 10 '12 at 5:46 $\begingroup$ Could you be more precise of what to do if a division is not exact? Either you simply forbid this, or you consider the rational result as a valid intermediate value, or you take the quotient ignoring the remainder. I think your conjecture is false even if you simply forbid inexact divisions, but it helps to have carity about this first. $\endgroup$ – Marc van Leeuwen Feb 10 '12 at 12:34 $\begingroup$ @Marc: I used only exact division for my answer. $\endgroup$ – joriki Feb 10 '12 at 13:02 $\begingroup$ Do you allow parenthesis? eg. (1+9)*9 $\endgroup$ – TROLLHUNTER Feb 10 '12 at 13:19 Division does help, but you have to use seven operations (eight operands) to find a case where it does. Here's a list of all expressions with seven operations with values that can't be obtained with seven operations without division: $$ \begin{eqnarray} (5\cdot7\cdot7\cdot9\cdot9\cdot9-1)/2&=&89302\\ (5\cdot7\cdot7\cdot9\cdot9\cdot9+1)/2&=&89303\\ (7\cdot7\cdot7\cdot9\cdot9\cdot9-5)/2&=&125021\\ (7\cdot7\cdot7\cdot9\cdot9\cdot9+5)/2&=&125026\\ (7\cdot7\cdot9\cdot9\cdot9\cdot9-3)/2&=&160743\\ (7\cdot7\cdot9\cdot9\cdot9\cdot9+5)/2&=&160747\\ (7\cdot9\cdot9\cdot9\cdot9\cdot9-1)/2&=&206671\\ (7\cdot9\cdot9\cdot9\cdot9\cdot9+3)/2&=&206673\\ (7\cdot9\cdot9\cdot9\cdot9\cdot9+5)/2&=&206674\\ (9\cdot9\cdot9\cdot9\cdot9\cdot9-7)/2&=&265717\\ (9\cdot9\cdot9\cdot9\cdot9\cdot9-5)/2&=&265718 \end{eqnarray} $$ Here's the code I used to find them. So Lopsy's idea turns out to be a good one. In fact I found the penultimate example, $(9\cdot9\cdot9\cdot9\cdot9\cdot9-7)/2=265717$ with a lot less computational effort than the others by factorizing the numbers of the form $(9^6\pm d)/2$, where $d$ is a single-digit number, finding that $(9^6-7)/2=265717$ is a prime and thus can't be the result of a multiplication, noting that a number this large requires a product with at least six factors and could thus only be formed as $p_7\pm p_1$, $p_6\pm p_1\pm p_1$ or $p_6\pm p_2$ (where $p_k$ is a product of $k$ factors), and checking that no such expression yields $265717$. Here's an attempt at explaining that all the counterexamples involve division by $2$. 
The number should be large, in order to require most of the operations to be multiplications and to leave as little room as possible for additions, subtractions and small factors. The divisor cannot divide any of the other numbers, since then it would have to divide both terms and thus could be canceled. Thus, if the divisor were $3$, there could be no factors of $9$, which lowers the attainable maximum to $(8^6+8)/3=87384$, which is below the smallest counterexample. The higher the divisor $d$, the fewer potential candidates there are, since only every $d$-th number is divisible by $d$, and also the lower the attainable maximum. For $d=4$, the value of $(9^6+7)/4=132862$ is still within the lower range of the actual counterexamples, but with only half as many candidates for counterexamples, it may be just a coincidence that there aren't any. For $d=5$, the maximum $(9^6+9)/5=106290$ is already at the lower end of the range, and with only $2/5$ as many candidates, counterexamples aren't to be expected. Since $d=6$ is excluded for the same reason as $d=3$, the next possibility is $7$. For $d=7$, the maximum $(9^6+6)/7=75921$ is already below the smallest counterexample. Here's a table showing the number $a_n$ of values expressible with $n$ operations (excluding division) and, as requested in a comment, the least value $b_n$ not expressible with $n$ operations: $$ \begin{array}{c|c} n&0&1&2&3&4&5&6&7\\ \hline a_n&9&39&155&739&3667&16947&77860&379072\\ b_n&10&19&92&417&851&4237&14771&73237 \end{array} $$ The growth rate appears to be well below $9$. That shows that it would be quite wrong to model the expressions as having random values uniformly distributed over the accessible interval $[1,9^{n+1}]$ (where $n$ is the number of operations). The generating function for the number of expressions with $n$ operations (excluding division) approximately satisfies $$f(x)=9+\frac32xf(x)^2$$ (approximately because the symmetry factor $\frac12$ shouldn't be applied when combining two identical expressions), and the solution is $$f(x)=\frac{1-\sqrt{1-54x}}{3x}$$ with a singularity at $x=1/54$, so the growth rate of the number of expressions is $54$. If their values were uniformly distributed, the probability for a value not to be represented by an expression would be roughly of the order $$\left(1-\frac1{9^n}\right)^{54^n}\approx\mathrm e^{-6^n}\;,$$ so we would expect almost complete coverage, i.e. a growth rate of $9$. joriki $\begingroup$ Wow really a nice answer, which reflects your hard-work sir. Fantastic answer. $\endgroup$ – IDOK Feb 10 '12 at 13:03 $\begingroup$ +1. Could you add a table with the smallest number not expressible in n operations? $\endgroup$ – TROLLHUNTER Feb 10 '12 at 13:19 $\begingroup$ @Holowitz: Done. $\endgroup$ – joriki Feb 10 '12 at 13:48 $\begingroup$ Excellent stuff. If you're still keen you could see what happens for negative numbers. $\endgroup$ – wxffles Feb 10 '12 at 19:15
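For readers who want to reproduce small cases, here is a brute-force, layer-by-layer search in the spirit of the computation described above. This is a sketch of ours in Python, not joriki's linked code; division is taken over exact rationals, and the search is only practical for a handful of operations because the number of expressions grows roughly like $54^n$.

from fractions import Fraction

def min_ops(target, max_ops=4, use_division=True):
    # layer[n] holds every value reachable with exactly n binary operations,
    # where each operand is ultimately one of the digits 1..9.
    target = Fraction(target)
    layer = [{Fraction(d) for d in range(1, 10)}]
    if target in layer[0]:
        return 0
    for n in range(1, max_ops + 1):
        new = set()
        for i in range(n):                      # i operations on the left, n-1-i on the right
            for a in layer[i]:
                for b in layer[n - 1 - i]:
                    new.update((a + b, a - b, a * b))
                    if use_division and b != 0:
                        new.add(a / b)          # rational intermediates are allowed here
        if target in new:
            return n
        layer.append(new)
    return None                                 # not reachable within max_ops operations

print(min_ops(239))                             # 3, e.g. 8*6*5-1

Reaching seven operations, as in the answer above, would require far more aggressive pruning than this sketch attempts.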
Mars Express Power Challenge March 31, 2016, 10 p.m. UTC July 31, 2016, 10 p.m. UTC The competition is over. The goal of this competition is to predict the average current of 33 thermal power lines per hour of the mission's fourth Martian year (2014-04-14 to 2016-03-01). Prediction file For the prediction, we ask you to provide the average current (NPWD) in each of the 33 thermal power lines during the fourth year. The format of your submitted file must be a CSV file with the following columns: ut_ms: unix timestamp in milliseconds NPWD----: 33 columns/parameters with the predicted average electric current measurements The data download contains a sample prediction file named power-prediction-sample-2014-04-14_2016-03-01.csv which shows the format. Note that for a valid submission, the file must contain 34 columns and 16488 lines of data plus one header line. The timestamps and column headers must be the same as in the sample prediction file. The submission system will inform you of any problems in your submission file (size, Inf, NaN...) and will not accept invalid files. Each submission file will be evaluated against the fourth Martian year's actual average electric current data using the Root Mean Square Error (RMSE) measure. $$\epsilon = \sqrt{\frac{1}{N M} \sum{(c_{ij} - r_{ij})^2}}$$ \(\epsilon\): root mean square error \(c_{ij}\): predicted value for the ith timestep in the fourth Martian year of the jth parameter \(r_{ij}\): reference value for the ith timestep in the fourth Martian year of the jth parameter \(N\): the total number of evaluated measurements \(i \in [1,N]\) with \(N <= 16488\) \(M\): the number of parameters \(j \in [1,M]\) with \(M = 33\) For the public leaderboard your data will be tested only on a portion of the fourth year's data. The final leaderboard may thus vary slightly. Created by the Advanced Concepts Team, Copyright © European Space Agency 2016
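As a quick illustration of the metric, a local scorer could be sketched as follows (Python; this is our own sketch rather than the official evaluation service, the file names are placeholders, and it assumes both CSV files follow the column layout described above and list the same timestamps in the same order).

import csv
import math

def rmse(prediction_csv, reference_csv):
    # Root Mean Square Error over all NPWD columns and all rows, as in the formula above.
    def load(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))
    pred, ref = load(prediction_csv), load(reference_csv)
    cols = [c for c in pred[0] if c.startswith("NPWD")]   # the 33 power-line columns
    sq_sum, count = 0.0, 0
    for p, r in zip(pred, ref):
        assert p["ut_ms"] == r["ut_ms"], "timestamps must match"
        for c in cols:
            sq_sum += (float(p[c]) - float(r[c])) ** 2
            count += 1
    return math.sqrt(sq_sum / count)

print(rmse("my-prediction.csv", "reference.csv"))

Note that the public leaderboard described above scores only a portion of the fourth Martian year, so a locally computed RMSE on the full file will generally differ slightly from the leaderboard value.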
International Journal of Mechanical and Materials Engineering Rayleigh wave propagation in transversely isotropic magneto-thermoelastic medium with three-phase-lag heat transfer and diffusion Iqbal Kaur ORCID: orcid.org/0000-0002-2210-77011 & Parveen Lata1 International Journal of Mechanical and Materials Engineering volume 14, Article number: 12 (2019) Cite this article The present research deals with the propagation of Rayleigh wave in transversely isotropic magneto-thermoelastic homogeneous medium in the presence of mass diffusion and three-phase-lag heat transfer. The wave characteristics such as phase velocity, attenuation coefficients, specific loss, and penetration depths are computed numerically and depicted graphically. The normal stress, tangential stress components, temperature change, and mass concentration are computed and drawn graphically. The effects of three-phase-lag heat transfer, GN type-III, and LS theory of heat transfer are depicted on the various quantities. Some particular cases are also deduced from the present investigation. There are two types of surface waves namely Rayleigh wave and Love wave. These waves have primary importance in earthquake engineering. Rayleigh (1885) first investigated the waves that exist near the surface of a homogeneous elastic half-space and named it as Rayleigh waves. Rayleigh wave exists in a homogeneous, elastic half-space whereas Love wave requires a surficial layer of lowers wave velocity than the underlying half-space. The propagation of waves in thermoelastic materials has numerous applications in various fields of science and technology, earthquake engineering, seismology, nuclear reactors, aerospace, submarine structures, and in the non-destructive evaluation in material process control and fabrication. Green and Naghdi (1992, 1993) dealt with the linear and the nonlinear theories of thermoelastic body with and without energy dissipation. Three new thermoelastic theories were proposed by them, based on entropy equality. Their theories are known as thermoelasticity theory of type I, the thermoelasticity theory of type II (i.e., thermoelasticity without energy dissipation), and the thermoelasticity theory of type III (i.e., thermoelasticity with energy dissipation). On linearization, type I becomes the classical heat equation whereas on linearization type-II as well as type-III, theories give a finite speed of thermal wave propagation. The effects of heat conduction upon the propagation of Rayleigh surface waves in a semi-infinite elastic solid is studied for transversely isotropic thermoelastic (TIT) materials by Sharma, Pal, and Chand (2005) and Sharma and Singh (1985). Marin (1997) had proved the Cesaro means of the kinetic and strain energies of dipolar bodies with finite energy. Ting (2004) explored a surface wave propagation in an anisotropic rotating medium. Othman and Song (2006, 2008) presented different hypotheses about magneto-thermoelastic waves in a homogeneous and isotropic medium. Kumar and Kansal (2008a) investigated the effect of rotation on the characteristics of Rayleigh wave propagation in a homogeneous, isotropic, thermoelastic diffusive half-space in the context of different theories of thermoelastic diffusion, including the Coriolis and Centrifugal forces. Sharma and Kaur (2010) considered Rayleigh waves in rotating thermoelastic solids with the void. Mahmoud (2011) investigated the Rayleigh wave velocity under the effect of rotation, initial stress, magnetic field, and gravity field in a granular medium. 
Abouelregal (2011) studied Rayleigh wave propagation in thermoelastic half-space in the context of dual-phase-lag mode. Abd-Alla, Abo-Dahab, and Hammad (2011); Abd-Alla, Abo-Dahab, Hammad, and Mahmoud (2011); and Abd-Alla and Ahmed (1996) studied Rayleigh waves in an orthotropic thermoelastic medium under the influence of gravity, magnetic field, and initial stress. Marin, Baleanu, and Vlase (2017) have discussed the effect of micro-temperatures for micropolar thermoelastic bodies. Mahmoud (2014) studied the effect of the magnetic field, gravity field, and rotation on the propagation of Rayleigh waves in an initially stressed non-homogeneous orthotropic medium. Singh, Kumari, and Singh (2014) solved the basic equations for the Rayleigh wave on the surface of TIT dual-phase-lag material under magnetic field. Kumar and Kansal (2013) investigated the propagation of Rayleigh waves in a TIT diffusive solid half-space. Kumar and Gupta (2015) investigated the effect of phase lags on Rayleigh wave propagation in the thermoelastic medium. Biswas, Mukhopadhyay, and Shaw (2017) dealt with the propagation of Rayleigh surface waves in a homogeneous, orthotropic thermoelastic half-space in the context of three-phase-lag models of thermoelasticity. Kumar, Sharma, Lata, and Abo-Dahab (2017) and Lata, Kumar, and Sharma (2016) investigated the Rayleigh waves in a homogeneous transversely isotropic magneto-thermoelastic (TIM) medium with two temperatures, Hall current, and rotation. Despite this, several researchers worked on a different theory of thermoelasticity as Chauthale and Khobragade (2017); Ezzat and AI-Bary (2016, 2017); Ezzat, El-Karamany, and El-Bary (2017); Ezzat, El-Karamany, and Ezzat (2012); Hassan, Marin, Ellahi, and Alamri (2018); Kumar, Kaushal, and Sharma (2018); Kumar, Sharma, and Lata (2016a, 2016b, 2016c); Lata and Kaur (2019a, 2019b, 2019c, 2019d, 2019e); Lata et al. (2016); Marin (2009, 2010); Marin and Craciun (2017); Marin, Ellahi, and Chirilă (2017); Marin and Nicaise (2016); and Othman and Marin (2017). Inspite of these, not much work has been carried out in the study of the Rayleigh wave propagation in a transversely isotropic magneto-thermoelastic medium with fractional order three-phase-lag heat transfer. In this paper, we have attempted to study the Rayleigh wave propagation with fractional order three-phase-lag heat transfer in a transversely isotropic magneto-thermoelastic medium. Basic equations The basic governing equations for homogeneous, anisotropic, generalized thermodiffusive elastic solids in the absence of body forces, heat and mass diffusion sources following Kumar and Kansal (2008b) are $$ {t}_{ij}={c}_{ij kl}{e}_{kl}+{a}_{ij}T+{b}_{ij}C, $$ $$ \left(1+{\tau}_q\frac{\partial }{\partial t}+{\tau}_q^2\frac{\partial^2}{{\partial t}^2}\right){\dot{q}}_i=-\left[{K}_{ij}\left(1+{\tau}_T\frac{\partial }{\partial t}\right){\dot{T}}_{,j}+{K}_{ij}^{\ast}\left(1+{\tau}_v\frac{\partial }{\partial t}\right){T}_{,j}\right], $$ $$ \rho {ST}_0=\rho {C}_ET+{aT}_0C-{a}_{ij}{e}_{ij}{T}_0, $$ $$ \mathrm{P}={b}_{kl}{e}_{kl}+ bC- aT $$ $$ {\eta}_i=-{\alpha}_{ij}^{\ast }{P}_{,j} $$ Here, Cijkl are elastic parameters and having symmetry (Cijkl = Cklij = Cjikl = Cijlk). 
The basis of these symmetries of Cijkl is due to The stress tensor is symmetric, which is only possible if (Cijkl = Cjikl) If a strain energy density exists for the material, the elastic stiffness tensor must satisfy Cijkl = Cklij From stress tensor and elastic stiffness, tensor symmetries infer (Cijkl = Cijlk) and Cijkl = Cklij = Cjikl = Cijlk The simplified Maxwell's linear equation (Rafiq et al. 2019) of electrodynamics for a slowly moving and perfectly conducting elastic solid are $$ \operatorname{curl}\overrightarrow{\ h}=\overrightarrow{j}+{\varepsilon}_0\frac{\partial \overrightarrow{E}}{\partial t};\operatorname{curl}\overrightarrow{E}=-{\mu}_0\frac{\partial \overrightarrow{h}}{\partial t};\overrightarrow{\ E}=-{\mu}_0\left(\frac{\partial \overrightarrow{u}}{\partial t}\times \overrightarrow{H}\right);\operatorname{div}\overrightarrow{h}=0. $$ From Eq. (6), we obtain $$ \overrightarrow{E}={\mu}_0{H}_0\left(\dot{w},0,-\dot{u}\right) $$ $$ \overrightarrow{h}=\left(0,-{H}_0e,0\right), $$ $$ \overrightarrow{j}=\left(-{h}_{,z}-{\varepsilon}_0{\mu}_0{H}_0\ddot{w},0,-{h}_{,x}-{\varepsilon}_0{\mu}_0{H}_0\ddot{u}\right) $$ The equation of motion, entropy equation, and mass conservation equation following Kumar and Kansal (2009) are $$ {t}_{ij,j}+{F}_i=\rho {\ddot{u}}_i, $$ $$ {q}_{i,i}+\rho {T}_0\dot{S}-\rho M+P{\eta}_{i,i}=0, $$ $$ {\eta}_{i,i}=\dot{C+\rho N} $$ $$ {\displaystyle \begin{array}{c}{F}_i={\mu}_0{\left(\overrightarrow{j}\times \overrightarrow{H}\right)}_i\\ {}\overrightarrow{F}=\left({F}_x,{F}_y,{F}_z\right)=\left({\mu}_0{H}_0^2{e}_{,x}-{\varepsilon}_0{\mu}_0^2{H}_0^2\ddot{u},0,{\mu}_0{H}_0^2{e}_{,z}-{\varepsilon}_0{\mu}_0^2{H}_0^2\ddot{w}\right)\end{array}} $$ are the components of the Lorentz force that appeared due to initially applied a magnetic field, the total magnetic field is given by \( \overrightarrow{H}={\overrightarrow{H}}_0+\overrightarrow{h} \), \( {\overrightarrow{H}}_0 \) is the external applied magnetic field intensity vector, and M and N are the strengths of the heat source and mass diffusion source per unit mass. The medium is supposed to be perfectly electrically conducting and is half-space (x, 0, z) such that all the variables are independent of the dimension y. Let \( {\overrightarrow{H}}_0=\left(0,{H}_0,0\right). \) The heat conduction equation following Othman and Said (2018), we have $$ {K}_{ij}\left(1+{\tau}_t\frac{\partial }{\partial t}\right){\dot{T}}_{, ji}+{K}_{ij}^{\ast}\left(1+{\tau}_v\frac{\partial }{\partial t}\right){T}_{, ji}=\left(1+{\tau}_q\frac{\partial }{\partial t}+{\left({\tau}_q\right)}^2\frac{\partial^2}{\partial {t}^2}\right)\left[{\rho C}_E\ddot{T}+{a}_{ij}{T}_0{\ddot{\mathrm{e}}}_{ij}+a{T}_0\ddot{C}\right], $$ $$ {a}_{ij}=-{a}_i{\delta}_{ij},\kern0.5em {b}_{ij}=-{b}_i{\delta}_{ij},\kern0.75em {\alpha}_{ij}^{\ast }={\alpha}_i^{\ast }{\delta}_{ij},\kern0.5em {K}_{ij}^{\ast }={K}_i^{\ast }{\delta}_{ij},\kern0.5em {K}_{ij}={K}_i{\delta}_{ij} $$ Method and solution of the problem We consider a perfectly conducting homogeneous transversely isotropic magneto-thermoelastic medium in the context of the three-phase-lag model of thermoelasticity initially at a uniform temperature T0, an initial magnetic field \( {\overrightarrow{H}}_0=\left(0,{H}_0,0\right) \) towards y-axis. Moreover, we considered x, y, z taking origin on the surface (z = 0) along the z-axis directing vertically downwards inside the medium. For the 2D problem in the xz-plane, we take $$ \boldsymbol{u}=\left(u,0,w\right) $$ Now using the transformation on Eqs. 
(7–9) following Slaughter (2002) is as under: $$ {C}_{11}\frac{\partial^2u}{\partial {x}^2}+{C}_{13}\frac{\partial^2w}{\partial x\partial z}+{C}_{44}\left(\frac{\partial^2u}{\partial {z}^2}+\frac{\partial^2w}{\partial x\partial z}\right)-{a}_1\frac{\partial T}{\partial x}-{b}_1\frac{\partial C}{\partial x}+\left({\mu}_0{H}_0^2\frac{\partial e}{\partial x}-{\epsilon}_0{\mu}_0^2{H}_0^2\frac{\partial^2u}{\partial {t}^2}\right)=\rho \left(\frac{\partial^2u}{\partial {t}^2}\right), $$ $$ \left({C}_{13}+{C}_{44}\right)\frac{\partial^2u}{\partial x\partial z}+{C}_{44}\frac{\partial^2w}{\partial {x}^2}+{C}_{33}\frac{\partial^2w}{\partial {z}^2}-{a}_3\frac{\partial T}{\partial z}-{b}_3\frac{\partial C}{\partial z}+\left({\mu}_0{H}_0^2\frac{\partial e}{\partial z}-{\epsilon}_0{\mu}_0^2{H}_0^2\frac{\partial^2w}{\partial {t}^2}\right)=\rho \left(\frac{\partial^2w}{\partial {t}^2}\right), $$ $$ {K}_1\left(1+{\tau}_t\frac{\partial }{\partial t}\right)\frac{\partial^2\dot{T}}{\partial {x}^2}+{K}_3\left(1+{\tau}_t\frac{\partial }{\partial t}\right)\frac{\partial^2\dot{T}}{\partial {z}^2}+{K}_1^{\ast}\left(1+{\tau}_v\frac{\partial }{\partial t}\right)\frac{\partial^2T}{\partial {x}^2}+{K}_3^{\ast}\left(1+{\tau}_v\frac{\partial }{\partial t}\right)\frac{\partial^2T}{\partial {z}^2}=\left(1+{\tau}_q\frac{\partial }{\partial t}+{\left({\tau}_q\right)}^2\frac{\partial^2}{\partial {t}^2}\right)\left[{\rho C}_E\frac{\partial^2T}{\partial {t}^2}+{T}_0\left\{{a}_1\frac{\partial \ddot{u}}{\partial x}+{a}_1\frac{\partial \ddot{w}}{\partial z}\right\}+a{T}_0\ddot{C}\right], $$ $$ {\alpha}_1^{\ast}\left[{b}_1\frac{\partial^3u}{\partial {x}^3}+{b}_3\frac{\partial^3w}{\partial {x}^2\partial z}\right]+{\alpha}_3^{\ast}\left[{b}_1\frac{\partial^3u}{\partial x\partial {z}^2}+{b}_3\frac{\partial^3w}{\partial {z}^3}\right]-{\alpha}_1^{\ast }b\frac{\partial^2C}{\partial {x}^2}-{\alpha}_3^{\ast }b\frac{\partial^2C}{\partial {z}^2}+{\alpha}_1^{\ast }a\frac{\partial^2T}{\partial {x}^2}+{\alpha}_3^{\ast }a\frac{\partial^2T}{\partial {z}^2}=-\left(\dot{C}\right). $$ $$ {t}_{xx}={C}_{11}{e}_{xx}+{C}_{13}{e}_{xz}-{a}_1T, $$ $$ {t}_{zz}={C}_{13}{e}_{xx}+{C}_{33}{e}_{zz}-{a}_3T, $$ $$ {t}_{xz}=2{C}_{44}{e}_{xz}, $$ $$ {\displaystyle \begin{array}{c}{a}_1=\left({C}_{11}+{C}_{12}\right){\alpha}_1+{C}_{13}{\alpha}_{3,}\\ {}{a}_3=2{C}_{13}{\alpha}_1+{C}_{33}{\alpha}_3,\\ {}{b}_1=\left({C}_{11}+{C}_{12}\right){\alpha}_{1c}+{C}_{13}{\alpha}_{3c,}\end{array}}. $$ Using dimensionless quantities, $$ {\displaystyle \begin{array}{c}\left({x}^{\prime },{z}^{\prime },{u}^{\prime },{w}^{\prime}\right)=\frac{\omega_1^{\ast }}{C_1}\left(x,z,u,w\right),\rho {C}_1^2={C}_{11},{\omega}_1^{\ast }=\frac{\rho {C}_1^2{C}_E}{K_1}\\ {}{T}^{\prime }=\frac{a_1T}{\rho {C}_1^2},{C}^{\prime }=\frac{b_1C}{\rho {C}_1^2},\left(\ {t}^{\prime },{\tau}_0^{\prime },{\tau}^{0\hbox{'}},{\tau}_T^{\prime },{\tau}_v^{\prime },{\tau}_q^{\prime}\right)={\omega}_1^{\ast}\left(t,{\tau}_0,{\tau}^0,{\tau}_T,{\tau}_v,{\tau}_q\right).\end{array}} $$ Making use of (21) in Eqs. 
(14–17), after suppressing the primes, yield $$ \left(1+{\delta}_4\right)\frac{\partial^2u}{\partial {x}^2}+\left({\delta}_1+{\delta}_4\right)\frac{\partial^2w}{\partial x\partial z}+{\delta}_2\frac{\partial^2u}{\partial {z}^2}-\frac{\partial T}{\partial x}-\frac{\partial C}{\partial x}=\left(1+{\delta}_5\right)\frac{\partial^2u}{\partial {t}^2}, $$ $$ \left({\delta}_1+{\delta}_4\right)\frac{\partial^2u}{\partial x\partial z}+{\delta}_2\frac{\partial^2w}{\partial {x}^2}+\left({\delta}_3+{\delta}_4\right)\frac{\partial^2w}{\partial {z}^2}-{\delta}_7\frac{\partial T}{\partial z}-{\delta}_8\frac{\partial C}{\partial z}=\left(1+{\delta}_5\right)\frac{\partial^2w}{\partial {t}^2} $$ $$ \left(1+{\tau}_T\frac{\partial }{\partial t}\right)\left({\delta}_9\frac{\partial^2\dot{T}}{\partial {x}^2}+{\delta}_{12}\frac{\partial^2\dot{T}}{\partial {z}^2}\right)+\left(1+{\tau}_v\frac{\partial }{\partial t}\right)\left({\delta}_{10}\frac{\partial^2T}{\partial {x}^2}+{\delta}_{11}\frac{\partial^2T}{\partial {z}^2}\right)=\left(1+{\tau}_q\frac{\partial^{\alpha }}{\partial {t}^{\alpha }}+{\tau_q}^2\frac{\partial^2}{\partial {t}^2}\right)\left[{\delta}_9\ddot{T}+{\delta}_{13}\frac{\partial \ddot{u}}{\partial x}+{\delta}_{14}\frac{\partial \ddot{w}}{\partial z}+{\delta}_{15}\ddot{C}\right]. $$ $$ {q}_1\frac{\partial^3u}{\partial {x}^3}+{q}_2\frac{\partial^3w}{\partial {x}^2\partial z}+{q}_3\frac{\partial^3u}{\partial x\partial {z}^2}+{q}_4{\frac{\partial^3w}{\partial {z}^3}}_3^{\ast }+{q}_5\frac{\partial^2C}{\partial {x}^2}+{q}_6\frac{\partial^2C}{\partial {z}^2}+{q}_7\frac{\partial^2T}{\partial {x}^2}+{q}_8\frac{\partial^2T}{\partial {z}^2}+{q}_9\frac{\partial C}{\partial t}=0 $$ $$ {\delta}_1=\frac{c_{13}+{c}_{44}}{c_{11}},{\delta}_2=\frac{c_{44}}{c_{11}},{\delta}_3=\frac{c_{33}}{c_{11}},{\delta}_4=\frac{\mu_0{H}_0^2}{\rho {C}_1^2},{\delta}_5=\frac{\varepsilon_0{\mu}_0^2{H}_0^2}{\rho },{\delta}_7=\frac{a_3}{a_1},{\delta}_8=\frac{b_3}{b_1},{\delta}_9=\frac{\rho {\omega}_1^{\ast 3}}{a_1},{\delta}_{10}=\frac{\rho {\omega}_1^{\ast 2}{K}_1^{\ast }}{a_1{K}_1}, $$ $$ {\delta}_{11}=\frac{\rho {\omega}_1^{\ast 2}{K}_3^{\ast }}{a_1{K}_1},{\delta}_{12}=\frac{\rho {\omega}_1^{\ast 3}{K}_3}{a_1{K}_1},{\delta}_{13}=\frac{T_0{\omega}_1^{\ast 2}{a}_1}{K_1},{\delta}_{14}=\frac{T_0{\omega}_1^{\ast 2}{a}_3}{K_1},{\delta}_{15}=\frac{a\rho {C}_1^2{T}_0{\omega}_1^{\ast 2}}{K_1{b}_1}. $$ Rayleigh wave propagation We pursue Rayleigh wave solution of the equations of the form $$ \left(\begin{array}{c}u\\ {}w\\ {}T\\ {}C\end{array}\right)=\left(\begin{array}{c}1\\ {}W\\ {}S\\ {}R\end{array}\right)U{e}^{i\xi \left(x+ mz- ct\right)} $$ where \( c=\frac{\upomega}{\upxi} \) is the non-dimensional phase velocity and m is an unknown parameter. 1, W, S, and R are the amplitude ratios of displacements u, w, temperature change T, and concentration C, respectively. Upon using Eq. (26) in Eqs. 
(22–25), we get $$ U\left[{l}_1+{l}_6+{l}_2{m}^2\right]+W\left[{l}_3m\right]+S\left[{l}_5\right]+R\left[{l}_5\right]=0, $$ $$ U\left[{l}_3m\right]+W\left[{l}_2+{l}_6+{l}_7{m}^2\right]+S\left[{l}_8m\right]+R\left[{l}_9m\right]=0, $$ $$ U\left[{l}_{12}\right]+W\left[{l}_{13}m\right]+S\left[{l}_{10}+{l}_{11}{m}^2\right]+R\left[{l}_{14}\right]=0, $$ $$ U\left[{l}_{15}+{l}_{16}{m}^2\right]+W\left[{l}_{17}m+{l}_{18}{m}^3\right]+S\left[{l}_{21}+{l}_{22}{m}^2\right]+R\left[{l}_{19}+{l}_{20}{m}^2\right]=0, $$ $$ {\displaystyle \begin{array}{c}{l}_1=-{\xi}^2\left(1+{\delta}_4\right),{l}_2=-{\delta}_2{\xi}^2,{l}_3=-{\xi}^2\left({\delta}_1+{\delta}_4\right),{l}_5=- i\xi, {l}_6=\left(1+{\delta}_5\right){\xi}^2{c}^2,{l}_7=-{\xi}^2\left({\delta}_3+{\delta}_4\right),\\ {}{l}_8=-i{\xi \delta}_{7,}{l}_9=-i{\xi \delta}_{8,}{l}_{10}=-{\delta}_{10}\left(1- i\xi c{\tau}_v\right){\xi}^2+{\delta}_9\left(1- i\xi c{\tau}_T\right)i{\xi}^3c-{\delta}_9{\xi}^2{c}^2\left(1- i\xi c{\tau}_q-\frac{{\tau_q}^2{\xi}^2{c}^2}{2}\right),\\ {}{l}_{11}=-{\delta}_{11}\left(1- i\xi c{\tau}_v\right){\xi}^2+{\delta}_{12}\left(1- i\xi c{\tau}_T\right)i{\xi}^3c,{l}_{12}=-{\delta}_{13}i{\xi}^3{c}^2\left(1- i\xi c{\tau}_q-\frac{{\tau_q}^2{\xi}^2{c}^2}{2}\right),\\ {}{l}_{13}=-{\delta}_{14}i{\xi}^3{c}^2\left(1- i\xi c{\tau}_q-\frac{{\tau_q}^2{\xi}^2{c}^2}{2}\right),{l}_{14}=-{\delta}_{15}{\xi}^2{c}^2\left(1- i\xi c{\tau}_q-\frac{{\tau_q}^2{\xi}^2{c}^2}{2}\right),{l}_{15}=-{q}_1i{\xi}^3,\\ {}{l}_{16}=-{q}_3i{\xi}^3,{l}_{17}=-{q}_2i{\xi}^3,{l}_{18}=-{q}_4i{\xi}^3,{l}_{19}=-{q}_5{\xi}^2-{q}_9 i\xi c,{l}_{20}=-{q}_6{\xi}^2,{l}_{21}=-{q}_7{\xi}^2,\\ {}{l}_{22}=-{q}_8{\xi}^2,{q}_1=\frac{\alpha_1^{\ast }{b}_1{\omega}_1^{\ast 2}}{c_1^2},{q}_2=\frac{\alpha_1^{\ast }{b}_3{\omega}_1^{\ast 2}}{c_1^2},{q}_3=\frac{\alpha_3^{\ast }{b}_1{\omega}_1^{\ast 2}}{c_1^2},{q}_4=\frac{\alpha_3^{\ast }{b}_3{\omega}_1^{\ast 2}}{c_1^2},{q}_5=-\frac{\alpha_1^{\ast }b{\omega}_1^{\ast 2}\rho }{b_1}\\ {}{q}_6=-\frac{\alpha_3^{\ast }b{\omega}_1^{\ast 2}\rho }{b_1},{q}_7=-\frac{\alpha_1^{\ast }a{\omega}_1^{\ast 2}\rho }{a_1},{q}_8=\frac{\alpha_3^{\ast }a{\omega}_1^{\ast 2}\rho }{a_1},{q}_9=-\frac{\omega_1^{\ast }{c}_1^2\rho }{b_1}.\end{array}} $$ and from (27–30), the characteristic equation is a biquadratic equation in m2 given by $$ {m}^8+\frac{B}{A}{m}^6+\frac{C}{A}{m}^4+\frac{D}{A}{m}^2+\frac{E}{A}=0, $$ $$ A={l}_2{l}_7{l}_{11}{l}_{20}-{l}_2{l}_9{l}_{18}{l}_{11}, $$ $$ B={l}_1{l}_7{l}_{11}{l}_{20}-{l}_1{l}_9{l}_{18}{l}_{11}+{l}_2{l}_6{l}_{11}{l}_{20}+{l}_2{l}_7{l}_{10}{l}_{20}-{l}_{14}{l}_2{l}_{22}{l}_7+{l}_{14}{l}_2{l}_8{l}_{18}+{l}_2{l}_9{l}_{13}{l}_{22}-{l}_2{l}_9{l}_{13}{l}_{22}-{l}_{10}{l}_9{l}_2{l}_{18}+{l}_2{l}_{11}{l}_{17}{l}_9-{l}_3{l}_3{l}_{11}{l}_{20}+{l}_3{l}_9{l}_{16}{l}_{11}+{l}_5{l}_3{l}_{11}{l}_{18}+{l}_5{l}_7{l}_{11}{l}_{15}+{l}_2{l}_8{l}_{13}{l}_{20}, $$ $$ 
C={l}_1{l}_6{l}_{11}{l}_{20}+{l}_1{l}_7{l}_{19}{l}_{11}+{l}_1{l}_7{l}_{10}{l}_{20}-{l}_1{l}_7{l}_{14}{l}_{22}-{l}_{14}{l}_1{l}_8{l}_{18}-{l}_{10}{l}_1{l}_9{l}_{18}+{l}_1{l}_9{l}_{13}{l}_{22}-{l}_1{l}_9{l}_{11}{l}_{17}+{l}_2{l}_6{l}_{11}{l}_{19}+{l}_2{l}_6{l}_{10}{l}_{20}-{l}_2{l}_6{l}_{14}{l}_{22}+{l}_1{l}_7{l}_{10}{l}_{19}-{l}_2{l}_7{l}_{14}{l}_{21}-{l}_2{l}_8{l}_{13}{l}_{19}+{l}_1{l}_8{l}_{13}{l}_{20}-{l}_2{l}_8{l}_{14}{l}_{17}-{l}_2{l}_9{l}_{13}{l}_{21}-{l}_2{l}_9{l}_{10}{l}_{17}-{l}_3^2{l}_{11}{l}_{19}-{l}_3^2{l}_{10}{l}_{20}+{l}_3^2{l}_{14}{l}_{22}+{l}_3{l}_8{l}_{12}{l}_{20}-{l}_3{l}_8{l}_{14}{l}_{16}-{l}_3{l}_9{l}_{12}{l}_{22}-{l}_5{l}_3{l}_{13}{l}_{22}+{l}_5{l}_3{l}_{10}{l}_{18}+{l}_5{l}_3{l}_{11}{l}_{17}-{l}_5{l}_6{l}_{16}{l}_{11}-{l}_5{l}_7{l}_{12}{l}_{22}+{l}_5{l}_7{l}_{10}{l}_{16}+{l}_5{l}_7{l}_{11}{l}_{16}-{l}_5{l}_8{l}_{12}{l}_{18}+{l}_5{l}_8{l}_{16}{l}_{13}+{l}_5{l}_3{l}_{13}{l}_{20}-{l}_5{l}_3{l}_{14}{l}_{18}-{l}_5{l}_7{l}_{12}{l}_{20}+{l}_5{l}_7{l}_{14}{l}_{16}+{l}_5{l}_9{l}_{12}{l}_{18}-{l}_5{l}_9{l}_{13}{l}_{16}, $$ $$ D={l}_1{l}_6{l}_{11}{l}_{19}+{l}_1{l}_6{l}_{10}{l}_{20}-{l}_1{l}_6{l}_{14}{l}_{22}+{l}_1{l}_7{l}_{10}{l}_{19}+{l}_5{l}_7{l}_{10}{l}_{15}-{l}_1{l}_7{l}_{14}{l}_{21}-{l}_1{l}_8{l}_{13}{l}_{19}-{l}_1{l}_8{l}_{14}{l}_{17}+{l}_1{l}_9{l}_{13}{l}_{21}-{l}_5{l}_8{l}_{12}{l}_{17}-{l}_1{l}_9{l}_{10}{l}_{17}+{l}_2{l}_6{l}_{10}{l}_{19}-{l}_2{l}_6{l}_{14}{l}_{21}-{l}_3^2{l}_{10}{l}_{19}+{l}_5{l}_8{l}_{13}{l}_{15}+{l}_3^2{l}_{14}{l}_{21}+{l}_3{l}_8{l}_{12}{l}_{19}-{l}_3{l}_8{l}_{14}{l}_{15}-{l}_3{l}_9{l}_{12}{l}_{21}+{l}_5{l}_3{l}_{13}{l}_{19}+{l}_3{l}_9{l}_{10}{l}_{15}-{l}_3{l}_5{l}_{13}{l}_{21}+{l}_3{l}_5{l}_{10}{l}_{17}+{l}_5{l}_6{l}_{12}{l}_{22}-{l}_5{l}_6{l}_{12}{l}_{20}-{l}_{10}{l}_5{l}_6{l}_{16}-{l}_5{l}_3{l}_{14}{l}_{17}-{l}_5{l}_6{l}_{10}{l}_{16}-{l}_5{l}_6{l}_{15}{l}_{11}-{l}_5{l}_7{l}_{12}{l}_{21}+{l}_5{l}_6{l}_{14}{l}_{16}-{l}_5{l}_7{l}_{12}{l}_{19}+{l}_5{l}_7{l}_{14}{l}_{15}+{l}_5{l}_9{l}_{12}{l}_{17}-{\mathrm{l}}_5{l}_9{l}_{13}{l}_{15}+{l}_5{l}_6{l}_{14}{l}_{15}, $$ $$ E={l}_1{l}_6{l}_{10}{l}_{19}-{l}_1{l}_6{l}_{14}{l}_{21}-{l}_5{l}_6{l}_{12}{l}_{21}-{l}_5{l}_6{l}_{10}{l}_{15}-{l}_5{l}_6{l}_{12}{l}_{19}. $$ The characteristic in Eq. (27) gives four roots \( {m}_p^2\mathrm{where} \) p = 1, 2, 3, 4. Since we consider only the surface waves, therefore, motion is restricted to the free surface z = 0 of the half-space, hence, satisfy the radiation conditions Re(mp) ≥ 0. The displacements, temperature change, and concentration can be written as $$ \left(\begin{array}{c}u\\ {}w\\ {}T\\ {}C\end{array}\right)=\sum \limits_{p=1}^4\left(\begin{array}{c}1\\ {}{n}_{1p}\\ {}{n}_{2p}\\ {}{n}_{3p}\end{array}\right){A}_p{e}^{i\xi \left(x+{i m}_pz- ct\right)} $$ where Ap (p = 1, 2, 3, 4) are arbitrary constants and coupling constants are Boundary conditions The boundary conditions at z = 0 are given by $$ {t}_{zz}=0,{t}_{zx}=0,\frac{\partial T}{\partial z}+ hT=0,P=0. $$ After applying dimensionless quantities from Eq. 
(21), the above boundary conditions reduces to $$ {\displaystyle \begin{array}{c}\left({\delta}_1-{\delta}_2\right)\frac{\partial u}{\partial x}+{\delta}_3\frac{\partial w}{\partial z}-{\delta}_7T-{\delta}_8C=0,\\ {}{\delta}_2\left(\frac{\partial w}{\partial x}+\frac{\partial u}{\partial z}\right)=0,\\ {}\frac{\partial T}{\partial z}+ hT=0,\\ {}\frac{\partial u}{\partial x}+{\epsilon}_2\frac{\partial w}{\partial z}-{\eta}_2C+{\eta}_1T=0,\end{array}} $$ $$ {\eta}_1=\frac{a{C}_{11}}{a_1{b}_1},{\eta}_2=\frac{b{C}_{11}}{b_1^2}, $$ Derivations of the secular equations By using the values of u, w, T, and C from (28) in (29), we get four linear equations as $$ \sum \limits_{p=1}^4{Q}_{jp}{A}_p=0,j=1,2,3,4. $$ $$ {\displaystyle \begin{array}{c}{Q}_{1p}=\left({\delta}_1-{\delta}_2\right)+{\delta}_3{im}_p{n}_{1p}+\frac{i{\delta}_7{n}_{2p}}{\xi }+\frac{i{\delta}_8{n}_{3p}}{\xi },\\ {}{Q}_{2p}={im}_p+{n}_{1p},\\ {}{Q}_{3p}=\left(-\xi {m}_p+h\right){n}_{2p},\\ {}{Q}_{4p}=1+i{\epsilon}_2{m}_p{n}_{1p}-\frac{i{\eta}_1{n}_{2p}}{\xi }+\frac{i{\eta}_2{n}_{3p}}{\xi }.\end{array}} $$ Secular equations are $$ {\displaystyle \begin{array}{c}\left[\begin{array}{c}{Q}_{11}\\ {}{Q}_{21}\\ {}{Q}_{31}\\ {}{Q}_{41}\end{array}\begin{array}{c}{Q}_{12}\\ {}{Q}_{22}\\ {}{Q}_{32}\\ {}{Q}_{42}\end{array}\begin{array}{c}{Q}_{13}\\ {}{Q}_{23}\\ {}{Q}_{33}\\ {}{Q}_{43}\end{array}\begin{array}{c}{Q}_{14}\\ {}{Q}_{24}\\ {}{Q}_{34}\\ {}{Q}_{44}\end{array}\right]=0,\kern0.5em \mathrm{or}\\ {}-{Q}_{31}{D}_1+{Q}_{32}{D}_2-{Q}_{33}{D}_3+{Q}_{34}{D}_4=0,\end{array}} $$ $$ {\displaystyle \begin{array}{c}{D}_1=\left[\begin{array}{ccc}{Q}_{12}& {Q}_{13}& {Q}_{14}\\ {}{Q}_{22}& {Q}_{23}& {Q}_{24}\\ {}{Q}_{42}& {Q}_{43}& {Q}_{44}\end{array}\right],\\ {}{D}_1={Q}_{12}\left({Q}_{23}{Q}_{44}-{Q}_{24}{Q}_{43}\right)-{Q}_{13}\left({Q}_{22}{Q}_{44}-{Q}_{24}{Q}_{42}\right)+{Q}_{14}\left({Q}_{22}{Q}_{43}-{Q}_{23}{Q}_{42}\right),\\ {}{D}_2=\left[\begin{array}{ccc}{Q}_{11}& {Q}_{13}& {Q}_{14}\\ {}{Q}_{21}& {Q}_{23}& {Q}_{24}\\ {}{Q}_{41}& {Q}_{43}& {Q}_{44}\end{array}\right],\\ {}{D}_2={Q}_{11}\left({Q}_{23}{Q}_{44}-{Q}_{24}{Q}_{43}\right)-{Q}_{13}\left({Q}_{21}{Q}_{44}-{Q}_{24}{Q}_{42}\right)+{Q}_{14}\left({Q}_{21}{Q}_{43}-{Q}_{23}{Q}_{41}\right),\\ {}{D}_3=\left[\begin{array}{ccc}{Q}_{11}& {Q}_{12}& {Q}_{14}\\ {}{Q}_{21}& {Q}_{22}& {Q}_{24}\\ {}{Q}_{41}& {Q}_{42}& {Q}_{44}\end{array}\right],\\ {}{D}_3={Q}_{11}\left({Q}_{22}{Q}_{44}-{Q}_{24}{Q}_{42}\right)-{Q}_{12}\left({Q}_{21}{Q}_{44}-{Q}_{24}{Q}_{41}\right)+{Q}_{14}\left({Q}_{21}{Q}_{42}-{Q}_{22}{Q}_{41}\right),\\ {}{D}_4=\left[\begin{array}{ccc}{Q}_{11}& {Q}_{12}& {Q}_{13}\\ {}{Q}_{21}& {Q}_{22}& {Q}_{23}\\ {}{Q}_{31}& {Q}_{32}& {Q}_{33}\end{array}\right],\\ {}{D}_4={Q}_{11}\left({Q}_{22}{Q}_{33}-{Q}_{23}{Q}_{32}\right)-{Q}_{12}\left({Q}_{21}{Q}_{33}-{Q}_2{Q}_{31}\right)+{Q}_{13}\left({Q}_{21}{Q}_{32}-{Q}_{22}{Q}_{31}\right).\end{array}} $$ These secular equations have entire information regarding the wavenumber, phase velocity, and attenuation coefficient of Rayleigh waves in the transversely isotropic magneto-thermoelastic medium. Moreover, If we write $$ {c}^{-1}={v}^{-1}+F{i\omega}^{-1}, $$ then ξ = E + iF, where \( E=\frac{\omega }{v},v \) (velocity), and F (attenuation coefficient) are real. The roots of the characteristic in Eq. (27) are complex and therefore, we assume that mp = Qp + ipq, so that the exponent in Rayleigh wave solutions (28) becomes $$ iE\left(x-i{m}_p^iz- vt\right)-E\left(\frac{F}{E}x+{m}_p^rz\right), $$ $$ {m}_p^r={Q}_p-{p}_q\frac{F}{E},{m}_p^i={p}_q+{Q}_p\frac{F}{E}. 
$$ Equation (28) can be written as $$ \left(\begin{array}{c}u\\ {}w\\ {}T\\ {}C\end{array}\right)=\sum \limits_{p=1}^4\left(\begin{array}{c}1\\ {}{n}_{1p}\\ {}{n}_{2p}\\ {}{n}_{3p}\end{array}\right){A}_p{e}^{\left(- Fx-{\chi}_p^r\right)}\times {e}^{i\left[E\left(x- vt\right)-{\chi}_p^i\right]}, $$ $$ {\displaystyle \begin{array}{c}{\left|{\chi}_p^r\right|}^2-{\left|{\chi}_p^i\right|}^2={E}^2\left\{{\left({m}_p^r\right)}^2-{\left({m}_p^i\right)}^2\right\},\\ {}\left|{\chi}_p^r\right|\left|{\chi}_p^i\right| cos\theta =\frac{1}{2}{E}^2{m}_p^r{m}_p^i,\end{array}} $$ θ is the angle between the real and imaginary part of the vector χp. Phase velocity Phase velocity defines the speed at which waves oscillating at a particular frequency propagate and it depends on the real component of the wave number. The phase velocities are given by $$ V=\frac{\omega }{\mathit{\operatorname{Re}}\left(\xi \right)} $$ Attenuation coefficient The attenuation coefficient is the gradual loss of flux intensity through a medium, and it depends on the imaginary component of the wavenumber. The attenuation coefficient is defined as $$ Q= Img\left(\xi \right), $$ Specific loss The specific loss is the most direct way of defining internal resistance for a material. The specific loss W is given by $$ W=\left(\frac{\Delta W}{W}\right)=4\uppi \left|\frac{Img\left(\xi \right)}{\mathit{\operatorname{Re}}\left(\xi \right)}\right|, $$ Penetration depth Penetration depth describes how deep a wave can penetrate into a material and describes the decay of waves inside of a material. The penetration depth S is defined by $$ S=\frac{1}{Img\left(\xi \right)} $$ If τT ≠ 0, τv ≠ 0, τq ≠ 0, we obtain results for Rayleigh wave propagation in transversely isotropic magneto-thermoelastic solid with diffusion and with and without energy dissipation and TPL (three-phase-lag) effects. If τT = 0, τv = 0, τq = 0, and K∗ ≠ 0, we obtain results for Rayleigh wave propagation in magneto-thermoelastic transversely isotropic solid with diffusion and GN-III theory (thermoelasticity with energy dissipation). If τT = 0, τv = 0, τq = 0, and K∗ = 0, we obtain results for Rayleigh wave propagation in magneto-thermoelastic transversely isotropic solid with diffusion and GN-II theory (generalized thermoelasticity without energy dissipation). If τT ≠ 0, τv ≠ 0, τq ≠ 0 , and K∗ = 0, we obtain results for Rayleigh wave propagation in magneto-thermoelastic transversely isotropic solid with diffusion and GN-II theory with TPL effect If τT = 0, τv = 0, τq = τ0 > 0, and K∗ = 0, and ignoring \( {\tau}_q^2 \), we obtain results for Rayleigh wave propagation in magneto-thermoelastic transversely isotropic solid with diffusion and Lord-Shulman (L-S) model. If τT = 0, τv = 0, and τq = 0 and if the medium is not permitted with the magnetic field, i.e., μ0 = H0 = 0, then we obtain results for Rayleigh wave propagation in transversely isotropic thermoelastic solid with diffusion and without TPL effect If \( \kern0.50em {C}_{11}={C}_{33}=\lambda +2\mu, {C}_{12}={C}_{13}=\lambda, {C}_{44}=\mu, {\alpha}_1={\alpha}_3={\alpha}^{\prime },{a}_1={a}_3=a,{b}_1={b}_3=b,{K}_1={K}_3=K,{K}_1^{\ast }={K}_3^{\ast }={K}^{\ast } \), we obtain expressions for Rayleigh wave propagation in magneto-thermoelastic isotropic materials with diffusion and with and without energy dissipation with TPL effect. 
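The step from these definitions to numbers is mechanical, so a short script can serve as a sanity check. The following is a minimal sketch (not part of the paper): the polynomial coefficients, the angular frequency, and the complex wavenumber are placeholder values chosen only to show the shape of the computation, with NumPy's generic polynomial root finder standing in for whatever solver is actually used.

import numpy as np

# Placeholder (illustrative) coefficients of the characteristic equation
# m^8 + (B/A) m^6 + (C/A) m^4 + (D/A) m^2 + E/A = 0, which is biquadratic in m^2.
A, B, C, D, E = 1.0, -2.3 + 0.4j, 1.7 - 0.1j, -0.6 + 0.05j, 0.08 - 0.01j

# Solve the quartic in y = m^2, then take both square roots to recover the eight m.
y_roots = np.roots([A, B, C, D, E])                  # the four complex roots m_p^2
m_all = np.concatenate([np.sqrt(y_roots), -np.sqrt(y_roots)])

# Radiation condition from the text: keep only roots with Re(m_p) >= 0,
# i.e. solutions that decay with depth into the half-space.
m_admissible = m_all[m_all.real >= 0]

# Wave descriptors for a complex wavenumber xi, as defined above.
def descriptors(xi, omega):
    V = omega / xi.real                              # phase velocity
    Q = xi.imag                                      # attenuation coefficient
    W = 4 * np.pi * abs(xi.imag / xi.real)           # specific loss
    S = 1.0 / xi.imag                                # penetration depth
    return V, Q, W, S

omega = 2 * np.pi * 1.0                              # placeholder angular frequency
xi = 1.2 + 0.05j                                     # placeholder complex wavenumber
print(m_admissible)
print(descriptors(xi, omega))

In practice, the admissible m_p feed the amplitude ratios n_1p, n_2p, n_3p and the secular determinant; the sketch shows only the root selection and the four wave descriptors.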
Numerical results and discussion In order to illustrate our theoretical results in the proceeding section and to show the effect of Hall current and fractional order parameter, we now present some numerical results. Following Dhaliwal and Sherief (1980), cobalt material has been taken for thermoelastic material as $$ {c}_{11}=3.07\times {10}^{11}N{m}^{-2},{c}_{33}=3.581\times {10}^{11}N{m}^{-2},{c}_{13}=1.027\times {10}^{10}N{m}^{-2},{c}_{44}=1.510\times {10}^{11}N{m}^{-2},{\beta}_1=7.04\times {10}^6N{m}^{-2}{\mathit{\deg}}^{-1},{\beta}_3=6.90\times {10}^6N{m}^{-2}{\mathit{\deg}}^{-1},\rho =8.836\times {10}^3K{gm}^{-3},{C}_E=4.27\times {10}^2 jK{g}^{-1}{\mathit{\deg}}^{-1},{K}_1=0.690\times {10}^2W{m}^{-1}{Kdeg}^{-1},{K}_3=0.690\times {10}^2W{m}^{-1}{K}^{-1},{T}_0=298\ \mathrm{K},{H}_0=1\mathrm{J}{\mathrm{m}}^{-1}\mathrm{n}{\mathrm{b}}^{-1},{\upvarepsilon}_0=8.838\times {10}^{-12}\mathrm{F}{\mathrm{m}}^{-1},L=1. $$ Using the above values, the graphical representations of stress components, temperature change, and concentration, Rayleigh wave velocity, attenuation coefficient, specific loss, and penetration depth of Raleigh wave in the transversely isotropic thermoelastic medium have been investigated with three-phase-lag, GN-III, and LS theory of thermoelasticity and demonstrated graphically as The solid line relates to the three-phase lag theory τT ≠ 0, τv ≠ 0, τq ≠ 0, The dashed line relates to GN-III theory τT = 0, τv = 0, τq = 0, and K∗ ≠ 0, The dotted line relates to LS theory τT = 0, τv = 0, τq = τ0 > 0, and K∗ = 0. Figure 1 illustrates the deviations of tangential stress tzx with wave number. From the graph, we observe that tangential stress tzx decreases with wave number in all the three theories with a little difference in magnitude. Figure 2 shows the deviations of normal stress tzz with wavenumber. Here, we observe that the normal stress tzz increases with increase in wavenumber with a small magnitude difference in all the three theories. Figure 3 illustrates the deviations of the attenuation coefficient with wavenumber. For the TPL theory, we observe that increase in attenuation coefficient is a gradually increasing which shows that for TPL theory attenuation coefficient is directly proportional to wavenumber. For GN-III theory, the attenuation coefficient increases in the form of a curve with an increase in wavenumber, while for L-S theory, the value of the attenuation coefficient decreases with increase in wavenumber. Figure 4 shows the deviations of penetration depth with wavenumber. From the graphs, we observe that the penetration depth decreases for TPL and GN-II theories, while for L-S theory, it first increases and then starts decreasing with increase in wavenumber and hence shows the influence of three different theories on penetration depth. Figure 5 illustrates the variations of specific loss with wavenumber. From the graphs, we observe that the value of specific loss first decreases and then becomes stationary with an increase in wavenumber for TPL theory. In GN-III theory, specific loss increases with increase in wavenumber, while for L-S theory, the value of specific loss first increases and then starts decreasing after attaining a maximum value at wavenumber = 2.5. Figure 6 shows variations of concentration C with wavenumber. From the graph, we observe that the concentration C increases with increase in wavenumber for all the three theories with a little magnitude difference. Figure 7 shows variations of Rayleigh wave velocity with wavenumber. 
The Rayleigh wave velocity increases for the GN-III theory case and no change for TPL case, while for L-S theory, it first decreases and then remains the same with an increase in wavenumber. Figure 8 shows variations of temperature T with wavenumber. From the graph, we observe that the temperature T increases with increase in wavenumber for all the three theories with a little magnitude difference. Thus, we conclude that there is a significant influence of three-phase-lag GN-III and LS on the deformation wave parameter attenuation coefficients, specific loss, wave velocity, penetration depth, temperature, concentration, tangential stress, normal stress components, and of the transversely isotropic magneto-thermoelastic medium. Variations of tangential stress tzx with wavenumber Variations of normal stress tzz with wavenumber Variations of attenuation coefficient with wavenumber Variations of penetration depth with wavenumber Variations of specific loss with wavenumber Variations of concentration C with wavenumber Variations of Rayleigh wave velocity with wave number Variations of temperature T with wave number From the above study, we conclude the following: A mathematical model to study the Rayleigh wave propagation in the homogeneous transversely isotropic magneto-thermoelastic medium in the presence of mass diffusion and the three-phase-lag heat transfer has been developed, and various wave characteristics, i.e., attenuation coefficients, specific loss, wave velocity, penetration depth, temperature, concentration, tangential stress, and normal stress components have been derived and represented graphically. The secular equation of Rayleigh waves in the presence of the effect of diffusion in a transversely isotropic magneto-thermoelastic medium has been derived. The comparison of different theories of thermoelasticity, i.e., TPL, GN-III, and L-S theories are carried out. From the graphs, we observe a significant influence of three-phase-lag, GN-III and LS theories on the various wave characteristics, i.e., attenuation coefficients, specific loss, wave velocity, penetration depth, temperature, concentration, tangential stress, and normal stress components in transversely isotropic magneto-thermoelastic medium. Attenuation of waves increases, whereas the penetration depth decreases with the increase in wavenumber. The study of elastic wave attenuation particularly in transversely isotropic magneto-thermoelastic medium carries information about transversely isotropic magneto-thermoelastic medium properties and is important for the design of geophysics and seismic investigations. Significant resemblance and non-resemblance among the results under TPL, GN-III, and L-S theory of thermoelasticity have been identified. However, the problem is theoretical, but it can deliver useful information for experimental researchers working in the field of geophysics and earthquake engineering and seismologist working in the field of mining tremors and drilling into the Earth crust. 
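As a small numerical aside (not in the original paper): the characteristic scales on which the dimensionless quantities are built can be reproduced directly from the cobalt constants quoted above. The sketch assumes SI units throughout, and the printed values are rough back-of-the-envelope figures.

import numpy as np

# Cobalt constants quoted in the numerical-results section (SI units).
c11 = 3.07e11      # N m^-2
rho = 8.836e3      # kg m^-3
C_E = 4.27e2       # J kg^-1 deg^-1
K1  = 0.690e2      # W m^-1 deg^-1

# Characteristic scales from the dimensionless quantities:
# rho * C1^2 = c11  and  omega1* = rho * C1^2 * C_E / K1.
C1 = np.sqrt(c11 / rho)                  # longitudinal wave velocity, ~5.9e3 m/s
omega1_star = rho * C1**2 * C_E / K1     # characteristic frequency, ~1.9e12 s^-1

print(f"C1 = {C1:.3e} m/s, omega1* = {omega1_star:.3e} s^-1")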
δij Kronecker delta Cijkl Elastic parameters βij Thermal elastic coupling tensor T Absolute temperature T0 Reference temperature φ Conductive temperature tij Stress tensors eij Strain tensors ui Components of displacement ρ Medium density CE Specific heat aij Tensor of thermal moduli αij Linear thermal expansion coefficient Kij Materialistic constant \( {K}_{ij}^{\ast } \) Thermal conductivity ω Angular frequency μ0 Magnetic permeability Ω Angular velocity of the solid and equal to Ωn, where n is a unit vector \( \overrightarrow{u} \) Displacement vector \( {\overrightarrow{H}}_0 \) Magnetic field intensity vector \( \overrightarrow{j} \) Current density vector Fi Components of the Lorentz force τ0 Relaxation time ε0 Electric permeability δ(t) Dirac's delta function τt Phase lag of heat flux τv Phase lag of temperature gradient τq Phase lag of thermal displacement α Fractional-order derivative ξ Wavenumber bij Tensor of diffusion moduli C The concentration of the diffusion material \( {\alpha}_{ij}^{\ast } \) Diffusion parameters ηi The flow of diffusion mass vector qi Components of heat flux vector P Chemical potential per unit mass S Entropy per unit mass k Material constant \( {\omega}_1^{\ast } \) Characteristics frequency of the medium C1 Longitudinal wave velocity For the numerical results, cobalt material has been taken for thermoelastic material from Dhaliwal and Sherief (1980). Abd-Alla, A. M., Abo-Dahab, S. M., & Hammad, H. A. (2011). Propagation of Rayleigh waves in generalized magnetothermoelastic orthotropic material under initial stress and gravity field. Applied Mathematical Modelling, 35, 2981–3000. Abd-Alla, A. M., Abo-Dahab, S. M., Hammad, H. A., & Mahmoud, a. S. (2011). On generalized magneto-thermoelastic Rayleigh waves in a granular medium under the influence of a gravity field and initial stress. Journal of Vibration and Control, 17(1), 115–128. Abd-Alla, A. M., & Ahmed, S. M. (1996). Rayleigh waves in an orthotropic thermoelastic medium under gravity and initial stress. Earth, Moon, and Planets, 75, 185–197. Abouelregal, A. E. (2011). Rayleigh waves in a thermoelastic solid half space using dual-phase-lag model. International Journal of Engineering Science, 49, 781–791. Biswas, S., Mukhopadhyay, B., & Shaw, S. (2017). Rayleigh surface wave propagation in orthotropic thermoelastic solids under three-phase-lag model. Journal of Thermal Stresses, 40(4), 403–419. Chauthale, S., & Khobragade, N. W. (2017). Thermoelastic response of a thick circular plate due to heat generation and its thermal stresses. Global Journal of Pure and Applied Mathematics, 13, 7505–7527. Dhaliwal, R. S., & Sherief, H. H. (1980). Generalized thermoelasticity for anisotropic media. Quarterly of Applied Mathematics, XXXVII(1), 1–8. Ezzat, M., & AI-Bary, A. (2016). Magneto-thermoelectric viscoelastic materials with memory dependent derivatives involving two temperature. International Journal of Applied Electromagnetics and Mechanics, 50(4), 549–567. Ezzat, M., & AI-Bary, A. (2017). Fractional magneto-thermoelastic materials with phase lag Green-Naghdi theories. Steel and Composite Structures, 24(3), 297–307. Ezzat, M. A., El-Karamany, A. S., & El-Bary, A. A. (2017). Two-temperature theory in Green–Naghdi thermoelasticity with fractional phase-lag heat transfer. Microsystem Technologies- Springer Nature, 24(2), 951–961. Ezzat, M. A., El-Karamany, A. S., & Ezzat, S. M. (2012). Two-temperature theory in magneto-thermoelasticity with fractional order dual-phase-lag heat transfer. 
Nuclear Engineering and Design (Elsevier), 252, 267–277. Green, A., & Naghdi, a. P. (1992). On undamped heat waves in an elastic solid. Journal of Thermal Stresses, 15(2), 253–264. Green, A., & Naghdi, P. (1993). Thermoelasticity without energy dissipation. Journal of Elasticity, 31(3), 189–208. Hassan, M., Marin, M., Ellahi, R., & Alamri, S. (2018). Exploration of convective heat transfer and flow characteristics synthesis by Cu–Ag/water hybrid-nanofluids. Heat Transfer Research, 49(18), 1837–1848. https://doi.org/10.1615/HeatTransRes.2018025569. Kumar, R., & Gupta, V. (2015). Effect of phase-lags on Rayleigh wave propagation in thermoelastic medium with mass diffusion. Multidiscipline Modeling in Materials and Structures, 11, 474–493. Kumar, R., & Kansal, T. (2008a). Effect of rotation on Rayleigh waves in an isotropic generalized thermoelastic diffusive half-space. Archives of Mechanics, 65(5), 421–443. Kumar, R., & Kansal, T. (2008b). Rayleigh waves in transversely isotropicthermoelastic diffusive half-space. Canadian Journal of Physics, 86, 133–1143. https://doi.org/10.1139/P08-055. Kumar, R., & Kansal, T. (2009). Propagation of Rayleigh waves in transversely isotropic generalized thermoelastic diffusion. Journal of Engineering Physics and Thermophysics, Springer, 82(6), 1199–1210. Kumar, R., & Kansal, T. (2013). Propagation of cylindrical Rayleigh waves in a transversely isotropic thermoelastic diffusive solid half-space. Journal of Theoretical and Applied Mechanics, 43(3), 3–20. Kumar, R., Kaushal, P., & Sharma, R. (2018). Transversely isotropic magneto-visco thermoelastic medium with vacuum and without energy dissipation. Journal of Solid Mechanics, 10(2), 416–434. Kumar, R., Sharma, N., & Lata, a. P. (2016a). Effects of Hall current in a transversely isotropic magnetothermoelastic with and without energy dissipation due to normal force. Structural Engineering and Mechanics, 57(1), 91–103. Kumar, R., Sharma, N., & Lata, P. (2016b). Effects of thermal and diffusion phase-lags in a plate with axisymmetric heat supply. Multidiscipline Modeling in Materials and Structures(Emerald), 12(2), 275–290. Kumar, R., Sharma, N., & Lata, P. (2016c). Thermomechanical interactions due to hall current in transversely isotropic thermoelastic with and without energy dissipation with two temperatures and rotation. Journal of Solid Mechanics, 8(4), 840–858. Kumar, R., Sharma, N., Lata, P., & Abo-Dahab, S. (2017). Rayleigh waves in anisotropic magnetothermoelastic medium. Coupled Systems Mechanics, 6(3), 317–333. Lata, P., & Kaur, I. (2019a). Transversely isotropic thick plate with two temperature and GN type-III in frequency domain. Coupled Systems Mechanics-Techno Press, 8(1), 55–70. Lata, P., & Kaur, I. (2019b). Study of transversely isotropic thick circular plate due to ring load with two temperature & Green Nagdhi theory of type-I, II and III. In International conference on sustainable computing in science, Technology & Management (SUSCOM-2019), − Elsevier SSRN (pp. 1753–1767). Jaipur: Amity University Rajasthan. Lata, P., & Kaur, I. (2019c). Thermomechanical interactions in transversely isotropic thick circular plate with axisymmetric heat supply. Structural Engineering and Mechanics, 69(6), 607–614. Lata, P., & Kaur, I. (2019d). Transversely isotropic magneto thermoelastic solid with two temperature and without energy dissipation in generalized thermoelasticity due to inclined load. SN Applied Sciences, 1, 426. https://doi.org/10.1007/s42452-019-0438-z. Lata, P., & Kaur, I. (2019e). 
Effect of rotation and inclined load on transversely isotropic magneto thermoelastic solid. Structural Engineering and Mechanics, 70(2), 245–255. Lata, P., Kumar, R., & Sharma, N. (2016). Plane waves in an anisotropic thermoelastic. Steel and Composite Structures, 22(3), 567–587. Mahmoud, S. R. (2011). Effect of rotation, gravity field and initial stress on generalized magneto-thermoelastic Rayleigh waves in a granular medium. Applied Mathematical Sciences, 41(5), 2013–2032. Mahmoud, S. R. (2014). Effect of non-homogenity, magnetic field and gravity field on Rayleigh waves in an initially stressed elastic half-space of orthotropic material subject to rotation. Journal of Computational and Theoretical Nanoscience, 11(7), 1627–1634. Marin, M. (1997). Cesaro means in thermoelasticity of dipolar bodies. Acta Mechanica, 122(1–4), 155–168. Marin, M. (2009). On the minimum principle for dipolar materials with stretch. Nonlinear Analysis Real World Applications, 10(3), 1572–1578. Marin, M. (2010). A partition of energy in thermoelasticity of microstretch bodies. Nonlinear Analysis: Real World Applications, 11(4), 2436–2447. Marin, M., Baleanu, D., & Vlase, S. (2017). Effect of microtemperatures for micropolar thermoelastic bodies. Structural Engineering and Mechanics, 61(3), 381–387. Marin, M., & Craciun, E. (2017). Uniqueness results for a boundary value problem in dipolar thermoelasticity to model composite materials. Composites Part B: Engineering, 126, 27–37. Marin, M., Ellahi, R., & Chirilă, A. (2017). On solutions of Saint-Venant's problem for elastic dipolar bodies with voids. Carpathian Journal of Mathematics, 33(2), 219–232. Marin, M., & Nicaise, S. (2016). Existence and stability results for thermoelastic dipolar bodies with double porosity. Continuum Mechanics and Thermodynamics, 28(6), 1645–1657. Othman, M. I. A., & Marin, M. (2017). Effect of thermal loading due to laser pulse on thermoelastic porous medium under G-N theory. Results in Physics, 7, 3863–3872. Othman, M. I., & Said, S. M. (2018). Effect of diffusion and internal heat source on a two-temperature thermoelastic medium with three-phase-lag model. Archives of Thermodynamics, 39(2), 15–39. Othman, M. I., & Song, Y. Q. (2006). The effect of rotation on the reflection of magneto-thermoelastic waves under thermoelasticity without energy dissipation. Acta Mechanica, 184, 89–204. Othman, M. I., & Song, Y. Q. (2008). Reflection of magneto-thermoelastic waves from a rotating elastic half-space. International Journal of Engineering Science, 46, 459–474. Rafiq, M., Singh, B., Arifa, S., Nazeer, M., Usman, M., Arif, S., et al. (2019). Harmonic waves solution in dual-phase-lagmagneto-thermoelasticity. Open Physics, 17, 8–15. https://doi.org/10.1515/phys-2019-0002. Rayleigh, L. (1885). On waves propagated along the plane surface of an elastic solid. Proceedings of the London Mathematical Society, s1-17(1), 4–11. Sharma, J. N., & Kaur, D. (2010). Rayleigh waves in rotating thermoelastic solids with voids. International Journal of Applied Mathematics and Mechanics, 6(3), 43–61. Sharma, J. N., Pal, M., & Chand, D. (2005). Propagation characteristics of Rayleigh waves in transversely isotropic piezothermoelastic materials. Journal of Sound and Vibration, 284, 227–248. Sharma, J. N., & Singh, H. (1985). Thermoelastic surface waves in a transversely isotropic half space with thermal relaxations. Indian Journal of Pure and Applied Mathematics, 16, 1202–1212. Singh, B., Kumari, S., & Singh, J. (2014). 
Propagation of the Rayleigh wave in an initially stressed transversely isotropic dual-phase-lag magnetothermoelastic half-space. Journal of Engineering Physics and Thermophysics, 87(6), 1539–1547. Slaughter, W. S. (2002). The linearised theory of elasticity. Boston: Birkhausar. Ting, T. C. (2004). Surface waves in a rotating anisotropic elastic half-space. Wave Motion, 40, 329–346. No fund/grant/scholarship has been taken for the research work. Department of Basic and Applied Sciences, Punjabi University, Patiala, Punjab, India Iqbal Kaur & Parveen Lata Iqbal Kaur Parveen Lata The work is carried by the corresponding author under the guidance and supervision of PL. Both authors read and approved the final manuscript. Correspondence to Iqbal Kaur. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Kaur, I., Lata, P. Rayleigh wave propagation in transversely isotropic magneto-thermoelastic medium with three-phase-lag heat transfer and diffusion. Int J Mech Mater Eng 14, 12 (2019). https://doi.org/10.1186/s40712-019-0108-3 Transversely isotropic, magneto-thermoelastic, three-phase-lag heat transfer Wave propagation
Equation of a "tilted" sine I would like to know what's the equation of a "tilted" sine, that looks like this (no idea how to show it better). I remember first seeing this waveform in some kind of sound synthesizer, where one of the knobs for controlling shape of the sine was doing just what im looking for - gradually turning sine to sawtooth and vice versa. I tried using fourier series on a sawtooth wave, and getting a couple of first sines together, but the result doesn't have that smoothness. trigonometry graphing-functions Jaideep Khare LugiLugi $\begingroup$ try $\sin\left(x+\frac{\sin(x)}{2}\right)$ $\endgroup$ – Wouter Sep 15 '17 at 14:17 $\begingroup$ The wonderful oscilloscope and function generator combination! O' how I miss them so. $\endgroup$ – JohnColtraneisJC Sep 15 '17 at 15:00 $\begingroup$ Related math.stackexchange.com/questions/1468794/… $\endgroup$ – cgiovanardi Sep 16 '17 at 21:14 $\begingroup$ When you use a truncated fourier series to synthesize a waveform, cut the last term in half and you will get a much smoother approximation. This is similar to apodizing in optics and antenna design, whereby we try to suppress diffraction rings and sidelobes. $\endgroup$ – richard1941 Sep 23 '17 at 20:48 One can look at the derivative of the function and see that near $0$, the derivative is large and near half the period, it is almost constant. This hinted that the derivative might be a cosine raised to an even power minus a constant. The constant would need to be chosen to cancel the integral of the even power of cosine over a period. For example, $\cos^8\left(\frac x2\right)-\frac{35}{128}$: The integral is $\bbox[5px,border:2px solid #C0A000]{\frac7{16}\sin(x)+\frac7{64}\sin(2x)+\frac1{48}\sin(3x)+\frac1{512}\sin(4x)}$: We can generalize this by noting that $$ \cos^{2n}\left(\frac{x}2\right)-\frac1{2^{2n}}\binom{2n}{n}=\sum_{k=1}^n\frac{\binom{2n}{n-k}}{2^{2n-1}}\cos(kx) $$ Then we get that the integral is $$ \bbox[5px,border:2px solid #C0A000]{\sum_{k=1}^n\frac{\binom{2n}{n-k}}{k\,2^{2n-1}}\sin(kx)} $$ The case illustrated above is $n=4$. Scaling by $\frac{2^{2n-1}}{\binom{2n}{n}}$ gives that the integral of $$ \frac{2^{2n-1}}{\binom{2n}{n}}\cos^{2n}\left(\frac{x}2\right)-\frac12 $$ is $$ \bbox[5px,border:2px solid #C0A000]{\sum_{k=1}^n\frac{\binom{2n}{n-k}}{\binom{2n}{n}}\frac{\sin(kx)}k} $$ which, as $n\to\infty$, tends to $$ \sum_{k=1}^\infty\frac{\sin(kx)}k $$ which is a sawtooth wave. robjohn♦robjohn $\begingroup$ interesting intuition that of $cos^{2n}$ $\endgroup$ – G Cab Sep 16 '17 at 23:28 $\begingroup$ A beautiful series of approximates, and a very nice approach to get there. $\endgroup$ – 6005 Sep 18 '17 at 4:03 $\begingroup$ Which grapher did you use for plotting multiple graphs at the same time? $\endgroup$ – Jaideep Khare Nov 29 '17 at 5:45 $\begingroup$ @JaideepKhare: The images were done with Mathematica. $\endgroup$ – robjohn♦ Nov 29 '17 at 15:41 $\begingroup$ @robjohn OK. Thanks for information. $\endgroup$ – Jaideep Khare Nov 29 '17 at 15:42 You can try this : $$y=\sin \left( x+\dfrac yn \right)$$ Here $n \in \mathbb R -\{0\}$. Positive $n$ will "tilt" the graph left side, while negative $n$, right side. When $n=1$, $y=\sin(x+y)$ When $n=2$, $y=\sin \left( x+\dfrac {y}{2} \right)$ When $n=10$, $y=\sin \left( x+\dfrac {y}{10} \right)$ As $n \to \infty$, $y=\sin(x)$ Jaideep KhareJaideep Khare $\begingroup$ +1 Interesting function defined implicitly. I suspect the OP would prefer something explicit, even as a series if not closed form. 
$\endgroup$ – Ethan Bolker Sep 15 '17 at 14:01 $\begingroup$ If you replace the "y" inside the sine with "sin x" (which is of course the same thing when the skew factor is 0) you get a somewhat similar plot providing that factor isn't too large. Up to about sin(x + 1/3 sin x) it looks pretty good to me. $\endgroup$ – Gareth McCaughan Sep 15 '17 at 14:42 $\begingroup$ If you want to skew it more you can kinda find the fixed point by hand: go from sin(x) to sin(x+k sin x) to sin(x + k sin(x + k sin x)) and so on. In the limit you get exactly the function defined implicitly here, unless I'm confused. $\endgroup$ – Gareth McCaughan Sep 15 '17 at 14:43 $\begingroup$ To my eye: 0 iterations are good at k=0, 1 iteration is good for k<=1/3, 2 iterations are good for k<=1/2, 3 iterations are good for k<=2/3. Approximately. $\endgroup$ – Gareth McCaughan Sep 15 '17 at 14:48 $\begingroup$ @GaurangTandon No logic, it's just free internet (Thanks to JIO), free time, Desmos grapher, and just a crazy idea, which came out of nowhere. $\endgroup$ – Jaideep Khare Jan 20 '18 at 6:34 In the old signal generators, the section devoted to produce square and sawtooth waves, was commonly realized starting from a pulse train generator, in which the frequency and the pulse duration (duty cycle) were tunable. From that, by (analogic) integration, and clipping a series of triangular and trapezoidal waves could be obtained. The knob of your synthetizer was probably acting on a filter applied to a sawtooth wave. A low-pass (e.g. RC, or higher degree) filter should do the job. The output of a simple RC filter is given by the differential equation $$ v_{{\rm in}} (t) - v_{{\rm out}} (t) = RC{d \over {dt}}v_{{\rm out}} (t) $$ When the input is a discontinuous function, and since we are interested in the long term component of the output (the particular solution to the ODE above), it is not practical to apply the differential equation above directly, and we have better go by expressing the $v(t)$'s through a formal series, in practice the Fourier series. A sawtooth wave, with duty cycle $\tau$ $$ v_{{\rm in}} (t) = \left\{ {\matrix{ {2/\tau \,t} & {0 \le t < \tau /2} \cr {{1 \over {1 - \tau }}\left( {1 - 2t} \right)} & {\tau /2 \le t < 1 - \tau } \cr {2/\tau \,\left( {t - 1} \right)} & {1 - \tau /2 \le t < 1} \cr } } \right. $$ which is the integral of a pulse train with same $\tau$, has the Fourier series expansion $$ v_{{\rm in}} (t) = {{2\tau } \over {1 - \tau }}\sum\limits_{1\, \le \,n} {{{\sin \left( {\pi n\,\tau } \right)} \over {\left( {\pi n\,\tau } \right)^{\,2} }}\sin \left( {2\pi nt} \right)} $$ Without resorting to complex representation and impedance, it is quite simple to introduce a generic component of the input into the differential equation and deduce that we will obtain $$ v_{{\rm out}} (t) = {{2\tau } \over {1 - \tau }}\sum\limits_{1\, \le \,n} {{{\sin \left( {\pi n\,\tau } \right)} \over {\left( {\pi n\,\tau } \right)^{\,2} \sqrt {1 + \left( {2\pi nRC} \right)^{\,2} } }}\,\sin \left( {2\pi nt - \arctan \left( {2\pi nRC} \right)} \right)} $$ Limiting the sum to the first few components (10 in this example) and plotting we get Finally, a comment on why the OP did not get a smooth result by taking the first few components of a sawtooth. Doing that corresponds to apply an ideal low-pass filter, which is not a physically realizable device. 
The RC filter is instead a physical device which, in spite of not being perfect, but introducing some attenuation and shift of phase in the frequency domain, provides the "smoothness" of the output signal. The plot below shows the sum of the first $3$ harmonics from a sawtooth, and the corresponding first $3$ harmonics from the RC filter output. G CabG Cab $\begingroup$ This answer is good because it explains what's really going on. It would be even better if you explained to the poster (and for anyone else who stumbles upon this question) what a low-pass filter is. $\endgroup$ – MackTuesday Sep 16 '17 at 2:35 $\begingroup$ @MackTuesday: I did not expect so much interest in such an old analogic (..!) "relic". Thanks for signalling that. I expanded (compatibly with space here) my answer: wish it is more explicit now. $\endgroup$ – G Cab Sep 16 '17 at 17:33 $\begingroup$ I'd remark that a first-order filter like RC can't actually get very close to a sinusoidal signal. Higher-order filters can. See my answer for a couple of signal examples with different filter coefficients. $\endgroup$ – leftaroundabout Sep 16 '17 at 17:55 $\begingroup$ @leftaroundabout: but a "tilted" sine is not a pure sinusoidal signal, it contains harmonics ! $\endgroup$ – G Cab Sep 18 '17 at 22:37 $\begingroup$ @GCab sure, just, the question said "gradually turning sine to sawtooth and vice versa". $\endgroup$ – leftaroundabout Sep 19 '17 at 8:32 I suggest taking $$\frac1t\tan^{-1}\frac{t\sin x}{1-t\cos x}$$ with $-1\leq t\leq+1$ where the two extreme values of $t$ give sawtooths in opposite directions, and $t=0$ is simply a sinusoid. Intermediate values of t give nice smooth functions that do indeed look like "sheared sinusoids". (When $t=0$ exactly, taking the formula literally gives you 0/0. The limit as $t\rightarrow0$ is $\sin x$, and for values of $t$ very close to 0, it may be numerically better to take a few terms of the series you'll find below under the heading "Motivation".) If you would like to play with this, you can do so e.g. by going to https://www.desmos.com/calculator, pasting in this formula \frac{1}{t}\arctan \left(\frac{t\sin \left(x\right)}{1-t\ \cos \left(x\right)}\right), and setting the permitted range in the slider for $t$ to -1..+1. Here are plots for t=0.25, t=0.5, t=0.75, and t=0.95. A sawtooth wave is given by $\sum\frac1n\sin nx$. So we might want something that interpolates between (1,0,0,...) at 0 and (1,1/2,1/3,...) at 1. Here's an obvious way to do it: take $\sum \frac{t^{n-1}}n\sin nx$ where $t=0$ for an ordinary sinusoid and $t=1$ for a sawtooth. It is not surprising that this has a nice closed form (think of the sine as the difference of two complex exponentials; then our series is the difference of two complex logarithms); it turns out to be the formula above. (Thanks to Bob Hanlon in comments for supplying this; the difference between what he wrote and what I have above is because of an off-by-one error in an earlier version of this answer.) As well as being the motivation for the answer above, this series may be of some practical use. At $t=0$ the closed-form formula is singular; when $t$ is nonzero but very small it's possible that you may do better numerically to use a few terms of the series rather than the closed-form formula. Relationship to some other answers: Our function is obtained from a sawtooth by multiplying the terms of its Fourier series by $(\dots,t^2,t^1,t^0,?,t^0,t^1,t^2,\dots)$. 
(The series is two-sided; the $\sin nx$ term is a combination of $e^{inx}$ and $e^{-inx}$, which both need to be multiplied by $t^{n-1}$; we can replace the "?" with anything we like because the constant term in the Fourier series we're working with is zero.) Pointwise multiplication of Fourier series equals convolution of functions. One choice of that "?" makes the function we're convolving with equal $2\frac{\cos x-t}{1+t^2-2t\cos x}$; note that when $t=0$ this is just a sinusoid and that as $t->\pm1$ it approaches a delta function. So we can, indeed, think of this as starting with a sawtooth function and applying a linear filter that varies smoothly between doing nothing (convolving with a delta function) and smoothing everything into perfect sinusoids (convolving with a sinusoid). Gareth McCaughanGareth McCaughan $\begingroup$ It clearly does (the series is basically the one for log(1-z) where z gets a real part from the exponential decay and an imaginary part from the sines. I have to be AFK for a bit so won't expand on this in my answer immediately. $\endgroup$ – Gareth McCaughan Sep 15 '17 at 17:35 $\begingroup$ The closed form of the series assuming 0 < t < 1 is ArcTan[(t Sin[x])/(1 - t Cos[x])] $\endgroup$ – Bob Hanlon Sep 15 '17 at 17:47 $\begingroup$ Ah, this is $1/t$ times the angle between the $x$-axis and the line $OP$, where the point $P$ moves along a circle of radius $t$ centered at $(1,0)$. $\endgroup$ – user856 Sep 15 '17 at 22:30 $\begingroup$ How could 0 could give a sinusoid in this formula? 1/0? $\endgroup$ – Lugi Sep 18 '17 at 16:10 $\begingroup$ As $t\rightarrow0$ the function $\rightarrow\sin x$. I'll edit the answer to clarify that a bit. $\endgroup$ – Gareth McCaughan Sep 18 '17 at 16:49 Let's free-hand Fourier transform! First, we sketch what we want on top of an actual sine function. Draw one period. We notice that in that period, the sine function is first too low, then too high, then too low, then too high again. That means that the difference between what we have and what we want has two tops and two bottoms, kindof like $\sin(2x)$. So we add that. This time you have to plot the graph to see what happens. Let's try with $\sin(x) + 0.3\sin(2x)$. OK, we're getting the tilt we want, but on the way down the graph has this weird wavey thing. Back to the drawing board. Now sketch the graph you had together with the graph you want. At least when I did this, the graphs crossed one another a number of times. $6$ to be exact, alternating which one is largest and which is smallest. Thus the difference has three maxes and three mins in this period, kindof like $\sin(3x)$. So we try to add that. Now, with $\sin(x) + 0.3\sin(2x) + 0.65\sin(3x)$ it's starting to look good. You can probably tweak the coefficients here a bit to get something even better. Just keep going until you're happy. You can also add higher frequency terms if you want, which will give you more tilt. But it will get more difficult to gauge exactly how to compensate out the high frequency waving. Alternatively, you could try to sketch the derivative you want, and do the same with that, only with cosines. Higher frequency waves will be more prominent then, and hopefully easier to correct for. ArthurArthur $\begingroup$ Love this explanation. All too often I see students simply cranking the handle without really understanding what they are doing. And this is great for the more visually inclined learner's. 
$\endgroup$ – AlwaysLearning Sep 16 '17 at 17:16 You can look into the Clausen function which is a class of functions with this form and which has been implicitly described by another answer. NijNij I think G Cab's answer is the best. Wouter's comment is also good because it invokes phase distortion and modulation, which are common techniques used in synthesizers. As for my contribution, a professional developer of software synthesizers (who isn't me) shared an interesting idea on the forums over at KVR Audio. Go to https://www.desmos.com/calculator. Copy and paste the following equation.* ((a+\cos x)\cos n+\left(b\sin x\right)\sin n)/\sqrt{(a+\cos x)^2+\left(b\sin x\right)^2} Click on the "all" button next to "add slider". For the a and b sliders, change the range from [-10,10] to [0,1]. Now you can fiddle with the kinds of waveforms this scheme generates. For your case, try using $(a,b,n) = (0.9,1.0,-1.6)$. The phase is off by 90 degrees but that's sonically irrelevant by itself. * This technique is inspired by a geometric construct. The equation in human-readable form is $$\frac{(a+\cos x)\cos n+b\sin x\sin n}{\sqrt{(a+\cos x)^2+\left(b\sin x\right)^2}}$$ MackTuesdayMackTuesday $\begingroup$ Thanks for the appreciation, and thanks for providing this interesting waveform. $\endgroup$ – G Cab Sep 16 '17 at 17:55 http://nbviewer.jupyter.org/gist/leftaroundabout/725e015c9ea7a2f61cd8f2eec4028dff As G Cab said, a synthesizer generates such a waveform by low-pass filtering a sawtooth signal. To describe that mathematically: we start out with a sawtooth $$ s(t) = \operatorname{round} (t) - t. $$ The low-pass filter is typically a kind of state variable filter. These are named so because there is a "state variable" that's governed by an ordinary differential equation dependent on the incoming signal. The simplest form, that can be build with just a resistor and a capacitor, can be written (I'm using PDE-convention notation for derivatives, i.e. $r_t = \frac{\partial r}{\partial t}$) $$ r_t(r,t) = \eta \cdot (s(t) - r(t)) $$ Because $r_t$ depends Lipschitz-continuously on $r$, the Picard-Lindelöf theorem tells us that the solution $r : \mathbb{R}\to\mathbb{R}$ is uniquely determined by this differential equation, in other words, the above equation for $r_t$ also defines $r(t)$. But how does it actually look? Well, we can use numerical techniques to calculate an arbitrarily good approximation to the exact solution. A common method is the fourth-order† Runge-Kutta solver. Implemented in Haskell: rk₄ :: (Time -> ℝ -> ℝ) -- the function r_t(t,r) -> Time -- time-step for the solver -> [(Time, ℝ)] -- sequence of solution snapshots [r(t)] rk₄ f h = go 0 0 where go t y = (t,y) : go (t+h) (y + h/6*(k₁ + 2*k₂ + 2*k₃ + k₄)) where k₁ = f t y k₂ = f (t + h/2) (y + h/2*k₁) k₃ = f (t + h/2) (y + h/2*k₂) k₄ = f (t + h) (y + h*k₃) If we evaluate this for the above filter $r$ with a couple of different values for $\eta$, we get this result: toRenderable $ forM [0 .. 8] $ \η -> signalPlot ("η = "++show η) $ rk₄ (\t r -> η * (s t - r)) 0.01 Well, this does appear to behave broadly related to the function you've described, but it's hardly the same thing. In particular, the result you get for low $\eta$ doesn't actually look much like a sine, it's still pretty spiky on the negative side, but already has a strongly attenuated amplitude. And indeed that's also the case for a real analogue synthesizer: those never generate exact sine signals, only approximations‡. 
Im my example the deviation is extreme because I've used a very simple filter, which has only order 1. I.e., when I make $\eta$ small enough to get rid of the transient high-frequency components (the "edges"), then I must also sacrifice a lot of the base sinusoidial part. With a suitable second-order filter, we can get a better result. That requires an additional variable $u$ to be solved: toRenderable $ forM [0, 5 .. 30] $ \η -> signalPlot ("η = "++show η) $ second fst <$> rk₄ (\t (r,u) -> (u, η^2 * (s t - r) - η*u)) 0.01 Still not perfect, but a really well-designed filter can get you very close to a real sine. Filter design used to be a big research topic. Nowadays it tends to be more practical to use digitally sampled signals, then you can just perform a Fourier transform, take whichever frequency components you want, and zero out or attenuate the rest to any amplitude you like. †RK4 is arguably a bit overkill here. Because this filter is linear, a discretised form can actually be calculated much more efficiently and accurately as an IIR, using linear filter theory. ‡If the goal is really only to generate a sine, then you can actually get much more out of a simple 2nd-order filter than in my example, by using high resonance peak: you start out with only a very weak rectangular signal that's feeding a resonant LC circuit at the right frequency so the the output has a much higher amplitude. But this requires that the filter and oscillator are exactly in tune – in practice, they're generally back-coupled to a phase-locked loop, i.e. the oscillator is actually part of the filter resonance. leftaroundaboutleftaroundabout Not the answer you're looking for? Browse other questions tagged trigonometry graphing-functions or ask your own question. secret formula for the "sin" wave with variable rising/falling edge Equation of a "tilted" sphere Asymmetric periodic function? Efficiently generate a sawtooth with rounded peaks. Need Sine form of Cotangent equation Rotate Sine Wave Equation by $69^\circ$ Sine Graphs Equation Meaning Solving an equation involving the sine function Replicating cosine/sine graph, but with reflections? Equation with sine and cosine - coefficients How to smooth sine-like data
A coupled technological-sociological model for national electrical energy supply systems including sustainability
Manfred Benthaus ORCID: orcid.org/0000-0003-1035-44701
Energy, Sustainability and Society volume 9, Article number: 50 (2019)
Global trends in the development and use of electricity utilities and assets are practically irreversible. In industrialized nations, capacity factors have grown so large that users may expect freely available electrical potential energy at all times and in almost all locations. Economically capitalizing on this trend means maximizing energy provision and use to boost gross domestic product growth rates. Electricity is now a basic indicator of social development; it is to the cultural-technological dimension what breathing air is to the physiological-biological dimension, the implication being that sustainable development of provision systems has become a matter of international concern. This article presents a decision basis for the design of sustainable national electrical energy supply systems, incorporating country-specific boundary conditions in the form of user requirements to be specified by users. The basis is a solution space of technologically possible systems, obtained by combining generalized user requirements and physical limitations to generate the solution states. As all technological options for the system are brought under consideration, this approach represents a comprehensive comparative analysis. The decision process ensues by assigning to each solution state a set of (newly defined) system risk factors. Particular consideration is given to evaluating the system's ability to meet the user requirements, i.e., interruption-free provision. The central benchmark is the technological-economic availability. From this is obtained a sustainability boundary, the boundary between quantifiable and unquantifiable economic loss potentials. This article deliberately avoids referencing specific technological solutions, with the justification that the basis of the user's decision should be independent of technological considerations. The sole exception is a reference to the currently used technology, which forms the starting point.
Existing systems of national electrical energy supply use essentially similar technologies. Each system can be decomposed into the structure of the power stations and of the accompanying grid network. Though the current technology has unquestionably contributed to economic prosperity, it carries a dominant, unquantifiable systemic risk, i.e., of blackouts. Physically speaking, however, for a general system, the risk of blackouts is not inherent and may be avoided. The principal motivation of this article is to incorporate this unquantifiable risk into a new strategy for comparing possible designs for future systems in terms of their sustainability.
Electrification as a quantifiable social benefit
Electrical energy in a form available to humans does not occur significantly in nature and must therefore be provided artificially. By the end of the nineteenth century, humans had amassed sufficient scientific knowledge to develop the national electrical energy supply systems (EESS). Very fundamental technological innovations were needed to implement a functional system.
Electrical energy supply
The implementation and sustainable operation of large-scale technological systems depends on majority acceptance by the society in which they are intended to operate.
Below, a decision model is developed according to [1], initially consisting of two independent but interacting levels (Fig. 1). Model of a social approach to large technological systems One level decides whether the system has social utility: whether this particular means of electrical energy provision has a positive effect on human well-being, utilitarianism here being the goal-oriented ethics of choice [2]. On the other level, social acceptance is considered, the criteria being investment costs incurred for technology, consequences of technological risk, and consumption of ecological reserves [3]. The levels in the model are not equal. For any system, utility needs be sufficient for consideration, whereas acceptance of the technology is necessary (example: nuclear energy in Germany). First to be considered is the primary level utility. As early as 1920, the notion of an EESS was semantically raised to an instrument of government: "Communism - that is Soviet power plus electrification of the whole country" [4]. The German electricity industry's development in the years from 1890 to 1950 has been examined by [5]. Across the multi-decadal analysis, they arrive at similar results for different forms of government with their respective political forces and currents. The first working hypothesis is as follows: The use of electrical energy is independent of the form of government The utility of electrification is economically significant Economic importance To test the above working hypothesis about energy consumption, a country's gross domestic product (GDP) is an international economic benchmark that depends on the national consumption of electrical energy (Fig. 2) [6]. GDP and electricity consumption in standardized units for a sample of around 100 countries (2015 data). (Zone I 0–2500 kWh. Example countries: Algeria, India, Jordan, Peru, Uzbekistan. Zone II 2500–5000 kWh. Example countries: Brazil, Croatia, Latvia, Poland, Turkey. Zone III over 5000 kWh. Example countries: Germany, France, Kuwait, Norway) The calculated regression function (RF I) shows that rising GDP per capita is characterized by superexponential increase in electricity use. The smallest GDPs per capita in the sample (no use of electrical energy) lie in the region of 1000 USD per capita, and the largest GDPs (for which energy use is unbound) are around 100,000 USD per capita. As energy use rises, its direct influence on GDP decreases sharply. Even in the saturation zone (zone III), though, there is exponential increase. Figure 2 also shows that as energy use increases, the spread of individual countries' GDP per capita increases significantly, which is reasonably assumed to originate from country-specific factors independent of electrical energy use. Comparison of the countries with high and low GDP per capita indicate that electrical energy use has a social utility in the utilitarian sense, which, on considering the dataset as a whole, appears to be independent of social structure. We therefore accept the first working hypothesis. 
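To make the shape of a saturating GDP–electricity relation such as RF I concrete, the following short Python sketch fits the same kind of curve to hypothetical country data. The functional form, the 6000 kWh scale constant, and the simulated scatter are illustrative assumptions only; the paper states only the two anchor values (about 1000 USD per capita without electricity use and about 100,000 USD per capita at unrestricted use), not the exact equation of RF I.

```python
# Sketch (hypothetical data): fitting a saturating relation between per-capita
# electricity use and GDP per capita, in the spirit of RF I. The functional
# form and the 6000 kWh scale constant are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

kwh = rng.uniform(0, 15_000, 100)                        # per-capita electricity use [kWh/a]
gdp_true = 1_000 * 100 ** (1 - np.exp(-kwh / 6_000))     # floor 1 kUSD, ceiling 100 kUSD
gdp = gdp_true * rng.lognormal(0.0, 0.3, 100)            # country-specific scatter (zone III spread)

# log10(GDP) is linear in the transformed regressor x = 1 - exp(-kwh / 6000)
x = 1 - np.exp(-kwh / 6_000)
slope, intercept = np.polyfit(x, np.log10(gdp), 1)

print(f"fitted GDP floor   ~ {10**intercept:8.0f} USD per capita (no electricity use)")
print(f"fitted GDP ceiling ~ {10**(intercept + slope):8.0f} USD per capita (saturated use)")
```

The fitted floor and ceiling recover the two anchor values, while the lognormal noise term mimics the growing spread of individual countries seen in zone III.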
Key point 1: Provision of electrical energy is associated with utility to national economies that is independent of the type of society Key point 2: The GDP per capita that is attainable without the use of electrical energy is about 1% of that estimated with unrestricted use of energy and national electrification is therefore economically significant Current electrical energy supply In 2013, the volume of electrical energy used worldwide was approximately 20,000 billion kWh, with an average growth of 400 billion kWh per year since 1980 [7]. To provide these amounts of energy, a single technology is currently being used: a combination of centralized large-scale electricity generation plants and comprehensively connected large-scale networks. Access of a country's population to an electricity grid and the associated opportunities has direct impact on GDP. Figure 3 depicts the situation as described by [6, 8]. Normalized GDP and relative electricity grid access for around 100 countries. Grid access data from 2014. Same country sample as Fig. 2 The new regression function (RF II) is a standard exponential function over the defined range for relative electricity grid access and GDP. As in RF I, the minimum point is 0% access and USD 1000 GDP per capita. RF I increases most in zone I (0–2500 kWh), and RF II reaches 100% access before the GDP in RF I flattens out. From this, it can be concluded that worldwide use of electrical energy takes place via access to electricity grids. This is supported by considering a sample from the group of countries with 100% network access.Footnote 1 Of global energy consumption, their share alone (2013) is more than 80% [6, 9]. These energy supply systems are based on Faraday's law of induction; specifically, they are three-phase systems. Existing systems have no significant storage of electrical energy; therefore, the demand must be generated "on-time." The technological basis of this is power-frequency control, with a common operating frequency (e.g., 50 Hz) as the central control variable. The fundaments are described in [10, 11]. The task of electricity grids is to connect all users with all producers and to transmit the required energy with as little loss as possible. Technologically, this is a major challenge. Three-phase technology allows the implementation of a fine grid structure that is differentiated by voltage level. The national grid is at the highest voltage. It is a functional link to the lower-level networks and is a significant "electrical network node" in the system. In Europe, the national high-voltage grids have been merged to form an international interconnection grid. This integration has lead towards a European copper plate, with the largely political goals of increasing the physical exchange of electricity and improving technological supply reliability. One example is the UCTE grid area,Footnote 2 which consists of the coupled three-phase networks of 24 countries in central Europe [12]. In this grid region, 440 million users are supplied with electricity: an economic power of 13,000 billion USD (annual figures for 2016). The initiator is the European Union, having outlined the creation of an internal electricity market [13]. Supply reliability and qualityFootnote 3 is an important system descriptor, for which multiple technical indexes have been devised. 
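One simple member of this family of indexes converts the accumulated annual interruption time per customer into an availability figure. A two-line arithmetic check, anticipating the 170-minute European average discussed in the next paragraph:

```python
# Availability corresponding to a given accumulated interruption time per year.
# The 170-minute value is the benchmarking average quoted in the following paragraph.
minutes_per_year = 365 * 24 * 60          # 525,600 min (8760 h)
interrupted_min = 170.0                   # accumulated interruption per customer and year

availability = 1.0 - interrupted_min / minutes_per_year
print(f"availability = {availability:.4%}")   # -> 99.9677 %, i.e. about 99.97 %
```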
The SAIDI (System Average Interruption Duration Index) belongs to a group of internationally recognized indicators and describes "...the average interruption in supply per connected final consumer within a calendar year..." [14]. This is a regulatory determination of the supply situation based on past experience. In a 2014 international benchmarking, the SAIDI values were calculated for 27 European countries [15]. The average total annual interruption was 170 min for the 8760 h of a normal yearFootnote 4; the average power availability for the end customer is 99.97%. This result can be interpreted favorably, but it raises several questions. Have technological choices led to excessive costs for the users? Are there cheaper technologies with equivalent availability? Is it appropriate to base indicators on past performance? Direct electrical parameters such as short-circuiting [16] also have an effect on the supply quality. The aim is to create a system that is fair for all users, with the highest possible performance. This is a mounting challenge, however, with a growing gridFootnote 5 and a change in generation technology, stemming from the move to inverter-based sources and away from direct feed-in via rotating masses [17].Footnote 6 Blackouts have the most immediate effect on availability [18]. These occur when fluctuations in the power-frequency control exceed or fall below specified frequency values. In the UCTE grid area, these limits are 50 Hz ± 2.5 Hz [19]. Outside the range, no power plants remain on-grid and the national EESS is functionless. Blackouts can have a range of causes—extensive, sustained blackouts are caused by software and/or hardware irregularities—and risk is inherent to the system, as explored in later sections. Some countries, such as Switzerland, treat blackouts as national hazards [20], paralleling the earlier quotation from Lenin. Opportunity-risk profile Grid technology successfully provides electrical energy to users, not only a single national economy but also the world over, and has led to significant global economic growth. Despite this, the inherent risks can be seen reflected in the supply quality of any one of these grids. Large-scale, sustained functional losses are possible at any time and can have considerable economic impact: "As an Austrian study has found, for an Austria-wide power failure of 24 hours, damage of at least 1 billion Euros, likely several billion Euros" [21]. The opportunity-risk profile of the technology currently used worldwide thus diverges wildly. The economically quantifiable positive effect on national GDP is contrasted with non-quantifiable risk. Key point 3: Large-scale production plants in combination with large-scale grids are the central technologies of EESS worldwide and are drivers of positive economic development Key point 4: Inherent systemic risks can at any time result in large-scale failures of unlimited duration User requirements for a cellular grid The starting point for the analysis is demand and use. First, EESS user requirements are formulated qualitatively. The requirements are then described quantitatively in cellular structures. 
Seven ad hoc system requirements are formulated that give structure to the "utility" level of the social acceptance model: A utilitarian approach is taken, due to the large number of users; Electrical energy consumption is meant in the anthropogenic sense; The location of energy use is freely selectable by each user; The time profile of energy use is freely selectable by each user; Each user is limited to a freely selectable, fixed maximum energy use; At any time, energy use equals energy demand; Supply costs are economically minimized. Later, an ancillary requirement will be derived. Energy functions The energy demand function ED with i, i0 ∈ I, i ≤ i0; t ∈ T, and xi ∈ N ⊂ R3 is defined as $$ {\displaystyle \begin{array}{l}{E}^D:\left(T\ x\ N\right)\to I{R}^{+}\\ {}\kern1.32em \left(t,{\boldsymbol{x}}_i\right)\to {E}^D\left(t,{\boldsymbol{x}}_i\right).\end{array}} $$ Location vectors xi uniquely define the ith user location and the time component of individual user behavior. The function can be written as $$ {E}^D\left(t,{\boldsymbol{x}}_i\right)={E}_{\mathrm{max}}^D\ \left({\boldsymbol{x}}_i\right)\cdot {f}_i^D(t), $$ (short form) $$ {E}_i^D(t)={E}_{\max, i}^D\cdot {f}_i^D(t). $$ with \( {E}_{\max, i}^D \), the maximum energy and \( {f}_i^D:T\to \left[0;1\right] \) a differentiable time function. The function applies to the primary "utility" level. There exists no natural energy source with the system requirements, meaning energy must be generated and provisioned anthropogenically. For this, there is the energy supply function ES: $$ {\displaystyle \begin{array}{l}{E}^S:\left(T\ x\ N\right)\to I{R}^{+},\\ {}\kern1.2em \left(t,{\boldsymbol{x}}_i\right)\to {E}^S\left(t,{\boldsymbol{x}}_i\right),\end{array}} $$ $$ {E}_i^S(t). $$ The variables here have the same meanings as in Eqs. 1 and 2, and the function should be analytic for each location in the time variable. The supply energy function is part of the secondary "technology" level. The short form will be used in the technological function in Technological Possibility Solution Space. Microcell The functions in Eqs. 1 and 3 operate on different levels of the model. Equating them mathematically, we can define the initial balance between them. For each location, $$ {E}_i^D(t)={E}_i^S(t)\to {\int}_t{E}_i^D(t) dt={\int}_t{E}_i^S(t) dt. $$ This defines an energy microcell (microcell for short), the smallest energy unit in a national EESS. It incorporates user requirements 2, 3, 4, 5, and 6. Macrocell To get a handle on the more complex system states, it is greatly helpful to bundle the microcells.Footnote 7 Total supply and demand energies for an ensemble can be calculated for a system with j0 microcells using Eq. 4: $$ {E}_{j_0}^D(t)={\sum}_{j=1}^{j_0}{E}_j^D(t)\ \mathrm{and}\ {E}_{j_0}^D(t)={\sum}_{j=1}^{j_0}{E}_j^S(t). $$ The energy balance equation follows $$ {E}_{j_0}^D(t)={E}_{j_0}^S(t)\to {\int}_t{E}_{j_0}^D(t) dt={\int}_t{E}_{j_0}^S(t) dt. $$ This defines the energy-economic macrocell (or macrocell for brevity). National macrocell If the ensemble is extended to all i0 usersFootnote 8 of a national EESS, a national energy macrocell (or national macrocell) is created. This has the following energy relation. 
$$ {E}_{\mathrm{Nat}.}^D(t)={\sum}_{i^{\hbox{'}}=1}^{i_0^{\hbox{'}}}{E}_{i^{\hbox{'}}}^D(t)+{\sum}_{j=1}^{j_0}{E}_{j_0}^D(t)={\sum}_{i^{\hbox{'}}=1}^{i_0^{\hbox{'}}}{E}_{i^{\hbox{'}}}^S(t)+{\sum}_{j=1}^{j_0}{E}_{j_0}^S(t)={E}_{\mathrm{Nat}.}^S(t), $$ $$ {E}_{\mathrm{Nat}.}^D(t)={E}_{\mathrm{Nat}.}^S(t)\to {\int}_t{E}_{\mathrm{Nat}.}^D(t)\ dt={\int}_t{E}_{\mathrm{Nat}.}^S(t)\ dt. $$ The energy demand term in the microcells (the output of Fig. 4) is invariant for different system designs. Freedom in the design of the system is captured in the supply term. Elementary energy cell at location xi Key point 5: Formulation of qualitative user system requirements Key point 6: Definition and properties of supply and demand energy functions Key point 7: Definition and properties of energy micro- and macrocells and the national macrocell Physical limitations and possibilities The primary "utility" level in the social acceptance model is now somewhat structured and ready to be coupled to the secondary "technology" level. A mezzanine level is introduced to define the coupling (Fig. 5). It should highlight the scientific possibilities and limits that affect both user requirements and possible technological solutions. Bilateral feedback is necessary between primary and mezzanine levels (cf. Electrification as a Quantifiable Social Benefit, the primary level takes precedence). One-way coupling suffices between mezzanine and secondary levels. Extension of the model by structuring the coupling between primary and secondary levels with a mezzanine level The analysis proceeds with aspects from classical field theory, specifically from classical electrodynamics. The limits of physical possibility for a national electrical energy supply system are determined by the principle of relativity, Maxwell's equations, the associated conservation laws, and macroscopic electromagnetism and the propagation of electromagnetic waves. Detailed treatments of these topics can be found in [22,23,24]. Here, relevant principles are established such as are necessary here. The principle of relativity The most important consequence of relativity for the present case is the finite propagation speed of forces and information (the speed of light in vacuum). It follows that all dynamic physical systems have time lags so that the balance requirement in Eq. 4 yields the relation. $$ {E}^D\left(t,{\boldsymbol{x}}_i\right)\cong {E}^S\left(t\hbox{'},{\boldsymbol{x}}_i\right) $$ $$ t'=t+\varDelta\ t. $$ This formulation also expresses the weighting of the primary and secondary levels: the time shift effect is associated to the supply energy.Footnote 9 Conservation laws Energy and momentum are conserved in isolated systems, in this case a system of charged particles and electromagnetic fields. Poynting's theorem [25] is a statement of energy conservation and is given here in the form of a balance equation $$ \frac{\partial u}{\partial t}+\boldsymbol{\nabla}\cdot \boldsymbol{S}=-\boldsymbol{J}\cdot \boldsymbol{E} $$ This equation also defines the Poynting vector, which describes the energy flux density of the electromagnetic field. $$ \mathbf{S}=\mathbf{E}\times \mathbf{H} $$ In the case of anthropogenic energy transmission by means of electromagnetic waves, radiation losses are of secondary importance due to the low field frequencies. The signal velocity itself is already maximized: it is the speed of light [26]. 
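As an illustrative aside (not part of the original derivation), the cell bookkeeping of Eqs. 2, 5 and 6 summarized in key points 6 and 7 can be written out in a few lines. The three demand profiles and maximum energies below are hypothetical; the sketch only shows how microcell demands are formed and summed into a macrocell, with supply forced to track demand per cell.

```python
# Minimal bookkeeping sketch for Eqs. 2, 5 and 6: each microcell demand is
# E_i^D(t) = E_max,i * f_i^D(t) with f_i^D in [0, 1]; a macrocell is the sum over
# its member microcells, and the balance forces supply to equal demand.
import numpy as np

t = np.linspace(0.0, 24.0, 97)                  # one day at 15-min resolution [h]
e_max = np.array([2.0, 3.5, 1.2])               # E_max,i of three microcells [kWh] (assumed)

def f_profile(t, phase):
    """Differentiable demand shape in [0, 1] (assumed form)."""
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * (t - phase) / 24.0))

demand = np.array([e * f_profile(t, 6.0 + 3.0 * i) for i, e in enumerate(e_max)])
supply = demand.copy()                          # Eq. 4: per-cell balance E_i^D = E_i^S

macro_demand = demand.sum(axis=0)               # Eq. 5
macro_supply = supply.sum(axis=0)
assert np.allclose(np.trapz(macro_demand, t), np.trapz(macro_supply, t))   # Eq. 6
print(f"daily macrocell energy turnover: {np.trapz(macro_demand, t):.2f} kWh")
```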
Conservation of electric charge is encoded in the continuity equation [27] $$ \frac{\partial \varrho }{\partial t}+\nabla \cdot \mathbf{J}=0 $$ Spatial distances Spatial remoteness in the sources and sinks of electric charge is not unusual in electrodynamics; indeed, the function of electromagnetic fields is to connect them. The intervening medium determines the speed of light c and, therefore, the signal velocity. The associated time shift Δt is directly proportional to the separation Δx, with $$ \varDelta t=\frac{\mid \varDelta \boldsymbol{x}\mid }{c}. $$ This inevitable delay contradicts the requirement on the microcells in Eq. 4 that supply and demand energies should have precisely zero time shift. Parallel circuits Electricity sources are here assumed to be current sources. Ideal current sources feed current into a connected network independently of the load, or equivalently, can continuously draw from an infinitely large energy reserve without disruption. In reality, ideal sources must be forgone for real sources, with time shifts and finite energy reservoirs [28]. As currents must be superimposed without violating charge conservation (Eq. 11), two-terminal parallel networks must be linear. When current sources (ideal or real) are connected in parallel, their currents add up to a new total current (Kirchhoff's node law), tantamount to a new equivalent current source. Key point 8: Finiteness of signal propagation in physical systems Key point 9: Compliance with charge, energy, and momentum conservation laws Key point 10: Option 1 is to allow spatial remoteness of source and sink; option 2 is to allow parallel connection of sources Technological Possibility Solution Space So far, in the social acceptance model, some structure has been given to the primary level, a mezzanine level has been introduced and structured, and the resulting interactions developed. In this section, the secondary level is given some features, resulting in a set of technologically possible solutions. Supply energy The structure of the secondary "technology" level is determined by the supply energy ES. It is analytical in the variable t and may therefore be expanded in a Taylor series (cf. User Requirements for a Cellular Grid, Eq. 3). For each location of xi, $$ {E}_i^S\left({t}_i^{\hbox{'}}\right)={E}_i^{ST}(t)+\frac{\partial {E}_i^{ST}(t)}{\partial t}\cdot \varDelta {t}_i+\frac{1}{2}\ \frac{\partial^2{E}_i^{ST}(t)}{\partial {t}^2}\cdot \varDelta {t}_i^2+{R}_3\left({t}_i^{\hbox{'}}\right). $$ Time shift \( \Delta {t}_i={t}_i^{\prime }-t \) in the ith microcell; The zeroth order term describes a constant supply energy level at time t; The first order term contains the temporal derivative of energy, i.e., power PS(t, xi); The second order term contains the power dynamics \( {\dot{P}}_i^S(t) \); and The third term contains higher order derivatives. 
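Before assembling the full system of equations, it is worth quantifying the size of the time shift that enters this expansion. An order-of-magnitude check of Eq. 12 over a continental separation: the 4600 km figure is the Madrid–Warsaw distance quoted later in the footnotes, and propagation at the vacuum speed of light is assumed, so the real in-medium delay would be somewhat larger.

```python
# Order-of-magnitude check of Eq. 12 for a continental separation.
c = 299_792_458.0            # speed of light in vacuum [m/s]
dx = 4_600e3                 # separation |delta x| [m] (Madrid-Warsaw, per the footnotes)

dt = dx / c                  # Eq. 12
period_50hz = 1.0 / 50.0     # 20 ms
print(f"dt = {dt * 1e3:.1f} ms  ({dt / period_50hz:.0%} of one 50 Hz period)")
# -> about 15 ms, consistent with the ~75 % of a wave period quoted in the footnotes
```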
A national EESS is described by the following system of equations, in which higher order terms are neglected $$ {\displaystyle \begin{array}{cccc}{E}_1^D\;(t)\cong {E}_1^S\;\left({t}_1^{\hbox{'}}\right)\cong & {E}_1^{ST}\;(t)& +\frac{\partial {E}_1^{ST}\;(t)}{\partial t}.\varDelta {t}_1& +\frac{1}{2}\frac{\partial^2{E}_1^{ST}(T)}{\partial {t}^2}.\varDelta {t}_1^2\\ {}:& :& :& \\ {}{E}_{i_0}^D\;(t)\cong {E}_{i_0}^S\;\left({t}_{i_0}^{\hbox{'}}\right)\cong & {E}_{i_0}^{ST}\;(t)& +\frac{\partial {E}_{i_0}^{ST}\;(t)}{\partial t}.\varDelta {t}_{i_0}& +\frac{1}{2}\frac{\partial^2{E}_{i_0}^{ST}\;(t)}{\partial {t}^2}.\varDelta {t}_{i_0}^2\end{array}} $$ Each equation describes a microcell. Macrocells can then be created by combining corresponding rows. The right hand side of the system of equations encodes the possible technological design variables (cf. User Requirements for a Cellular Grid). For Germany, i0 is approximately 45 million. The structure variable To ensure satisfaction of Equation System 14, five system-defining technological structural variables (S1–S5) are devised. S1−time shift, ∆t → 0. The lag between energy use and supply has two components: relativistic, Δtr, and non-ideal current source effects, ΔtS. For the ith microcell, linear superposition ∆ti gives $$ \varDelta {t}_i=\varDelta {t}_{r_i}+\varDelta {t}_{s_i}. $$ A technological macrocell is a collection of j0 microcells and has a total time shift \( \Delta {t}_{j_0}, \) which may be compared with the equivalent value for the microcells \( {\sum}_{i=1}^{j_0}{t}_i. \) The smaller of the two better fulfills the user requirements. S2−stationary system states, \( {E}_i^{ST}(t)={c}_i \) Any energetically possible stationary state of a microcell can be demanded at a given time. Equation System 14 has the following condition for stationary states $$ {E}_i^D={E}_i^S={E}_i^{ST}={c}_i. $$ Kirchhoff's node rule implies that superposition applies to macrocells, written as $$ {E}_{j_0}^{ST}={\sum}_{j=1}^{j_0}{E}_j^{ST}={\sum}_{j=1}^{j_0}{c}_j. $$ S3−power output of current sources, \( {P}_i^S(t)\to \infty \) The finite power output of real sources varies over finite time intervals as the temporal gradient of the supply energy. The greater the gradient at a stationary operating point \( {E}_i^{ST}(t), \) the shorter the necessary adjustment time interval \( \Delta {t}_{s_i}. \) The source output must be technologically forced towards the ideal value. This applies to microcells and macrocells and is therefore relevant to the whole system (cf. Current Electrical Energy Supply, reducing the system's short-circuit power). S4− power dynamics of current sources, \( {\dot{P}}_i^S(t)\to \infty \) The requirements on the power dynamics of the sources mirror those from S3. Deviations are small, bounded by the time adjustment interval \( \Delta {t}_i^2 \). To achieve significant dynamics is a particular technological challenge under the formulated economic boundary conditions. The situation applies equally to microcells and macrocells and is, like power output, a key element in system choice. S5−maximum power of a national EESS, \( {\wp}_{\mathrm{Nat}.}^D \) Making use of the mean value theorem, the energy demand function defined in Eq. 2 must have a maximum. For the ith microcell, with t0 ∈ T, $$ {P}_i^{D_{\mathrm{max}}}:= {E}_{\max, i}^D\cdot \frac{\partial {f}_i^D\left({t}_0\right)}{\partial t}, $$ and Eq. 4 gives the power relation $$ {P}_i^{D_{\mathrm{max}}}={P}_i^{S_{\mathrm{max}}}. 
$$ Free individual user behavior is given by a function \( {f}_i^D \), and the system must be capable of providing the maximum required power at any time. Because hardware should lie comfortably in the realm of adequacy for any demand placed on it, hardware is the main driver of cost. For a technological macrocell consisting of j0 microcells, the balance equation is obtained by summation $$ {\wp}_{\mathrm{macrocell}}^D={\sum}_{j=1}^{j_0}{P}_j^{D_{\mathrm{max}}}=P\left({j}_0\right)=P\left({k}_0\right)={\sum}_{j=1}^{j_0}{P}_j^{s_{\mathrm{max}}}={\wp}_{\mathrm{macrocell}}^S. $$ The two physical possibilities for the system discussed earlier are a parallel connection of sources and spatial remoteness of source and sink. These are two technological degrees of freedom in the macrocell and degrees of system design freedom—together with options for the number and size of sources. If there are k0 current sources, only the power relation P(j0) = P(k0) applies and the freedom lies in k0, with 1 ≤ k0 ≤ j0. For a national macrocell, $$ {\wp}_{\mathrm{Nat}.}^D={\sum}_{j=1}^{j_0}{P}_j^{S_{\mathrm{max}}}+{\sum}_{i^{\hbox{'}}=1}^{i_0^{\hbox{'}}}{P}_{i^{\hbox{'}}}^{D_{\mathrm{max}}}. $$ The above five parameters comprise the initial structure vector S∗ $$ {\boldsymbol{S}}^{\ast }=\left({S}_1;{S}_2;{S}_3,{S}_4;{S}_5\right) $$ Energy is exchanged between source and sink in the form of electromagnetic waves, which require material connection suitable for the high energy fluxes of an EESS [29]. Physically, this can be interpreted as meaning that upon request, a Poynting vectorFootnote 10 is transmitted along the conductive material to the destination. Such a structure is referred to as a power grid or simply grid. Grids are defined by paths in the three-dimensional Euclidean vector space, mathematically described by a metric space and its special properties (cf. Additional file 1). A second working hypothesis can now be formulated: Operation of the system without a grid and without source bundling is impossible Operation of the system with a grid but without source bundling is useless As a result, the two physical options are combined into a single usable technology, to which electricity generation is primary and the grid is secondary. Seen economically, this is a two-stage production process whose sub-processes are technologically different. The following section introduces two base modules from which each technological system state can be generated (cf. Additional files 1 and 2). Base module I The base module I consists only of singular microcells \( {i}_0^{\prime } \) so that \( {i}_0^{\prime}\le {i}_0;{j}_0=0. \) Source requirements are given by Eqs. 4, 8, and 19. For the ith microcell, the source is at \( {\boldsymbol{x}}_{i^{\prime}}^{\prime } \) and the sink at \( {\boldsymbol{x}}_{i^{\prime }} \) with \( {\boldsymbol{x}}_{i^{\prime}}^{\prime}\approx {\boldsymbol{x}}_{i^{\prime }} \). The network is a microgrid within the metric space \( \left({N}_{i^{\prime }},{d}_{i^{\prime },\left|.\right|}\right) \), with an associated conductivity function. The mathematical concept of connectedness of subsets underlies the grid structure. For base module I, each individual microcell is connected, and the \( {i}_0^{\prime } \)-microcell ensemble is pairwise disconnected. Grid spectra show connection lengths within a grid. Figure 6 shows the system structure of base module I and the resulting grid spectrum. 
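Before turning to the base modules in detail, a small numerical sketch of the sizing rule S5 (Eqs. 18–20): the peak power of each microcell follows from the maximum slope of its (assumed) demand profile, and the macrocell source must cover the sum of those peaks. Profiles and maximum energies are hypothetical.

```python
# Numerical sketch of Eqs. 18-20: per-cell peak power from the maximum slope of
# the demand profile, aggregated to the required macrocell peak power.
import numpy as np

t = np.linspace(0.0, 24.0, 2401)                     # one day [h]
e_max = [2.0, 3.5, 1.2]                              # E_max,i [kWh] (assumed)
phases = [6.0, 9.0, 12.0]                            # profile offsets [h] (assumed)

p_max = []
for e, ph in zip(e_max, phases):
    f = 0.5 * (1.0 + np.sin(2.0 * np.pi * (t - ph) / 24.0))   # f_i^D(t)
    p_max.append(e * np.gradient(f, t).max())                 # Eq. 18: P_i^D_max [kW]

print("microcell peak powers [kW]:", [round(p, 3) for p in p_max])
print("required macrocell peak [kW]:", round(sum(p_max), 3))  # Eq. 20
```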
Base module I: schematic and associated grid spectrum Base module II Base module II consists of bundled microcells j0 with \( {i}_0^{\prime }=0;{j}_0\le {i}_0;{k}_0<{j}_0. \) Meeting the source requirements proceeds differently for base module II than for base module I. The 1:1 source-sink fraction in base module I is replaced with k0, k0 < j0, new equivalent current sources based on parallel connections. The equivalent current sources are located at \( {\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_{k_0} \) and the sinks at \( {\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_{j_0}. \) Location vectors are unique, and the distances are, as before, significant. The network is a macrogrid. The grid is carried by the metric space \( \left({N}_{n_0},{d}_{n_0}\right) \) based on the French railway metric, with additional location vector xN. In this metric, the ensemble containing j0 microcells and k0 current sources is globally connected. Figure 7 depicts a schematic of base module II and the associated grid spectrum. Base module II: schematic and associated grid spectrum For this module, some additional electrical observations may be made. First, note that the total supply energy is obtained from a single equivalent current source k0 = 1. Thus, there exists at least one network node. At all times, the entire energy flux of the macrocell is passing through this node. The grid structure corresponds to one such equivalent node (cf. Current Electrical Energy Supply). System degrees of freedom: energy demand The user requirements and technological degrees of freedom add their own dimensions to system design. The choice of any technological "option" has associated sociological consequences, as can be demonstrated with the base modules. All microcells are by design electrically independent of each other, with the sociological consequence that the decision about a microcell's technological design lies exclusively with the microcell user. He is thus solely responsible for the business costs of his decision. There is no technological-economic socialization. In this module, all microcells are connected to form a macrocell, meaning all microcells are electrically interdependent. The sociological effect of this is to pass decision-making authority from the individual user to a third party. This determines the business characteristics, and resulting costs remain in the user group. In this module, there is technological-economic socialization. The above are structural features of the secondary "technology" level. System states and technological solution space System states describing technologically possible configurations for a national system are denoted by state vectors (cf. Additional file 2). The state vector components are, for now, the number of microcells not connected in parallel \( {i}_0^{\prime } \), the number of technological macrocells n0, the number of parallel-connected microcells \( {j}_0^{\ast }, \) and the number of parallel-connected equivalent current sources \( {k}_0^{\ast } \). They are real vectors in the set. $$ {\varOmega}_{\boldsymbol{u}}:= \left\{\ \boldsymbol{u}\in {R}^4\ |\ \boldsymbol{u}=\left(\ {i}_0^{\hbox{'}};{n}_0;{j}_0^{\ast };{k}_0^{\ast}\right)\right\}. $$ The state space Ωu is the solution space of all technologically possible configurations. 
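The qualitative contrast between the grid spectra of the two base modules (Figs. 6 and 7) can be reproduced with a few lines of code. Coordinates, cell counts and histogram bins are hypothetical: base module I contributes only short in-cell connections, while base module II, with every sink tied to one equivalent node x_N in the French railway metric, is dominated by long node-to-cell distances.

```python
# Illustrative grid spectra (histograms of connection lengths) for the base modules.
import numpy as np

rng = np.random.default_rng(1)

# Base module I: 1000 independent microcells, source and sink a few metres apart
lengths_I = rng.uniform(1.0, 20.0, 1000)                    # [m]

# Base module II: 1000 bundled sinks connected to one equivalent node at the origin
sinks = rng.uniform(-300e3, 300e3, size=(1000, 2))          # sink locations [m]
lengths_II = np.linalg.norm(sinks, axis=1)                  # hub-and-spoke distances [m]

bins_km = [0.0, 0.1, 50.0, 150.0, 250.0, 450.0]
for name, lengths in (("base module I ", lengths_I), ("base module II", lengths_II)):
    counts, _ = np.histogram(lengths / 1e3, bins=bins_km)
    print(name, "spectrum (counts per length bin, km):", counts)
```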
Two of the state variables, the numbers of parallel-connected microcells and of equivalent current sources, are functionally dependent, so for constant \( {j}_0^{\ast } \) and \( {k}_0^{\ast } \), cellular variety is possible across the n0 technological macrocells. This manifests as additional state vectors, so-called fine structure vectors. Properties of Ωu can be deduced. Like the base modules, each state vector has a grid spectrum. Figure 8 shows a polychromatic-state based on the combinations of the base modules. The possible states u0 are members of \( {\varOmega}_{u_0}. \) National EESS (u0)—schematic and associated grid spectrum In addition, there are two further monochromatic states given by the base modules I and II as national EESS (Table 1). Table 1 Component representation of the state vectors of a national EESS The technological solution space is built from these subsets $$ {\varOmega}_u={\varOmega}_{u_0}\cup {\varOmega}_{u_I}\cup {\varOmega}_{u_{II}} $$ and the component representation of a general vector u ∈ Ωu is $$ \boldsymbol{u}=\left(0\le {i}_0^{\hbox{'}}<{i}_0;0\le {n}_0<\frac{1}{2}{j}_0^{\ast };{j}_0^{\ast }={i}_0-{i}_0^{\hbox{'}};0\le {k}_0^{\ast }<\frac{1}{2}{j}_0^{\ast}\right). $$ A sensible question at this stage is whether Ωu is mathematically complete, that is, whether all possible states are in Ωu. Without going into a rigorous mathematical proof, completeness will be demonstrated by means of the grid spectra and the state vectors. The grid spectra of the base modules shown in Figs. 6 and 7 represent the extreme states. The spectra do not fundamentally change in transition to monochromatic states, implying the monochromatic states are extreme. Since u0 is any polychromatic state, the associated spectra must lie between these extremes. The state vector components are indexed with natural numbers; since all indexes are defined by their being possible, the resulting state vector set is also complete. Therefore the technological solution space is complete, providing the basis for a decision on the preferred national EESS now looking to the user requirements.Footnote 11 Key point 11: Declaration of technological structural variables Key point 12: Introduction of base modules Key point 13: Definition of a power grid structure and associated grid spectra Key point 14: Reduction of national EESS to state vectors and their solution set Substantial systemic risk Assessing the systemic risk of similar technologies sometimes reveals significant variation. The analysis and evaluation of risks in engineering is therefore extensively researched [30]. The present analysis predicts fundamental systemic effects of risk in national EESS. Central to this assessment is a substantial system risk with two sub-categories: Sudden change from normal operating state to a system OFF state Duration of a system OFF state The substantial systemic risk has an associated likelihood rs; there are also likelihoods for the two sub-categories, r1, r2: $$ {r}_s={r}_1\cdot {r}_2. $$ Risk factor r1 In an EESS, the sudden change from a normal operating state to a system OFF state implies rapid loss of function in a connected electrical element; here the failure of a national macrocell is considered. Combining Eq. 
7 and the failure factor μ ∈ [0, 1], with \( {t}_0,{t}_0^{\prime}\in T \), yields $$ \frac{\partial {E}_{\mathrm{Nat}.}^D\left({t}_0\right)}{\partial t}\to -\infty, \mathrm{and}\kern0.5em {E}_{\mathrm{Nat}.}^S\left({t}_0^{\hbox{'}}\right)<\mu \cdot {E}_{\mathrm{Nat}.}^D\left({t}_0^{\hbox{'}}\right), $$ given that the signal propagates with the speed of light in the medium and where a 5% upper bound is been assumed (μ ∈ [0; 0.05]). System states where at least 0.1% of the total users (total > 106) are modeled independently and are collective cell structures, subject to statistical conditions. Determining the risk factor 1.1 \( {\boldsymbol{u}}_I\in {\varOmega}_{u_I} \) Basis: monochromatic system Risk factor r1, 1 = 0 The likelihood for microcell failure is taken as \( {p}_i=\frac{5}{365} \), an interruption likelihood of 5 days per year. The lower limit for a national system with i0 = 106 users is the statistical failure of about 15,000 microcells per day, with a failure rate of 1.5%. This is assumed to represent 1.5% of energy demand. Then, ES = 0.985 ∙ ED and, according to Eq. 27, the system is not in the OFF state. The extreme situation would be for all cells to switch to the OFF state at the same time. The likelihood of this is $$ {P}_{i_0}={p}_i^{i_0}={\left(\frac{5}{365}\right)}^{10^6}=0. $$ A national EESS in system state uI cannot entirely lose functionality in the sense of a system OFF state. 1.2 \( {\boldsymbol{u}}_{II}\in {\varOmega}_{u_{II}} \) Basis: monochromatic system The macrocell is completely connected and thus not a statistical collective. Maximum loss of function occurs when the node location vector is absent from the base set (cf. Technological Possibility Solution Space; Additional file 1). Propagation occurs at the speed of light in the relevant medium, as has been observed in real interruptions.Footnote 12 A national EESS in system state uII can enter a system OFF state (Fig. 9). 1.3 \( {\boldsymbol{u}}_0\in {\varOmega}_{u_0} \) Basis: polychromatic system Risk factor 0 < r1, 3(u0) < 1 Model extension by the technological solution set (Ωu) The distribution of micro- and macrocells in a system state underlies the overall risk factor. The microcell contribution can be assumed to be zero, as it is guaranteed to be smaller than in the state (uI). The contribution from macrocells is again determined by their number and size (cf. Defining Social Sustainability). The risk factor r1, 3 depends on the system state u0. To set bounds on the risk factor, two boundary cases are considered: In the first case, the number of independent microcells approaches \( {i}_0^{\prime}\to {i}_0 \) so that the system approaches the state uI, i.e., r1, 3 → 0 In the second case, the macrocells approach n0 → 1 and the number of non-parallel microcells disappears, \( {i}_0^{\prime}\to 0 \), so that the system approaches uII, i.e., r1, 3 → 1 The next subject is the duration of OFF states, that is, the period of time from the system entering a function-loss OFF state to the recovery of normal operation. Here, OFF states that last longer than 24 h are considered. Such interruptions are caused by fundamental system impairments. Equation 27 results in the following condition $$ \forall t\in \left[{t}_0;{t}_1\right):{E}^S(t)<\mu \cdot {E}^D(t)\kern0.50em \mathrm{and}\ {t}_1>{t}_0+24h. $$ There are various ways of recovering operation (Fig. 10). Redundancies are existing system parts that can compensate for planned or unplanned failures. Redundancy support is generally effective for less than 24 h. 
Redundancies are not further considered here. Parallel systems are entire existing systems that are capable of establishing a regular operating state without accessing the OFF-state system. Recovery is exponential for large technological systems with a time constant τ. Recovery time is greater than 24 h. Reparation is the restoration of an initial state and is divided into two model stages. Until t1, defective facility elements are recovered with no intervening supply (dead time). After t1, further elements are repaired and supply is exponentially resumed. Recovery time is greater than 24 h. Recovery paths to the national energy equilibrium level Dead times and time constants characterize parallel systems and recovery strategies. 2.1 \( {\boldsymbol{u}}_{\mathrm{I}}\in {\varOmega}_{u_{\mathrm{I}}} \) Basis: monochromatic system The risk factor r1, 1 of a national macrocell consisting only of microcells is zero, and the system cannot change to a system OFF state. 2.2 \( {\boldsymbol{u}}_{\mathrm{I}\mathrm{I}}\in {\varOmega}_{u_{\mathrm{I}}} \) Basis: monochromatic system A national technological macrocell may completely lose function and can therefore satisfy Eq. 28. A parallel system in this case would be a second national electricity grid capable of assuming the supply task given the requirements above. The grids operate under conditions of natural monopoly, in which the cost function is subadditive [31]. This means that for economic reasons, there is no parallel option for a national EESS. The remaining strategy is reparation. For an order-of-magnitude estimation, only the dead time has to be considered. If the fundamental impairments are mechanical in nature, a low estimate for dead time is at least 6 months. This represents manufacturing or production time for the failed plant elements and is already significantFootnote 13; the entire repair takes significantly more time. The conclusion is that a national EESS in uII can assume a system OFF state of inestimable duration. A starting point is the properties of r1, 3. Boundary conditions can be deduced as follows: In one case, the number of independent technological microcells approaches the number of users \( {i}_0^{\prime}\to {i}_0 \); states with r2, 1 approach zero In another case, the system approaches a national macrocell state n0 → 1. The system approaches the state r2, 2, i.e., r2, 3 tend to 1 Here, too, risk factors depend on system state. Risk factor rs Table 2 lists substantial risk for system states (uI, uII). The state consisting only of a national macrocell has the highest risk—the system which fully utilizes the two physical options (cf. Current Electrical Energy Supply; this is the case for the current EESS). All other states have lower risks, but it is nonetheless a broad spectrum. A national EESS consisting only of individual microcells has zero substantial risk. Table 2 Substantial risk factors for system states As the substantial risk factor distinguishes system states technologically, it represents another structural variable (cf. Technological Possibility Solution Space). S6−substantial risk factor, rs → 0 The new structure vector describes the system technologically, specifically the degree of interconnections in the grid structure in a national macrocell. It emphasizes functional loss of macrocells. The substantial risk factor S∗ is extended by component S6 to complete the technological structure vector S∗∗. 
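The two components of the substantial risk r_s = r1 · r2 can be illustrated numerically. Part (i) repeats the statistical estimate for the purely cellular state uI; part (ii) sketches the reparation path of Fig. 10 with the roughly six-month dead time from the text and an assumed 30-day recovery time constant.

```python
# Numerical sketch of the two components of the substantial risk r_s = r1 * r2.
# (i) State u_I: with an assumed per-cell outage probability of 5/365 per day and
#     10^6 independent microcells, roughly 1.4 % of cells are out on any day (the
#     text rounds this to 1.5 %), while the probability that all cells fail
#     simultaneously underflows to zero, so the 5 % OFF-state bound of Eq. 27 is
#     never reached (r1 = 0).
# (ii) OFF-state duration: reparation path of Fig. 10, a ~6-month dead time
#      followed by an exponential recovery (30-day time constant assumed).
import numpy as np

p_cell, n_cells = 5 / 365, 10**6
print("expected fraction of cells in the OFF state:", round(p_cell, 4))
print("probability of a simultaneous total outage:", p_cell**n_cells)   # 0.0 (underflow)

t = np.arange(0.0, 400.0)                      # days after entering the OFF state
dead_time, tau = 180.0, 30.0
recovery = np.where(t < dead_time, 0.0, 1.0 - np.exp(-(t - dead_time) / tau))
print("days until 95 % of demand is served again:",
      int(t[np.argmax(recovery >= 0.95)]))     # ~270 days for these assumptions
```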
Key point 15: Risk determination for a sudden change from normal operating state to system OFF state Key point 16: Risk determination for the duration of a system OFF state Key point 17: Risk determination for an existing substantial system risk Key point 18: Definition of substantial risk factor as sixth technological structural variable for national EESS Defining social sustainability System states A sustainability dimension is incorporated through a new technological and economic availability. Availability is understood in the sense of [32] and represents the utilization potential of the system. The sustainability component now added to the state vectors u ∈ Ωu equals the product from the availability parameter vtv and the total load for a national macrocell \( {E}_{\mathrm{Nat}.}^D(t) \) from Eq. 7. The new sustainability component is $$ \boldsymbol{v}=\left({i}_0^{\hbox{'}},{n}_0;{j}_0^{\ast };{k}_0^{\ast };{v}_{tv}\cdot {E}_{\mathrm{Nat}.}^D(t)\right) $$ $$ \boldsymbol{v}\in \varOmega\ \mathrm{and}\ {\varOmega}_u\subset \varOmega . $$ Availability is related to risk through the substantial risk factor rs: $$ {v}_{tv}=1-{r}_s $$ $$ {v}_{tv}\in \left[0;1\right]. $$ The sustainability of a load on a national EESS now depends on the technology used. The limiting cases (cf. Table 2) of substantial risk (rs = 0; rs = 1) are sustainable for \( {\boldsymbol{v}}_{\mathrm{I}}=\left({i}_0^{\prime },{n}_0;{j}_0^{\ast };{k}_0^{\ast };1\right) \) and unsustainable for \( {\boldsymbol{v}}_{\mathrm{II}}=\left({i}_0^{\prime },{n}_0;{j}_0^{\ast };{k}_0^{\ast };0\right) \). Existence of a boundary between states with quantifiable risk and non-quantifiable risk is implied by the completeness of the technological solution set Ωu (Fig. 11). Boundary due to the sustainability limit in the state model of an EESS. As an example, (r1, r2) are assumed linear The determination of this sustainability limit (sustainable availability limit) is an economic problem (cf. Appendix 1). Due to considerable variation in the number and size of macrocells in the various system states, \( {\boldsymbol{u}}_0\in {\varOmega}_{u_0} \), there is a spatial dimension to sustainability. The sustainability components of distinct macrocells must be distinguished, on the basis of Eq. 5. This means that $$ {v}_{tv}\cdot {E}_{j_0}^D(t)={\sum}_{n=1}^{n_0}{v}_{tv,n}\cdot {E}_n^D(t), $$ which introduces regional sustainability into the national EESS. Differences can have historical reasons or arise from future-oriented processes (innovation, transformation). Energy quantities and system states The energy balances given by the user requirements cannot be ideally satisfied in the operation of real energy cells. Deviations due to supply reductions or interruptions due to faults are quantitatively expressed by a supply factor λ ∈ [0; 1]. For the ith user of the ith microcell, the individual energy function becomes (for simplicity, t′ = t): $$ {E}_i^S(t)\ge {\lambda}_i^{-}\cdot {E}_i^D(t) $$ $$ {\lambda}_i^{-}\le 1. $$ The economic interests of a national macrocell are expressed by the energy function $$ {E}_{\mathrm{Nat}.}^S(t)\ge {\lambda}_{\mathrm{min}}^{-}\cdot {E}_{\mathrm{Nat}.}^D $$ $$ {\lambda}_{\mathrm{min}}^{-}\le 1. $$ A utilitarian definition of utility implies a boundary condition $$ {\lambda}_{\mathrm{min}}^{-}\le {\lambda}_i^{-}\le 1. $$ \( {\uplambda}_i^{+},{\lambda}_{\mathrm{max}}^{+}>1 \) are states with generation overcapacities; they do not alter rs. The energy demand \( {E}_{\mathrm{Nat}.}^D(t) \) from Eq. 
33 can be approximated by the product of a freely selectable standardized distribution function h(t) and an annual reference energy quantity \( {E}_{T_{\mathrm{Ref}}}^D \). The energy relation for supply energy is then $$ {E}_{\mathrm{Nat}.}^S\ge {\lambda}_{\mathrm{min}}^{-}\cdot {E}_{\mathrm{Nat}.}^D(t)\approx {\lambda}_{\mathrm{min}}^{-}\cdot h(t)\le \cdot {E}_{T_{\mathrm{Ref}}}^D $$ This is the basis for predicting the demand. Information about the supply status in the microcells is necessary for the operation of a national EESS. In addition to passive analysis, active prognoses can be made about future energy demand and microcells can be centrally controlled in an interruption (i.e., a complete loss of function/blackout). For this, there is smart meter technology,Footnote 14 now a key EU energy policy issue [33]. The energy quantities from Eq. 33 need to be connected to the sustainable system states of Eq. 29. This is expressed in the coupling relation $$ {\lambda}_{\mathrm{min}}^{-}=g(x)\cdot {v}_{tv}\left({r}_s\right)=g(x)\cdot \left(1-{r}_s\right) $$ with weighting function g(x), shown in Fig. 12. For simplicity, g(x) ≡ 1. Energy band of a national macrocell at a given time The above builds a technological-economic foundation for the design of a sustainable national EESS. The process for defining boundary conditions should have at its center utilitarian benefit and can be devised by the user community. Ensuring the transparency of this process is a social, economic, and technical challenge which needs further investigation (Appendix 2). The prerequisite for sustainability is that system variability must be contained within the technology and not affect the energy demand and supply. All data generated or analyzed during this study are included in this published article. AR/AU/BE/CA/CH/CL/CN/CN/DK/DE/ES/FI/FR/GB/GR/HU/IE/IT/KR/LU/MY/NZ/NL/NO/PL/PT/ RO/RU/STR/ UA/US Union for the Coordination of the Transmission of Electricity, currently part of ENTSO-E There are different interpretations of supply quality; from User Requirements for a Cellular Grid, availability is used in the sense of [34] Unplanned interruptions including all events; the longest was 11.6 min/908 min in 2014 Example: UCTE network area/cf. technological possibility solution space, substantial systemic risk, defining social sustainability Example: German 'Energiewende' towards wind energy and photovoltaic systems In principle, bundling applies to all aspects from physical connection to the creation of the virtual network. Here, only physical connections are considered (referred to as technological macrocells). \( {i}_0^{\prime } \) unbundled microcells, \( {j}_0^{\prime } \) bundled microcells, \( {i}_0={j}_0^{\prime }+{j}_0 \) Ultimate goal of the European copper plate is to allow providers to meet e.g. Warsaw's demand with supply generated elsewhere e.g. in Madrid (UCTE grid area cities). The signal crosses the intervening 4600 km in approx. 15 ms (75% of the wave period at 50 Hz). cf. Physical Limitations and Possibilities; Appendix IV. Here only the magnitude of the Poynting vector was used. The EESS described in Current Electrical Energy Supply has excess production capacity. It is not in u as it does not meet the user requirements. Generation overcapacities are discussed in Substantial Systemic Risk. UCTE grid area interruption of 4.11.2006: severe frequency drop originating in West zone; \( \frac{\Delta f}{\Delta t}\approx \frac{1\ Hz}{30\ s} \) [35]. 
Interpolating from the low-limit frequency of 47.5 Hz indicates that for up to 90 s, all power plants were disconnected. For Republic of Austria, quantifiable damage would amount to at least € 180 billion [21]. Functions of a "smart meter": 1. Determine current demand 2. Record current supply 3. Predict future demand 4. Monitor each user's availability 5. Active system control in the grid areas, in particular, for macrocell failure, e.g., blackout control Dyckhoff H (1994) Betriebliche Produktion – Theoretische Grundlagen Einer Umweltorientierten Produktionswirtschaft, 2nd edn. Springer sec. A § 2.2.3. Springer-Verlag Berlin Heidelberg GmbH. Frankenia W (2017) Ethics, 6th edn. Springer: Springer Fachmedien Wiesbaden GmbH. pp 35–39 Dyckhoff, H. (1994) Betriebliche Produktion – Theoretische Grundlagen Einer Umweltorientierten Produktionswirtschaft. 2nd ed. Springer chap. A § 3.3 Lenin V (1920) Werke, vol 31, p 513 (1966 German translation) Stier, B. (1999) Staat Und Strom. vol. 10 Technik und Arbeit Landesmuseum für Technik und Arbeit in Mannheim Index Mundi (2018) Country Facts. https://www.indexmundi.com EIA (2018) Us Energy Information Administration. https://www.eia.gov/ Global Economy (2018) Economic Indicators For Over 200 Countries. https://de.theglobaleconomy.com Löchel C (2018) Fischer Weltalmanach. Fischer Publishing House. Fischer Taschenbuchverlag, Fischer Verlag GmbH, Frankfurt am Main Casazza J, Delea F (2010) Understanding Electric Power Systems, 2nd edn. Wiley chap. 1, 2, 3, 6, 7. Hokoken: Wiley. Oeding D, Oswald B (2011) Electric power plants and grids, 7th edn. Springer chap. 1, 2, 5, 6, 12, 14, 15, 18. Dordrecht London New York: Springer Heidelberg. Official Journal Of The European Union (2009) Regulation (Ec) No 714/2009 Of The European Parliament And Of The Council On Conditions For Access To The Network For Cross-Border Exchanges In Electricity And Repealing Regulation (Ec) No 1228/2003 Official Journal Of The European Union (2009) Directive 2009/72/Ec Of The European Parliament And Of The Council Concerning Common Rules For The Internal Market In Electricity BNETZA (2017) Kennzahlen Der Versorgungsunterbrechungen Strom Council of European Energy Regulators (2016) 6th CEER Benchmarking Report On The Quality Of Electricity And Gas Supply Oeding D, Oswald B (2011) Electric power plants and grids, 7th edn. Springer chap. 15. Dordrecht London New York: Springer Heidelberg. Michel M (2011) Power electronics, 5th edn. Springer chap. 8. Dordrecht London New York: Springer Heidelberg. Casazza J, Delea F (2010) Understanding Electric Power Systems, 2nd edn. Wiley chap. 1, 3, 9. Hokoken: Wiley. ENTSO-E (2018) Continuing frequency deviation in the continental european power system originating in Serbia/Kosovo: political solution urgently needed in addition to technical FOCP (2015) Bundesamt Für Bevölkerungsschutz. Nationale Gefährdungsanalyse - Ausfall Stromversorgung Schweizerische Eidgenossenschaft Austrian Civil Defence Association (2017) Blackout Advice. www.zivilschutzverband.at Landau L, Lifshitz E (2014) Lehrbuch Der Theoretischen Physik, vol 2, 12th edn. Europa-Lehrmittel chap. 1-6. (based on 1998 Russian 7th ed.). Haan-Gruiten. Jackson J (2014) Klassische Elektrodynamik, 5th edn. De Gruyter chap. 5-7 Fano, R.; Chu,L.; Adler, R. (1968) Electromagnetic Fields. Energy and Forces MIT Press chap. 4-9. Cambridge. Jackson J (2014) Klassische Elektrodynamik, 5th edn. De Gruyter chap. 6.7. Berlin. Jackson J (2014) Klassische Elektrodynamik, 5th edn. De Gruyter chap. 5. Berlin. 
Jackson J (2014) Klassische Elektrodynamik, 5th edn. De Gruyter sec 5.15. Berlin. Preiss R (2017) Methods of risk analysis in engineering, 2nd edn. TÜV Austria Akademie GmbH. Wien. Mankiw NG (2014) Principles of economics, 7th edn. South Western, p 302 Eberlin S, Hock B (2014) Zuverlässigkeit Und Verfügbarkeit Technischer Systeme. Springer Vieweg, p 65. Official Journal Of The European Union (2012) Commission recommendation on preparation for the deployment of smart metering (2012/148/Eu) Eberlin S, Hock B (2014) Zuverlässigkeit Und Verfügbarkeit Technischer Systeme. Springer Vieweg BNETZA (2007) On the system disturbance in the German and European power system on 4th of November 2006. Federal Network Agency for Electricity, Gas, Telecommunications, and Railways, p 11. Bonn. No external funding Technische Universität München, Munich, Germany Manfred Benthaus The author wrote, read, and approved the final manuscript. Authors' information Link to author's website: http://www.es.mw.tum.de/en/staff/visiting-lecturers/benthaus/ Correspondence to Manfred Benthaus. The author declares that he has no competing interests. Additional file 1. Mathematical and physical foundation of grid structure. Additional file 2. State vectors and grid spectra. Appendix 1 Table 3 List of abbreviations Table 4 Follow-up themes Benthaus, M. A coupled technological-sociological model for national electrical energy supply systems including sustainability. Energ Sustain Soc 9, 50 (2019) doi:10.1186/s13705-019-0221-4 Electrical energy supply systems Cellular energy structures Electricity supply risks
CommonCrawl
Quality by design approach for development and validation of a RP-HPLC method for simultaneous estimation of xipamide and valsartan in human plasma Mahmoud M. Sebaiy ORCID: orcid.org/0000-0002-5949-28341, Sobhy M. El-Adl1, Mohamed M. Baraka1, Amira A. Hassan1 & Heba M. El-Sayed2 A new rapid, simple, and sensitive RP-HPLC method was developed by applying a Quality by Design approach for the determination of xipamide and valsartan in human plasma. Fractional factorial design was used for screening of four independent factors: pH, flow rate, detection wavelength, and % of MeOH. Analysis of variance (ANOVA) confirmed that flow rate and % of MeOH were the only significant factors. Optimization of the chromatographic conditions was carried out using a central composite design. Analysis was performed using a BDS Hypersil C8 column (250 × 4.6 mm, 5 μm) and an isocratic mobile phase of MeOH and 0.05 M KH2PO4 buffer pH 3 (64.5:35.5, v/v) at a 1.2 mL/min flow rate with UV detection at 240 nm and a 10 μL injection volume. According to FDA guidelines, the method was then validated for the determination of the two drugs clinically in human plasma in respect of future pharmacokinetic and bioequivalence simulation studies. The standard curve was linear in the concentration range of 5–100 µg/mL for both drugs, with a determination coefficient (R²) of 0.999. Also, the average recoveries lay within the range from 99.89 to 100.03%. The proposed method showed good predictability and robustness. Quality by design (QbD) is a modern and systematic approach to quality control of pharmaceuticals and product development. Pharmaceutical quality can be assured by understanding and controlling the variable parameters of formulation and manufacturing processes within such a structured framework [1,2,3]. Nowadays the concept of QbD can be extended to analytical and bioanalytical techniques. The application of QbD principles can help clinical laboratories to develop suitable analytical methods, providing a significant improvement over the traditional, empirical methodology [4]. One of these QbD tools is fractional factorial design (FFD), which is a commonly used and effective tool in scientific research and industrial applications. The main advantage of FFD is that it allows statistical models to be built with a small number of runs. Using these models allows identification of the significant factors affecting certain responses during analytical method development. Central composite design (CCD) is an efficient tool for optimization of the significant factors. CCD suggests the optimal variable values that give the best, most desired response and defines process conditions which are robust to deliberate variations in factor settings. Also, it suggests a mathematical model relating the response to the critical variables, thus allowing the response to be predicted with minimal error transmitted to that response (propagation of error, or POE) [5]. Different drug classes are indicated for the management of hypertension with concomitant disease. These classes include diuretics, beta-blockers, angiotensin converting enzyme (ACE) inhibitors, angiotensin receptor blockers (ARBs), and aldosterone receptor antagonists. Diuretics have an initial decreasing effect on blood volume and consequently reduce blood pressure. ARBs provide more complete blockade of angiotensin II actions compared with ACE inhibitors, so they are a substitute for the latter in treating patients with heart failure who experience noticeable ACE inhibitor side effects.
Therefore, diuretics and ARBs can be considered a rational drug combination for patients with hypertension associated with heart failure (HF). This combination is more effective than monotherapy with one of its components. It offers a remarkable reduction in blood pressure with lower doses and minimized adverse effects [6]. Xipamide (XIP) is a sulphonamide diuretic drug used in the treatment of hypertension either alone or in combination with other antihypertensives. It is also used in the treatment of oedema, including that related to HF [7]. The chemical structure of XIP, 5-(aminosulphonyl)-4-chloro-N-(2,6-dimethylphenyl)-2-hydroxy-benzamide, is presented in Fig. 1. XIP acts mainly on the kidneys to reduce reabsorption of sodium in the distal convoluted tubule. The determination of XIP has been performed by HPLC [8,9,10], spectrophotometry [11, 12], spectrofluorimetry [13], and voltammetry [14]. Structure of xipamide (XIP) and valsartan (VAL) Valsartan (VAL) is an orally active and potent non-peptide tetrazole derivative that selectively inhibits the angiotensin II receptor type 1, leading to a reduction in blood pressure; it can therefore be used in hypertension treatment, to reduce mortality in patients with left ventricular dysfunction following myocardial infarction, and in HF management [7, 15]. Chemically, it is 2(S)-3-Methyl-2-(pentanoyl{[2ʹ-(1H-tetrazol-5-yl)-4-biphenyl]methyl}amino) butanoic acid (Fig. 1). A literature review revealed that the determination of VAL has been carried out using HPLC [16,17,18,19,20,21,22,23,24,25,26,27,28], spectrophotometry [29,30,31,32,33] and spectrofluorimetry [34, 35]. To the best of our knowledge, based on a comprehensive survey, XIP and VAL have not previously been determined as a combined mixture (despite their synergistic action) by chromatographic techniques in either biological or pharmaceutical samples. As such, keeping in mind the current FDA requirements while pursuing the study with a QbD-based approach, the objective of our research is to develop a novel, accurate, robust, simple and specific HPLC method suitable for determination of XIP and VAL using FFD, with regard to pharmacokinetic and bioequivalence simulation studies and robustness testing. Among the different experimental designs, FFD as a response surface was preferable for nonlinear response prediction, in addition to its flexibility in respect of experimental runs and the information correlated with main and interaction factor effects. An Agilent 1200® HPLC instrument (Germany) with a Thermo Scientific® BDS Hypersil C8 column (5 µm, 250 × 4.60 mm), a DAD absorbance detector, and HPLC QUAT pumps, connected to a PC loaded with Agilent 1200 software, was used [36, 37]. A Labomed® Spectro (U6VD-2950) UV–VIS double-beam spectrophotometer (England) with 1 cm quartz cells, connected to a PC loaded with UVWin5 software v6, was also used [36, 37]. A HANNA® HI 8314 (Romania) membrane pH meter was used for pH adjustment [37]. Materials and Reagents All materials, chemicals, and solvents were of HPLC grade [37]. XIP (99.79%) and VAL (99.90%) were obtained from EIPICO (Tenth of Ramadan City, Egypt). Standard solutions of 200 µg/mL were prepared by dissolving 10 mg of each pure drug in 50 mL of the mobile phase [36]. The mobile phase was a freshly prepared binary mixture of MeOH and 0.05 M potassium dihydrogen phosphate (64.5:35.5, v/v), adjusted to pH 3 using ortho-phosphoric acid, filtered, and degassed using 0.45 µm membrane filters (Millipore, USA) [36].
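Two quick arithmetic checks of the preparation figures quoted above; the molar mass of KH2PO4 (about 136.09 g/mol) is standard reference data, and all other numbers are taken from the text.

```python
# Stock-solution concentration and buffer weighing check for the figures above.
m_drug_mg, v_stock_ml = 10.0, 50.0
stock_conc_ug_per_ml = m_drug_mg * 1000.0 / v_stock_ml
print(f"stock concentration: {stock_conc_ug_per_ml:.0f} µg/mL")   # -> 200 µg/mL, as stated

M_KH2PO4 = 136.09        # g/mol (standard reference value)
molarity = 0.05          # mol/L
print(f"KH2PO4 required: {molarity * M_KH2PO4:.2f} g per litre of 0.05 M buffer")  # ~6.80 g
```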
MeOH (Fischer Scientific, Hampton, USA), potassium dihydrogen phosphate (Techno Pharmchem, Delhi, India) and orthophosphoric acid (Merck, India) were all of analytical grade [33]. The human plasma was kindly provided by Zagazig University Hospital and was certified to be disease and drug free. It was kept frozen at −20 °C before initial use and was then stored at −4 °C during routine use [37]. Construction of calibration curves Appropriate mixed dilutions of XIP and VAL standard stock solutions were made in 10 mL volumetric flasks to obtain final concentrations of 5, 12.5, 25, 50 and 100 µg/mL for both drugs. A 10 μL aliquot of each mixture was then injected into the column and the chromatogram was monitored at 240 nm. A calibration graph was plotted as drug concentration against peak area response [37]. Human plasma samples procedure All experimental protocols in the current study were approved by the Egyptian Network of Research Ethics Committees at the Faculty of Pharmacy, Zagazig University (Approved 2008). Calibration curves and validation QC samples in plasma at concentrations of 2.50, 5, 15 and 20 µg/mL were prepared. Aliquots of 200 µL plasma samples and drug mixture volumes ranging from 100–200 µL were added to 10 mL centrifuge tubes and then vortexed for 1 min. After that, the mixture was precipitated using methanol (total volume 2 mL). After vortexing for 1 min, the samples were centrifuged at 5000 rpm for 15 min. Aliquots of 10 µL of each supernatant were filtered using 0.45 µm PTFE syringe filters (Membrane Solutions, USA) and directly injected into the HPLC instrument for analysis [37]. Scouting step Some trials were included in this step to find a suitable mobile phase that could give an acceptable separation of both drugs. At the beginning, mobile phases containing either 0.025 or 0.05 M KH2PO4 buffer (as the aqueous part) were tried. In addition, acetonitrile and MeOH were tested as organic modifiers. Finally, the variables that could clearly affect the selected responses were chosen [38]. Screening design A resolution IV FFD with a minimum number of runs was used to identify the significant factors affecting the measured responses (Table 1). In this study, 4 independent factors were tested at 2 levels; pH at 3 and 4, flow rate at and 1.2 mL/min, detection wavelength at 230 and 250 nm, and % of MeOH at 58 and 63%. The mathematical model related to the design consists of main effects and possible two-factor interaction effects (2FI). In this case, 2 responses were taken into consideration: retention time (VAL) and resolution [39]. Table 1 Resolution IV fractional factorial screening design for determination of XIP and VAL by RP-HPLC Optimization design Central composite design (CCD) is commonly used due to its high efficiency and its capability to reduce the number of runs. A CCD with k factors requires 2^k factorial runs, 2k axial experiments symmetrically spaced at ±α along each variable axis, and at least one center point [40] (Table 2). A rotatable CCD (α = 1.68) was built for the significant factors to obtain the optimum level for the desired responses, using 5 levels of each factor (−α, −1, 0, +1, +α) with a total of 13 randomized runs including 5 center points (Table 2). The numerical optimization technique and the desirability function approach are usually used together to locate the optimized conditions by trading off the selected responses [41].
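As a concrete illustration of the optimization design just described, the following Python sketch builds a two-factor central composite design in coded units, with axial points at the α value quoted above and five centre points giving the 13 runs mentioned in the text. The mapping from coded to real factor settings at the end uses an invented centre point and step, purely for illustration; it is not the study's actual design table.

import itertools
import numpy as np

# Two-factor CCD in coded units: 2^2 factorial points, 4 axial points at +/- alpha,
# and five centre points (4 + 4 + 5 = 13 runs).
alpha = 1.68                                     # axial distance quoted in the text
factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))
axial = np.array([[ alpha, 0.0], [-alpha, 0.0], [0.0,  alpha], [0.0, -alpha]])
center = np.zeros((5, 2))
ccd = np.vstack([factorial, axial, center])
print(ccd)                                       # 13 x 2 matrix of coded factor settings

# Converting coded levels back to real settings, e.g. % MeOH centred at 61 with a step of 3
# (centre and step are illustrative assumptions, not values taken from the study).
meoh_percent = 61.0 + 3.0 * ccd[:, 0]
print(meoh_percent)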
In this study, the numerical optimization was based on minimizing retention time (VAL) (+++ importance), maximizing resolution (+ importance) between the analytes, obtaining a reasonable desirability function, and minimizing the POE of both responses (+++ importance) to ensure that minimum error was transferred to the responses. Table 2 Central composite design for optimization with the measured responses Another tool was graphical optimization, used to specify the design space (sweet spot) where the desired CQAs meet. The graphical optimization goal was to minimize retention time (VAL) to less than 6 min, to maximize resolution with 3.6 as a lower limit, and to minimize the POE of both responses by adjusting the highest acceptable upper limit. In addition, interval criteria were applied for CQAs and POE to understand the impact of uncertainty on achieving the process goals. The sweet spot (sometimes called the bright yellow area) was obtained for each pair of variables, whilst the remaining factors were kept at a fixed value. Finally, model predictability was confirmed by checking that the predicted means of retention time (VAL), resolution and their POE lie within the low and high 95% values of the prediction interval (PI low 95% and PI high 95%). Investigation of model predictability was also achieved through prediction error calculation in accordance with the following equation [42]: $$ \text{Prediction error} = \frac{\text{Observed} - \text{Predicted}}{\text{Predicted}} \times 100. $$ Chromatographic conditions optimization All chromatographic conditions are detailed in Table 3. Spectral analysis of both drugs in the range of 200–400 nm showed that XIP and VAL have λmax at 237 nm and 250 nm, respectively. As such, the chromatographic detection was set at 240 nm using a DAD detector as the appropriate wavelength. The method was carried out using a Thermo Scientific® BDS Hypersil C8 column (5 µm, 250 × 4.60 mm). The optimum mobile phase was determined as a MeOH: 0.05 M potassium dihydrogen phosphate mixture adjusted to pH 3 using ortho-phosphoric acid (64.5:35.5, v/v) at a flow rate of 1.2 mL/min. Under these conditions, XIP and VAL in human plasma were completely separated at 3.23 and 4.34 min, respectively, as depicted in Fig. 2B. In addition, the mixture in plasma did not exhibit any matrix interference effect, as the human plasma chromatogram (Fig. 2A) showed no peaks at the retention times of XIP and VAL. Table 3 Chromatographic conditions for the proposed HPLC method for estimation of XIP and VAL HPLC chromatogram of (A) blank plasma (B) mixture of 12.50 µg/mL XIP and VAL in human plasma sample The optimal mobile phase showed good symmetrical peaks (0.8 < T < 1.2), capacity factor (1 < k < 10), resolution higher than 2 and more than 2000 theoretical plates. Table 4 shows all system suitability parameters of the proposed HPLC method for simultaneous determination of the two drugs in pure and plasma matrices. Table 4 System suitability parameters for XIP and VAL in both pure and plasma samples This step examined the effect of different mobile phases on the analysis of the two analytes. Four factors were chosen to be tested in the screening step: pH, flow rate, detection wavelength, and % of MeOH. Screening with FFD Analysis of variance (ANOVA) for the studied factors is given in Table 5.
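The screening ANOVA reported next rests on a simple calculation: in a two-level coded design, each main effect is the difference between the mean response at the factor's high (+1) and low (−1) settings, which ANOVA then tests against noise. The short sketch below illustrates this for an 8-run 2^(4−1) resolution IV design; the factor-to-column assignment and the response values are invented placeholders, not measurements from the study.

import itertools
import numpy as np

# Coded 2^(4-1) resolution IV screening design (generator D = ABC), 8 runs x 4 factors.
base = np.array(list(itertools.product([-1, 1], repeat=3)))
design = np.hstack([base, (base[:, 0] * base[:, 1] * base[:, 2]).reshape(-1, 1)])

# Invented response values (e.g. a retention time, in min) -- placeholders only.
y = np.array([6.4, 4.6, 5.7, 5.4, 5.6, 5.3, 6.4, 4.7])

factors = ["pH", "detection wavelength", "flow rate", "% MeOH"]   # illustrative mapping
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f}")

With these placeholder responses the flow rate and % MeOH columns show the largest effects, loosely echoing the pattern reported below.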
The results indicated that only flow rate and % MeOH were significant variables. Pareto charts, presented in Fig. 3, showed that flow rate had a significant effect only on the retention time (VAL), while % MeOH was a critical variable for both responses. Table 5 ANOVA results of the fractional factorial design (insignificant interaction effects were excluded) Pareto chart showing factor effects on: (A) retention time (VAL) and (B) resolution between XIP and valsartan (VAL) Optimization with CCD The ANOVA results for the significant factors are detailed in Table 6. The results confirmed the factor effects obtained previously by the screening ANOVA. In addition, quadratic effects on retention time (VAL) were observed, while a 2FI model was suggested for resolution. Table 6 Regression coefficients of polynomial equation along with p-value of ANOVA of central composite design The perturbation plot (Fig. 4A) shows that % MeOH and flow rate had the most significant negative effects on retention time (VAL); increasing these variables was followed by a decrease in the response. The quadratic effect of % MeOH (factor A) is confirmed by the curvature of line A. On the other hand, % MeOH showed a similar effect on resolution (Fig. 4B). Contour and 3D plots (Fig. 5) show the interaction effect of the critical factors on retention time (VAL) and on resolution. The numerical optimization solution suggested the following optimal conditions: 64.5% MeOH and a 1.2 mL/min flow rate, with a desirability function of 0.716. Perturbation plot for effect of factors on: (A) retention time (VAL) and (B) resolution, where line (A) is % MeOH and line (B) is flow rate Contour (A) and 3D (B) plots showing the interaction effect of the % MeOH and flow rate on retention time (VAL) and resolution The overlay plot represents the desired requirements of factors, responses and POE, which are met in the sweet spot (S) as depicted in Fig. 6. The optimum ranges of the variables were then determined from the overlay contour plots as % MeOH 63.95–64.99% and flow rate 1.12–1.2 mL/min. These ranges represent the design space and confirm the method's robustness. Overlay plot showing the sweet spot (S) where the desired responses are met The predicted means of the responses and their POE lay within the low and high 95% PI values, thus confirming the predictability of the model. Additionally, the percentage prediction error was equal to −0.718 and 0.474 for retention time (VAL) and resolution, respectively (predicted retention time (VAL) = 4.734 and resolution = 4.998). The following quadratic equation shows the relation between the significant factors and the selected responses (Y): Y = b0 + b1A + b2B + b3AB + b4A², where b0 is the intercept and b1–b4 represent the regression coefficients of the quadratic polynomial and 2FI models for the two responses (Table 6). Method validation The method validation was performed according to Food and Drug Administration (FDA) guidelines [43,44,45]. Five different concentrations of the drug mixture were specified for linearity studies in the range of 5–100 µg/mL for both drugs (Table 7). Linear regression equations of XIP and VAL were found to be y = 45.396x + 127.84 and y = 32.53x + 108.21, respectively, and the regression coefficient values (r) were calculated to be 0.9999 for both drugs, indicating a high degree of linearity (Fig. 7).
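To illustrate how such calibration lines are obtained and used, the sketch below fits a straight line to hypothetical (concentration, peak area) pairs with numpy and then back-calculates an unknown concentration from a measured peak area using the slope/intercept form y = ax + b reported above. The peak areas are invented for demonstration and are not data from the study.

import numpy as np

# Hypothetical calibration data: concentrations (ug/mL) and peak areas (arbitrary units).
conc = np.array([5, 12.5, 25, 50, 100], dtype=float)
area = np.array([355, 695, 1262, 2397, 4667], dtype=float)   # invented example values

slope, intercept = np.polyfit(conc, area, 1)                 # y = slope*x + intercept
r = np.corrcoef(conc, area)[0, 1]
print(f"y = {slope:.3f}x + {intercept:.2f}, r = {r:.4f}")

# Back-calculate the concentration of an unknown sample from its peak area.
unknown_area = 1500.0                                         # invented example value
unknown_conc = (unknown_area - intercept) / slope
print(f"Estimated concentration: {unknown_conc:.2f} ug/mL")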
Table 7 Analytical merits for determination of XIP and VAL in pure samples using the proposed HPLC method Calibration curves for the authentic mixture of XIP and VAL using the proposed HPLC method The accuracy of the proposed method was indicated by the % recovery of two different concentrations of XIP and VAL in human plasma. The method precision was evaluated in terms of intra-day and inter-day precision using the validation QC samples at concentrations of 12.50, 25 and 50 µg/mL. Intra-day precision was evaluated based on the standard deviation (SD) and coefficient of variation (CV%), using three replicates of the same solution of pure drugs. The SD values (ranging from 0.12 to 0.37) and CV% values (ranging from 0.12 to 0.38) indicated that the method is highly precise. Also, for inter-day reproducibility, SD and CV% values were in the acceptable ranges of 0.06–0.52 and 0.06–0.53, respectively (Table 8). These results show that the proposed method has adequate precision for the simultaneous determination of both drugs in either pharmaceutical or biological samples. Table 8 Intra- and inter-day precision and stability results of XIP and VAL QC samples Selectivity and specificity The method selectivity was checked by injecting XIP and VAL solutions separately into the column, where two sharp peaks eluted at retention times of 3.4 and 4.6 min, respectively; these peaks were not observed for the blank solution. Limits of detection and limits of quantification For estimating the limits of detection and quantification, the method reported by Bhaskaran et al. [46] was used, based on the equations LOD = 3.3 σ/s and LOQ = 10 σ/s, where σ is the SD of the y-intercepts of the regression line and s is the slope of the calibration line. LODs were 0.075 and 0.134 µg/mL, while LOQs were 0.248 and 0.448 µg/mL for XIP and VAL, respectively (Table 7), showing that the proposed method is highly sensitive and applicable to future bioequivalence studies, where it is mandatory to detect small drug concentrations in plasma. Stability and precision studies were also conducted through plasma freeze–thaw cycles at −20 °C (over 3 days) using validation samples (5, 15 and 20 µg/mL of XIP and VAL) in plasma (Table 8). The recoveries for XIP and VAL were 93.09% and 89.17%, respectively, as presented in Table 9. Table 9 Result of analysis of proposed method in human plasma Analysis of human plasma XIP is well absorbed, with the maximum observed plasma concentration (Cmax) occurring within 1 h of oral dosing. Cmax after oral administration of 20 mg is 3 μg/mL [47]. VAL is rapidly absorbed after administration of tablets and oral solution, with bioavailabilities of 23% and 39%, respectively. It is not significantly metabolized, so it is excreted mainly in unchanged form via the bile [7]. Following a single oral dose of 80 mg, Cmax is approximately 3.128 ng/mL with a tmax of 1.5 h for the oral solution [48]. The proposed method was adopted for the determination of XIP and VAL in human plasma by applying a protein precipitation procedure. XIP and VAL retention times in plasma samples and the other system suitability parameters were very similar to those in pure samples (Table 4). Also, the plasma chromatogram (Fig. 2A) confirms the method specificity in clinical studies, as the plasma peaks do not interfere with the XIP and VAL peaks.
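The LOD and LOQ expressions quoted above are straightforward to evaluate. The snippet below does so for the XIP calibration slope reported earlier and an assumed SD of the y-intercepts (sigma); the sigma value is illustrative rather than taken from the paper, and is chosen only so that the resulting LOD falls near the reported XIP value.

def lod(sigma, slope):
    """Limit of detection: 3.3 * sigma / slope."""
    return 3.3 * sigma / slope

def loq(sigma, slope):
    """Limit of quantification: 10 * sigma / slope."""
    return 10 * sigma / slope

# Illustrative values: XIP calibration slope reported above; sigma is an assumption.
slope_xip = 45.396
sigma = 1.03
print(f"LOD ~ {lod(sigma, slope_xip):.3f} ug/mL, LOQ ~ {loq(sigma, slope_xip):.3f} ug/mL")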
Comparison with reported methods Analytical parameters of the developed method were compared with some of the previously reported ones for the estimation of VAL. The comparison presented in Table 10 shows that the developed procedure has the shortest run time. In addition, none of the reported methods used CCD for method optimization; CCD is superior to full factorial design (FFD), which is not generally advised in optimization procedures because of its inability to examine quadratic models. Therefore, FFD can be used only for mapping linear relationships, while CCD helps obtain more reliable models [49]. Moreover, the rotatable CCD applied in this study is better than FFD and other CCDs; it uses five variable levels and can consequently provide more accurate results. In terms of greenness, the proposed mobile phase is the most eco-friendly. Therefore, this study could be considered a promising alternative that would show better performance. In addition, statistical analysis showed no significant difference between the two methods. Table 10 Comparison of the proposed and reported methods for determination of VAL A QbD strategy was adopted to develop a robust and efficient RP-HPLC method for the simultaneous estimation of the xipamide and valsartan mixture in human plasma. Multivariate regression analysis was successfully carried out to study the main effects of 4 factors on both column efficiency and resolution. CCD was carried out to optimize the chromatographic conditions by studying the interaction and quadratic effects of the significant factors on the two selected responses. The models used for the screening and optimization steps were highly significant and confirmed the method's predictability. The method is very simple, accurate, robust, and can be applied successfully to the analysis of XIP and VAL in human plasma with a high degree of selectivity. All data generated or analyzed during this study are included in this published article. Lawrence XY, Gregory A, Mansoor AK, Stephen WH, James P, et al. Understanding pharmaceutical quality by design. AAPS J. 2014;16(4):771–83. https://doi.org/10.1208/s12248-014-9598-3. Rozet E, Pierre L, Jean-Francois M, Sondag P, Scherderac T, Bruno B. Analytical procedure validation and the quality by design paradigm. J Biopharm Stat. 2015;25(2):260–8. https://doi.org/10.1080/10543406.2014.971176. Jaiprakash NS, Mrinmayee D, Zahid Z, Devanand BS, Rohidas A. Quality by design approach: regulatory need. Arab J Chem. 2017;10(2):S3412–25. https://doi.org/10.1016/j.arabjc.2014.01.025. Baldelli S, Marrubini G, Cattaneo D, Clementi E, Cerea M. Application of quality by design approach to bioanalysis: development of a method for elvitegravir quantification in human plasma. Ther Drug Monit. 2017;39(5):531–42. https://doi.org/10.1097/FTD.0000000000000428. Montgomery DC. Design and analysis of experiments. New York: Wiley; 2013. Whalen K. Lippincott illustrated reviews: pharmacology. 7th ed. Philadelphia: Wolters Kluwer; 2019. Sweetman SC. Martindale: the complete drug reference. 38th ed. London: Pharmaceutical Press; 2014. Sebaiy MM, Abdellatef HE, Elmosallamy MAF, Alshuwaili MK. Isocratic HPLC method for simultaneous determination of amlodipine and xipamide in human plasma. Open J Anal Bioanal Chem. 2020;4(1):001–6. https://doi.org/10.17352/ojabc.000017. Abd El-Hay SS, Hashem H, Gouda AA. High performance liquid chromatography for simultaneous determination of xipamide, triamterene and hydrochlorothiazide in bulk drug samples and dosage forms. Acta Pharm. 2016;66(1):109–18.
https://doi.org/10.1515/acph-2016-0022. Sane RT, Sadana GS, Bhounsule GJ, Gaonkar MV, Nadkarni AD, Nayak VG. High-performance liquid chromatographic determination of xipamide and clopamide in pharmaceuticals. J Chromatogr A. 1986;1(356):468–72. https://doi.org/10.1016/S0021-9673(00)91520-6. Wagieh NE, Abbas SS, Abdelkawy M, Abdelrahman MM. Spectrophotometric and spectrodensitometric determination of triamterene and xipamide in pure form and in pharmaceutical formulation. Drug Test Anal. 2010;2(3):113–21. https://doi.org/10.1002/dta.92. Gaber M, Khedr AM, El-Kady AS. New and sensitive spectrophotometric method for determination of xipamide in pure and dosage forms by complexation with Fe (III), Cu (II), La (III), UO2 (II), Th (IV) and ZrO (II) ions. Int Res J Pharm Pharmacol. 2011;1:215–20. Walash MI, El-Enany N, Eid MI, Fathy ME. Stability—indicating spectrofluorimetric methods for the determination of metolazone and xipamide in their tablets. application to content uniformity testing. J Fluoresc. 2014;24(2):363–76. https://doi.org/10.1007/s10895-013-1301-z. Legorburu MJ, Alonso RM, Jiménez RM. Voltammetric study of the diuretic xipamide. Bioelectrochem Bioenerg. 1993;32(1):57–66. https://doi.org/10.1016/0302-4598(93)80020-U. Flesch G, Müller P, Lloyd P. Absolute bioavailability and pharmacokinetics of valsartan, an angiotensin II receptor antagonist, in man. Eur J Clin Pharmacol. 1997;52(2):115–20. Zareh MM, Saad MZ, Hassan WS, Elhennawy ME, Soltan MK, Sebaiy MM. Gradient HPLC method for simultaneous determination of eight sartan and statin drugs in their pure and dosage forms. Pharmaceuticals. 2020;13(2):32. https://doi.org/10.3390/ph13020032. Kumar L, Sreenivasa Reddy M, Managuli RS, Pai K. G Full factorial design for optimization, development and validation of HPLC method to determine valsartan in nanoparticles. Saudi Pharm J SPJ Pharm Soc. 2015;23:549–55. https://doi.org/10.1016/J.JSPS.2015.02.001. Sreenivasa Reddy M, Kumar L, Attari Z, Verma R. Statistical optimization of extraction process for the quantification of valsartan in rabbit plasma by a HPLC method. Indian J Pharm Sci. 2017;79:16–28. https://doi.org/10.4172/PHARMACEUTICAL-SCIENCES.1000196. Al Wassil O, Omer ME, Yassin AEB, Ahmad D. RP-HPLC method development for quantitation of valsartan in nano-structured lipid carrier formulation and in vitro release studies. Dig J Nanomater Biostruct. 2020;15:1017–102. Patro SK, Kanungo SK, Patro VJ, Choudhury NS. Stability indicating RP-HPLC method for determination of valsartan in pure and pharmaceutical formulation. J Chem. 2010;7(1):246–52. https://doi.org/10.1155/2010/487197. Vinzuda DU, Sailor GU, Sheth NR. RP-HPLC method for determination of valsartan in tablet dosage form. Int J Chem Tech Res. 2010;2(3):1461–7. Kokil SU, Bhatia MS. Simultaneous estimation of nebivolol hydrochloride and valsartan using RP HPLC. Indian J Pharm Sci. 2009;71(2):111. https://doi.org/10.4103/0250-474X.54270. Sharma M, Kothari C, Sherikar O, Mehta P. Concurrent estimation of amlodipine besylate, hydrochlorothiazide and valsartan by RP-HPLC, HPTLC and UV–spectrophotometry. J Chromatogr Sci. 2013;52(1):27–35. https://doi.org/10.1093/chromsci/bms200. Ramadan NK, Mohamed HM, Moustafa AA. Rapid and highly sensitive HPLC and TLC methods for quantitation of amlodipine besilate and valsartan in bulk powder and in pharmaceutical dosage forms and in human plasma. Anal Lett. 2010;43(4):570–81. https://doi.org/10.1080/00032710903406953. Sudesh BM, Uttamrao KS. 
Determination and validation of valsartan and its degradation products by isocratic HPLC. J Chem Metrol. 2009;3(1):1. Macek J, Klima J, Ptáček P. Rapid determination of valsartan in human plasma by protein precipitation and high-performance liquid chromatography. J Chromatogr B. 2006;832(1):169–72. https://doi.org/10.1016/j.jchromb.2005.12.035. Şatana E, Altınay Ş, Göğer NG, Özkan SA, Şentürk Z. Simultaneous determination of valsartan and hydrochlorothiazide in tablets by first-derivative ultraviolet spectrophotometry and LC. J Pharm Biomed Anal. 2001;25(5–6):1009–13. https://doi.org/10.1016/S0731-7085(01)00394-6. Tatar S, Sağlık S. Comparison of UV-and second derivative-spectrophotometric and LC methods for the determination of valsartan in pharmaceutical formulation. J Pharm Biomed Anal. 2002;30(2):371–5. https://doi.org/10.1016/S0731-7085(02)00360-6. Lakshmi K, Lakshmi S. Simultaneous spectrophotometric determination of valsartan and hydrochlorothiazide by H-point standard addition method and partial least squares regression. Acta Pharmaceutica. 2011;61(1):37–50. https://doi.org/10.2478/v10007-011-0007-5. Erk N. Spectrophotometric analysis of valsartan and hydrochlorothiazide. Anal Lett. 2002;35(2):283–302. https://doi.org/10.1081/AL-120002530. Anandakumar K, Jayamariappan M. Absorption correction method for the simultaneous estimation of amlodipine besylate, valsartan and hydrochlorothiazide in bulk and in combined tablet dosage form. Int J Pharm Pharm Sci. 2011;3(1):23–7. Gupta KR, Mahapatra AD, Wadodkar AR, Wadodkar SG. Simultaneous UV spectrophotometric determination of valsartan and amlodipine in tablet. Int J Chem Tech Res. 2010;2(1):551–6. Mohamed NG. Simultaneous determination of amlodipine and valsartan. Anal Chem Insights. 2011. https://doi.org/10.4137/ACI.S7282. Shaalan RA, Belal TS. Simultaneous spectrofluorimetric determination of amlodipine besylate and valsartan in their combined tablets. Drug Test Anal. 2010;2(10):489–93. https://doi.org/10.1002/dta.160. Cagigal E, Gonzalez L, Alonso RM, Jimenez RM. Experimental design methodologies to optimise the spectrofluorimetric determination of losartan and valsartan in human urine. Talanta. 2001;54(6):1121–33. https://doi.org/10.1016/S0039-9140(01)00379-4. Sebaiy MM, El-Adl SM, Baraka MM, Hassan AA. Rapid RP-HPLC method for simultaneous estimation of some antidiabetics; metformin, gliclazide and glimepiride in tablets. Egy J Chem. 2019;62(3):1–12. https://doi.org/10.21608/ejchem.2018.4394.1388. Hashem H, El-Sayed HM. Quality by design approach for development and validation of a RP-HPLC method for simultaneous determination of co-administered levetiracetam and pyridoxine HCl in prepared tablets. Microchem J. 2018;143:55–63. https://doi.org/10.1016/j.microc.2018.07.031. Elhawi MM, Hassan WS, El-Sheikh R, El-Sayed HM. Multivariate analysis of perampanel in pharmaceutical formulations using RP-HPLC. Chromatographia. 2020;83:1335–43. https://doi.org/10.1007/s10337-020-03950-8. Ficarra R, Calabrò ML, Cutroneo P, Tommasini S, Melardi S, et al. Validation of a LC method for the analysis of oxaliplatin in a pharmaceutical formulation using an experimental design. J Pharm Biomed Anal. 2002;29(6):1097–103. https://doi.org/10.1016/S0731-7085(02)00151-6. Thakur D, Kaur A, Sharma S. Application of QbD based approach in method development of RP-HPLC for simultaneous estimation of antidiabetic drugs in pharmaceutical dosage form. J Pharm Investig. 2017;47(3):229–39. https://doi.org/10.1007/s40005-016-0256-x. Guang W, Baraldo M, Furlanut M. 
Calculating percentage prediction error: a user's note. Pharmacol Res. 1995;32:241–8. https://doi.org/10.1016/S1043-6618(05)80029-5. US Food and Drug Administration. Guidance for industry: bioanalytical method validation. 2001. (http://www.fda.gov/downloads/Drugs/Guidance/ucm070107.pdf). Zimmer D. New US FDA draft guidance on bioanalytical method validation versus current FDA and EMA guidelines: chromatographic methods and ISR. Bioanalysis. 2014;6(1):13–9. https://doi.org/10.4155/bio.13.298. CDER (Center for Drug Evaluation and Research). Reviewer guidance: validation of chromatographic methods. 1994. https://www.fda.gov/downloads/drugs/guidances/ucm134409.pdf. Bhaskaran NA, Kumar L, Reddy MS, Pai GK. An analytical "quality by design" approach in RP-HPLC method development and validation for reliable and rapid estimation of irinotecan in an injectable formulation. Acta Pharm. 2021;71:57–79. https://doi.org/10.2478/acph-2021-0008. Knauf H, Mutschler E. Pharmacodynamics and pharmacokinetics of xipamide in patients with normal and impaired kidney function. Eur J Clin Pharmacol. 1984;26:513–20. https://doi.org/10.1007/BF00542150. Sunkara G, Bende G, Mendonza AE, Solar-Yohay S, Biswal S, Neelakantham S, Wagner R, Flarakos J, Zhang Y, Jarugula V. Bioavailability of valsartan oral dosage forms. Clin Pharmacol Drug Dev. 2014;3:132–8. https://doi.org/10.1002/CPDD.56. Rakić T, Kasagić-Vujanović I, Jovanović M, Jančić-Stojanović B, Ivanović D. Comparison of full factorial design, central composite design, and Box-Behnken design in chromatographic method development for the determination of fluconazole and its impurities. Anal Lett. 2014;47:1334–47. https://doi.org/10.1080/00032719.2013.867503. Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). Medicinal Chemistry Department, Faculty of Pharmacy, Zagazig University, Zagazig, Egypt Mahmoud M. Sebaiy, Sobhy M. El-Adl, Mohamed M. Baraka & Amira A. Hassan Analytical Chemistry Department, Faculty of Pharmacy, Zagazig University, Zagazig, Egypt Heba M. El-Sayed Mahmoud M. Sebaiy Sobhy M. El-Adl Mohamed M. Baraka Amira A. Hassan MMS, AAH and HMEl-S designed and wrote the research work; SMEl-A and MMB revised the manuscript and supervised the research. All authors read and approved the final manuscript. Correspondence to Mahmoud M. Sebaiy. All experimental protocols in the current study were approved by the Egyptian Network of Research Ethics Committees at the Faculty of Pharmacy, Zagazig University (Approved 2008). All methods were carried out in accordance with relevant regulations and guidelines. Zagazig University Hospital waived consent, as it kindly provided the human plasma. Sebaiy, M.M., El-Adl, S.M., Baraka, M.M. et al. Quality by design approach for development and validation of a RP-HPLC method for simultaneous estimation of xipamide and valsartan in human plasma. BMC Chemistry 16, 70 (2022). https://doi.org/10.1186/s13065-022-00864-4 QbD Xipamide Human plasma
June 2003, 2(2): 233-249. doi: 10.3934/cpaa.2003.2.233 Some results about a bidimensional version of the generalized BO Aniura Milanés 1, Departamento de Matematica, IMECC-UNICAMP, 13081-970, Campinas, SP, Brazil Received February 2002 Revised January 2003 Published March 2003 For the bidimensional version of the generalized Benjamin-Ono equation: $u_t-H^{(x)}u_{x y}+u^p u_y=0, \quad t\in \mathbb R,\quad (x,y)\in \mathbb R^2,$ we use the method of parabolic regularization to prove local well-posedness in the spaces $H^s(\mathbb R^2), \quad s>2$ and in the weighted spaces $\mathcal F_r^s=H^s(\mathbb R^2) \cap L^2((1+x^2+y^2)^rdxdy), \quad s>2,\quad r\in [0,1]$ and $\mathcal F_{1,k}^k=H^k(\mathbb R^2) \cap L^2((1+x^2+y^{2k})dxdy), \quad k\in\mathbb N, \quad k\geq 3.$ As in the case of BO there is lack of persistence for both the linear and nonlinear equations (for $p$ odd) in $\mathcal F_2^s$. That leads to unique continuation principles in a natural way. By standard methods based on $L^p-L^q$ estimates of the associated group we obtain global well-posedness for small initial data and nonlinear scattering for $p\geq 3,\quad s>3$. Nonexistence of square integrable solitary waves of the form $u(x,y,t)=v(x,y-ct),\quad c>0, \quad p\in \{1,2\}$ is obtained using the results about existence of solitary waves of the BO and variational methods. Keywords: wellposedness, nonlinear dispersive equations, Benjamin-Ono equation, nonlinear scattering, unique continuation principles, solitary waves. Mathematics Subject Classification: 35Q35, 35Q5. Citation: Aniura Milanés. Some results about a bidimensional version of the generalized BO. Communications on Pure & Applied Analysis, 2003, 2 (2) : 233-249. doi: 10.3934/cpaa.2003.2.233
Multiple invasions, Wolbachia and human-aided transport drive the genetic variability of Aedes albopictus in the Iberian Peninsula Federica Lucati1,2, Sarah Delacour3, John R.B. Palmer2, Jenny Caner1, Aitana Oltra1, Claudia Paredes-Esquivel4, Simone Mariani1, Santi Escartin1,5, David Roiz6, Francisco Collantes7, Mikel Bengoa8, Tomàs Montalvo9,10, Juan Antonio Delgado7, Roger Eritja1,11, Javier Lucientes3, Andreu Albó Timor1, Frederic Bartumeus1,11,12 & Marc Ventura1 Scientific Reports volume 12, Article number: 20682 (2022) The Asian tiger mosquito, Aedes albopictus, is one of the most invasive species in the world. Native to the tropical forests of Southeast Asia, over the past 30 years it has rapidly spread throughout tropical and temperate regions of the world. Its dramatic expansion has resulted in public health concerns as a consequence of its vector competence for at least 16 viruses. Previous studies showed that Ae. albopictus spread has been facilitated by human-mediated transportation, but much remains unknown about how this has affected its genetic attributes. Here we examined the factors that contributed to shaping the current genetic constitution of Ae. albopictus in the Iberian Peninsula, where the species was first found in 2004, by combining population genetics and Bayesian modelling. We found that both mitochondrial and nuclear DNA markers showed a lack of genetic structure and the presence of worldwide dominant haplotypes, suggesting regular introductions from abroad. Mitochondrial DNA showed little genetic diversity compared to nuclear DNA, likely explained by infection with maternally transmitted bacteria of the genus Wolbachia. Multilevel models revealed that greater mosquito fluxes (estimated from commuting patterns and tiger mosquito population distribution) and spatial proximity between sampling sites were associated with lower nuclear genetic distance, suggesting that rapid short- and medium-distance dispersal is facilitated by humans through vehicular traffic.
This study highlights the significant role of human transportation in shaping the genetic attributes of Ae. albopictus and promoting regional gene flow, and underscores the need for a territorially integrated surveillance across scales of this disease-carrying mosquito. The Asian tiger mosquito, Aedes (Stegomyia) albopictus (Skuse 1894), is a highly invasive species that originated in the tropical forests of Southeast Asia1. However, in the late 1970s it started a dramatic expansion throughout tropical and temperate regions of the world and it is now present in the five populated continents2. This species is an ecological generalist capable of rapid evolution and, with the aid of man, speedy colonization of new habitats1. While in its native range Ae. albopictus inhabits forested areas, breeding in natural sites such as tree holes, bromeliads and bamboo stumps, it has now adapted to breed also in artificial man-made water containers in urban and suburban human settlements3. This species has an opportunistic feeding behaviour with a strong preference for mammals, especially humans4,5. Furthermore, in temperate regions, its eggs can survive the cold winters by entering diapause3. This ecological adaptability has important implications for the epidemiology of several mosquito-borne diseases, since the tiger mosquito is a competent laboratory vector of at least 16 viruses5,6. While Ae. aegypti is considered the principal vector of dengue and Zika, Ae. albopictus is a less efficient epidemic vector (with local exceptions in some cases), having developed an enhanced transmission for chikungunya facilitated by genetic adaptation (E1-A226V substitution) of the ECSA strain6,7,8,9. Ae. albopictus has a role in sporadic autochthonous disease transmission of arboviruses (such as chikungunya, dengue and Zika) in Europe, and therefore its surveillance and control are considered a regional priority10. Invasive alien species have historically been spread by humans. However, advances in transportation logistics resulting in higher air traffic and sea-borne trading are driving a more rapid dispersal of non-indigenous species in the world, including many vectors of human diseases11. Ae. albopictus is one of the top invasive alien species in the world, being considered, together with Ae. aegypti, the most costly invasive species12,13. Its expansion in Europe coincides with the wave of introductions of invasive species in the continent, which started almost 40 years ago as a result of globalization14. Considering its limited flight range (less than 200 m/day)15, a crucial element for its success is the longevity of its desiccation-resistant eggs16, which can be passively transported by humans through commercial shipping of used tires and aquatic plants3 and ground vehicles10,17. In Spain, Ae. albopictus was first detected in Sant Cugat del Vallès, Catalonia, in 200418 as a result of the nuisances associated with its aggressive anthropophilic behaviour19. Almost twenty years later, this species is well established along the Mediterranean coast of Spain and it is now colonizing inner territories of the Iberian Peninsula17,20. A biological invasion is a three-step process, which involves initial dispersal, establishment of self-sustaining populations and spread to neighbouring habitats21,22. It is, however, during the initial dispersal that proactive management efforts can be most cost-effective for preventing the establishment of invasive species22,23.
For instance, successful surveillance in New Zealand in the 1990s intercepted the entrance of Ae. albopictus24, which has not been reported so far in this country. On the other hand, once the species has become established, it is highly difficult to eradicate16. In this respect, studying Ae. albopictus population genetic structure and identifying its dispersal routes, their main drivers and scales, is crucial to understand the tiger mosquito's spread and design integrated surveillance and preparedness strategies25,26. Previous population genetic studies pointed to a worldwide chaotic dispersion pattern in Ae. albopictus, with Europe harbouring several distinct genotypes that have been linked to multiple independent introductions (e.g.25,27,28). Furthermore, in Europe human transportation networks have been shown to have facilitated the spread of this mosquito and conditioned its genetic and demographic patterns29,30,31,32. As different marker types can tell different stories33,34, it is crucial to analyse both mitochondrial (mtDNA) and nuclear (nDNA) DNA data to understand what major factors contributed to shaping the genetic attributes of invasive populations. In this regard, the presence of maternally-transmitted endosymbiotic bacteria may be relevant, as they can distort mtDNA phylogenies and reduce mtDNA diversity, a process which may have little to no effect on nDNA variation35. If a maternally-inherited symbiont confers a selective advantage to its hosts, the mtDNA variants originally associated with the symbiont can rapidly spread through the host population and go to fixation, thus resulting in a great increase in the frequency of a single or few mtDNA haplotypes36. Such symbionts are widespread in many arthropod species, with Wolbachia being the most common maternally-inherited symbiont on the planet37,38. For this reason, incorporation of nDNA data is essential to corroborate results derived from mtDNA. It is now widely accepted that progress in the study of the ecology of Ae. albopictus requires an interdisciplinary approach11,39. For instance, population genetics and multilevel modelling analyses can be crucial to shed light on Ae. albopictus patterns of dispersal. Moreover, it has been shown that genetic diversity in this species is related to a higher adaptive potential29 and a diversification of its interactions with the pathogens it carries39. Understanding movement patterns in its populations in endemic areas is crucial to disentangle the dynamics of disease transmission between vectors and humans15. In this study we assess geographic patterns of mtDNA and nDNA variation in Ae. albopictus and use this information to investigate the species' dispersal routes across Spain. We also compare features of mtDNA and nDNA variation and assess whether they are incongruent, and test for Wolbachia presence to ascertain whether mtDNA diversity is affected by this maternally-transmitted parasite. Genetic variation and structure, and temporal trends of genetic diversity The nuclear DNA alignment (second internal transcribed spacer of ribosomal DNA—ITS2) included 470 sequences of 286 bp (424 sequences from Spain + 30 from France + 16 from Greece), while for the mitochondrial DNA alignment (cytochrome c oxidase gene subunit 1—COI) we obtained 471 sequences of 513 bp (424 from Spain + 40 from France + 7 from Greece) (Fig. 1, Table 1).
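For readers unfamiliar with the diversity statistics reported in the following paragraphs, the short Python sketch below applies the standard formulas, Nei's haplotype diversity Hd = n/(n−1)(1 − Σp_i²) and nucleotide diversity π as the average pairwise proportion of differing sites, to a toy alignment invented purely for illustration; it is not the pipeline used in the study.

from itertools import combinations

def haplotype_diversity(seqs):
    """Nei's haplotype diversity: Hd = n/(n-1) * (1 - sum(p_i^2))."""
    n = len(seqs)
    freqs = [seqs.count(h) / n for h in set(seqs)]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

def nucleotide_diversity(seqs):
    """Average pairwise proportion of differing sites (pi) over all sequence pairs."""
    length = len(seqs[0])
    diffs = [sum(a != b for a, b in zip(s1, s2)) / length
             for s1, s2 in combinations(seqs, 2)]
    return sum(diffs) / len(diffs)

# Toy alignment of four short sequences (invented, for illustration only).
aln = ["ACGTACGT", "ACGTACGA", "ACGTACGT", "ACGAACGT"]
print(round(haplotype_diversity(aln), 3), round(nucleotide_diversity(aln), 4))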
Haplotype and nucleotide diversity estimates calculated at the province level ranged from 0.490 to 1 and from 0.004 to 0.064 for ITS2, respectively, and from 0 to 0.350 and from 0 to 0.0014 for COI (Table 1). Locations sampled for genetic analysis. Yellow circles are centered on sample locations with radius proportional to number of mosquitoes sampled at each location. Blue diamonds indicate locations of major commercial ports in Spain (ports with over 50,000 TEU -Twenty-Foot Equivalent Unit-, based on AAPA -American Association of Port Authorities- data from 2013). Grey lines indicate provinces of Spain. Land boundaries from Natural Earth. Province boundaries from GADM. Table 1 Basic genetic statistics of Ae. albopictus sampled provinces and countries for mtDNA (COI) and nuclear DNA (ITS2). In the Iberian Peninsula, overall ITS2 genetic diversity was between three (haplotype diversity) and 10 (nucleotide diversity) times higher than for COI. Indeed, overall haplotype and nucleotide diversities were 0.7084 ± 0.025 and 0.00566 ± 0.00048 for ITS2, respectively, and 0.2196 ± 0.027 and 0.00054 ± 0.00008 for COI. We found 78 haplotypes defined by 83 polymorphic sites (21 parsimony informative) for ITS2, whereas only 18 haplotypes defined by 20 polymorphic sites (5 parsimony informative) were detected in the case of COI (Fig. 2). Haplotype networks of COI mtDNA sequences (A) and ITS2 nuclear DNA sequences (B) analysed in Ae. albopictus. Each circle represents a unique haplotype and the circle area is proportional to the number of sequences of a given haplotype. Blue dots correspond to inferred unsampled haplotypes. No apparent association between haplotypes and geography was detected. Both ITS2 and COI haplotype networks showed the presence of worldwide dominant haplotypes and the absence of a clear genetic structure along the study area (Fig. 2). Specifically, in the ITS2 network, the most frequent and centrally placed haplotypes were observed in localities spanning the entire extension of the study area and were shared with other areas of the world. However, 89.7% of the haplotypes found were newly described (70/78), the majority being low-frequency haplotypes. In the case of COI, we obtained a much simpler star-like network defined by one main central haplotype connected to many low-frequency haplotypes with little genetic differentiation from the dominant sequence. Haplotypes were separated by no more than three nucleotide substitutions, although 83.3% of the haplotypes found were unique (15/18). At the province level, we found a significant positive correlation between genetic diversity and colonisation time for COI, meaning that the provinces that were first colonised by Ae. albopictus bear higher mitochondrial genetic diversity (R2 = 0.497, p = 0.007 for haplotype diversity, R2 = 0.597, p = 0.002 for nucleotide diversity; Fig. 3). However, a deeper look showed that COI's variation in nucleotide diversity over time was almost non-existent (π range: 0–0.0014), likely indicating a statistically significant but biologically irrelevant pattern. On the contrary, we found a significant negative correlation between ITS2 nucleotide diversity and colonisation time (R2 = 0.376, p = 0.022), which displayed higher variation (π range: 0–0.022); the same relationship with haplotype diversity was not significant (Fig. 3). Relationship between genetic diversity and year of first detection in the analysed provinces. 
The mitochondrial COI fragment is indicated by grey triangles and the nuclear ITS2 gene by black circles. Only provinces with four or more samples were included in the analysis. See Table 1 for province list. Whereas these province-level correlations may be limited, to some extent, by the different sample sizes in each province, we also analysed the genetic distances among the individual sampled mosquitoes. In agreement with the haplotype network, multidimensional scaling (MDS) analysis revealed no apparent evidence of distinct genetic groups or geographic consistency within the nuclear dataset, although it did show a certain level of drift over time (Fig. 4). Most of the samples were grouped into a unique cluster in the centre of the MDS, with the exception of a few samples collected in 2014 and 2015 at the geographic extremes of the study area that laid slightly outside the main cluster (i.e. samples from the southernmost provinces of Málaga, Granada and Murcia, from the northernmost provinces of Gipuzkoa and Girona, and from the Balearic Islands in the east). The two-dimensional MDS solution accounted for 30% of the variance in the data (goodness of fit = 0.302). Multidimensional scaling plots of ITS2 genetic distances among Ae. albopictus samples, coloured by province (A) and year (B) of sample collection. Plots show the 2-dimensional solution using classical scaling. Goodness of fit = 0.302. Again using the individual sampled mosquitoes, a Mantel test of correlation revealed a significant positive correlation between ITS2 pairwise genetic distances and spatial distance (r = 0.163, p = 0.001), and negative correlations with potential tiger mosquito flux (r = − 0.022, p = 0.036) and spatial proximity (r = − 0.042, p = 0.001). The Mantel test did not find evidence of correlation between ITS2 genetic distance and temporal distance. Wolbachia infection From a total of 62 samples screened using the wsp and 16S markers, 49 (79%) from 25 localities tested positive for infection for both markers. Two additional individuals yielded positive amplifications for 16S only and were thus excluded in reporting Wolbachia prevalence. Modelling genetic distance from spatial distance, temporal distance and mosquito flux We use Bayesian multiple-membership multilevel zero-inflated beta regressions to model ITS2 genetic distance as a function of mosquito flux, spatial proximity, spatial distance, and temporal distance (see "Methods"). These models rely on each pair of sampled mosquitoes as the unit of analysis and they suggest that ITS2 genetic distance is better explained by the combination of potential tiger mosquito flux, spatial proximity, spatial distance and temporal distance than by any of these variables on their own or in smaller combinations. This is most clearly seen in our leave-one-out cross validation (LOO) comparison, in which the expected log pointwise predictive density for model 6 (M6) is 138 points lower than the next best model, with a standard error of only 20 (Supplementary Table S1). It is also seen in the Bayesian R-squared comparison, in which M6 also has the highest value, although in this case the differences are small compared to the standard errors (complete model comparisons are presented in Supplementary Fig. S1 and Supplementary Table S2). The slope estimates for the main effects in the equation for the mean (μ) of the beta distribution are shown in Fig. 
5A and Supplementary Table S2, and should be read in conjunction with the slope estimates for the main effects in the equation for the probability of zeros (π), shown in Fig. 5B and Supplementary Table S2. The coefficients on the spatial proximity and potential tiger mosquito flux variables are negative in all models of μ and positive in all models of π, even when both variables are included together (model 5 -M5- and M6), indicating that greater spatial proximity and mosquito fluxes are associated with lower ITS2 genetic distance between sampled mosquitoes, with a higher probability of zero distances. In contrast, the coefficient on the temporal distance variable is positive in all models of μ and negative in all models of π (albeit with its posterior distribution overlapping zero in M5 and M6), indicating that greater passage of time between samples is associated with greater ITS2 genetic distance and with lower probability of zero distances (Supplementary Figs. S2, S3). In M6 we find that greater spatial distance is also associated with lower probabilities (π) of zero ITS2 genetic distance (Fig. 5B). Although greater spatial distance is also associated with lower ITS2 genetic distance in the model of μ (Fig. 5A), the combined effect of the two components is an overall positive relationship between spatial distance and ITS2 genetic distance (Fig. 6A). Estimated relationship between ITS2 pairwise genetic distance, spatial distance (sp_dist; geodesic distance between sample locations in meters), spatial proximity (sp_prox; measured as the negative exponential of distance), potential tiger mosquito flux (mosq_flux; estimated from commuting patterns and tiger mosquito population distribution) and temporal distance (yr_diff; measured as absolute difference between years in which samples were taken) on the beta mean (μ) parameter (A) and on the zeros (B) in the zero-inflated beta regression models. Parameters are estimated from a set of Bayesian multilevel zero-inflated Beta regressions with multiple-membership random intercepts for the samples and sampling years represented in each pair. Predicted ITS2 pairwise genetic distance as a function of the spatial distance and spatial proximity variables taken together (A) and potential tiger mosquito flux (B) in the zero-inflated beta regression Model 6. Panel A shows predictions for the range of inter-point distances in the modelled data (0–940 km), holding potential mosquito flux at its observed median, setting the sampling years to 2011 and 2015, and arbitrarily selecting a sample pair and its associated provinces for purposes of the model's random intercepts. The inset plot in this panel shows a close-up of the predictions at very small distances (0–10 m). Panel B shows predictions for the range of mosquito fluxes in the modelled data (0–672 km), holding inter-point distance at its median, setting the sampling years to 2011 and 2015, and arbitrarily selecting a sample pair and its associated provinces. Figure 6A shows the effects predicted by M6 of changes in inter-point distance (reflected simultaneously in the spatial distance and spatial proximity variables) on ITS2 genetic distance. The range of inter-point distance values used for these predictions is the same as that observed in the data. Potential mosquito flux is held at its median, the sampling years are set to 2011 and 2015 (to give the widest range observed) and the sample pair (for purposes of the random intercepts) is arbitrarily selected. 
The overall pattern is non-linear, reflecting the combination of the spatial proximity and spatial distance variables. There is an initial steep increase in predicted ITS2 genetic distance for samples taken within a few meters of one another, which can be seen most clearly in the inset plot of Fig. 6A. Beyond these highly proximate samples, predicted ITS2 genetic distance then rises more gradually, driven by the spatial distance variable. Figure 6B shows the effects predicted by M6 of changes in potential mosquito flux on ITS2 genetic distance. The range of fluxes used for these predictions is the same as that observed in the data. Inter-point distance is held at its median and used for calculating the spatial proximity and spatial distance variables, the sampling years are set to 2011 and 2015 (to give the widest range observed) and the sample pair (for purposes of the random intercepts) is arbitrarily selected. Here we see a steep drop in predicted ITS2 genetic distance as potential mosquito flux increases from 0 to several hundred per day, with the effect of increased mosquito fluxes becoming weaker at higher values, reflecting the log-linear relationship used in the model. Although in this case the pattern is overwhelmed by the uncertainty of the predictions (the wide posterior predictive distributions shown in the lighter shades of blue), this is the result of the combined uncertainty of the other variables in the model; the effect of potential mosquito flux, net of these other variables, is very clearly shown in the parameter estimates in Fig. 5. Figure 7 shows the combined effects of inter-point distance and potential tiger mosquito flux changing together. We see, first, how the lowest values of inter-point distance (bottom edge of plot) correspond with the lowest predicted ITS2 genetic distances while the lowest values of potential tiger mosquito flux (left edge of plot) correspond with the highest. When both variables are at their lowest (bottom left corner), inter-point distance is determinative: predicted ITS2 genetic distance is low here regardless of potential tiger mosquito flux. Beyond these lowest values, we see how potential tiger mosquito flux acts to hold predicted ITS2 genetic distance down even as distance increases. Although predicted ITS2 genetic distances never reach their lowest values at these longer distances, potential tiger mosquito fluxes of around 26,000 mosquitoes per day (this is the potential flux, for example, between Barcelona municipality and El Prat de Llobregat) keep the predicted ITS2 genetic distance within the range of 0.0035–0.0036, even at distances of 100 km. Predicted ITS2 pairwise genetic distance (indicated by fill colour) as a function of inter-point distance (the spatial distance and spatial proximity variables taken together) and potential tiger mosquito flux in the zero-inflated beta regression Model 6. Predictions are shown for inter-point distances between 0 and 100 km and potential tiger mosquito fluxes between 0 and 30,000, setting the sampling years to 2011 and 2015, and arbitrarily selecting a sample pair and its associated provinces for purposes of the model's random intercepts. Note, finally, that although the ITS2 genetic distances and effect sizes shown in Figs. 5, 6 and 7 are small overall, this should be interpreted in light of the distribution of observed genetic distances, which ranged from 0 to only 0.07, with a standard deviation of 0.01.
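To make explicit how the zero-inflation and beta components combine in the predictions discussed above, the sketch below evaluates E[d] = (1 − π)·μ for a zero-inflated beta model with logit links, using predictors built in the spirit of the text (spatial proximity as a negative exponential of distance, log-scaled potential mosquito flux, and years between samples). The decay scale of the proximity term and every coefficient are arbitrary placeholders chosen only so that the qualitative pattern matches the one described (distance increases, and flux decreases, the expected distance); none of these values are the study's posterior estimates.

import math

def invlogit(x):
    """Inverse logit link."""
    return 1.0 / (1.0 + math.exp(-x))

def expected_genetic_distance(dist_m, flux_per_day, year_diff):
    """E[d] = (1 - pi) * mu under a zero-inflated beta model with logit links.
    Coefficients and the proximity decay scale are placeholders, not study estimates."""
    prox = math.exp(-dist_m / 100.0)      # spatial proximity: negative exponential of distance
    log_flux = math.log1p(flux_per_day)   # log-scaled potential tiger mosquito flux
    dist_km = dist_m / 1000.0

    # Beta mean (mu): proximity and flux pull it down, elapsed time pushes it up.
    mu = invlogit(-5.5 - 0.8 * prox - 0.05 * log_flux - 0.0001 * dist_km + 0.05 * year_diff)
    # Probability of an exact-zero distance (pi): highest for nearby, well-connected pairs.
    pi = invlogit(-2.0 + 1.5 * prox + 0.10 * log_flux - 0.005 * dist_km - 0.10 * year_diff)
    return (1.0 - pi) * mu

# Qualitative check: distance raises, and mosquito flux lowers, the expected genetic distance.
for d_m, flux in [(5, 0), (5_000, 0), (100_000, 0), (100_000, 26_000)]:
    print(d_m, flux, round(expected_genetic_distance(d_m, flux, 4), 5))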
Aedes albopictus has spread worldwide, and particularly in Europe, at a fast pace, making it one of the 100 most invasive species on Earth12. In Spain, after Ae. albopictus was first found near Barcelona in 200418, a continuous spread along the Mediterranean coast was observed17. All Mediterranean provinces are currently colonised, along with the Basque Country on the northern coast and several inland territories20, where the species continues to expand. In light of this, understanding Ae. albopictus dispersal routes across scales is crucial for planning effective early warning surveillance in non-invaded areas and implementing surveillance and control activities in the areas already colonised. The same information can also be valuable in predicting the transmission risk of pathogens by this vector. Knowledge of the effect of vehicles and transport infrastructures on the genetic structure of vector populations and their overall spreading capacity can comprehensively show our potential as a natural selective force and the existing contradictions between globalization and our efforts to combat biological invasions and pests40. Our models indicate that at very small spatial scales (i.e. several meters) the genetic variation measured by ITS2 is sharply reduced, likely representing seasonal mosquito pools that come from a main source. Beyond these scales, genetic variability steadily increases with spatial distance, as a clear positive correlation between dispersal distance and genetic variation appears. This is reflected in the combined effects of the (small scale) spatial proximity variable and the linear spatial distance variable. More interestingly, our models suggest that human transportation has a role in shaping Ae. albopictus nuclear genetic structure by means of passive dispersal of adult tiger mosquitoes in cars: there is a clear negative relationship between mosquito flux and genetic variation. Previous studies have also highlighted that, although Ae. albopictus has low natural dispersal capabilities, human-aided transport (especially cars) has probably significantly facilitated the tiger mosquito's movement and invasion process29,30,32,41. Our findings are consistent with these but go a step further by suggesting that the "hitchhiking" of Ae. albopictus in cars observed in Eritja, Palmer, Roiz, Sanpera-Calbet and Bartumeus30 actually helps to explain observed patterns of population genetics. While genetic variability increases with spatial distance, car transport can strongly reduce this effect. The high resolution of our dataset makes it possible to show the significant role of human transportation in shaping the genetic constitution of Ae. albopictus and promoting regional gene flow. Mosquito movement can be affected by human activities like commuting and human-made structures like roads, which combined act as bridges for dispersal by favouring gene flow and promoting genetic mixing32. At a much broader spatial scale, we find a general lack of genetic structure and geographic consistency among haplotypes across the large expanse of the Iberian Peninsula, with numerous haplotypes shared among several distant areas (e.g. the most abundant ITS2 haplotypes found in Spain were also detected in several other European and Asian countries). This suggests that human-mediated large-scale dispersal of Ae. albopictus is also common, and points to a pattern of regular introductions of the species from abroad, through e.g.
transportation of used tires and aquatic plants, which allows whole batches of eggs to survive and become established over long distances. Previous genetic studies of Ae. albopictus have also shown little or no genetic structure according to geography, both in the native and introduced range of the species (reviewed in39). Such a genetic feature is likely the consequence of the high level of human-mediated spread from several genetically distinct source populations followed by global dispersal, and it is concordant with what has been found in other wide-ranging invasive insect species, especially those that are closely associated with humans, e.g. the German and American cockroaches (Blattella germanica and Periplaneta americana, respectively)42,43 and the longhorn crazy ant Paratrechina longicornis44. Taken together, our results highlight the role of human activities in promoting unintentional mid- and long-distance dispersal and thus in shaping the current genetic structure of insect species commonly found in human-modified landscapes. Our study has some limitations that should be considered. First, the modelled genetic diversity range, which was obtained from ITS2 pairwise genetic distances, is low. This likely has to do with the relatively low variability of the analysed genetic marker. Indeed, nuclear genes, as well as mitochondrial fragments, are expected to be less variable and to have lower resolving power than rapidly evolving markers, e.g. microsatellites. Second, although we rely on a relatively large number of sampling sites, additional sampling across larger areas of Spain could provide better balance and a wider range of values for the models. Third, intragenomic heterogeneity (i.e. presence of multiple haplotypes within the same individual) of ITS2 has been reported in several mosquito species including the genus Aedes45,46, and this can pose a challenge in DNA sequencing and analysis. In this study, all individuals presenting intragenomic heterogeneity were thus excluded from analysis. As for future directions, although ITS2 has proved to be a useful marker for studies on the spread of Ae. albopictus47, the reassessment of the species' genetic diversity and population structure through the use of molecular markers with greater variability and/or potential, such as microsatellites and single nucleotide polymorphisms (SNPs), could provide more detailed and deeper insights into fine-scale dispersal patterns, gene flow and introduction routes. Global patterns of mtDNA and nuclear variation were highly discordant, with mtDNA showing little genetic diversity and a single star-like haplotype network. This is in agreement with previous studies showing Ae. albopictus levels of nuclear variation within the range of most insects, but extremely low mtDNA variation both within and among populations48,49,50,51. Interestingly, we found a high infection rate (79%) of Wolbachia in the studied Ae. albopictus samples. Wolbachia is a genus of maternally-inherited endosymbiotic bacteria that is known to induce male killing, feminization, parthenogenesis and cytoplasmic incompatibility, which facilitate its spread within the arthropod population52. Wolbachia is capable of inducing selective sweeps in mtDNA, i.e. fixation of a single or few mtDNA haplotypes that may become widespread in the host population through cytoplasmic hitchhiking driven by Wolbachia invasions36.
Selective sweeps on mtDNA have been shown not only to reduce haplotype diversity, producing a characteristic single star-like network, but also to cause the remaining set of haplotypes to deviate from neutrality35. Within Culicidae, the natural presence of Wolbachia has been documented in more than 30 species (e.g.53,54,55,56), with Ae. albopictus harbouring significantly lower mtDNA diversity than the uninfected species55. In light of this, our results point to Wolbachia as the causative agent for the lack of mitochondrial polymorphism recovered here, as suggested for other insect species, e.g. Acraea butterflies57 and the cherry fruit fly Rhagoletis cerasi58. Nevertheless, it has to be noted that Wolbachia can show seasonal fluctuations in infection rates59, which were not addressed in this study. Hence, the infection rate reported here may not capture a representative overview of Wolbachia infection. Moreover, the low variability in mtDNA in the introduced ranges of Ae. albopictus could also be caused by demographic processes, such as genetic drift or a population bottleneck during rapid colonization60. However, demographic processes cause changes in variation for both mitochondrial and nuclear markers, even though mtDNA is expected to show a stronger response61. Furthermore, the lack of mtDNA variation was also found in the native range of Ae. albopictus (48, but see62), strengthening our hypothesis of a Wolbachia-induced selective sweep. Alternatively, we cannot rule out that the low COI diversity detected here may be at least partially due to the length of the analysed fragment, as higher diversity has been observed when targeting larger COI fragments (> 1300 bp)63,64. Further research is needed to test this hypothesis. Mitochondrial DNA has been extensively used to shed light on the geographic origin of invasive Ae. albopictus populations and arthropods in general49,50,65, and to disentangle their phylogeographic history62,66. Nevertheless, there is broad recognition that mtDNA can commonly be under selection, thus challenging the assumption of neutrality postulated by several population, phylogeographic and phylogenetic studies33,67,68. Selection can arise due to e.g. mito-nuclear co-evolution, adaptation of mtDNA to different climatic/environmental conditions, and selective sweeps of beneficial mtDNA haplotypes68. Mitochondrial DNA is a single, linked molecule with low or no recombination, meaning that selective processes such as selective sweeps can have a profound impact on the apparent rate of genetic drift33. Therefore, we suggest that caution should be used in drawing conclusions from mtDNA alone and that, whenever possible, a multilocus approach should be adopted to achieve a correct understanding of the genetic population structure and history of the study species. Sample collection and DNA extraction Ae. albopictus samples were collected during the period 2011–2015 at 140 locations encompassing most of the current species distribution in Spain (13 provinces mostly located along the Mediterranean coast; Fig. 1, Table 1). Additional samples were collected from two areas of France (Nice N 43° 38′ 32′′ E 7° 5′ 27′′ and Montpellier N 43° 40′ 39′′ E 4° 2′ 35′′) and two regions of Greece (the area of Pylaia, in Salonica N 40° 27′ 22′′ E 23° 13′ 20′′ and Sykia N 40° 2′ 19′′ E 23° 56′ 22′′), which were used as complementary data from these Mediterranean areas (Table 1).
Sample collection was performed using different methods according to the life stage (adults, larvae or eggs) (see Supplementary Methods for details). All samples were stored in absolute ethanol and kept at -20 °C until genetic analyses. DNA was extracted from the whole bodies of mosquitoes (adults or larvae) in a final volume of 250 µL using the HotShot protocol69. Ae. albopictus nuclear and mitochondrial gene sequencing We amplified two gene regions, including one nuclear ribosomal gene (ITS2) and one mitochondrial fragment (COI). The following primers were used for amplification and sequencing: for ITS2, primers ITS-CP-P1A (5′-GTGGATCCTGTGAACTGCAGGACACATG-3′) and ITS-CP-P1B (5′-GTGTCGACATGCTTAAATTTAGGGGGTA-3′)70, and for COI, primers LCOI490 (5′-GGTCAACAAATCATAAAGATATTGG-3′) and HCO2198 (5′-TAAACTTCAGGGTGACCAAAAAATCA-3′)71 or the degenerated primers ZplankF1_M13 (5′-TGTAAAACGACGGCCAGTTCTASWAATCATAARGATATTGG-3′) and ZplankR1_M13 (5′-CAGGAAACAGCTATGACTTCAGGRTGRCCRAARAATCA-3′)72 that had the M13 primers attached following the suggestion of Ivanova, Zemlak, Hanner and Hebert73. Further details on amplification and sequencing are presented in the Supplementary Methods. In the case of ITS2, 22.7% (138) of the individuals could not be considered for analysis due to intragenomic heterogeneity (simultaneous presence of two or more haplotypes in the same individual). Resulting sequences were aligned with other relevant Ae. albopictus sequences from each gene (retrieved from GenBank) using the ClustalW algorithm in MEGA 774. Wolbachia screening To test whether mtDNA variability could be affected by the presence of endosymbiotic bacteria of the genus Wolbachia, we analysed 62 randomly selected samples (13.2% of the overall samples). Individuals were selected aiming to cover all sampling years and all studied provinces. Two molecular markers were used for detecting Wolbachia infection, namely wsp and 16S rDNA. Primers used for amplification and sequencing were: for the wsp marker, primers 81F (5′-TGGTCCAATAAGTGATGAAGA-3′) and 691R (5′-AAAAATTAAACGCTACTCCA-3′)75,76, while for 16S, primers 16SF (5′-CGGGGGAAAAATTTATTGCT-3′) and 16SR (5′-AGCTGTAATACAGAAAGTAAA-3′)77,78. Amplification conditions followed Wiwatanaratanabutr53 in the case of wsp, and Heddi, Grenier, Khatchadourian, Charles and Nardon78 in the case of 16S. PCR products were visualized on a 1% agarose gel. To validate the results, two PCR replicates were run for each sample (following Carvajal, Hashimoto, Harnandika, Amalin and Watanabe54). A third replicate was run for samples that showed incongruent results based on the two prior replicates. Wolbachia infection was confirmed by two successful amplifications of both molecular markers. In light of the high infection rate detected in the selected samples (see Results), we decided to exclude COI from population structure and multilevel modelling analyses (see below), as mtDNA diversity was likely affected by the presence of Wolbachia. Genetic diversity and structure Genetic diversity indices including number of haplotypes, number of polymorphic sites, haplotype diversity and nucleotide diversity for mtDNA and nuclear DNA were estimated for the whole dataset and for each analysed province using DNASP 6.11.0179. To estimate gene genealogies, we used HAPLOVIEWER, which turns trees built from traditional phylogenetic methods into haplotype genealogies80. 
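Before moving on to the phylogenetic analyses, the diversity indices named above can be made concrete with a minimal Python sketch. This is not the DnaSP workflow used in the paper, just the standard formulas (haplotype diversity Hd = n/(n−1)(1 − Σpᵢ²) and nucleotide diversity as the mean proportion of differing sites over all sequence pairs) applied to a toy alignment of hypothetical sequences.

```python
from collections import Counter
from itertools import combinations

# Toy alignment of equal-length sequences; the real inputs are the ITS2/COI alignments.
seqs = [
    "ACGTACGTAC",
    "ACGTACGTAC",
    "ACGTACGAAC",
    "ACTTACGTAC",
    "ACGTACGTAC",
]

n = len(seqs)
L = len(seqs[0])

# Number of haplotypes and haplotype diversity: Hd = n/(n-1) * (1 - sum(p_i^2))
counts = Counter(seqs)
freqs = [c / n for c in counts.values()]
num_haplotypes = len(counts)
hap_div = n / (n - 1) * (1 - sum(p ** 2 for p in freqs))

# Nucleotide diversity: mean proportion of differing sites over all sequence pairs
def prop_diff(a, b):
    return sum(x != y for x, y in zip(a, b)) / L

nuc_div = sum(prop_diff(a, b) for a, b in combinations(seqs, 2)) / (n * (n - 1) / 2)

# Number of polymorphic (segregating) sites
poly_sites = sum(len(set(col)) > 1 for col in zip(*seqs))

print(num_haplotypes, round(hap_div, 3), round(nuc_div, 4), poly_sites)
```

For the toy alignment this prints 3 haplotypes, Hd = 0.7, nucleotide diversity 0.08 and 3 polymorphic sites; the same quantities were estimated per province and for the whole dataset with DNASP.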
We estimated the phylogeny using a maximum-likelihood approach as implemented in RAxML 7.7.181, with a GTRCAT model of rate heterogeneity and no invariant sites for COI, and a gamma model of rate heterogeneity with invariant sites (GTR+G+I) for ITS2. The most appropriate model of nucleotide evolution was selected using jModelTest 2.1.382 under the Akaike information criterion (AIC). Input data were COI or ITS2 sequences from each individual, subsequently collapsed into haplotypes. The best tree was selected for network construction in HAPLOVIEWER. Population structure of nuclear DNA was characterized using classical multidimensional scaling (MDS), also known as principal coordinates analysis83. Pairwise distances between ITS2 sequences were calculated in MEGA by estimation of evolutionary divergence over sequence pairs, using the Kimura 2-parameter substitution model. MDS analysis was then performed with the pairwise genetic distance matrix, using the cmdscale function of the stats package in R 4.1.284. Construction of covariates Our analysis of the drivers of genetic distance focused on four constructed covariates: spatial distance, spatial proximity, temporal distance, and potential tiger mosquito flux. We constructed the spatial distance variable as the geodesic distance between the locations of each sampled mosquito (inter-point distance) in meters, calculated using the Vincenty method on the WGS84 ellipsoid as implemented in the geosphere package for R85. We constructed spatial proximity as the negative exponential of spatial distance in km, represented as \(e^{-d}\), where \(e\) is Euler's number and \(d\) is spatial distance in km. This approach gives high values for spatial proximity (0.82–1) within the 200 m buffer often taken as the tiger mosquito's maximum flying distance, with values then quickly dropping and nearing zero by 5 km. The spatial distance and spatial proximity variables together are intended to capture the effects of inter-point distance at different scales. We constructed the temporal distance variable as the absolute number of years elapsed between the sampling times when each member of the sample pair was captured, rounded up to the nearest year (another way of thinking about this is as the number of mosquito seasons between each capture time). We constructed potential tiger mosquito flux as the potential daily bidirectional gross number of Ae. albopictus moving between each pair of municipalities. We estimated this by combining municipality-level Ae. albopictus risk estimates from the Mosquito Alert citizen science platform86 with commuter flow estimates drawn from the Spanish Labour Force Survey (LFS) in a manner similar to that described in Eritja, Palmer, Roiz, Sanpera-Calbet and Bartumeus30 (see Supplementary Methods for further details). Modelling the influence of spatial proximity, mosquito flux, and temporal distance on genetic distance We explored drivers of nuclear genetic structure by carrying out simple Mantel tests of correlations87 between ITS2 pairwise genetic distance and potential tiger mosquito flux, spatial distance, spatial proximity, and temporal distance, using pairs of individual mosquitoes as the unit of analysis. We implemented all tests in the ecodist package for R88, using 1000 permutations with bootstrap confidence limits estimated using 500 iterations.
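To make the covariate definitions above concrete before turning to the regression models, the sketch below builds the four pairwise covariates for a single hypothetical pair of samples. It uses geopy's geodesic distance (Karney's method on the WGS84 ellipsoid) as a stand-in for the Vincenty calculation done with the geosphere R package in the paper; the coordinates, sampling years and flux value are invented for illustration, and the flux placeholder skips the commuting-flow and Mosquito Alert inputs entirely.

```python
import math
from geopy.distance import geodesic  # WGS84 geodesic distance; a stand-in for geosphere's Vincenty method

# Hypothetical sample records: (latitude, longitude, sampling year)
sample_a = (41.3874, 2.1686, 2011)   # e.g. a point near Barcelona (made up)
sample_b = (41.9794, 2.8214, 2015)   # e.g. a point near Girona (made up)

# Spatial distance: geodesic distance between the two sampling points, in metres
sp_dist_m = geodesic(sample_a[:2], sample_b[:2]).meters

# Spatial proximity: negative exponential of distance in km (close to 1 within ~200 m, near 0 beyond ~5 km)
sp_prox = math.exp(-sp_dist_m / 1000.0)

# Temporal distance: absolute number of years between captures ("mosquito seasons" apart)
yr_diff = abs(sample_a[2] - sample_b[2])

# Potential mosquito flux would come from commuting flows combined with municipality-level risk;
# here it is just a placeholder value, log-transformed as in the models (ln_mosq_flux).
mosq_flux = 250.0                    # hypothetical mosquitoes per day between the two municipalities
ln_mosq_flux = math.log(mosq_flux)

print(round(sp_dist_m / 1000, 1), round(sp_prox, 6), yr_diff, round(ln_mosq_flux, 2))
```

These four quantities, computed for every pair of sampled mosquitoes, are exactly what enters both the Mantel tests and the multilevel models described next.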
We then used Bayesian multiple membership multilevel regressions to model ITS2 pairwise genetic distances between sampled mosquito pairs as a function of potential tiger mosquito flux between sampling sites, spatial distance and spatial proximity between sampling sites, and temporal distance. We relied on zero-inflated beta regressions89, in which the distribution of the stochastic component is a mixture of a beta distribution and a degenerate distribution at 0. Following Figueroa-Zúñiga, Arellano-Valle and Ferrari90 and Branscum, Johnson and Thurmond91, we used the beta distribution for ITS2 genetic distance values in (0, 1) because it is a highly flexible distribution that is defined on that interval. As these authors have done, we parameterized the beta distribution in terms of a mean (μ) and a parameter (ϕ) that captures precision. The probability density function of a variable (y) in this parameterization is: $$b\left(y|\mu ,\phi \right)=\frac{\Gamma \left(\phi \right)}{\Gamma \left(\mu \phi \right)\Gamma \left(\left(1-\mu \right)\phi \right)}{y}^{\mu \phi -1}{\left(1-y\right)}^{\left(1-\mu \right)\phi -1},0<y<1$$ Since the ITS2 genetic distances in our dataset include zeros, for which the beta distribution is undefined, we used a zero-inflation component in our models. In addition to making it possible to include the zeros in the analysis (without adding artificial noise to them), this approach also has the advantage of treating them separately, which should minimize any problems associated with the inadvertent sampling of siblings (see Supplementary Methods). As explained in Ospina and Ferrari92, the probability density function of a variable y for this zero-inflated beta (zib) mixture is: $${\text{zib}}\left(y|\pi ,\mu ,\phi \right)=\left\{\begin{array}{ll}\pi ,& \text{if ITS2 distance}\hspace{0.25em}=0\\ \left(1-\pi \right)b\left(y|\mu ,\phi \right),& \text{if ITS2 distance}\hspace{0.25em}\in \left(0,1\right)\end{array}\right.$$ where 0 < π < 1 is the probability of observing an ITS2 distance of 0, and μ and ϕ are defined as above. We treated the observed ITS2 genetic distances between each sample pair i as independent random variables y1,…,yn drawn from this zero-inflated beta distribution such that \({y}_{i}\sim {\text{zib}}\left(y|{\pi }_{i},{\mu }_{i},{\phi }_{i}\right)\). We used a logit link to model both π and μ, and a log link to model ϕ. We fit six models (M1–M6) with different combinations of mosquito flux, spatial proximity, temporal distance, and spatial distance as covariates.
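A compact way to see how the mean-precision parameterization and the zero-inflation component fit together is to write the density down in code. The Python sketch below converts (μ, ϕ) to the standard beta shape parameters a = μϕ and b = (1 − μ)ϕ and mixes in the point mass at zero. It is a generic illustration of the zib distribution, not the fitting code used in the paper, and the example parameter values at the end are invented.

```python
import numpy as np
from scipy import stats

def zib_logpdf(y, pi, mu, phi):
    """Log-density of the zero-inflated beta: point mass pi at 0 plus (1-pi)*Beta(mu*phi, (1-mu)*phi) on (0,1)."""
    y = np.asarray(y, dtype=float)
    a, b = mu * phi, (1.0 - mu) * phi                    # mean-precision -> shape parameters
    out = np.full(y.shape, np.log(pi))                   # exactly-zero genetic distances
    pos = y > 0
    out[pos] = np.log1p(-pi) + stats.beta.logpdf(y[pos], a, b)  # positive distances in (0, 1)
    return out

def zib_rvs(pi, mu, phi, size, seed=None):
    """Draw random ITS2-like distances from the zero-inflated beta."""
    rng = np.random.default_rng(seed)
    zeros = rng.random(size) < pi
    draws = rng.beta(mu * phi, (1.0 - mu) * phi, size)
    return np.where(zeros, 0.0, draws)

# Example (made-up values): 30% of pairs identical, mean distance 0.004 among the rest, high precision
sample = zib_rvs(pi=0.3, mu=0.004, phi=800.0, size=5, seed=1)
print(sample)
print(zib_logpdf(sample, 0.3, 0.004, 800.0))
```

The logit links for π and μ and the log link for ϕ simply mean that the linear predictors described next are mapped onto (0, 1) and (0, ∞) before being plugged into this density.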
The models for μ are specified as follows: $$\begin{array}{ll}\text{M1:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\mu }_{i}}{1-{\mu }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+\sum_{k\in {\text{ID}}\left(i\right)}{\alpha }_{k}+{\beta }_{1}\text{sp\_prox}\\ \text{M2:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\mu }_{i}}{1-{\mu }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+\sum_{k\in {\text{ID}}\left(i\right)}{\alpha }_{k}+{\beta }_{2}\text{ln\_mosq\_flux}\\ \text{M3:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\mu }_{i}}{1-{\mu }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+\sum_{k\in {\text{ID}}\left(i\right)}{\alpha }_{k}+{\beta }_{1}\text{sp\_prox}+{\beta }_{3}\text{yr\_diff}\\ \text{M4:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\mu }_{i}}{1-{\mu }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+\sum_{k\in {\text{ID}}\left(i\right)}{\alpha }_{k}+{\beta }_{2}\text{ln\_mosq\_flux}+{\beta }_{3}\text{yr\_diff}\\ \text{M5:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\mu }_{i}}{1-{\mu }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+\sum_{k\in {\text{ID}}\left(i\right)}{\alpha }_{k}+{\beta }_{1}\text{sp\_prox}+{\beta }_{2}\text{ln\_mosq\_flux}+{\beta }_{3}\text{yr\_diff}\\ \text{M6:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\mu }_{i}}{1-{\mu }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+\sum_{k\in {\text{ID}}\left(i\right)}{\alpha }_{k}+{\beta }_{1}\text{sp\_prox}+{\beta }_{2}\text{ln\_mosq\_flux}+{\beta }_{3}\text{yr\_diff}+{\beta }_{4}\text{sp\_dist}\end{array}$$ where αj represents a random intercept for each of the years in which the mosquitoes in pair i were sampled, and αk represents a random intercept for each of the sampled mosquitoes. This multiple membership approach allows us to account for the effects of sample year and sample itself, which could otherwise confound results given that each pair may be connected to other pairs by the sampling years and samples that they share93. \({\beta }_{1}\), \({\beta }_{2}\), \({\beta }_{3}\) and \({\beta }_{4}\) are the slopes on the spatial proximity (sp_prox), log mosquito flux (ln_mosq_flux), temporal distance (yr_diff) and spatial distance (sp_dist) variables, and α0 is the overall model intercept. We explicitly model the ϕ parameter of our beta distributions as a function of the provinces in which each sample in the pair was taken. This allows us to explore how the precision of the beta distribution changes across geography. The choice of provinces as the areal units for this part of the model is based on these units representing patterns of human settlement and activity that should be of relevance to tiger mosquito spreading patterns, while also being large enough to avoid adding too many additional parameters to an already complicated model.
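The multiple-membership structure in M1–M6 simply means that each pair's linear predictor picks up one random intercept per sampling year and one per sample in the pair. The Python sketch below assembles the logit-scale predictor for μ under the M6 structure with made-up random-effect values, slopes and covariate scalings (the real model places priors on all of these quantities and estimates them jointly).

```python
import math

# Hypothetical random intercepts (in the real model these are estimated, with shared values linking overlapping pairs)
year_re   = {2011: 0.05, 2013: -0.02, 2015: -0.04}    # one intercept per sampling year
sample_re = {"mosq_017": 0.10, "mosq_233": -0.07}      # one intercept per sampled mosquito

# Made-up fixed effects for M6: intercept and slopes on sp_prox, ln_mosq_flux, yr_diff, sp_dist
alpha0, b_prox, b_flux, b_yr, b_dist = -5.6, -0.5, -0.05, 0.03, -0.0002

def mu_linpred(pair_years, pair_samples, sp_prox, ln_mosq_flux, yr_diff, sp_dist_km):
    """Logit-scale linear predictor for the beta mean of one mosquito pair (M6-style structure)."""
    eta = alpha0
    eta += sum(year_re[y] for y in pair_years)          # multiple membership: both sampling years contribute
    eta += sum(sample_re[s] for s in pair_samples)      # ...and both samples contribute
    eta += b_prox * sp_prox + b_flux * ln_mosq_flux + b_yr * yr_diff + b_dist * sp_dist_km
    return eta

eta = mu_linpred((2011, 2015), ("mosq_017", "mosq_233"),
                 sp_prox=0.0, ln_mosq_flux=math.log(250.0), yr_diff=4, sp_dist_km=85.0)
mu = 1.0 / (1.0 + math.exp(-eta))                       # inverse logit back to (0, 1)
print(round(eta, 3), round(mu, 5))
```

The distance covariate is expressed in kilometres here purely for readability; whichever scaling is used, the structure of plain sums over the two years and two samples matches the Σ terms written in the equations above.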
The model for ϕ is the same in all six models: $$\mathrm{ln}\left({\phi }_{i}\right)={\alpha }_{0}+\sum_{j\in {\text{prv}}\left(i\right)}{\alpha }_{j}$$ Finally, we modelled π (the probability of ITS2 pairwise distance being equal to zero) as: $$\begin{array}{ll}\text{M1:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\pi }_{i}}{1-{\pi }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+{\beta }_{1}\text{sp\_prox}\\ \text{M2:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\pi }_{i}}{1-{\pi }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+{\beta }_{2}\text{ln\_mosq\_flux}\\ \text{M3:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\pi }_{i}}{1-{\pi }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+{\beta }_{1}\text{sp\_prox}+{\beta }_{3}\text{yr\_diff}\\ \text{M4:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\pi }_{i}}{1-{\pi }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+{\beta }_{2}\text{ln\_mosq\_flux}+{\beta }_{3}\text{yr\_diff}\\ \text{M5:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\pi }_{i}}{1-{\pi }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+{\beta }_{1}\text{sp\_prox}+{\beta }_{2}\text{ln\_mosq\_flux}+{\beta }_{3}\text{yr\_diff}\\ \text{M6:}\hspace{0.17em}\hspace{0.17em}\mathrm{ln}\left(\frac{{\pi }_{i}}{1-{\pi }_{i}}\right)& ={\alpha }_{0}+\sum_{j\in {\text{yr}}\left(i\right)}{\alpha }_{j}+{\beta }_{1}\text{sp\_prox}+{\beta }_{2}\text{ln\_mosq\_flux}+{\beta }_{3}\text{yr\_diff}+{\beta }_{4}\text{sp\_dist}\end{array}$$ The systematic part of the model for π is almost identical to the systematic part of the model for μ. The only difference is the absence of the sample random intercepts, which are excluded here for computational reasons. Details on model fitting and accuracy estimation are outlined in the Supplementary Methods. Newly generated mitochondrial and nuclear DNA sequence data were deposited in GenBank under accession numbers OP060971-OP060988 and OP077008-OP077085. Hawley, W. A. The biology of Aedes albopictus. J. Am. Mosq. Control Assoc. 1, 1–39 (1988). Benedict, M. Q., Levine, R. S., Hawley, W. A. & Lounibos, L. P. Spread of the tiger: global risk of invasion by the mosquito Aedes albopictus. Vector-Borne Zoonotic Dis. 7, 76–85 (2007). Paupy, C., Delatte, H., Bagny, L., Corbel, V. & Fontenille, D. Aedes albopictus, an arbovirus vector: From the darkness to the light. Microb. Infect. 11, 1177–1185 (2009). Delatte, H. et al. Blood-feeding behavior of Aedes albopictus, a vector of Chikungunya on La Réunion. Vector-Borne Zoonotic Dis. 10, 249–258 (2010). Pereira-dos-Santos, T., Roiz, D., Lourenço-de-Oliveira, R. & Paupy, C. A systematic review: Is Aedes albopictus an efficient bridge vector for zoonotic arboviruses? Pathogens 9, 266 (2020). Gratz, N. Critical review of the vector status of Aedes albopictus. Med. Vet. Entomol. 18, 215–227 (2004). Grard, G. et al. Zika virus in Gabon (Central Africa)—2007: A new threat from Aedes albopictus? PLoS Negl. Trop. Dis. 8, e2681 (2014). Lambrechts, L., Scott, T. W. & Gubler, D. J. Consequences of the expanding global distribution of Aedes albopictus for dengue virus transmission. PLoS Negl. Trop. Dis. 4, e646 (2010). Lounibos, L. P. & Kramer, L. D. Invasiveness of Aedes aegypti and Aedes albopictus and vectorial capacity for chikungunya virus. J. Infect. Dis. 214, S453–S458 (2016). 
European Centre for Disease Prevention and Control (ECDC). Vector Control with a Focus on Aedes aegypti and Aedes albopictus Mosquitoes: Literature Review and Analysis of Information (ECDC, Stockholm, Sweden, 2017). Tatem, A. J., Hay, S. I. & Rogers, D. J. Global traffic and disease vector dispersal. PNAS 103, 6242–6247 (2006). Lowe, S., Browne, M., Boudjelas, S. & De Poorter, M. 100 of the World's Worst Invasive Alien Species: A Selection From the Global Invasive Species Database, Vol. 12 (Invasive Species Specialist Group, 2000). Diagne, C. et al. High and rising economic costs of biological invasions worldwide. Nature 592, 571–576 (2021). Hulme, P. E. Trade, transport and trouble: Managing invasive species pathways in an era of globalization. J. Appl. Ecol. 46, 10–18 (2009). Marini, F., Caputo, B., Pombi, M., Tarsitani, G. & Della-Torre, A. Study of Aedes albopictus dispersal in Rome, Italy, using sticky traps in mark–release–recapture experiments. Med. Vet. Entomol. 24, 361–368 (2010). Bonizzoni, M., Gasperi, G., Chen, X. & James, A. A. The invasive mosquito species Aedes albopictus: current knowledge and future perspectives. Trends Parasitol. 29, 460–468 (2013). Collantes, F. et al. Review of ten-years presence of Aedes albopictus in Spain 2004–2014: Known distribution and public health concerns. Parasit Vectors 8, 1–11 (2015). Aranda, C., Eritja, R. & Roiz, D. First record and establishment of the mosquito Aedes albopictus in Spain. Med. Vet. Entomol. 20, 150–152 (2006). Giménez, N. et al. Introduction of Aedes albopictus in Spain: A new challenge for public health. Gac. Sanit. 21, 25–28 (2007). European Centre for Disease Prevention and Control and European Food Safety Authority. Mosquito maps [internet]. Stockholm: ECDC. https://ecdc.europa.eu/en/disease-vectors/surveillance-and-disease-data/mosquito-maps (2022). Shigesada, N. & Kawasaki, K. Biological Invasions: Theory and Practice (Oxford University Press, 1997). Puth, L. M. & Post, D. M. Studying invasion: Have we missed the boat? Ecol. Lett. 8, 715–721 (2005). Leung, B. et al. An ounce of prevention or a pound of cure: Bioeconomic risk analysis of invasive species. Proc. R Soc. Lond. Ser. B Biol. Sci. 269, 2407–2413 (2002). Lounibos, L. P. Invasions by insect vectors of human disease. Annu. Rev. Entomol. 47, 233–266 (2002). Manni, M. et al. Genetic evidence for a worldwide chaotic dispersion pattern of the arbovirus vector, Aedes albopictus. PLoS Negl. Trop. Dis. 11, e0005332 (2017). Roiz, D. et al. Integrated Aedes management for the control of Aedes-borne diseases. PLoS Negl. Trop. Dis. 12, e0006845 (2018). Lühken, R. et al. Microsatellite typing of Aedes albopictus (Diptera: Culicidae) populations from Germany suggests regular introductions. Infect. Genet. Evol. 81, 104237 (2020). Battaglia, V. et al. The worldwide spread of the tiger mosquito as revealed by mitogenome haplogroup diversity. Front. Genet. 7, 208 (2016). Medley, K. A., Jenkins, D. G. & Hoffman, E. A. Human-aided and natural dispersal drive gene flow across the range of an invasive mosquito. Mol. Ecol. 24, 284–295 (2015). Eritja, R., Palmer, J. R., Roiz, D., Sanpera-Calbet, I. & Bartumeus, F. Direct evidence of adult Aedes albopictus dispersal by car. Sci. Rep. 7, 1–15 (2017). Sherpa, S. et al. Unravelling the invasion history of the Asian tiger mosquito in Europe. Mol. Ecol. 28, 2360–2377 (2019). Swan, T. et al. A literature review of dispersal pathways of Aedes albopictus across different spatial scales: Implications for vector surveillance. 
Parasit Vectors 15, 1–13 (2022). Ballard, J. W. O. & Whitlock, M. C. The incomplete natural history of mitochondria. Mol. Ecol. 13, 729–744. https://doi.org/10.1046/j.1365-294X.2003.02063.x (2004). Toews, D. P. L. & Brelsford, A. The biogeography of mitochondrial and nuclear discordance in animals. Mol. Ecol. 21, 3907–3930. https://doi.org/10.1111/j.1365-294X.2012.05664.x (2012). Hurst, G. D. & Jiggins, F. M. Problems with mitochondrial DNA as a marker in population, phylogeographic and phylogenetic studies: The effects of inherited symbionts. Proc. R. Soc. B: Biol. Sci. 272, 1525–1534 (2005). Cariou, M., Duret, L. & Charlat, S. The global impact of Wolbachia on mitochondrial diversity and evolution. J. Evol. Biol. 30, 2204–2210 (2017). Zug, R. & Hammerstein, P. Still a host of hosts for Wolbachia: analysis of recent data suggests that 40% of terrestrial arthropod species are infected. PLoS ONE 7, e38544 (2012). Weinert, L. A., Araujo-Jnr, E. V., Ahmed, M. Z. & Welch, J. J. The incidence of bacterial endosymbionts in terrestrial arthropods. Proc. R. Soc. B: Biol. Sci. 282, 20150249 (2015). Goubert, C., Minard, G., Vieira, C. & Boulesteix, M. Population genetics of the Asian tiger mosquito Aedes albopictus, an invasive vector of human diseases. Heredity 117, 125–134 (2016). Western, D. Human-modified ecosystems and future evolution. PNAS 98, 5458–5465 (2001). Pech-May, A. et al. Population genetics and ecological niche of invasive Aedes albopictus in Mexico. Acta Trop. 157, 30–41 (2016). Vargo, E. L. et al. Hierarchical genetic analysis of German cockroach (Blattella germanica) populations from within buildings to across continents. PLoS ONE 9, e102321 (2014). von Beeren, C., Stoeckle, M. Y., Xia, J., Burke, G. & Kronauer, D. J. Interbreeding among deeply divergent mitochondrial lineages in the American cockroach (Periplaneta americana). Sci. Rep. 5, 1–7 (2015). Tseng, S.-P. et al. Genetic diversity and Wolbachia infection patterns in a globally distributed invasive ant. Front. Genet. 10, 838 (2019). Wesson, D. M., Porter, C. H. & Collins, F. H. Sequence and secondary structure comparisons of ITS rDNA in mosquitoes (Diptera: Culicidae). Mol. Phylogen. Evol. 1, 253–269 (1992). Mishra, S., Sharma, G., Das, M. K., Pande, V. & Singh, O. P. Intragenomic sequence variations in the second internal transcribed spacer (ITS2) ribosomal DNA of the malaria vector Anopheles stephensi. PLoS ONE 16, e0253173 (2021). Artigas, P. et al. Aedes albopictus diversity and relationships in south-western Europe and Brazil by rDNA/mtDNA and phenotypic analyses: ITS-2, a useful marker for spread studies. Parasit Vectors 14, 1–23 (2021). Armbruster, P. et al. Infection of New-and Old-World Aedes albopictus (Diptera: Culicidae) by the intracellular parasite Wolbachia: implications for host mitochondrial DNA evolution. J. Med. Entomol. 40, 356–360 (2003). Maia, R., Scarpassa, V. M., Maciel-Litaiff, L. & Tadei, W. P. Reduced levels of genetic variation in Aedes albopictus (Diptera: Culicidae) from Manaus, Amazonas State, Brazil, based on analysis of the mitochondrial DNA ND5 gene. Gen. Mol. Res. 2000, 998–1007 (2009). Birungi, J. & Munstermann, L. E. Genetic structure of Aedes albopictus (Diptera: Culicidae) populations based on mitochondrial ND5 sequences: Evidence for an independent invasion into Brazil and United States. Ann. Entomol. Soc. Am. 95, 125–132 (2002). Kambhampati, S. & Rai, K. S. Mitochondrial DNA variation within and among populations of the mosquito Aedes albopictus. Genome 34, 288–292 (1991). 
Werren, J. H., Baldo, L. & Clark, M. E. Wolbachia: master manipulators of invertebrate biology. Nat. Rev. Microbiol. 6, 741–751 (2008). Wiwatanaratanabutr, I. Geographic distribution of wolbachial infections in mosquitoes from Thailand. J. Invertebr. Pathol. 114, 337–340 (2013). Carvajal, T. M., Hashimoto, K., Harnandika, R. K., Amalin, D. M. & Watanabe, K. Detection of Wolbachia in field-collected Aedes aegypti mosquitoes in metropolitan Manila, Philippines. Parasit. Vectors 12, 1–9 (2019). Atyame, C. M., Delsuc, F., Pasteur, N., Weill, M. & Duron, O. Diversification of Wolbachia endosymbiont in the Culex pipiens mosquito. Mol. Biol. Evol. 28, 2761–2772 (2011). Damiani, C. et al. Wolbachia in Aedes koreicus: Rare detections and possible implications. Insects 13, 216 (2022). Jiggins, F. M. Male-killing Wolbachia and mitochondrial DNA: Selective sweeps, hybrid introgression and parasite population dynamics. Genetics 164, 5–12 (2003). Schuler, H. et al. The hitchhiker's guide to Europe: The infection dynamics of an ongoing Wolbachia invasion and mitochondrial selective sweep in Rhagoletis cerasi. Mol. Ecol. 25, 1595–1609 (2016). Ross, P. A., Ritchie, S. A., Axford, J. K. & Hoffmann, A. A. Loss of cytoplasmic incompatibility in Wolbachia-infected Aedes aegypti under field conditions. PLoS Negl. Trop. Dis. 13, e0007357 (2019). Avise, J. C. Phylogeography: The history and formation of species (Harvard University Press, 2000). Rokas, A., Atkinson, R. J., Brown, G. S., West, S. A. & Stone, G. N. Understanding patterns of genetic diversity in the oak gallwasp Biorhiza pallida: Demographic history or a Wolbachia selective sweep? Heredity 87, 294–304 (2001). Porretta, D., Mastrantonio, V., Bellini, R., Somboon, P. & Urbanelli, S. Glacial history of a modern invader: Phylogeography and species distribution modelling of the Asian tiger mosquito Aedes albopictus. PLoS ONE 7, e44515. https://doi.org/10.1371/journal.pone.0044515 (2012). Motoki, M. T. et al. Population genetics of Aedes albopictus (Diptera: Culicidae) in its native range in Lao People's Democratic Republic. Parasit. Vectors 12, 1–12 (2019). Zhong, D. et al. Genetic analysis of invasive Aedes albopictus populations in Los Angeles County, California and its potential public health impact. PLoS ONE 8, e68586 (2013). Usmani-Brown, S., Cohnstaedt, L. & Munstermann, L. E. Population genetics of Aedes albopictus (Diptera: Culicidae) invading populations, using mitochondrial nicotinamide adenine dinucleotide dehydrogenase subunit 5 sequences. Ann. Entomol. Soc. Am. 102, 144–150 (2009). Mousson, L. et al. Phylogeography of Aedes (Stegomyia) aegypti (L.) and Aedes (Stegomyia) albopictus (Skuse) (Diptera: Culicidae) based on mitochondrial DNA variations. Genet. Res. 86, 1–11 (2005). Bazin, E., Glémin, S. & Galtier, N. Population size does not influence mitochondrial genetic diversity in animals. Science 312, 570–572. https://doi.org/10.1126/science.1122033 (2006). Dowling, D. K., Friberg, U. & Lindell, J. Evolutionary implications of non-neutral mitochondrial genetic variation. Ecol. Evol. 23, 546–554 (2008). Montero-Pau, J., Gómez, A. & Muñoz, J. Application of an inexpensive and high-throughput genomic DNA extraction method for the molecular ecology of zooplanktonic diapausing eggs. Limnol. Oceanogr. Methods 6, 218–222 (2008). Porter, C. H. & Collins, F. H. Species-diagnostic differences in a ribosomal DNA internal transcribed spacer from the sibling species Anopheles freeborni and Anopheles hermsi (Diptera: Culicidae). Am. J. Trop. Med. 
45, 271–279 (1991). Folmer, O., Black, M., Hoeh, W., Lutz, R. & Vrijenhoek, R. DNA primers for amplification of mitochondrial cytochrome c oxidase subunit I from diverse metazoan invertebrates. Mol. Mar. Biol. Biotechnol. 3, 294–299 (1994). Prosser, S., Martínez-Arce, A. & Elías-Gutiérrez, M. A new set of primers for COI amplification from freshwater microcrustaceans. Mol. Ecol. Resour. 13, 1151–1155 (2013). Ivanova, N. V., Zemlak, T. S., Hanner, R. H. & Hebert, P. D. Universal primer cocktails for fish DNA barcoding. Mol. Ecol. Notes 7, 544–548 (2007). Kumar, S., Stecher, G. & Tamura, K. MEGA7: Molecular evolutionary genetics analysis version 7.0 for bigger datasets. Mol. Biol. Evol. 33, 1870–1874 (2016). Zhou, W., Rousset, F. & O'Neill, S. Phylogeny and PCR–based classification of Wolbachia strains using wsp gene sequences. Proc. R Soc. Lond. Ser. B Biol. Sci. 265, 509–515 (1998). Braig, H. R., Zhou, W., Dobson, S. L. & O'Neill, S. L. Cloning and characterization of a gene encoding the major surface protein of the bacterial endosymbiont Wolbachia pipientis. J. Bacteriol. 180, 2373–2378 (1998). Hu, Y. et al. Identification and molecular characterization of Wolbachia strains in natural populations of Aedes albopictus in China. Parasit. Vectors 13, 1–14 (2020). Heddi, A., Grenier, A.-M., Khatchadourian, C., Charles, H. & Nardon, P. Four intracellular genomes direct weevil biology: Nuclear, mitochondrial, principal endosymbiont, and Wolbachia. PNAS 96, 6814–6819 (1999). Rozas, J. et al. DnaSP 6: DNA sequence polymorphism analysis of large data sets. Mol. Biol. Evol. 34, 3299–3302 (2017). Salzburger, W., Ewing, G. B. & Von Haeseler, A. The performance of phylogenetic algorithms in estimating haplotype genealogies with migration. Mol. Ecol. 20, 1952–1963 (2011). Stamatakis, A. RAxML-VI-HPC: Maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models. Bioinformatics 22, 2688–2690 (2006). Darriba, D., Taboada, G. L., Doallo, R. & Posada, D. jModelTest 2: More models, new heuristics and parallel computing. Nat. Methods 9, 772. https://doi.org/10.1038/nmeth.2109 (2012). Gower, J. C. Some distance properties of latent root and vector methods used in multivariate analysis. Biometrika 53, 325–338 (1966). R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. http://www.R-project.org/. (2021). Hijmans, R. J., Williams, E., Vennes, C. & Hijmans, M. R. J. Package 'geosphere'. Spher. Trigon. 1, 5 (2017). Palmer, J. R. et al. Citizen science provides a reliable and scalable tool to track disease-carrying mosquitoes. Nat. Commun. 8, 1–13 (2017). Mantel, N. & Valand, R. S. A technique of nonparametric multivariate analysis. Biometrics 1970, 547–558 (1970). Goslee, S. C. & Urban, D. L. The ecodist package for dissimilarity-based analysis of ecological data. J. Stat. Softw. 22, 1–19 (2007). Stewart, C. Zero-inflated beta distribution for modeling the proportions in quantitative fatty acid signature analysis. J. Appl. Stat. 40, 985–992 (2013). Figueroa-Zúñiga, J. I., Arellano-Valle, R. B. & Ferrari, S. L. Mixed beta regression: A Bayesian perspective. Comput. Stat. Data Anal. 61, 137–147 (2013). Branscum, A. J., Johnson, W. O. & Thurmond, M. C. Bayesian beta regression: Applications to household expenditure data and genetic distance between foot-and-mouth disease viruses. Aust. N. Z. J. Stat. 49, 287–301 (2007). Ospina, R. & Ferrari, S. L. Inflated beta distributions. Stat. Pap. 51, 111–126 (2010). Chung, H. 
& Beretvas, S. N. The impact of ignoring multiple membership data structures in multilevel models. Br. J. Math. Stat. Psychol. 65, 185–200 (2012). We would like to thank those citizen scientists participating in the Mosquito Alert project who captured and sent us adult tiger mosquitoes for this study. We are grateful to ReNED (Red Nacional de Entomólogos Digitales) for supporting this work. We are also thankful to Victor Ojeda, Sandra Serra and Marc Pradell for their valuable help at the initial stage of the study. The research leading to these results has received funding from the Spanish Ministry of Economy and Competitiveness (MINECO, Plan Estatal I+D+I CGL2013-43139-R), "la Caixa" Foundation (ID 100010434) under agreement HR18-00336, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 853271). Center for Advanced Studies of Blanes (CEAB-CSIC), Blanes, Catalonia, Spain Federica Lucati, Jenny Caner, Aitana Oltra, Simone Mariani, Santi Escartin, Roger Eritja, Andreu Albó Timor, Frederic Bartumeus & Marc Ventura Universitat Pompeu Fabra (UPF), Barcelona, Catalonia, Spain Federica Lucati & John R.B. Palmer School of Veterinary Medicine, The AgriFood Institute of Aragon (IA2), University of Zaragoza, Zaragoza, Spain Sarah Delacour & Javier Lucientes Applied Zoology and Animal Conservation Group, Universitat de les Illes Balears, Palma, Mallorca, Spain Claudia Paredes-Esquivel Associació Mediambiental Xatrac, Lloret de Mar, Catalonia, Spain Santi Escartin MIVEGEC, Institut de Recherche pour le Développement (IRD), Montpellier, France David Roiz Department of Zoology and Physical Anthropology, University of Murcia, Murcia, Spain Francisco Collantes & Juan Antonio Delgado Anticimex, Sant Cugat del Valles, Spain Mikel Bengoa Agencia de Salut Pública de Barcelona, Barcelona, Catalonia, Spain Tomàs Montalvo CIBER Epidemiología y Salud Pública (CIBERESP), Madrid, Spain CREAF, Cerdanyola del Vallès, Catalonia, Spain Roger Eritja & Frederic Bartumeus Institut Català de Recerca i Estudis Avançats (ICREA), Barcelona, Catalonia, Spain Frederic Bartumeus Federica Lucati Sarah Delacour John R.B. Palmer Jenny Caner Aitana Oltra Simone Mariani Francisco Collantes Juan Antonio Delgado Roger Eritja Javier Lucientes Andreu Albó Timor Marc Ventura J.R.B.P., F.B. and M.V. conceived and designed the study. S.D., J.R.B.P., A.O., C.P-E., S.M., S.E., D.R., F.C., M.B., T.M., J.A.D., R.E., J.L., and F.B. collected the samples. F.L., S.D., J.C., C.P-E. and A.A.T. analysed samples and data, under the supervision of J.R.B.P., F.B. and M.V.. F.L., S.D., J.R.B.P., J.C. and C.P-E. wrote the first draft, S.M., S.E., D.R., M.B., T.M., J.A.D., R.E., J.L., A.A.T., F.B. and M.V. improved successive versions. All authors read and approved the final manuscript. Correspondence to Federica Lucati. Supplementary Information. Lucati, F., Delacour, S., Palmer, J.R. et al. Multiple invasions, Wolbachia and human-aided transport drive the genetic variability of Aedes albopictus in the Iberian Peninsula. Sci Rep 12, 20682 (2022). https://doi.org/10.1038/s41598-022-24963-3
CommonCrawl
Instead, I urge the military to examine the use of smart drugs and the potential benefits they bring to the military. If they are safe, and provide cognitive enhancement to servicemembers, then we should discuss their use in the military. Imagine the potential benefits on the battlefield. They could potentially lead to an increase in the speed and tempo of our individual and collective OODA loop. They could improve our ability to become aware and make observations. They could improve the speed of orientation and decision-making. Lastly, smart drugs could improve our ability to act and adapt to rapidly changing situations. When it comes to coping with exam stress or meeting that looming deadline, the prospect of a "smart drug" that could help you focus, learn and think faster is very seductive. At least this is what current trends on university campuses suggest. Just as you might drink a cup of coffee to help you stay alert, an increasing number of students and academics are turning to prescription drugs to boost academic performance. Qualia Mind, meanwhile, combines more than two dozen ingredients that may support brain and nervous system function – and even empathy, the company claims – including vitamins B, C and D, artichoke stem and leaf extract, taurine and a concentrated caffeine powder. A 2014 review of research on vitamin C, for one, suggests it may help protect against cognitive decline, while most of the research on artichoke extract seems to point to its benefits to other organs like the liver and heart. A small company-led pilot study on the product found users experienced improvements in reasoning, memory, verbal ability and concentration five days after beginning Qualia Mind. In the United States, people consume more coffee than fizzy drink, tea and juice combined. Alas, no one has ever estimated its impact on economic growth – but plenty of studies have found myriad other benefits. Somewhat embarrassingly, caffeine has been proven to be better than the caffeine-based commercial supplement that Woo's company came up with, which is currently marketed at $17.95 for 60 pills. Ethical issues also arise with the use of drugs to boost brain power. Their use as cognitive enhancers isn't currently regulated. But should it be, just as the use of certain performance-enhancing drugs is regulated for professional athletes? Should universities consider dope testing to check that students aren't gaining an unfair advantage through drug use? Kratom (Erowid, Reddit) is a tree leaf from Southeast Asia; it's addictive to some degree (like caffeine and nicotine), and so it is regulated/banned in Thailand, Malaysia, Myanmar, and Bhutan among others - but not the USA. (One might think that kratom's common use there indicates how very addictive it must be, except it literally grows on trees so it can't be too hard to get.) Kratom is not particularly well-studied (and what has been studied is not necessarily relevant - I'm not addicted to any opiates!), and it suffers the usual herbal problem of being an endlessly variable food product and not a specific chemical with the fun risks of perhaps being poisonous, but in my reading it doesn't seem to be particularly dangerous or have serious side-effects.
Not all drug users are searching for a chemical escape hatch. A newer and increasingly normalized drug culture is all about heightening one's current relationship to reality—whether at work or school—by boosting the brain's ability to think under stress, stay alert and productive for long hours, and keep track of large amounts of information. In the name of becoming sharper traders, medical interns, or coders, people are taking pills typically prescribed for conditions including ADHD, narcolepsy, and Alzheimer's. Others down "stacks" of special "nootropic" supplements. Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect. The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So $\frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512$! The experiment probably used up no more than an hour or two total. Nicotine absorption through the stomach is variable and relatively reduced in comparison with absorption via the buccal cavity and the small intestine. Drinking, eating, and swallowing of tobacco smoke by South American Indians have frequently been reported.
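Returning to the Adderall value-of-information estimate quoted earlier in this passage, the short Python snippet below reproduces the arithmetic: an assumed $200/year benefit is converted to a rough net present value with a 5% annual discount rate (dividing by ln 1.05), then multiplied by the 50% prior that Adderall is useful at all and the 25% credence assigned to the information quality of the small self-experiment. The numbers are the passage's own assumptions, not new data.

```python
import math

annual_benefit = 200          # assumed dollars per year if Adderall turns out to be worth using
discount_rate = 0.05          # 5% annual discounting
p_useful = 0.50               # prior probability that Adderall is useful for this person
info_quality = 0.25           # heavy penalty for a small, informal self-experiment

npv = (annual_benefit - 0) / math.log(1 + discount_rate)   # perpetuity-style net present value
value_of_information = npv * p_useful * info_quality
print(round(npv), round(value_of_information))              # roughly 4099 and 512 dollars
```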
Tenetehara shamans reach a state of tobacco narcosis through large swallows of smoke, and Tapirape shamans are said to eat smoke by forcing down large gulps of smoke only to expel it again in a rapid sequence of belches. In general, swallowing of tobacco smoke is quite frequently likened to drinking. However, although the amounts of nicotine swallowed in this way - or in the form of saturated saliva or pipe juice - may be large enough to be behaviorally significant, at normal levels of gastric pH, nicotine, like other weak bases, is not significantly absorbed. 2 commenters point out that my possible lack of result is due to my mistaken assumption that if nicotine is absorbable through skin, mouth, and lungs it ought to be perfectly fine to absorb it through my stomach by drinking it (rather than vaporizing it and breathing it with an e-cigarette machine) - it's apparently known that absorption differs in the stomach. Two variants of the Towers of London task were used by Elliott et al. (1997) to study the effects of MPH on planning. The object of this task is for subjects to move game pieces from one position to another while adhering to rules that constrain the ways in which they can move the pieces, thus requiring subjects to plan their moves several steps ahead. Neither version of the task revealed overall effects of the drug, but one version showed impairment for the group that received the drug first, and the other version showed enhancement for the group that received the placebo first. …Four subjects correctly stated when they received nicotine, five subjects were unsure, and the remaining two stated incorrectly which treatment they received on each occasion of testing. These numbers are sufficiently close to chance expectation that even the four subjects whose statements corresponded to the treatments received may have been guessing. The FDA has approved the first smart pill for use in the United States. Called Abilify MyCite, the pill contains a drug and an ingestible sensor that is activated when it comes into contact with stomach fluid to detect when the pill has been taken. The pill then transmits this data to a wearable patch that subsequently transfers the information to an app on a paired smartphone. From that point, with a patient's consent, the data can be accessed by the patient's doctors or caregivers via a web portal. Noopept is a Russian stimulant sometimes suggested for nootropics use as it may be more effective than piracetam or other -racetams, and its smaller doses make it more convenient & possibly safer. Following up on a pilot study, I ran a well-powered blind randomized self-experiment between September 2013 and August 2014 using doses of 12-60mg Noopept & pairs of 3-day blocks to investigate the impact of Noopept on self-ratings of daily functioning in addition to my existing supplementation regimen involving small-to-moderate doses of piracetam.
A linear regression, which included other concurrent experiments as covariates & used multiple imputation for missing data, indicates a small benefit to the lower dose levels and harm from the highest 60mg dose level, but no dose nor Noopept as a whole was statistically significant. It seems Noopept's effects are too subtle to easily notice if they exist, but if one uses it, one should probably avoid 60mg+. The goal of this article has been to synthesize what is known about the use of prescription stimulants for cognitive enhancement and what is known about the cognitive effects of these drugs. We have eschewed discussion of ethical issues in favor of simply trying to get the facts straight. Although ethical issues cannot be decided on the basis of facts alone, neither can they be decided without relevant facts. Personal and societal values will dictate whether success through sheer effort is as good as success with pharmacologic help, whether the freedom to alter one's own brain chemistry is more important than the right to compete on a level playing field at school and work, and how much risk of dependence is too much risk. Yet these positions cannot be translated into ethical decisions in the real world without considerable empirical knowledge. Do the drugs actually improve cognition? Under what circumstances and for whom? Who will be using them and for what purposes? What are the mental and physical health risks for frequent cognitive-enhancement users? For occasional users? "I love this book! As someone that deals with an autoimmune condition, I deal with severe brain fog. I'm currently in school and this has had a very negative impact on my learning. I have been looking for something like this to help my brain function better. This book has me thinking clearer, and my memory has improved. I'm eating healthier and overall feeling much better. This book is very easy to follow and also has some great recipes included." How should the mixed results just summarized be interpreted vis-à-vis the cognitive-enhancing potential of prescription stimulants? One possibility is that d-AMP and MPH enhance cognition, including the retention of just-acquired information and some or all forms of executive function, but that the enhancement effect is small. If this were the case, then many of the published studies were underpowered for detecting enhancement, with most sample sizes under 50. It follows that the observed effects would be inconsistent, a mix of positive and null findings. The demands of university studies, career, and family responsibilities leave people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work. "Piracetam is not a vitamin, mineral, amino acid, herb or other botanical, or dietary substance for use by man to supplement the diet by increasing the total dietary intake. Further, piracetam is not a concentrate, metabolite, constituent, extract or combination of any such dietary ingredient. [...] Accordingly, these products are drugs, under section 201(g)(1)(C) of the Act, 21 U.S.C. § 321(g)(1)(C), because they are not foods and they are intended to affect the structure or any function of the body. Moreover, these products are new drugs as defined by section 201(p) of the Act, 21 U.S.C. 
§ 321(p), because they are not generally recognized as safe and effective for use under the conditions prescribed, recommended, or suggested in their labeling."[33]
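To make the Noopept analysis mentioned above concrete, here is a minimal sketch of a covariate-adjusted regression of daily self-ratings on blinded dose level. All column names and the data file are hypothetical, and the original analysis additionally used multiple imputation for missing days, which is omitted here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per day, with the blinded Noopept dose for that
# 3-day block (0 on placebo blocks), the daily self-rating, and a stand-in
# covariate for the concurrent experiments mentioned above.
df = pd.read_csv("daily_ratings.csv")  # assumed columns: rating, dose_mg, other_exp

# Treat dose as a categorical factor so each dose level gets its own coefficient,
# which is what lets "small benefit at low doses, harm at 60mg" be read off directly.
# (The original analysis also handled missing data via multiple imputation; this
# sketch simply drops incomplete rows, which statsmodels does by default.)
model = smf.ols("rating ~ C(dose_mg) + other_exp", data=df).fit()
print(model.summary())
```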
CommonCrawl
arXiv:1912.07058 (gr-qc)
Title: Constraining the Temperature of Astrophysical Black Holes through Ringdown Detection: Results of GW150914 Remnant
Authors: Ka-Wai Chung, Tjonnie Guang Feng Li
(Submitted on 15 Dec 2019)
Abstract: The ringdown of a black hole as a result of the merger of two black holes is a potent laboratory of the strong-field dynamics of spacetime. For example, it conveys information about the mass and spin of the remnant object, which can be related to the temperature of black holes. However, such relationships depend intimately on the assumption that general relativity is correct, and their capacity to test general relativity is restricted. We propose a novel method to measure the temperature of astrophysical black holes through detecting their quasi-normal modes, without assuming a specific dependence of the temperature on the mass and spin of the black hole. In particular, we re-evaluate the emission of gravitational waves from the ringdown under the assumption that a black hole also radiates gravitational waves through Hawking radiation. We find that the resulting gravitational-wave signal has a temperature dependence that is independent of fixed relationships amongst the mass, spin and temperature. By re-analysing the gravitational-wave signal of GW150914, we set a constraint on the temperature of its remnant to be $T < 10^6\,\mathrm{K}$. Our results rule out the possibility of having detected anomalously strong quantum-gravity effects, but do not provide evidence of possible quantum-gravity signatures.
Comments: 17 pages, 2 figures
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Astrophysical Phenomena (astro-ph.HE)
Cite as: arXiv:1912.07058 [gr-qc] (or arXiv:1912.07058v1 [gr-qc] for this version)
From: Ka-Wai Chung
[v1] Sun, 15 Dec 2019 15:29:49 UTC (451 KB)
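For rough context on the bound quoted in the abstract above (this note is not part of the arXiv record itself): in standard general relativity the Hawking temperature of a Schwarzschild black hole of mass $M$ is

$$ T_\mathrm{H} = \frac{\hbar c^3}{8\pi G M k_B} \approx 6.2\times10^{-8}\,\mathrm{K}\,\left(\frac{M_\odot}{M}\right), $$

so for a remnant of roughly $62\,M_\odot$ (the commonly quoted GW150914 remnant mass) the expected temperature is of order $10^{-9}\,\mathrm{K}$, many orders of magnitude below the observational constraint $T < 10^6\,\mathrm{K}$ reported above.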
CommonCrawl
Accepted Manuscript: Relativistic Bondi accretion for stiff equations of state

ABSTRACT: We revisit Bondi accretion – steady-state, adiabatic, spherical gas flow on to a Schwarzschild black hole at rest in an asymptotically homogeneous medium – for stiff polytropic equations of state (EOSs) with adiabatic indices Γ > 5/3. A general relativistic treatment is required to determine their accretion rates, for which we provide exact expressions. We discuss several qualitative differences between results for soft and stiff EOSs – including the appearance of a minimum steady-state accretion rate for EOSs with Γ ≥ 5/3 – and explore limiting cases in order to examine these differences. As an example, we highlight results for Γ = 2, which is often used in numerical simulations to model the EOS of neutron stars. We also discuss a special case with this index, the ultrarelativistic 'causal' EOS, P = ρ. The latter serves as a useful limit for the still undetermined neutron star EOS above nuclear density. The results are useful, for example, to estimate the accretion rate on to a mini-black hole residing at the centre of a neutron star.
Richards, Chloe B; Baumgarte, Thomas W; Shapiro, Stuart L

Numerical relativity simulations of the neutron star merger GW190425: microphysics and mass ratio effects
https://doi.org/10.1093/mnras/stac2333
Camilletti, Alessandro; Chiesa, Leonardo; Ricigliano, Giacomo; Perego, Albino; Lippold, Lukas Chris; Padamata, Surendra; Bernuzzi, Sebastiano; Radice, David; Logoteta, Domenico; Guercilena, Federico Maria (August 2022, Monthly Notices of the Royal Astronomical Society)
GW190425 was the second gravitational wave (GW) signal compatible with a binary neutron star (BNS) merger detected by the Advanced LIGO and Advanced Virgo detectors. Since no electromagnetic counterpart was identified, whether the associated kilonova was too dim or the localization area too broad is still an open question. We simulate 28 BNS mergers with the chirp mass of GW190425 and mass ratio 1 ≤ q ≤ 1.67, using numerical-relativity simulations with finite-temperature, composition-dependent equations of state (EOS) and neutrino radiation. The energy emitted in GWs is $\lesssim 0.083\,\mathrm{M_\odot}c^2$ with peak luminosity of $1.1$–$2.4\times 10^{58}/(1+q)^2\,\mathrm{erg\,s^{-1}}$. Dynamical ejecta and disc mass range between $5\times10^{-6}$–$10^{-3}$ and $10^{-5}$–$0.1\,\mathrm{M_\odot}$, respectively. Asymmetric mergers, especially with stiff EOSs, unbind more matter and form heavier discs compared to equal mass binaries. The angular momentum of the disc is $8$–$10\,\mathrm{M_\odot}\,GM_{\rm disc}/c$ over three orders of magnitude in $M_{\rm disc}$. While the nucleosynthesis shows no peculiarity, the simulated kilonovae are relatively dim compared with GW170817. For distances compatible with GW190425, AB magnitudes are always dimmer than ∼20 mag for the B, r, and K bands, with brighter kilonovae associated to more asymmetric binaries and stiffer EOSs. We suggest that, even assuming a good coverage of GW190425's sky location, the kilonova could hardly have been detected by present wide-field surveys and no firm constraints on the binary parameters or EOS can be argued from the lack of the detection.

Spherically symmetric accretion on to a compact object through a standing shock: the effects of general relativity in the Schwarzschild geometry
Kundu, Suman Kumar; Coughlin, Eric R.
(September 2022, Monthly Notices of the Royal Astronomical Society)
A core-collapse supernova is generated by the passage of a shock wave through the envelope of a massive star, where the shock wave is initially launched from the 'bounce' of the neutron star formed during the collapse of the stellar core. Instead of successfully exploding the star, however, numerical investigations of core-collapse supernovae find that this shock tends to 'stall' at small radii (≲10 neutron star radii), with stellar material accreting on to the central object through the standing shock. Here, we present time-steady, adiabatic solutions for the density, pressure, and velocity of the shocked fluid that accretes on to the compact object through the stalled shock, and we include the effects of general relativity in the Schwarzschild metric. Similar to previous works that were carried out in the Newtonian limit, we find that the gas 'settles' interior to the stalled shock; in the relativistic regime analysed here, the velocity asymptotically approaches zero near the Schwarzschild radius. These solutions can represent accretion on to a material surface if the radius of the compact object is outside of its event horizon, such as a neutron star; we also discuss the possibility that these solutions can approximately represent the accretion of gas on to a newly formed black hole following a core-collapse event. Our findings and solutions are particularly relevant in weak and failed supernovae, where the shock is pushed to small radii and relativistic effects are large.

Finite-temperature effects in dynamical spacetime binary neutron star merger simulations: validation of the parametric approach
Raithel, Carolyn A.; Espino, Pedro; Paschalidis, Vasileios (September 2022, Monthly Notices of the Royal Astronomical Society)
Parametric equations of state (EoSs) provide an important tool for systematically studying EoS effects in neutron star merger simulations. In this work, we perform a numerical validation of the M*-framework for parametrically calculating finite-temperature EoS tables. The framework, introduced by Raithel et al., provides a model for generically extending any cold, β-equilibrium EoS to finite temperatures and arbitrary electron fractions. In this work, we perform numerical evolutions of a binary neutron star merger with the SFHo finite-temperature EoS, as well as with the M*-approximation of this same EoS, where the approximation uses the zero-temperature, β-equilibrium slice of SFHo and replaces the finite-temperature and composition-dependent parts with the M*-model. We find that the approximate version of the EoS is able to accurately recreate the temperature and thermal pressure profiles of the binary neutron star remnant, when compared to the results found using the full version of SFHo. We additionally find that the merger dynamics and gravitational wave signals agree well between both cases, with differences of $\lesssim 1\!-\!2\,{\textrm{per cent}}$ introduced into the post-merger gravitational wave peak frequencies by the approximations of the EoS. We conclude that the M*-framework can be reliably used to probe neutron star merger properties in numerical simulations.

Stellar Revival and Repeated Flares in Deeply Plunging Tidal Disruption Events
https://doi.org/10.3847/2041-8213/ac5118
Nixon, C. J.; Coughlin, Eric R.
(March 2022, The Astrophysical Journal Letters)
Abstract: Tidal disruption events with tidal radius $r_t$ and pericenter distance $r_p$ are characterized by the quantity $\beta = r_t/r_p$, and "deep encounters" have $\beta \gg 1$. It has been assumed that there is a critical $\beta \equiv \beta_c \sim 1$ that differentiates between partial and full disruption: for $\beta < \beta_c$ a fraction of the star survives the tidal interaction with the black hole, while for $\beta > \beta_c$ the star is completely destroyed, and hence all deep encounters should be full. Here we show that this assumption is incorrect by providing an example of a $\beta = 16$ encounter between a $\gamma = 5/3$, solar-like polytrope and a $10^6\,M_\odot$ black hole (for which previous investigations have found $\beta_c \simeq 0.9$) that results in the reformation of a stellar core post-disruption that comprises approximately 25% of the original stellar mass. We propose that the core reforms under self-gravity, which remains important because of the compression of the gas both near pericenter, where the compression occurs out of the orbital plane, and substantially after pericenter, where compression is within the plane. We find that the core forms on a bound orbit about the black hole, and we discuss the corresponding implications of our findings in the context of recently observed, repeating nuclear transients.

Neutron Stars and the Nuclear Matter Equation of State
https://doi.org/10.1146/annurev-nucl-102419-124827
Lattimer, J.M. (September 2021, Annual Review of Nuclear and Particle Science)
Neutron stars provide a window into the properties of dense nuclear matter. Several recent observational and theoretical developments provide powerful constraints on their structure and internal composition. Among these are the first observed binary neutron star merger, GW170817, whose gravitational radiation was accompanied by electromagnetic radiation from a short γ-ray burst and an optical afterglow believed to be due to the radioactive decay of newly minted heavy r-process nuclei. These observations give important constraints on the radii of typical neutron stars and on the upper limit to the neutron star maximum mass and complement recent pulsar observations that established a lower limit. Pulse-profile observations by the Neutron Star Interior Composition Explorer (NICER) X-ray telescope provide an independent, consistent measure of the neutron star radius. Theoretical many-body studies of neutron matter reinforce these estimates of neutron star radii. Studies using parameterized dense matter equations of state (EOSs) reveal several EOS-independent relations connecting global neutron star properties.

Richards, Chloe B.; Baumgarte, Thomas W.; Shapiro, Stuart L. Relativistic Bondi accretion for stiff equations of state. Monthly Notices of the Royal Astronomical Society 502(2). https://doi.org/10.1093/mnras/stab161. https://par.nsf.gov/biblio/10232903.
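For orientation on the Bondi-accretion abstract above (this is the familiar Newtonian result quoted for context, not the relativistic expressions derived in the paper): the steady-state accretion rate onto a point mass $M$ embedded in a medium of asymptotic density $\rho_\infty$ and sound speed $a_\infty$ is

$$ \dot{M}_\mathrm{Bondi} = 4\pi\,\lambda(\Gamma)\,\frac{(GM)^2 \rho_\infty}{a_\infty^{3}}, $$

where $\lambda(\Gamma)$ is a dimensionless factor of order unity that depends on the adiabatic index (e.g. $\lambda = 1/4$ for $\Gamma = 5/3$). The point of the paper is that for stiff EOSs with $\Gamma > 5/3$ this Newtonian treatment is no longer adequate and a fully relativistic calculation is required to obtain the accretion rate.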
CommonCrawl
Mortality and immune challenge of a native isolate of Beauveria bassiana against the larvae of Glyphodes pyloalis Walker (Lepidoptera: Pyralidae)
Sarah Aghaee Pour, Arash Zibaee (ORCID: 0000-0001-8819-3166), Maryam Gohar Rostami, Hassan Hoda & Morteza Shahriari
Egyptian Journal of Biological Pest Control, volume 31, Article number: 37 (2021)

Entomopathogenic fungi (EPF) attack a wide range of insects. They are considered environmentally friendly alternatives to synthetic insecticides for pest control. In the present study, the virulence of a native isolate of the EPF Beauveria bassiana Vuillemin (Hypocreales: Cordycipitaceae) was evaluated against the lesser mulberry pyralid, Glyphodes pyloalis Walker (Lepidoptera: Crambidae), through bioassays, pathogenic pathways, and immune responses. The values of 2.6 × 10⁴ conidia/ml and 3.54 days were determined as the median lethal concentration (LC50) and median lethal time (LT50) of AM-118 against the 4th instar larvae of G. pyloalis, respectively. The activities of proteases and chitinases in the culture medium containing the larval cuticle were higher than in the control medium. Moreover, the total and differential hemocyte counts of the larvae were significantly changed after injection with AM-118 spores. The highest numbers of total hemocytes and granulocytes were obtained 3 and 6 h post-injection, while the highest numbers of plasmatocytes and nodules were observed 6 h post-injection. The highest activity of phenoloxidase was determined 12 h post-injection with AM-118 spores. The findings demonstrate the virulence of the AM-118 isolate against the larvae of G. pyloalis, although immune responses were triggered by the spores.

The lesser mulberry pyralid, Glyphodes pyloalis Walker (Lepidoptera: Pyralidae), is an important pest that causes severe damage to mulberry every year. The larvae feed intensively on leaves, fold them, and leave black feces that reduce the quality of the leaves for silkworm rearing (Khosravi and Jalali Sendi 2010). Another important concern is its capability of transmitting pathogenic diseases to the silkworm (Watanabe et al. 1988). The main control measure is to spray chemical insecticides, although the excessive use of these pesticides has resulted in severe adverse effects on non-target organisms, mainly the silkworm (Yazdani et al. 2013). Therefore, considering eco-friendly measures such as natural enemies, predators, parasitoids, or entomopathogens seems to be necessary. Several studies have examined the effects of entomopathogenic fungi (EPF), especially Beauveria bassiana Vuillemin (Hypocreales: Cordycipitaceae) isolates, against many lepidopteran insect pests. EPF not only show significant virulence against insects but also affect physiological functions by inactivating enzymes or triggering certain processes (Ramzi and Zibaee 2014; Baja et al. 2020). In the infection pathway, conidia must attach to the host cuticle, germinate through its different layers, and reproduce after reaching the hemocoel. The resulting blastospores then kill the insect host through the production of secondary metabolites and the consumption of nutrient resources in the host body (Brownbridge et al. 2001). Before any suggestion for field application, it is necessary to understand the infection mechanism of the EPF on the specific insect; therefore, the aims of the present research were to (a) bioassay the virulence of a native isolate of B. bassiana against the larvae of G.
pyloalis, (b) evaluate the production of extracellular enzymes in the presence of the host cuticle to find a correlation between secretion of these compounds and virulence, and (c) evaluate the immune responses in the hemolymph of the larvae encountering the spores of B. bassiana.

Larvae of G. pyloalis were collected from an infested mulberry plantation in Rasht, northern Iran (37° 19′ N, 49° 37′ E; -9 m). Rearing of the insects was carried out in a growth chamber at 24 ± 2 °C, 70 ± 5% R.H., and a 16:8 (L:D) h photoperiod on fresh mulberry leaves (Yazdani et al. 2013). Adult moths were placed in transparent plastic containers (20 × 12 × 12 cm³) and provided with fresh mulberry leaves for egg-laying and cotton wool soaked in 10% honey for adult feeding. The containers were cleaned daily and fresh leaves were provided for the larvae.

Beauveria bassiana fungal culture
The AM-118 isolate of B. bassiana was cultured on Potato Dextrose Agar (PDA) and kept at 25 ± 1 °C for 21 days. Then, the conidia were washed off with a 0.01% aqueous solution of Tween 80 to prepare a stock sample of fungal conidia. The isolate was collected from a rice field at Amol (26 25′ N, 52 21′ E; 17 m) and registered in the herbarium of mycology. Serial concentrations of 10³, 10⁴, 10⁵, 10⁶, and 10⁷ conidia/ml of AM-118 were prepared in sterile distilled water containing Tween 80 (0.02%); then, early 4th instar larvae of G. pyloalis were randomly selected and separately dipped in each concentration for 5 s. The control larvae were dipped in an aqueous solution of Tween 80 (0.02%) alone. After treatment, the larvae were placed on filter paper (Whatman No. 1), provided with fresh leaves, and kept at the rearing conditions. The bioassay was done in 3 replicates, with 10 larvae per replicate. Mortality was recorded after 7 days and the LC50 was determined using POLO-Plus software. For estimation of the LT50, mortality was recorded until death of all larvae at the highest conidial concentration.

Liquid culture for enzyme production
A liquid medium, containing KH2PO4, 0.02%; CaCl2, 0.01%; MgSO4, 0.01%; Na2HPO4, 0.02%; ZnCl2, 0.01%; and yeast extract, 0.01%, was used for enzyme production by AM-118. Culture flasks were inoculated with a concentration of 10⁸ conidia/ml and 5% (by weight) of G. pyloalis larval cuticle and incubated for 8 days at 25 ± 1 °C on a rotary shaker (70 rev/min) (Zibaee and Bandani 2009). In control flasks, starch and potato extract were added instead of larval cuticle.

Sample preparations for enzymatic assays
Eight days post-incubation, the mixture was harvested by centrifugation at 10,000×g and 4 °C for 30 min and washed in ice-cold Tris-HCl (25 mM, pH 8). Weighed mycelia were ground to a fine powder and suspended in distilled water. Then, the samples were homogenized and centrifuged at 20,000×g and 4 °C for 30 min to obtain the supernatant for the enzymatic assays (Ramzi and Zibaee 2014).

Assay of proteases
Activities of subtilisin-like (Pr1) and trypsin-like (Pr2) proteases, the 2 main fungal proteases, were evaluated using 30 μl of succinyl-(alanine)₂-proline-phenylalanine-p-nitroanilide for Pr1 and benzoyl-phenylalanine-valine-arginine-p-nitroanilide for Pr2 in 100 μl of Tris–HCl buffer (20 mM, pH 8). Afterward, 20 μl of the enzyme solution was added to the mixture and incubated at 25 °C for 10 min. Then, 100 μl of trichloroacetic acid (TCA, 30%) was added and the absorbance was recorded at 405 nm (Zibaee and Bandani 2009).
Endo-chitinase assay
A reaction mixture, containing 50 μl of 0.5% colloidal chitin as substrate, 20 μl of enzyme sample, and 100 μl of Tris–HCl buffer (20 mM, pH 7), was prepared to assay endo-chitinase activity. The tubes containing the reaction mixture were incubated in a water bath (30 °C) for 60 min. Then, 100 μl of DNS (dinitrosalicylic acid) was added and the incubation was prolonged for 10 min in boiling water before reading the absorbance at 545 nm (Miller 1959).

Exo-chitinase assay
The assay of exo-chitinase activity was carried out using 200 μl of p-nitrophenyl-N-acetyl-β-D-glucosaminide (pNPg) solution (1 mg pNPg per ml of distilled water) as substrate, 500 μl of Tris-HCl (25 mM, pH 7), and 25 μl of culture filtrate. The mixture was incubated for 20 h at 30 °C, centrifuged at 20,000×g at 4 °C, and the supernatant was then added to 200 μl of sodium tetraborate-NaOH buffer (125 mM, pH 10) prior to reading the absorbance at 400 nm. An extinction coefficient of 18.5 mM⁻¹ cm⁻¹ was used in the activity calculation.

Protein assay
The method of Lowry et al. (1951) was used to measure the amount of protein in the enzymatic preparations. Twenty microliters of the enzyme sample was added into 100 μl of reagent (Ziest Chem. Co., Tehran, Iran) and incubated for 30 min before reading the absorbance at 545 nm.

Hemolymph collection and hemocyte counts
Fourth instar larvae were injected in the last thoracic segment with 1 μl of Tween 80 solution containing the LC50 concentration of B. bassiana. Larval hemolymph was then collected at intervals of 1, 3, 6, 12, and 24 h after injection. Control larvae remained intact, while other larvae were injected with Tween 80 solution alone. Samples of hemolymph were bled into ice-cold anticoagulant buffer in a ratio of 1:3 (0.01 M ethylenediamine tetraacetic acid, 0.1 M glucose, 0.062 M NaCl, 0.026 M citric acid, pH 4.6) (Azambuja et al. 1991). Numbers of total hemocytes, granulocytes, plasmatocytes, and nodules were counted using a Neubauer hemocytometer (Chemkind Co., China). For each treatment, 10 larvae were used and the experiment had 3 replicates.

Assay of phenoloxidase activity
The activity of phenoloxidase (PO) was assayed according to the procedure described by Wilson et al. (2002). Briefly, 10 μl of hemolymph was added into an Eppendorf tube (1.5 ml) and centrifuged at 2000×g and 4 °C for 15 min. The supernatant was removed and 100 μl of ice-cold phosphate-buffered saline (20 mM, pH 7) was added to the pellets. To determine PO activity, samples were poured into each well of a plate containing 20 μl of 10-mM 3,4-dihydroxyphenylalanine (L-dopa). After 5 min of incubation at room temperature, the absorbance was recorded at 492 nm.

Probit analysis was performed to determine LC50 and LT50 values with the corresponding 95% confidence intervals (CI) using POLO-Plus software. Biochemical data were compared by one-way analysis of variance (ANOVA), followed by t test and Tukey's test, where applicable. Differences between control and treatments were statistically analyzed at a probability less than 5% and marked by different letters.
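As a rough illustration of the probit step just described (this is not the POLO-Plus implementation, and the counts below are invented for the example), the LC50 can be approximated by regressing empirical probits of mortality on log10 of the conidial concentration:

```python
import numpy as np
from scipy import stats

# Hypothetical bioassay data: conidial concentrations (conidia/ml),
# number of larvae exposed, and number dead after 7 days.
conc = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
n    = np.array([30, 30, 30, 30, 30])
dead = np.array([8, 14, 21, 26, 29])

# Empirical probits: the inverse standard-normal CDF of observed mortality.
p = dead / n
probit = stats.norm.ppf(p)
log_dose = np.log10(conc)

# Simple (unweighted) linear fit of probit vs. log10(dose).
slope, intercept = np.polyfit(log_dose, probit, 1)

# LC50 is the dose at which the fitted probit equals 0 (i.e. 50% mortality).
lc50 = 10 ** (-intercept / slope)
print(f"approximate LC50 = {lc50:.2e} conidia/ml")
```

POLO-Plus fits the same relationship by maximum likelihood (with corrections such as control mortality), so its estimates and confidence limits will differ from this quick least-squares approximation.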
Also, the extinction coefficient of 18.5 mM⁻¹ cm⁻¹ was considered for activity calculation based on the following formula:
$$ \mathrm{Volume}\ \mathrm{activity}\ \left(\mathrm{U}/\mathrm{ml}\right)=\left[\Delta \mathrm{OD}\ \left(\mathrm{OD}\ \mathrm{test}-\mathrm{OD}\ \mathrm{blank}\right)\times {V}_{\mathrm{t}}\times \mathrm{df}\right]/\left(18.5\times t\times 1.0\times {V}_{\mathrm{s}}\right) $$
where Vt = total volume; Vs = sample volume; 18.5 = millimolar extinction coefficient of p-nitrophenol under the assay conditions; 1.0 = light path length (cm); t = reaction time; and df = dilution factor (Miller 1959).

Results of the AM-118 bioassay against the 4th instar larvae of G. pyloalis are presented in Tables 1 and 2. AM-118 showed virulence against the larvae at the different concentrations, with a significant positive correlation between increasing AM-118 concentration and larval mortality (Table 1).
Table 1. LC50 value (conidia/ml) of Beauveria bassiana against the 4th instar larvae of Glyphodes pyloalis
Table 2. LT50 value (days) of Beauveria bassiana against the 4th instar larvae of Glyphodes pyloalis
The LC50 value of AM-118 was obtained as 2.6 × 10⁴ (7.8 × 10³–6.9 × 10⁴) spores/ml (Table 1) and the LT50 value was calculated as 3.54 (2.99–4.08) days (Table 2). The activities of Pr1 and Pr2 significantly increased in the samples containing larval cuticle compared with the control (Fig. 1). Similarly, high activities of endo- and exo-chitinase were observed in the cuticle treatment compared to the control (Fig. 2).
Fig. 1. Activities of proteases (U/mg protein) in Beauveria bassiana cultured on liquid medium containing Glyphodes pyloalis cuticle. Statistical differences are shown by asterisks (t test, p ≤ 0.05)
Fig. 2. Activities of chitinases (U/mg protein) in Beauveria bassiana cultured on liquid medium containing Glyphodes pyloalis cuticle. Statistical differences are shown by asterisks (t test, p ≤ 0.05)
The total number of hemocytes and the numbers of granulocytes and plasmatocytes in G. pyloalis larvae changed significantly after AM-118 spore injection (Fig. 3). The total number of hemocytes increased in the larvae exposed to B. bassiana conidia in comparison to the control and Tween 80 larvae at all time intervals. The highest numbers of total hemocytes were observed 3 and 6 h post-injection, similar to granulocytes (Fig. 3). Also, injection of the spores led to the highest numbers of plasmatocytes at all time intervals (Fig. 3).
Fig. 3. Changes in hemocyte numbers in Glyphodes pyloalis larvae injected with 2.6 × 10⁵ spores/ml of Beauveria bassiana (AM-118). Different letters indicate significant differences among treatments within time intervals (p ≤ 0.05)
Injection of larvae with B. bassiana spores caused the highest number of nodules 6 h post-injection, whereas control and Tween 80 larvae showed no nodules (Fig. 4). The activity of PO increased significantly in the larvae injected with AM-118 spores at all time intervals. The highest PO activity was found 12 and 6 h post-injection with AM-118 spores, respectively (Fig. 4).
Fig. 4. Changes in nodule numbers and PO activity (U/mg protein) in Glyphodes pyloalis larval hemolymph after injection with 2.6 × 10⁵ spores/ml of Beauveria bassiana (AM-118). Different letters indicate significant differences among treatments within time intervals (p ≤ 0.05)
The obtained results showed the virulence of B. bassiana on the 4th instar larvae of G. pyloalis and the induction of cellular immune responses of the larvae. Ramzi and Zibaee (2014) reported that two isolates of B.
bassiana (BB1 and BB2) had a high virulence against Chilo suppressalis Walker (Lepidoptera: Crambidae) larvae compared to Metarhizium anisopliae Metchnikoff (Hypocreales: Clavicipitaceae), Isaria fumosoroseus Wize (Hypocreales: Clavicipitaceae), and Lecanicilium lecanii Zare & Gams (Hypocreales: Cordycipitaceae). Wraight et al. (2010) demonstrated a high virulence of B. bassiana isolates against Plutella xylostella L. (Lepidoptera: Plutellidae), Helicoverpa zea Boddie (Lepidoptera: Noctuidae), Ostrinia nubilalis Hubner (Lepidoptera: Crambidae), Spodoptera frugiperda Smith (Lepidoptera: Noctuidae), Spodoptera exigua Fabricius (Lepidoptera: Noctuidae), Agrotis ipsilon Hufnagel (Lepidoptera: Noctuidae), Pieris rapae L. (Lepidoptera: Pieridae), and Trichoplusia ni Hubner (Lepidoptera: Noctuidae). Laznik et al. (2012) investigated the effects of Beauveria brongniartii Vuillemin and B. bassiana against the June beetle, margined vine chafer, and garden chafer. The authors reported significant effects of the fungi on the total number of white grubs in April and May once the population was higher than the economic threshold. High virulence of B. bassiana was also observed against Duponchelia fovealis Zeller (Lepidoptera: Crambidae) larvae (Baja et al. 2020). Different isolates of B. bassiana are among the most virulent EPF against insects and have shown significant capability to be used as efficient biocontrol agents in agro-ecosystems. Moreover, AM-118, the native isolate of B. bassiana from northern Iran, has shown virulence against C. suppressalis and Pseudococcus viburni Signoret (Hemiptera: Pseudococcidae). The lesser mulberry pyralid is the third pest found to be susceptible to this isolate, which indicates the pathogenic ability of AM-118 against a wide range of crop and orchard pests. This may be attributed to the adaptability of the isolate to the climate of northern Iran and to host associations that enhance the virulence of AM-118.

Once a fungal conidium attaches to the integument of an insect, it generates a germ tube and penetrates through the cuticle using extracellular enzymes and mechanical pressure. Proteases and chitinases are the enzymes necessary to facilitate fungal penetration through the insect cuticle (Ramzi and Zibaee 2014). Subtilisin-like (Pr1) and trypsin-like (Pr2) proteases are the 2 types of proteases important in the pathogenic process of EPF. Pr1 broadly hydrolyzes the cuticular proteins of insects and is secreted in the initial steps of penetration, while Pr2 acts as a supplementary protease to accomplish protein degradation along with Pr1 (Dias et al. 2008). Chitinases speed up the hydrolysis of the β-(1,4)-linked polymer of N-acetyl-D-glucosamine in the insect cuticle, although they also play other roles in the development, nutrition, and morphogenesis of fungi (Tsigos and Bouriotis 1995). The high activities of the fungal enzymes in the presence of larval cuticle imply a better adaptation of these enzymes to the integument composition of the host than to the pure supplements in the control media. In fact, there is a coevolution in host-microbe associations in which the EPF recruit enzymatic isoforms that facilitate rupture of the integument and enhance access of the fungus to the host haemocoel.

Microbial infections are generally lethal for insects, but several studies demonstrate that some physiological functions neutralize the damage of infectious injury (Gorman et al. 2007; Vengateswari et al. 2020). Immunity is important to protect insects against microbial agents.
Immune responses of insects rely on circulating hemocytes such as prohemocytes, granulocytes, plasmatocytes, and oenocytoids (Russo et al. 2001; Schmid-Hempel 2005; Borges et al. 2008; Zibaee and Malagoli 2014). The hemocytes act through different functions, including nodule formation, phagocytosis, and encapsulation, to entrap and kill pathogens in the hemolymph (Borges et al. 2008). In the present study, the results demonstrated that injection of AM-118 spores into G. pyloalis larvae increased total and differential hemocyte counts compared with Tween 80 and control larvae. Similar findings were observed for the number of nodules, which highlights the role of circulating hemocytes in alleviating or removing the deleterious effects of entomopathogenic infection. Similar studies have demonstrated a direct correlation between EPF infection and the numbers of differential hemocytes in insects. For example, Mirhaghparast et al. (2013) showed that injection of Spodoptera littoralis Boisduval (Lepidoptera: Noctuidae) larvae with B. bassiana and M. anisopliae increased the numbers of hemocytes and nodules. In another study, Zibaee and Malagoli (2014) reported that the numbers of hemocytes and nodules in C. suppressalis increased after injection with different EPF.

Phenoloxidase (PO) is an important enzyme in insect immunity, which plays a key role in catalyzing the hydroxylation of monophenols to diphenols and the conversion of diphenols to quinones (Gorman et al. 2007). The quinones are then converted to melanin pigments. In insects, melanin is involved in wound healing, cuticle sclerotization, and defense reactions against pathogens, such as encapsulation and nodule formation (Zdybicka-Barabas et al. 2014). In the present study, there is a correlation between hemocyte numbers and PO activity, because the highest PO activity was obtained 6 h post-injection, coinciding with the highest hemocyte numbers at that time point. Since PO is secreted and stored within hemocytes, the increased PO activity may be attributed to the highest number of hemocytes after injection of the fungal conidia. Several studies have demonstrated that an increase in hemocyte numbers increases the PO activity of insects (Catalán et al. 2012; Mirhaghparast et al. 2013; Zibaee and Malagoli 2014; Shamakhi et al. 2019).

The present study revealed detailed results on the function of a native B. bassiana isolate from the pathogen infection–host interaction perspective. Injection of AM-118 led to mortality of G. pyloalis larvae through increased secretion of extracellular enzymes and affected insect host immunity. Increased knowledge of the physiological interplay between EPF and the insect host could provide new strategies for pest management.

Abbreviations: PDA: Potato Dextrose Agar; Pr1: Subtilisin-like protease; Pr2: Trypsin-like protease; TCA: Trichloroacetic acid; DNS: Dinitrosalicylic acid; pNPg: p-Nitrophenyl-N-acetyl-β-D-glucosaminide; L-dopa: 3,4-Dihydroxyphenylalanine; PO: Phenoloxidase

References
Azambuja P, Garcia ES, Ratcliffe NA (1991) Aspects of classification of hemiptera hemocytes from six triatomine species. Mem Inst Oswaldo Cruz 86:1–10. https://doi.org/10.1590/S0074-02761991000100002
Baja F, Poitevin CG, Araujo ES, Mirás-Avalos JM, Zawadneak MA, Pimentel IC (2020) Infection of Beauveria bassiana and Cordyceps javanica on different immature stages of Duponchelia fovealis Zeller (Lepidoptera: Crambidae). Crop Prot 138:105347.
https://doi.org/10.1016/j.cropro.2020.105347
Borges AR, Santos PN, Furtado AF, Figueiredo RCB (2008) Phagocytosis of latex beads and bacteria by hemocytes of the triatomine bug Rhodnius prolixus (Hemiptera: Reduvidae). Micron 39:486–494. https://doi.org/10.1016/j.micron.2007.01.007
Brownbridge M, Costa S, Jaronski ST (2001) Effects of in vitro passage of Beauveria bassiana on virulence to Bemisia argentifolii. J Invertebr Pathol 77:280–283. https://doi.org/10.1006/jipa.2001.502
Catalán TP, Niemeyer HM, Kalergis AM, Bozinovic F (2012) Interplay between behavioral thermoregulation and immune response in mealworms. J Insect Physiol 58:1450–1455. https://doi.org/10.1016/j.jinsphys.2012.08.011
Dias BA, Neves PMOJ, Furlaneto-Maia L, Furlaneto MC (2008) Cuticle-degrading proteases produced by the entomopathogenic fungus Beauveria bassiana in the presence of coffee berry borer cuticle. Braz J Microbiol 39:301–306. https://doi.org/10.1590/S1517-83822008000200019
Gorman MJ, An C, Kanost MR (2007) Characterization of tyrosine hydroxylase from Manduca sexta. Insect Biochem Mol Biol 37:1327–1337. https://doi.org/10.1016/j.ibmb.2007.08.006
Khosravi R, Jalali Sendi J (2010) Biology and demography of Glyphodes pyloalis Walker (Lepidoptera: Pyralidae) on mulberry. J Asia Pac Entomol 13:273–276. https://doi.org/10.1016/j.aspen.2010.04.005
Laznik Z, Vidrih M, Trdan S (2012) The effect of different entomopathogens on white grubs (Coleoptera: Scarabaeidae) in an organic hay-producing grassland. Arch Biol Sci 64(4):1235–1246
Lowry OH, Rosenbrough NJ, Farr LL, Randall RJ (1951) Protein measurement with the Folin phenol reagent. J Biol Chem 193:265–275
Miller GL (1959) Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem 31:426–428. https://doi.org/10.1021/ac60147a030
Mirhaghparast SK, Zibaee A, Hajizadeh J (2013) Effects of Beauveria bassiana and Metarhizium anisopliae on cellular immunity and intermediary metabolism of Spodoptera littoralis Boisduval (Lepidoptera: Noctuidae). Invertebr Surviv J 10:110–119
Ramzi S, Zibaee A (2014) Biochemical properties of different entomopathogenic fungi and their virulence against Chilo suppressalis (Lepidoptera: Crambidae) larvae. Biocon Sci Technol 24:597–610. https://doi.org/10.1080/09583157.2014.883360
Russo J, Brehelin M, Carton Y (2001) Haemocyte changes in resistant and susceptible strains of D. melanogaster caused by virulent and avirulent strains of the parasitic wasp Leptopilina boulardi. J Insect Physiol 47:167–172. https://doi.org/10.1016/S0022-1910(00)00102-5
Schmid-Hempel P (2005) Evolutionary ecology of insect immune defenses. Annu Rev Entomol 50:529–551. https://doi.org/10.1146/annurev.ento.50.071803.130420
Shamakhi L, Zibaee A, Karimi-Malati A, Hoda H (2019) Effect of thermal stress on the immune responses of Chilo suppressalis Walker (Lepidoptera: Crambidae) to Beauveria bassiana. J Therm Biol 84:136–145. https://doi.org/10.1016/j.jtherbio.2019.07.006
Tsigos I, Bouriotis V (1995) Purification and characterization of chitin deacetylase from Colletotrichum lindemuthianum. J Biol Chem 270:26286–26291. https://doi.org/10.1074/jbc.270.44.26286
Vengateswari G, Arunthirumeni M, Shivakumar MS (2020) Effect of food plants on Spodoptera litura (Lepidoptera: Noctuidae) larvae immune and antioxidant properties in response to Bacillus thuringiensis infection. Toxicol Rep 7:1428–1437.
https://doi.org/10.1016/j.toxrep.2020.10.005
Watanabe Y, Kurihara Y, Wang XY, Shimizu TJ (1988) Mulberry pyralid, Glyphodes pyloalis: habitual host of nonoccluded viruses pathogenic to the silkworm, Bombyx mori. J Invertebr Pathol 52:401–408. https://doi.org/10.1016/0022-2011(88)90052-3
Wilson K, Thomas MB, Blanford S, Doggett M, Simpson SJ, Moore SL (2002) Coping with crowds: density-dependent disease resistance in desert locusts. Proc Natl Acad Sci U S A 99:5471–5475. https://doi.org/10.1073/pnas.082461999
Wraight SP, Ramos ME, Avery PB, Jaronski ST, Vandenberg JD (2010) Comparative virulence of Beauveria bassiana isolates against lepidopteran pests of vegetable crops. J Invertebr Pathol 103:186–199. https://doi.org/10.1016/j.jip.2010.01.001
Yazdani E, Sendi JJ, Aliakbar A, Senthil-Nathan S (2013) Effect of Lavandula angustifolia essential oil against lesser mulberry pyralid Glyphodes pyloalis Walker (Lep: Pyralidae) and identification of its major derivatives. Pestic Biochem Physiol 107:250–257. https://doi.org/10.1016/j.pestbp.2013.08.002
Zdybicka-Barabas A, Mak P, Jakubowicz T, Cytryńska M (2014) Lysozyme and defense peptides as suppressors of phenoloxidase activity in Galleria mellonella. Arch Insect Biochem Physiol 87:1–12. https://doi.org/10.1002/arch.21175
Zibaee A, Bandani AR (2009) Purification and characterization of the cuticle-degrading protease produced by the entomopathogenic fungus, Beauveria bassiana in the presence of Sunn pest, Eurygaster integriceps (Hemiptera: Scutelleridae) cuticle. Biocontrol Sci Tech 19:797–808. https://doi.org/10.1080/09583150903132172
Zibaee A, Malagoli D (2014) Immune response of Chilo suppressalis Walker (Lepidoptera: Crambidae) larvae to different entomopathogenic fungi. Bull Entomol Res 104:155–163. https://doi.org/10.1017/S0007485313000588

The authors would like to thank the University of Guilan (Rasht, Iran; Grant Number 516235) for supporting and funding this research.

Department of Plant Protection, Faculty of Agricultural Sciences, University of Guilan, Box 41635-1314, Rasht, Iran: Sarah Aghaee Pour, Arash Zibaee, Maryam Gohar Rostami & Morteza Shahriari
Iranian Research Institute of Plant Protection, Agricultural Research, Education and Extension, Amol, Iran: Hassan Hoda

SAP: Investigation, methodology, formal analysis, writing; AZ: supervision, writing (review and editing); MGR: methodology; HH: writing (review and editing); MS: supervision, formal analysis, methodology. All authors have read and approved the manuscript. Correspondence to Arash Zibaee.

Pour, S.A., Zibaee, A., Rostami, M.G. et al. Mortality and immune challenge of a native isolate of Beauveria bassiana against the larvae of Glyphodes pyloalis Walker (Lepidoptera: Pyralidae). Egypt J Biol Pest Control 31, 37 (2021). https://doi.org/10.1186/s41938-021-00386-6

Keywords: Glyphodes pyloalis; Mycoinsecticides; Virulence
CommonCrawl
Preprint ARTICLE doi:10.20944/preprints202001.0260.v1
El Niño Southern Oscillation and Its Impact on Rainfall Distribution and Productivity of Major Agricultural Crops: The Case of Kembata Tembaro Zone, Southern Ethiopia
Bereket Tesfaye Haile, Kassahun Ture Bekitie, Gudina Legesse Feyissa, Tadesse Terefe Zeleke
Subject: Earth Sciences, Environmental Sciences
Keywords: ENSO; El Nino; La Nina; Crop Yield; Climate change
Online: 22 January 2020 (09:44:40 CET)
This study was conducted to investigate the impact of the El Niño Southern Oscillation on rainfall distribution and productivity of major agricultural crops in the Kembata Tembaro Zone of Southern Ethiopia over the past 30 years. Precipitation and temperature data were obtained from the National Meteorology Agency, crop data from the Central Statistical Agency of Ethiopia, and sea surface temperature data from the NOAA website. Rainfall showed a decreasing trend with high variability at all the stations (p<0.05). Over the same period, El Niño and La Niña events were observed and strongly affected rainfall distribution. The coefficient of variation was found to be greater than 30%, which indicates that the area is prone to drought episodes. The impacts of the ENSO events on the yields of maize, wheat, barley, sorghum and enset were assessed. Wheat and maize were highly affected by the ENSO events. Enset was found to be more resistant to the influence of ENSO. Barley and sorghum were affected to varying degrees. Among the five crops chosen for this investigation, two were seriously affected during the two extremes, i.e. El Niño and La Niña. From this investigation it is concluded that overall cereal crop productivity decreased and precipitation variability was evident. Therefore, having information about the ENSO phase in advance can be used to select crop types and varieties that maximize rain-fed cereal crop productivity while minimizing the crop risk associated with seasonal rainfall and ENSO phases.

Single Versus Multiple Dose Ivermectin Regimen in Onchocerciasis-infected Persons with Epilepsy Treated with Phenobarbital: A Randomized Clinical Trial in the Democratic Republic of Congo
Michel Mandro, Alfred Dusambimana, Joseph Nelson Siewe Fodjo, Deby Mukendi, Stephen Haesendonckx, Richard Lokonda, Swabra Nakato, Francoise Nyis, Gearge Abhafule, D Wonya'ross, An Hotterbeekx, Steven Abrams, Robert Colebunders
Subject: Life Sciences, Other
Keywords: onchocerciasis; epilepsy; ivermectin; trial; seizures
Background: There is anecdotal evidence that ivermectin may decrease the frequency of seizures in Onchocerca volvulus-infected persons with epilepsy (PWE). Methods: In October 2017, a 12-month clinical trial was initiated in the rural Democratic Republic of Congo. PWE with onchocerciasis-associated epilepsy and ≥2 seizures/month were randomly allocated to receive ivermectin once or thrice over a one-year period (group 1), while other onchocerciasis-infected PWE (OIPWE) were randomized to ivermectin twice or thrice (group 2). All participants also received anti-epileptic drugs (AED). Study outcomes included seizure freedom during the last four months (primary endpoint), decrease in microfilarial density, and occurrence of adverse events.
A multiple logistic regression model was used to evaluate the primary outcome. Results: Of the 197 OIPWE enrolled, 100 were randomized to receive ivermectin thrice, 52 twice, and 45 once. In an intent-to-treat combined analysis of data from groups 1 and 2, the probability of becoming seizure-free for OIPWE treated with ivermectin thrice per year was significantly higher than in those treated once (OR: 5.087, 95% CI: 1.378-19.749; p=0.018), and individuals who received ivermectin twice had 4.471 (95% CI: 0.944-6.769, p=0.075) times higher odds of seizure freedom than those who received ivermectin once per year. Absence of microfilariae during the last 4 months was associated with a higher probability of seizure freedom (p=0.027). Conclusions: Increasing the number of ivermectin treatments per year was found to suppress both microfilarial density and seizure frequency in OIPWE, suggesting that O. volvulus infection plays an etiological role in causing seizures. Registration: www.clinicaltrials.gov; NCT03852303

Working Paper ARTICLE
Proof of Twin Prime Conjecture that Can be Obtained by Using Contradiction Method
K.H.K. Geerasee Wijesuriya
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: prime; contradiction; greater than; integer
Twin prime numbers are two prime numbers whose difference is exactly 2. In other words, a twin prime is a pair of primes with a prime gap of two. Sometimes the term twin prime is used for a pair of twin primes; an alternative name for this is prime twin or prime pair. To date there is no valid proof or disproof of the twin prime conjecture. Through this research paper, my attempt is to provide a valid proof for the twin prime conjecture.

Hydrostatic Bandsaw Blade Guides for Natural Stone Cutting Applications
Ammar Ahsan, Jonas Kröger, Kyle Kenney, Stefan Böhm
Subject: Engineering, Industrial & Manufacturing Engineering
Keywords: Hydrostatic; Blade guides; Bandsaw; Diamond blade; Natural stone; Sawing
Bandsaws use either fibre or ceramic blocks or sealed bearings as blade guides. This works well for cutting metals, wood and plastics. However, the highly abrasive particles generated while cutting stone settle between the contacts of the blade and the guides, causing wear and premature failure. The hydrostatic guide system presented in this work is a contactless blade guiding method that uses the force of several pressurized water jets to keep the blade cutting in a straight line. For this investigation, cutting tests were performed on a marble block using a galvanic diamond-coated bandsaw blade with the upper roller guides replaced by hydrostatic guides. The results show that the hydrostatic guides help to reduce the passive force to a constant near zero, in contrast to the bearing guides. This also resulted in reduced surface roughness of the stone plates that were cut. Additionally, it has also been shown that using hydrostatic guides the bandsaw blade can be tilted to counter bandsaw drift. This original research work has shown that hydrostatic guide systems are capable of replacing, and in fact perform better than, state-of-the-art bearing or block guides, especially for stone cutting applications.
Preprint REVIEW doi:10.20944/preprints202001.0256.v1
The Respiratory Phenotype of Pompe Disease Rodent Models
Anna Fusco, Angela McCall, Justin Dhindsa, Lucy Zheng, Aidan Bailey, Amanda Kahn, Mai ElMallah
Subject: Biology, Physiology
Keywords: Pompe Disease; Breathing; Respiratory
Pompe disease is a glycogen storage disease caused by a deficiency in acid α-glucosidase (GAA), a hydrolase necessary for the degradation of lysosomal glycogen. This deficiency in GAA results in muscle and neuronal glycogen accumulation, which causes respiratory insufficiency. Pompe disease rodent models provide a means of assessing respiratory pathology and are important for pre-clinical studies of novel therapies that aim to treat respiratory dysfunction and improve quality of life. This review aims to compile and summarize existing manuscripts that characterize the respiratory phenotype of Pompe rodent models. Manuscripts included in this review were selected utilizing specific search terms and exclusion criteria. Analysis of these findings demonstrates that Pompe disease rodent models have respiratory physiological defects as well as pathologies in the diaphragm, tongue, phrenic and hypoglossal motor nuclei, phrenic and hypoglossal nerves, neuromuscular junctions, and airway smooth muscle and higher-order respiratory control centers. Overall, the culmination of these pathologies contributes to severe respiratory dysfunction, underscoring the importance of characterizing the respiratory phenotype while developing effective therapies for patients.

Cordycepin Resensitizes T24R2, Cisplatin-resistant Human Bladder Cancer Cell, to Cisplatin by Inhibiting the AKT-mediated Ets-1 Activation
Kwang dong Kim, Sang-Seok Oh, Ki Won Lee, Jin-Woo Jeong, Soojong Park, Minju Kim, Yerin Lee, Hyun-Tak Han, Cheol Hwangbo, Jiyun Yoo
Subject: Life Sciences, Cell & Developmental Biology
Keywords: Cordycepin; Cisplatin-resistance; Resensitization; MDR1; Ets-1; AKT
Resistance of tumor cells to anticancer drugs is a major obstacle in tumor therapy. In this study, we investigated the mechanism of cordycepin-mediated resensitization to cisplatin in T24R2, a T24-derived cell line. Treatment with cordycepin or cisplatin (2 µg/ml) alone could not induce cell death of T24R2, but combination treatment of these drugs significantly induced apoptosis of the cells through the mitochondrial pathway, including depolarization of the mitochondrial membrane, a decrease in the anti-apoptotic proteins Bcl-2, Bcl-xL, and Mcl-1, and an increase in the pro-apoptotic proteins Bak and Bax. High expression of MDR1 was the cause of cisplatin resistance in T24R2, and cordycepin significantly reduced MDR1 expression through inhibition of MDR1 promoter activity. MDR1 promoter activity was dependent on a transcription factor, Ets-1, in T24R2 cells. Although there is a correlation between MDR1 and Ets-1 expression in bladder cancer patients, the active, Thr-38 phosphorylated form of Ets-1 (pThr-38) was critical for inducing MDR1 expression. Cordycepin decreased pThr-38 Ets-1 levels through inhibition of AKT, which reduced MDR1 transcription and induced the resensitization of T24R2 to cisplatin. The results suggest that cordycepin effectively resensitizes cisplatin-resistant bladder cancer cells to cisplatin, thus serving as a potential strategy for the treatment of anti-cancer drug resistant patients.

Evaluation of La(III) and Ce(III) Adsorption from Aqueous Solution Using Carbon Nanotubes Adsorbent
Irene García Díaz, Francisco J.
Alguacil, Esther Escudero, Félix A. López
Subject: Materials Science, Metallurgy
Keywords: adsorption; Lanthanum; Cerium; carbon nanotubes; rare earth
Since the 1960s, rare earth (RE) applications have gradually expanded into everyday life. REs have great strategic importance in industrial and technological development, so an increase in their demand is expected. Among the REs, the European Commission considered cerium and lanthanum critical raw materials. This research article studies the adsorption of Ce and La onto two carbon nanomaterials, multiwalled carbon nanotubes (MWCNT) and carboxylic functionalized multiwalled carbon nanotubes (MWCNT_ox). The latter has slightly more affinity for REs than MWCNT. The recovery percentages for Ce were 89 and 98%, and for La 99 and 92%, using 0.8 g of MWCNT and 0.2 g of MWCNT_ox, respectively. The adsorption process fits a pseudo-second-order kinetic model, and the Langmuir isotherm best represented the metal uptake.

Bridge Displacement Estimation Using a Co-Located Acceleration and Strain
Muhammad Zohaib Sarwar, Jongwoong Park
Subject: Engineering, Civil Engineering
Keywords: structural health monitoring; sensor fusion; adaptive Kalman Filter; displacement estimation; reference-free displacement
Structural displacement is an important metric for assessing structural conditions because it has a direct relationship with structural stiffness. Many bridge displacement measurement techniques have been developed, but most methods require fixed reference points in the vicinity of the target structure, which limits field implementations. A promising alternative is to use reference-free measurement techniques that indirectly estimate the displacement from measurements such as acceleration and strain. This paper proposes a novel reference-free bridge displacement estimation based on the fusion of a single acceleration measurement with the pseudo-static displacement derived from co-located strain measurements. First, we propose a conversion of the strain at the center of a beam into displacement based on the geometric relationship between the strain and deflection curves, with reference-free calibration. Second, an adaptive Kalman filter is proposed to fuse the strain-derived displacement with acceleration by recursively estimating the noise covariance of the strain-derived displacement, which is vulnerable to measurement conditions. Both numerical and experimental validations are presented to demonstrate the efficiency and robustness of the proposed approach.

RIP1 is a Novel Component of γ-Ionizing Radiation-Induced Invasion of Non-Small Cell Lung Cancer Cells
Jeong Hyun Cho, A-Ram Kang, Na-Gyeong Lee, Jie-Young Song, Sang-Gu Hwang, Dae-Hee Lee, Hong-Duck Um, Jong Kuk Park
Subject: Medicine & Pharmacology, Oncology & Oncogenics
Keywords: signal transduction; γ-ionizing radiation; cancer invasion; non-small cell lung cancer; epithelial-mesenchymal transition; tumor microenvironment
Previously, we demonstrated that IR triggers the invasion/migration of A549 cells via activation of an EGFR–p38/ERK–STAT3/CREB-1–EMT pathway. Here, we have demonstrated the involvement of a novel intracellular signaling mechanism in γ-ionizing radiation (IR)-induced migration/invasion. Expression of receptor-interacting protein (RIP) 1 was initially increased upon exposure of A549, a non-small cell lung cancer (NSCLC) cell line, to IR.
IR-induced RIP1 is located downstream of EGFR and involved in the expression/activity of matrix metalloproteases (MMP-2 and MMP-9) and vimentin, suggesting a role in epithelial-mesenchymal transition (EMT). Our experiments showed that IR-induced RIP1 sequentially induces Src-STAT3-EMT to promote invasion/migration. Inhibition of RIP1 kinase activity and expression blocked induction of EMT by IR and suppressed the levels and activities of MMP-2, MMP-9, and vimentin. IR-induced RIP1 activation was additionally associated with stimulation of the transcription factor NF-κB. Specifically, exposure to IR triggered NF-κB activation, and inhibition of NF-κB suppressed IR-induced RIP1 expression, followed by a decrease in invasion/migration as well as EMT. Based on the collective results, we propose that IR concomitantly activates EGFR and NF-κB and subsequently triggers the RIP1–Src/STAT3–EMT pathway, ultimately promoting metastasis.

A Study on the Co-Creation of Value in a Service Ecosystem: An Appraisal of the Platform of My Health Bank of the National Health Insurance
Yu-Hua Yan, Shih-Chieh Fang
Subject: Social Sciences, Law
Keywords: value co-creation; National Health Insurance; My Health Bank; Service Ecosystem
Objective: Taiwan's government organizations have endeavored to promote the application of big data and open data. "My Health Bank" is one of the measures promoted by the National Health Administration, Ministry of Health and Welfare. This study adopts a "value co-creation" perspective, extending the concept of the service ecosystem and applying it to the My Health Bank platform, to examine whether people (patients, families, and caregivers) can improve their health literacy. Method: This was a cross-sectional study with people registered at "My Health Bank" as subjects. Complying with the inclusion criteria, 401 questionnaires were delivered, of which 391 were valid, excluding those filled in incompletely or inaccurately. Results: Among the factors affecting the co-creation of value, age, education level, annual income, and platform operation were significant (p<0.05), while gender, occupation, and resource exchange did not reach the significance level (p>0.1). Conclusion: We found that My Health Bank has changed the inertia of "value creation" in traditional medical care: it exposes the traditional medical and healthcare industry to the impacts of the megatrend of the internet, making the transformation of the platform a necessary trend.

Preprint ARTICLE doi:10.20944/preprints202001.0250.v1
From Understanding to Sustainable Use of Peatlands: The WETSCAPES Approach
Gerald Jurasinski, Sate Ahmad, Alba Anadon-Rosell, Jacqueline Berendt, Florian Beyer, Ralf Bill, Gesche Blume-Werry, John Couwenberg, Anke Günther, Hans Joosten, Franziska Köbsch, Daniel Köhn, Nils Koldrack, Jürgen Kreyling, Peter Leinweber, Bernd Lennartz, Haojie Liu, Dierk Michaelis, Almut Mrotzek, Wakene Negassa, Sandra Schenk, Franziska Schmacka, Sarah Schwieger, Marko Smiljanic, Franziska Tanneberger, Tim Urich, Haitao Wang, Micha Weil, Martin Wilmking, Nicole Wrage-Mönnig
Subject: Biology, Agricultural Sciences & Agronomy
Keywords: fen; paludiculture; rewetting; drainage; matter fluxes; interdisciplinary
Of all terrestrial ecosystems, peatlands store carbon most effectively. However, many peatlands have been drained for peat extraction or agricultural use. This converts peatlands from sinks to sources of carbon, causing approx.
5% of the anthropogenic greenhouse effect and additional negative effects on other ecosystem services. Rewetting peatlands can mitigate the climate crisis and may be combined with management in the form of paludiculture. Rewetted peatlands, however, do not equal their pristine ancestors, and their ecological functioning is not well understood. This holds especially for fens. Their functioning results from complex interactions and can only be understood through an integrative approach spanning many relevant fields of science, which we develop in the interdisciplinary project WETSCAPES. Here, we introduce our approach, in which we address interactions among water transport and chemistry, primary production, peat formation, matter transformation and transport, microorganisms, and greenhouse gas exchange, using state-of-the-art methods from the relevant research fields. We record data on six study sites spread across three important fen types (alder forest, percolation fen, and coastal fen), each in a drained and a rewetted state. Using exemplary results, we show the importance of developing an integrative understanding of managed fen peatlands and their ecosystem functioning.

Bioprospection of Tabebuia aurea (Silva Manso) Benth. & Hook. f. ex S. Moore: Chemical, Biological and Toxicity Studies
Maria Cristiane Aranha Brito, Luciana Patrícia Lima Alves Pereira, Sulayne Janayna Araujo Guimarães, José Ribamar de Castro Júnior, Vinicyus Teles Chagas, Mariana Oliveira Arruda, Wellyson da Cunha Araújo Firmo, Cláudia Quintino da Rocha, Ana Paula Silva de Azevedo dos Santos, Patricia de Maria Silva Figueiredo, Antonio Carlos Romão Borges, Denise Fernandes Coutinho
Subject: Medicine & Pharmacology, Other
Keywords: antimicrobial; antioxidant; bioprospecting; lapachol; Tabebuia aurea; toxicity
Tabebuia aurea (Silva Manso) Benth. & Hook. f. ex S. Moore (yellow ipe), belonging to the Bignoniaceae family, is used in popular medicine for fever, inflammation and the healing of skin wounds. The extract was prepared by maceration using 70% ethanol. HPLC analysis identified mainly phenolic substances, such as lapachol, typical of the Bignoniaceae. The phenolic content was 21.36 mg/EAG and, in the antioxidant activity assay, the 50% effective concentration was 53.03 ± 1.14 µg/mL. The antimicrobial activity against S. aureus, E. coli and C. albicans was evaluated by broth microdilution, which verified action against the tested microorganisms. Cell viability was inhibited for tumor cells, although this was not observed for normal cells. The LD50 against A. aegypti mosquito larvae was 3504.6 mg/L, and there was no mortality at the concentration tested for the snail B. glabrata. The extract was nontoxic or of low toxicity for A. salina and T. molitor, respectively, and did not exhibit hemolytic action at the concentrations with antibacterial effect. Given the above, it was concluded that the bark extract of the studied species has bioprospecting potential for the future development of antimicrobial products.

Separation of PET from Other Plastics by Froth Flotation Combined with Alkaline Pretreatment
Fernando Pita
Subject: Engineering, Other
Keywords: plastic; froth flotation; alkaline treatment; particle size
Plastics are naturally hydrophobic materials, so in order to employ flotation for the separation of plastic mixtures, the use of appropriate wetting agents is mandatory.
In this work, the effect of pretreatment with alkaline solutions of sodium hydroxide on the floatability of four plastics (PET, PS, PMMA and PVC) was studied. The influence of NaOH concentration, treatment time and temperature of the alkaline solution, and the influence of particle size were analyzed. Results showed that alkaline treatment had a strong effect on PET floatability, some effect on the floatability of PMMA and PVC, and no effect on the floatability of PS. The floatability of the plastics decreased with increasing NaOH concentration, temperature, and treatment time of the alkaline solution. Based on the flotation behavior of the single plastics, flotation separation of bi-component mixtures of PET with PS and PVC after alkaline treatment was achieved efficiently. The best separation was obtained for the PET/PS mixture, with a floated product grading 98% PS and a sunk product grading 100% PET. The PET/PMMA mixture led to the worst separation. For the PET/PMMA and PET/PVC mixtures, flotation separation improved as particle size decreased.

Hash-Based Hierarchical Caching and Layered Filtering for Interactive Previews in Global Illumination Rendering
Thorsten Roth, Martin Weier, Pablo Bauszat, André Hinkenjann, Yongmin Li
Subject: Mathematics & Computer Science, Other
Keywords: global illumination; rendering; filtering; caching; Level-of-Detail
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hit points along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground-truth sequences and by benchmarking, we show that our approach compares well to low-noise path-traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.

Tue, 21 January 2020

Multiscale Modeling of Composite Materials with DECM Approach: Shape Effect of Inclusions
Elena Ferretti
Subject: Engineering, Civil Engineering
Keywords: Cell Method (CM); Discrete Element Method (DEM); multiscale modeling; periodic composite continua
This paper addresses the study of the stress field in composite continua with the multiscale approach of the DECM (Discrete Element modeling with the Cell Method). The analysis focuses on composites consisting of a matrix with inclusions of various shapes, to investigate whether and how the shape of the inclusions changes the stress field. The purpose is to provide a numerical explanation for some of the main failure mechanisms of concrete, which is precisely a composite consisting of a cement-based matrix and aggregates of various shapes. Although extensive experimental campaigns detailed the shape effect of concrete aggregates in the past, so far it has not been possible to model the stress field within the inclusions and on the interfaces accurately. The reason for this lies in the limits of the differential formulation, which is the basis of the most commonly used numerical methods.
The Cell Method (CM), on the contrary, is an algebraic method that provides descriptions down to the micro-scale, independently of the presence of rheological discontinuities or concentrated sources. This makes the CM useful for describing the shape effect of the inclusions on the micro-scale. When used together with a multiscale approach, it also models the macro-scale behavior of periodic composite continua without losing accuracy on the micro-scale. The DECM uses discrete elements precisely to provide the CM with a multiscale approach.

Modulation of SERCA in Patients with Persistent Atrial Fibrillation Treated by Epicardial Thoracoscopic Ablation: The CAMAF Study
Celestino Sardu, Gaetano Santulli, Germano Guerra, Maria Consiglia Trotta, Matteo Santamaria, Cosimo Sacra, Nicola Testa, Valentino Ducceschi, Gianluca Gatta, Michele D'Amico, Ferdinando Carlo Sasso, Giuseppe Paolisso, Raffaele Marfella
Subject: Medicine & Pharmacology, Cardiology
Keywords: persistent atrial fibrillation; epicardial ablation; calcium channels; SERCA
Objectives: To evaluate atrial fibrillation (AF) recurrence and Sarcoplasmic/Endoplasmic Reticulum Calcium ATPase (SERCA) levels in patients treated by epicardial thoracoscopic ablation for persistent AF. Background: Reduced levels of SERCA have been reported in the peripheral blood cells of patients with AF. We hypothesized that SERCA levels can predict the response to epicardial ablation. Methods: We designed a prospective, multicenter observational study to recruit, from October 2014 to June 2016, patients with persistent AF receiving epicardial thoracoscopic pulmonary vein isolation. Results: We enrolled 27 patients; responder patients (n=15) did not present AF recurrence after epicardial ablation at 1-year follow-up. These patients displayed a marked remodeling of the left atrium, with a significant reduction of inflammatory cytokines and B-type natriuretic peptide (BNP), and overexpression of SERCA compared to baseline and to non-responders (p<0.05). Furthermore, mean AF duration (HR 1.235 [1.037-1.471], p<0.05), LAV (HR 1.755 [1.126-2.738], p<0.05), BNP (HR 1.945 [1.895-1.999], p<0.05), and SERCA (HR 1.763 [1.167-2.663], p<0.05) were predictive of AF recurrence. Conclusions: Our data indicate that baseline values of SERCA in patients with persistent AF might be predictive of failure of the epicardial ablative approach. Intriguingly, epicardial ablation was associated with increased levels of SERCA in responders. Therefore, SERCA might be an innovative therapeutic target to improve the response to epicardial ablative treatments.

Chronic Mild Stress Modified Epigenetic Mechanisms Leading to Accelerated Senescence and Impaired Cognitive Performance in Mice
Dolors Puigoriol-Illamola, Mirna Martínez-Damas, Christian Griñán-Ferré, Mercè Pallàs
Subject: Life Sciences, Molecular Biology
Keywords: Stress; epigenetics; senescence; cognition; age-related cognitive decline; Alzheimer's disease; SAMP8; SAMR1; oxidative stress; inflammation; autophagy
Cognitive and behavioural disturbances are a growing public healthcare issue for modern society, as a stressful lifestyle is becoming more and more common. Besides, several pieces of evidence indicate that environment is crucial in the development of several diseases, as well as in compromising healthy aging. Therefore, it is important to study the effects of stress on cognition and its relationship with aging.
To address these questions, the Chronic Mild Stress (CMS) paradigm was used in the senescence-accelerated mouse prone 8 (SAMP8) and resistant 1 (SAMR1) strains. On one hand, we determined the changes produced in the three main epigenetic marks after 4 weeks of CMS treatment, such as a reduction in histone posttranslational modifications and DNA methylation, and up- or down-regulation of several miRNAs involved in different cellular processes in mice. In addition, CMS treatment induced reactive oxygen species (ROS) accumulation and loss of antioxidant defence mechanisms, as well as activation of inflammatory signalling through the NF-κB pathway and of astrogliosis markers, like Gfap. Remarkably, CMS altered mTORC1 signalling in both strains, decreasing autophagy only in SAMR1 mice. We found a decrease in glycogen synthase kinase 3β (GSK-3β) inactivation, hyperphosphorylation of Tau, and an increase in sAPPβ protein levels in mice under CMS. Moreover, a reduction in protein levels of the non-amyloidogenic secretase ADAM10 was found in the SAMR1 CMS group. Consequently, detrimental effects on behaviour and cognitive performance were detected in CMS-treated mice, affecting mainly SAMR1 mice and promoting a shift towards the SAMP8 phenotype. In conclusion, CMS is a feasible intervention for understanding the influence of stress on the epigenetic mechanisms underlying cognition and accelerated senescence.

Preprint REVIEW | doi:10.20944/preprints202001.0243.v1
The DOF Transcription Factors in Seed and Seedling Development
Veronica Ruta, Chiara Longo, Andrea Lepri, Veronica De Angelis, Sara Occhigrossi, Paolo Costantino, Paola Vittorioso
Subject: Biology, Plant Sciences
Keywords: DOF proteins; DELLA proteins; seed germination; seedling development; seed maturation
The DOF (DNA binding with One Finger) family of plant-specific transcription factors (TFs) was first identified in maize in 1995. Since then, DOF proteins have been shown to be present throughout the plant kingdom, including the unicellular alga Chlamydomonas reinhardtii. The DOF TF family is characterised by a highly conserved DNA-binding domain (DOF domain), consisting of a CX2C-X21-CX2C motif able to form a zinc finger structure. Early in the study of DOF proteins, their relevance for seed biology became clear. Indeed, the Prolamine Binding Factor (PBF), one of the first DOF proteins characterised, controls the endosperm-specific expression of the zein genes in maize. Subsequently, several DOF proteins from both monocots and dicots have been shown to be primarily involved in seed development, dormancy and germination, as well as in seedling development and other light-mediated processes. In the last two decades the molecular networks underlying these processes have been outlined, and the main molecular players and their interactions have been identified. In this review, we focus on the DOF TFs involved in these molecular networks and on their interactions with other proteins.

Feedstock Recycling of Rubber – A Review on Devulcanization Technologies
Richard Wintersteller, Erich Markl, Maximilian Lackner
Subject: Materials Science, Polymers & Plastics
Keywords: rubber devulcanisation; rubber devulcanization; sustainability; recycling; twin screw extruder
Vulcanized rubber, as an elastomer, is difficult to recycle. Today, the main end-of-life routes for tyres and other rubber products are landfilling, incineration in e.g.
cement plants, and grinding to a fine powder, with huge quantities of this valuable material lacking sustainable recycling. Devulcanization, i.e. the breaking up of sulfur bonds by chemical, thermo-physical or biological means, is a promising route that has been investigated for more than 50 years. This review article presents an update on the state of the art in rubber devulcanization. It addresses established devulcanization technologies as well as novel processes described in the scientific and patent literature. It is expected that the public discussion of the environmental impacts of thermoplastics will soon spill over to thermosets and elastomers. Therefore, the industry needs to develop and market solutions proactively. Tyre recycling through devulcanization offers huge leverage, since approx. 30 million tons of tyres are discarded annually.

Association between Post-Diagnosis Particulate Matter Exposure Among 5-Year Cancer Survivors and Cardiovascular Disease Risk in Three Metropolitan Areas from South Korea
Kyae Hyung Kim, Seulggie Choi, Kyuwoong Kim, Jooyoung Chang, Sung Min Kim, Seong Rae Kim, Yoosun Cho, Gyeongsil Lee, Joung Sik Son, Sang Min Park
Subject: Medicine & Pharmacology, Oncology & Oncogenics
Keywords: cardiovascular disease; particulate matter; cancer survivor; metropolitan area
Cancer survivors are at an increased risk for cardiovascular disease (CVD). However, the association between particulate matter (PM) and CVD risk among cancer survivors (alive >5 years since diagnosis) is unclear. We investigated the risk of CVD among 40,899 cancer survivors within the Korean National Health Insurance Service database. Exposure to PM was determined by assessing yearly average PM levels obtained from the Air Korea database from 2008 to 2011. PM with diameters <2.5 (PM2.5), <10 (PM10), or 2.5-10 (PM2.5-10) µm were compared, with each PM exposure level further divided into quintiles. Patients were followed up from January 2012 to the date of a CVD event, death, or December 2017, whichever came earliest. Adjusted hazard ratios (aHRs) and 95% confidence intervals (CIs) for CVD were calculated by PM exposure level using Cox proportional hazards regression. Compared with cancer survivors in the lowest quintile of PM2.5 exposure, those within the highest quintile had a greater risk for CVD (aHR 1.31, 95% CI 1.07-1.59). Conversely, increasing PM10 and PM2.5-10 levels were not associated with increased CVD risk (p for trend 0.078 and 0.361, respectively). Cancer survivors who reduce PM2.5 exposure may reduce their risk of developing CVD.

Preprint ARTICLE | doi:10.20944/preprints202001.0240.v1
Open or Ajar? Openness within the Neoliberal Academy
Kevin Sanders, Simon Bowie
Subject: Social Sciences, Library & Information Science
Keywords: openness under neoliberalism; open-access licensing in capitalism; the politics of open-licensing
The terms 'open' and 'openness' are widely used across the current higher education environment, particularly in the areas of repository services and scholarly communications. Open-access licensing and open-source licensing are two prevalent manifestations of open culture within higher education research environments. As theoretical ideals, open-licensing models aim at openness and academic freedom. But operating as they do within the context of global neoliberalism, to what extent are these models constructed by, sustained by, and co-opted by neoliberalism?
In this paper, we interrogate the use of open licensing within scholarly communications and within the larger societal context of neoliberalism. Through a synthesis of various sources, we examine how open-access licensing models have been constrained by neoliberal or otherwise corporate agendas, how open access and open scholarship have been reframed within discourses of compliance, how open-source software models and software are co-opted by politico-economic forces, and how the language of 'openness' is widely misused in higher education and repository services circles to drive agendas that run counter to actually increasing openness. We finish by suggesting ways to resist this trend and to use open-licensing models to resist neoliberal agendas in open scholarship.

Working Paper REVIEW
Corrosion of Carbon Steel in Concrete: Current Knowledge of Corrosion Mechanisms and Non-destructive Testing of Corrosion Rates
Romain Rodrigues, Stéphane Gaboreau, Julien Gance, Ioannis Ignatiadis, Stéphanie Betelu
Subject: Engineering, Civil Engineering
Keywords: steel-reinforced concrete; carbon steel; corrosion mechanism; corrosion rate; non-destructive testing
Steel corrosion is the main cause of deterioration of reinforced concrete (RC) structures. We review the current understanding of how corrosion takes place and the development of electrical methods for non-destructive testing of corrosion rates. The inherent heterogeneity of RC structures (concrete, reinforcement, steel-concrete interface) and the significant effect of environmental factors remain major issues when assessing corrosion mechanisms. For evaluating corrosion rates, accurate determination of polarization resistance and concrete resistivity is required. The coupling of modelling with complementary non-destructive testing is essential for consolidating the results to provide a better diagnosis of the service life of RC structures.

Geochemical, Mineralogical and Morphological Characterisation of Road Dust and Associated Health Risks
Carla Candeias, Estela Vicente, Mário Tomé, Fernando Rocha, Paula Ávila, Alves Celia
Subject: Earth Sciences, Environmental Sciences
Keywords: road dust; traffic; PM10 emission factors; Enrichment Index; human health risk
Road dust resuspension, especially of the particulate matter fraction below 10 µm (PM10), is one of the main air quality management challenges in Europe. Road dust samples were collected from representative streets (suburban and urban) of the city of Viana do Castelo, Portugal. PM10 emission factors (mg veh-1 km-1) ranging from 49 (asphalt) to 330 (cobblestone) were estimated by means of the United States Environmental Protection Agency method. Two road dust fractions (<0.074 mm and 0.074 to 1 mm) were characterised for their geochemical, mineralogical and morphological properties. In urban streets, road dusts reveal the contribution of traffic emissions, with higher concentrations of e.g. Cu, Zn, and Pb. In the suburban area, agricultural practices likely contributed to As concentrations of 180 mg kg-1 in the finest road dust fraction. Samples are primarily composed of quartz, but also of muscovite, albite, kaolinite, microcline, Fe-enstatite, graphite and amorphous content. Particle morphology clearly shows the link with natural and traffic-related materials, with well-formed minerals and irregular aggregates. The hazard quotient suggests a probability of inducing non-carcinogenic adverse health effects in children by ingestion of Zr.
Arsenic in the suburban street represents a human health risk of 1.58 × 10^-4.

Working Paper ARTICLE
Artificial Learning Dispatch Planning for Flexible Renewable Energy Systems
Ana Carolina do Amaral Burghi, Tobias Hirsch, Robert Pitz-Paal
Subject: Engineering, Energy & Fuel Technology
Keywords: renewable systems; storage; dispatch; optimization; energy markets; machine learning
Environmental and economic needs drive the increased penetration of intermittent renewable energy in electricity grids, increasing uncertainty in the prediction of market conditions and network constraints. This reinforces the importance of energy systems with flexible dispatch and makes energy storage an essential asset for such systems to balance production and demand. To do so, these systems should participate in wholesale energy markets, enabling competition among all players, including conventional power plants. Consequently, an effective dispatch schedule considering market and resource uncertainties is crucial. In this context, an innovative dispatch optimization strategy for the schedule planning of renewable systems with storage is presented. Based on an optimization algorithm combined with a machine learning approach, the proposed method develops a financially optimal schedule incorporating uncertainty information. Simulations performed with a concentrated solar power plant model following the proposed optimization strategy demonstrate promising financial improvement with a dynamic and intuitive dispatch planning method, emphasizing the importance of uncertainty treatment for the quality of renewable systems scheduling.

A Mathematical Model of the Transition from the Normal Hematopoiesis to the Chronic and Accelerated Acute Stages in Myeloid Leukemia
Lorand Gabriel Parajdi, Radu Precup, Eduard Alexandru Bonci, Ciprian Tomuleasa
Subject: Mathematics & Computer Science, Applied Mathematics
Keywords: mathematical modeling; dynamic system; steady state; stability; hematopoiesis; chronic myeloid leukemia; stem cells
A mathematical model given by a two-dimensional differential system is introduced in order to understand the transition process from normal hematopoiesis to the chronic and accelerated acute stages in chronic myeloid leukemia. A previous model of Dingli and Michor is refined by introducing a new parameter in order to differentiate the bone marrow microenvironment sensitivities of normal and mutant stem cells. In the light of the new parameter, the system now has three distinct equilibria corresponding to the normal hematopoietic state, to the chronic state, and to the accelerated acute phase of the disease. A characterization of the three hematopoietic states is obtained based on the stability analysis. Numerical simulations are included to illustrate the theoretical results.

Restructuring of the Global Economy
Viacheslav M. Shavshukov, Natalia A. Zhuravleva
Subject: Social Sciences, Finance
Keywords: World economy restructuring; nature of global crises; risks of the financial markets; leadership problem
After the global crisis of 2008–2009, the world economy entered an era of restructuring, which makes research into its directions and patterns relevant. The principles of determinism and systemic analysis form the methodological basis of the article.
The research uses Big Data processing methods, such as the analysis of continuous changes and the symmetric analysis of macroeconomic indicators, drawing on databases of the IMF, WB, BIS, central banks and treasuries. As a result, it is argued that the depth and global reach of recessionary processes are caused by a combination of a crisis of the world financial system and civilizational problems. Intrinsic signs of risk in the modern economy, arising from the dual nature of global crises, have been identified. The paper analyzes the deficit of resources of the international financial institutions, the negative role of a fixed Yuan exchange rate, the large share of derivatives and off-balance obligations of banks, the use of SPVs in deal structures, the growth of debt obligations, trade wars, the slowdown of the Chinese economy, and the aggravation of contradictions between global and national finances. It argues that deglobalization and dedollarization have deepened this conflict, set in motion a rollback of the achievements of globalization, and led to tariff wars. Through systematization, the key directions of the restructuring of the global economy are identified: legal basis, leadership and reserve currency. On the basis of a SWOT analysis of the USA and China, it is concluded that leadership by a single country should be excluded in favor of a slow transition to the SDR (with a basket of 15–20 currencies) as a reserve.

A Statistical Study on the Development of Metronidazole-alginate-chitosan Nanocomposite Formulations using Full Factorial Designs
Hazem Sabbagh, Samer Hussein Al Ali, Mohd Zobir Hussein, Zead Abudayeh, Rami Ayoub, Suha Abudoleh
Subject: Materials Science, Nanotechnology
Keywords: Full Factorial Design; Optimization; metronidazole; nanocomposites; sodium alginate; Chitosan
The purpose of this study was to investigate the effect of chitosan (CS) and alginate (Alg) polymer concentrations and of the CaCl2 concentration on metronidazole (MET) drug loading (LE), particle size and zeta potential. Nanocomposites were prepared by the ionotropic pregelation method. A (2^1 × 3^1 × 2^1) × 3 = 36 full factorial design (FFD) was used to derive the statistical model and predict the responses. The MET-CS-AlgNPs nanocomposites were characterized by X-ray diffraction, Fourier-transform infrared spectroscopy, thermal gravimetric analysis, scanning electron microscopy and in vitro drug release studies. All data indicated the presence of the drug in the MET-CS-AlgNPs nanocomposites. The release profile of the MET-CS-AlgNPs nanocomposites was found to be sustained.

An Efficient Electrocatalyst for Oxygen Evolution Reaction in Alkaline Solutions Derived from a Copper Chelate Polymer via in-situ Electrochemical Transformation
Ridwan P. Putra, Hideyuki Horino, Izabela I. Rzeznicka
Subject: Chemistry, Applied Chemistry
Keywords: electrocatalyst; oxygen evolution reaction; dithiooxamide; chelate polymers; copper oxides; metal-air batteries; alkaline
Efficient oxygen evolution reaction (OER) electrocatalysts are highly desired in the fields of water electrolysis and rechargeable metal-air batteries. In this study, a chelate polymer composed of copper(II) and dithiooxamide was used to derive an efficient catalytic system for the OER. Upon a potential sweep in 1 M KOH, the copper(II) centers of the chelate polymer were transformed into CuO and Cu(OH)2. The carbon-dispersed CuO nanostructures formed a nanocomposite which exhibits enhanced catalytic activity for the OER in alkaline media.
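To make the run count in the factorial-design entry above concrete: a 2x3x2 full factorial crossed with 3 replicates yields 12 x 3 = 36 experiments. The enumeration sketch below is illustrative only; the factor names and level labels are assumptions, not the actual levels used in that study.

```python
# Sketch: enumerating a 2x3x2 full factorial design with 3 replicates (36 runs).
# Factor names and level labels are illustrative placeholders.
from itertools import product

chitosan = ["low", "high"]                # 2 levels
alginate = ["low", "medium", "high"]      # 3 levels
cacl2 = ["low", "high"]                   # 2 levels
replicates = range(1, 4)                  # 3 replicates per combination

runs = [
    {"CS": cs, "Alg": alg, "CaCl2": ca, "rep": r}
    for cs, alg, ca, r in product(chitosan, alginate, cacl2, replicates)
]
print(len(runs))  # 36
```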
The nanocomposite catalyst has an overpotential of 280 mV (at 1 mA/cm2) and a Tafel slope of 81 mV/dec in 1 M KOH solution. Per metal loading, it delivers a seven-fold higher current than an IrO2/C electrode. A catalytic cycle is proposed in which CuO undergoes electrooxidation to Cu2O3, which further decomposes to CuO, releasing oxygen. This work reveals a new method to produce an active nanocomposite catalyst for the OER in alkaline media using a non-noble-metal chelate polymer and a porous carbon. The method can be applied to the synthesis of transition metal oxide nanoparticles used in the preparation of composite electrodes for water electrolyzers and can be used to derive cathode materials for aqueous-type metal-air batteries.

The Stability of a General Sextic Functional Equation by Fixed Point Theory
Yang-Hi Lee, Soon-Mo Jung, Jaiok Roh
Subject: Mathematics & Computer Science, Analysis
Keywords: sextic mapping; general sextic functional equation; fixed point theory method; generalized Hyers-Ulam stability
In this paper, we consider the general sextic functional equation
$\sum_{i=0}^{7} \binom{7}{i} (-1)^{7-i} f(x+iy) = 0.$
By applying fixed point theory in the sense of L. Cădariu and V. Radu, we discuss the stability of the solutions of this functional equation.

4G Model of Final Unification - A Very Brief Report
U. V. S. Seshavatharam, S. Lakshminarayana
Subject: Physical Sciences, Particle & Field Physics
Keywords: four gravitational constants; strong nuclear charge; electroweak fermion; Hadron mass generator; super symmetry
From our long experience in the field of unifying gravity and quantum mechanics, we have come to understand that, when the mass of an elementary particle is extremely small or negligible compared to macroscopic bodies, highly curved microscopic space-time can be addressed with large gravitational constants, and the magnitude of the elementary gravitational constant seems to increase with decreasing mass and increasing interaction range. In our earlier publications, we proposed that: 1) there exist three atomic gravitational constants associated with the electroweak, strong and electromagnetic interactions; 2) there exists a strong-interaction elementary charge such that its squared ratio with the normal elementary charge is close to the inverse of the strong coupling constant; and 3) considering a fermion-boson mass ratio of 2.27, quarks can be split into quark fermions and quark bosons. Further, we noticed that the electroweak field seems to be operated by a primordial massive fermion of rest energy 584.725 GeV, and hadron masses seem to be generated by a new hadronic fermion of rest energy 103.4 GeV. In this context, starting from lepton rest masses up to stellar masses, we have developed many interesting and workable relations. With further study, a workable model of final unification can be developed.

Inside-Out: From Endosomes to Extracellular Vesicles in Fungal RNA Transport
Seomun Kwon, Constance Tisserant, Markus Tulinski, Arne Weiberg, Michael Feldbrügge
Subject: Biology, Other
Keywords: endosome; exosome; extracellular vesicles; fungal RNA biology; membrane trafficking; RNA transport; RNA recognition motif
Membrane-coupled RNA transport is an emerging theme in fungal biology. This review focuses on the RNA cargo and mechanistic details of transport via two inter-related sets of organelles, endosomes and extracellular vesicles, for intra- and intercellular RNA transfer.
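As a side note to the sextic functional equation entry above: the alternating binomial sum there is a seventh-order finite difference, so it vanishes for every polynomial of degree at most six. The short numerical check below is purely illustrative; the sample polynomial and evaluation points are arbitrary choices, not taken from the paper.

```python
# Sketch: checking that sum_{i=0}^{7} C(7,i) (-1)^(7-i) f(x + i*y) = 0
# holds for a degree-6 polynomial but not for degree 7.
from math import comb

def lhs(f, x, y):
    return sum(comb(7, i) * (-1) ** (7 - i) * f(x + i * y) for i in range(8))

f = lambda t: 2 * t**6 - 3 * t**4 + t - 7   # an arbitrary sextic polynomial
print(lhs(f, x=1.3, y=0.7))                  # ~0 up to floating-point error

g = lambda t: t**7                           # degree 7: the sum no longer vanishes
print(lhs(g, x=1.3, y=0.7))                  # clearly nonzero
```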
Simultaneous transport and translation of messenger RNAs (mRNAs) on the surface of shuttling endosomes is a conserved process pertinent to highly polarised eukaryotic cells, such as hyphae or neurons. Here we detail the endosomal mRNA transport machinery components and the mRNA targets of the core RNA-binding protein Rrm4. Extracellular vesicles (EVs) are newly garnering interest as mediators of intercellular communication, especially between pathogenic fungi and their hosts. Landmark studies in plant-fungus interactions indicate EVs as a means of delivering various cargos, most notably small RNAs (sRNAs), for cross-kingdom RNA interference. Recent advances and implications of the nascent field of fungal EVs are discussed, and potential links between endosomal and EV-mediated RNA transport are proposed.

Progesterone: An Enigmatic Ligand for the Mineralocorticoid Receptor
Michael Baker, Yoshinao Katsu
Keywords: evolution; mineralocorticoid receptor; progesterone; aldosterone; elephant shark
The progesterone receptor (PR) mediates progesterone regulation of female reproductive physiology, as well as gene transcription in non-reproductive tissues, such as brain, bone, lung and vasculature, in both women and men. An unusual property of progesterone is its high affinity for the mineralocorticoid receptor (MR), which regulates electrolyte transport in the kidney in humans and other terrestrial vertebrates. In humans, rats, alligators and frogs, progesterone antagonizes activation of the MR by aldosterone, the physiological mineralocorticoid in terrestrial vertebrates. In contrast, in elephant shark, ray-finned fishes and chickens, progesterone activates the MR. Interestingly, cartilaginous fishes and ray-finned fishes do not synthesize aldosterone, raising the question of which steroid(s) activate the MR in these lineages. The simpler synthesis of progesterone, compared to cortisol and other corticosteroids, makes progesterone a candidate physiological activator of the MR in elephant sharks and ray-finned fishes. Elephant shark and ray-finned fish MRs are expressed in diverse tissues, including heart, brain and lung, as well as ovary and testis, two reproductive tissues that are targets for progesterone, which together suggests a multi-faceted physiological role for progesterone activation of the MR in elephant shark and ray-finned fish. The functional consequences of progesterone as an antagonist of some terrestrial vertebrate MRs and as an agonist of fish and chicken MRs are not fully understood. Indeed, little is known of the physiological activities of progesterone via any vertebrate MR.

Structure-Activity Relationship and Mechanistic Insights for Anti-HIV Natural Products
Ramanpreet Kaur, Pooja Sharma, Girish K. Gupta, Fidele Ntie-Kang, Dinesh Kumar
Keywords: AIDS; anti-HIV; natural products; SARs
Acquired Immunodeficiency Syndrome (AIDS), which is chiefly caused by a retrovirus named the Human Immunodeficiency Virus (HIV), has affected about 70 million people worldwide. Even though several advances have been made in the field of antiretroviral combination therapy, HIV remains a dominant cause of death in South Africa, for example. Current antiretroviral therapies achieve rapid HIV suppression, but with countless undesirable adverse effects.
In the present day, the biodiversity of the plant kingdom is being explored by several researchers for the discovery of potent anti-HIV drugs with different mechanisms of action. The primary challenge is to afford a treatment that is free from any risk of drug resistance and serious side effects. Hence, there is a strong demand to evaluate drugs obtained from natural plants, as well as the synthetic derivatives obtained from natural compounds by various chemical reactions. Several plants, such as Andrographis paniculata, Dioscorea bulbifera, Aegle marmelos, Wistaria floribunda, Lindera chunii and Xanthoceras sorbifolia, have displayed significant anti-HIV activity; their structures, SARs and key findings are presented.

An Agent-Based Self-Protective Method to Secure Communication between UAVs in Unmanned Aerial Vehicle Networks
Reza Fotohi, Eslam Nazemi
Subject: Mathematics & Computer Science, General & Theoretical Computer Science
Keywords: unmanned aerial vehicle networks (UAVNs); secure communication; agent-based self-protective; HIS
UAVNs (unmanned aerial vehicle networks) may become vulnerable to threats and attacks due to their characteristic features such as high mobility, highly dynamic network topology, and open-air wireless environments. Previous work has focused on classical and metaheuristic-based approaches, none of which is self-adaptive. In this article, we examine the challenges of cyber-detection methods for securing UAVNs and review existing security schemes proposed in the current literature. Furthermore, we propose an agent-based self-protective method (ASP-UAVN) for UAVNs that is based on the Human Immune System (HIS). In ASP-UAVN, the safest route from the source UAV to the destination UAV is chosen by a self-protective system. In this method, a multi-agent system using an Artificial Immune System (AIS) is employed to detect the attacking UAV and choose the safest route. In the proposed ASP-UAVN, the route request packet (RREQ) is initially transmitted from the source UAV to the destination UAV to detect the existing routes. Then, once the route reply packet (RREP) is received, a self-protective method using agents and the knowledge base is employed to choose the safest route and detect the attacking UAVs. The method is evaluated here via extensive simulations carried out in the NS-3 environment. The experimental results of four scenarios demonstrated that ASP-UAVN increases the Packet Delivery Rate (PDR) by more than 17.4, 20.8, and 25.91% and the detection rate by more than 17.2, 23.1, and 29.3%, and decreases the Packet Loss Rate (PLR) by more than 14.4, 16.8, and 20.21% and the false-positive and false-negative rates by more than 16.5, 25.3, and 31.21%, compared with the SUAS-HIS, SFA and BRUIDS methods, respectively.

Interplay between Mediterranean Diet and Gut Microbiota in the Interface of Autoimmunity: An Overview
Christina Tsigalou, Avgi Tsolou, Theoharis Konstantinidis, Efterpi Zafiriou, Dardiotis Efthimios, Alexandra Tsirogianni, Dimitrios Bogdanos
Subject: Medicine & Pharmacology, Other
Keywords: autoimmune disease; autoimmunity; dysbiosis; Mediterranean diet; microbiome
Nutritional habits regulate the gut microbiota and may provoke and/or prevent autoimmune disease. The Western diet is rich in sugars, meat and poly-unsaturated fatty acids, which lead to dysbiosis of the intestinal microbiota, disruption of the gut epithelial barrier and chronic mucosal inflammation.
On the other hand, the Mediterranean Diet (MedDiet) is rich in ω3 fatty acids, fruits and vegetables and has anti-inflammatory properties, which can restore gut eubiosis. The effects of the MedDiet and its components in health and disease states have been thoroughly analyzed in several studies. Moreover, several studies have specifically investigated the association between the MedDiet, the microbiota and the risk for autoimmune diseases. Furthermore, the MedDiet has been associated with a lower risk of cardiovascular diseases, which plays a critical role in reducing mortality in patients suffering from autoimmune diseases with comorbidities. The aim of the present review is to highlight current knowledge regarding possible interactions of the MedDiet with the patterns of intestinal microbiota, focusing on autoimmunity, and to provide a blueprint of dietary modulations for the prevention and management of disease activity and progression.

Mon, 20 January 2020

Intelligent Road Inspection with Advanced Machine Learning; Hybrid Prediction Models for Smart Mobility and Transportation Maintenance Systems
Nader Karballaeezadeh, Farah Zaremotekhases, Shahaboddin Shamshirband, Amir Mosavi, Narjes Nabipour, Peter Csiba, Annamária R. Várkonyi-Kóczy
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics
Keywords: transportation; mobility; prediction model; pavement management; pavement condition index; falling weight deflectometer; multilayer perceptron; radial basis function; artificial neural network; intelligent machine system committee
Prediction models in mobility and transportation maintenance systems have been dramatically improved by machine learning methods. This paper proposes novel machine learning models for intelligent road inspection. Traditional road inspection systems based on the pavement condition index (PCI) are often associated with critical safety, energy and cost issues. Alternatively, the proposed models utilize surface deflection data from falling weight deflectometer (FWD) tests to predict the PCI. The machine learning methods are the single multi-layer perceptron (MLP) and radial basis function (RBF) neural networks as well as their hybrids, i.e., Levenberg-Marquardt (MLP-LM), scaled conjugate gradient (MLP-SCG), imperialist competitive (RBF-ICA), and genetic algorithms (RBF-GA). Furthermore, the committee machine intelligent systems (CMIS) method was adopted to combine the results and improve the accuracy of the modeling. The results of the analysis have been verified using four criteria: average percent relative error (APRE), average absolute percent relative error (AAPRE), root mean square error (RMSE), and standard error (SD). The CMIS model outperforms the other models with the promising results of APRE=2.3303, AAPRE=11.6768, RMSE=12.0056, and SD=0.0210.

Uncertainty Modelling in Risk-averse Supply Chain Systems Using Multi-objective Pareto Optimization
Heerok Banerjee
Subject: Mathematics & Computer Science, Analysis
Keywords: Supply Chain Management (SCM); Supply Chain Risk Management (SCRM); risk modelling; time-series analysis; machine learning
Risk modelling, along with multi-objective optimization problems, has been at the epicenter of attention for supply chain managers. In this paper, we introduce a dataset for risk modelling in sophisticated supply chain networks based on formal mathematical models. We discuss the methodology and simulation tools used to synthesize the dataset.
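Relating to the road-inspection entry above, the four evaluation criteria it lists (APRE, AAPRE, RMSE, SD) are simple aggregate error statistics. The sketch below uses their commonly cited definitions, which may differ in detail from the paper's exact formulas; the sample PCI values are invented placeholders.

```python
# Sketch: commonly used definitions of APRE, AAPRE, RMSE and SD for model evaluation.
# These follow widely cited formulas and may differ in detail from the paper's.
import numpy as np

def error_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rel = (y_true - y_pred) / y_true * 100.0       # percent relative errors
    return {
        "APRE": rel.mean(),                         # average percent relative error
        "AAPRE": np.abs(rel).mean(),                # average absolute percent relative error
        "RMSE": np.sqrt(np.mean((y_true - y_pred) ** 2)),
        "SD": np.std(y_true - y_pred, ddof=1),      # standard deviation of residuals
    }

# Invented PCI values, purely to show the calculation.
print(error_metrics([55, 60, 72, 81], [53, 63, 70, 84]))
```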
Additionally, the underlying mathematical models are discussed in granular detail, along with directions for conducting statistical analyses or building neural machine learning models. The simulation is performed using MATLAB™ Simulink, and the models are illustrated as well.

Change Detection Based Building Damage Assessment Method Using Radar Imageries with GLCM Textural Parameters
Asset Akhmadiya, Nabi Nabiyev, Khuralay Moldamurat, Kanagat Dyusekeev, Sabyrzhan Atanov
Subject: Earth Sciences, Environmental Sciences
Keywords: radar remote sensing; building damage assessment; change detection method; GLCM
In this research paper, change-detection-based methods were considered for identifying collapsed and intact buildings using radar remote sensing data (radar imagery). The main task of this paper is to collect the most relevant scientific research in the field of building damage assessment using radar remote sensing data. Several methods are selected and presented as the current best practice, including methods using interferometric coherence and backscattering coefficients at different spatial resolutions. The conclusion indicates which methods and radar remote sensing data give higher accuracy and are more readily available for building damage assessment. Low-resolution Sentinel-1A/B radar remote sensing data are recommended, being freely available, for monitoring the degree of destruction at the microdistrict level. Change detection and texture-based methods are used together to increase overall accuracy. The homogeneity and dissimilarity GLCM texture parameters were found to better separate collapsed and intact buildings. Dual-polarization (VV, VH) backscattering coefficients and coherence coefficients (pre-earthquake and coseismic) were fully utilized for this study. The best multi-variable combination was defined for supervised classification of non-building, damaged-building and intact-building features in urban areas. In this work, we achieved an overall accuracy of 0.77, with producer's accuracies of 0.84 for the non-building class, 0.85 for the damaged-building class, and 0.64 for the intact-building class. The town of Amatrice, the most damaged by the 2016 Central Italy earthquake, was chosen as the study area.

Comparing Power-System- and User-Oriented Battery Electric Vehicle Charging Representation and its Implications on Energy System Modeling
Niklas Wulff, Felix Steck, Hans Christian Gils, Carsten Hoyer-Klick, Bent van den Adel, John E. Anderson
Subject: Engineering, Control & Systems Engineering
Keywords: electric vehicles; sector coupling; energy system optimization; renewable energy integration; REMix; charging behavior; marginal values
Battery electric vehicles provide an opportunity to balance supply and demand in future power systems with high shares of fluctuating renewable energy. Compared to other storage systems such as pumped-storage hydroelectricity, electric vehicle energy demand is highly dependent on the charging and connection choices of vehicle users. We present a model framework consisting of a utility-based stock and flow model, a utility-based microsimulation of charging decisions, and an energy system model, including the respective interfaces, to assess how the representation of battery electric vehicle charging affects energy system optimization results. We then apply the framework to a scenario study of controlled charging of nine million electric vehicles in Germany in 2030.
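For the change-detection entry above, the homogeneity and dissimilarity parameters it highlights are standard grey-level co-occurrence matrix (GLCM) statistics. A minimal scikit-image sketch follows; the synthetic patch, quantisation and GLCM settings are assumptions rather than the study's configuration, and older scikit-image releases spell the functions greycomatrix/greycoprops.

```python
# Sketch: computing GLCM homogeneity and dissimilarity for an image patch,
# of the kind used to separate collapsed from intact buildings in SAR data.
# The random patch and GLCM parameters are placeholders, not the paper's settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # greycomatrix/greycoprops in older releases

rng = np.random.default_rng(0)
patch = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)  # e.g. quantised backscatter

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
homogeneity = graycoprops(glcm, "homogeneity").mean()
dissimilarity = graycoprops(glcm, "dissimilarity").mean()
print(f"homogeneity={homogeneity:.3f}, dissimilarity={dissimilarity:.3f}")
```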
Assuming a respective fleet power demand of 27 TWh, we analyze the difference between power-system-based and vehicle-user-based charging decisions in two respective scenarios. Our results show that taking into account vehicle users' charging and connection decisions significantly decreases the load-shifting potential of controlled charging. The analysis of the marginal values of equations and variables of the optimization problem yields valuable insights into the importance of specific constraints and optimization variables. In particular, state-of-charge assumptions and the representation of fast charging drive the curtailment of renewable energy feed-in and the required gas power plant flexibility. A detailed representation of fleet charge connection is less important. Peak load can be significantly reduced, by 5% and 3% in the two scenarios, respectively. Shifted load is very robust across sensitivity analyses, while other model results such as curtailment are more sensitive to factors such as the underlying data years. The importance of increased BEV fleet battery availability for power systems with different weather and electricity demand characteristics should be further scrutinized.

Protective Effects of Astaxanthin Supplementation against Ultraviolet-Induced Photoaging in Hairless Mice
Xing Li, Tomohiro Matsumoto, Miho Takuwa, Mahmood Saeed Ebrahim Shaikh Ali, Takumi Hirabayashi, Hiroyo Kondo, Hidemi Fujino
Subject: Medicine & Pharmacology, General Medical Research
Keywords: astaxanthin; antioxidant; skin; ultraviolet; photoaging; capillary
Ultraviolet (UV) irradiation induces skin photoaging, which is characterized by thickening, wrinkling, pigmentation, and dryness. Astaxanthin, a ketocarotenoid from Haematococcus pluvialis, has been extensively studied with respect to its possible effects on skin health as well as UV protection. In addition, astaxanthin attenuates increases in the generation of reactive oxygen species (ROS) and capillary regression in skeletal muscle. In the present study, we investigated whether astaxanthin protects against UV-induced photoaging and capillary regression in the skin of HR-1 hairless mice. UV irradiation induced wrinkle formation, skin thickening and capillary regression in the dermis of hairless mice, and the administration of astaxanthin decreased the UV-induced wrinkle formation, skin thickening, and increase in collagen fibers in the skin. Astaxanthin supplementation also inhibited ROS generation and attenuated the UV-induced changes in wrinkle formation, skin thickness and capillary number. We also found an inverse correlation between wrinkling and capillary number, with photoaging associated with capillary regression in the skin. These results suggest that astaxanthin can protect against photoaging caused by ultraviolet irradiation and that its inhibitory effects on photoaging may be associated with protection against capillary regression in the skin.

Working Paper ARTICLE
Diagnosis of Inherited Platelet Disorders on a Blood Smear
Carlo Zaninetti, Andreas Greinacher
Subject: Medicine & Pharmacology, Other
Keywords: inherited platelet disorders; hereditary thrombocytopenias; blood smear; immunofluorescence; bleeding tendency
Inherited platelet disorders (IPDs) are rare diseases featuring low platelet counts and/or defective platelet function. Patients have a variable bleeding diathesis and sometimes additional features that can be congenital or acquired.
Identification of an IPD is desirable to avoid misdiagnosis of immune thrombocytopenia and the use of improper treatments. Diagnostic tools include platelet function studies and genetic testing. The latter can be challenging, as correlating its outcomes with the phenotype is not easy. The immuno-morphological evaluation of the blood smear (by light and immunofluorescence microscopy) represents a reliable method to phenotype subjects with suspected IPD. It is relatively cheap, not excessively time-consuming, and applicable to shipped samples. In some forms it can provide the diagnosis by itself, as for MYH9-RD, or in addition to other first-line tests such as aggregometry or flow cytometry. With regard to genetic testing, it can guide specific sequencing. Since only minimal amounts of blood are needed for the preparation of blood smears, it can be used to further characterize thrombocytopenia in pediatric patients and even newborns. In principle, it is based on visualizing alterations in the distribution of proteins, which result from specific genetic mutations, by using monoclonal antibodies. It can be applied to identify deficiencies in membrane proteins, disturbed distribution of cytoskeletal proteins, and alpha as well as delta granules. On the other hand, mutations associated with impaired signal transduction are difficult to identify by immunofluorescence of blood smears. This review summarizes the technical aspects and the main diagnostic patterns achievable by this method.

Design of Spectral Filters Based on Mode Coupling of Optical Fibers
Ahmed A. Abouelfadl, Abdullah A. Alshehri
Subject: Engineering, Electrical & Electronic Engineering
Keywords: optical fibers; spectral filter; modes coupling; dispersion characteristics; composite waveguide; linear polarized modes
In this paper, the design of optical spectral filters based on the mode coupling of optical fibers is presented. The finite difference method is applied to find the dispersion characteristics of an optical fiber coupler constructed from two fibers as a composite multi-dielectric waveguide with different cores but the same cladding. The field distribution is also computed for the two fibers, both separately and as a composite waveguide. The spectral characteristics of the filters are investigated based on the coupling of the two linearly polarized modes LP01 and LP11. The dependence of the transmission coefficient on the operating wavelength is illustrated. Finally, the spectral bandwidth of the filter as a function of the distance between the two cores is addressed.

Acute Effect of Moderate Dose Fructose in Solid Foods on Triglyceride, Glucose and Uric Acid before and after a One-Month Moderate Sugar Feeding Period - A Randomised Controlled Trial
Peter M. Clifton, Jennifer B. Keogh
Subject: Medicine & Pharmacology, Nutrition
Keywords: triglyceride; uric acid; glucose; fructose; sucrose; solid
Fructose in beverages has adverse effects on lipids, glucose and insulin sensitivity after acute and chronic ingestion. There are limited data showing that chronic consumption of fructose in solid foods has harmful effects. We hypothesized that a moderate amount of fructose, compared with sucrose, in solid food consumed for a month would not adversely influence fasting or postprandial lipids and glucose after an acute fat and carbohydrate load.
Twenty-five men and women with prediabetes and/or overweight or obesity consumed, in random order, two acute test meals of muffins sweetened with either fructose or sucrose, followed by 4 weeks of chronic consumption of 42 g/day of either fructose or sucrose in low-fat muffins, after which the two meal tests were repeated. Subjects were randomised to sugar type for the chronic feeding period. Sugar type had no effect on the incremental area under the curve for triglyceride or uric acid at either time point (P=0.4 and P=0.9). There was no overall difference between the meal tests at baseline and after 1 month, and no effect of consuming sucrose or fructose muffins for 1 month. Fasting triglyceride increased after chronic consumption of fructose by 0.31±0.37 mmol/L compared with sucrose in people with IFG/IGT only (P=0.004). Fructose at a moderate intake of <10% of energy in solid food has no different effect on postprandial triglyceride and uric acid compared with sucrose, although fasting triglyceride was increased in people with IFG/IGT after 1 month of fructose muffins, suggesting the need for caution.

Coronary Artery Disease Diagnosis: Ranking the Significant Features Using Random Trees Model
Javad Hassannataj Joloudari, Edris Hassannataj Joloudari, Hamid Saadatfar, Mohammad Ghasemigol, Seyyed Mohammad Razavi, Amir Mosavi, Narjes Nabipour, Shahaboddin Shamshirband, Laszlo Nadai
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics
Keywords: heart disease; coronary artery disease; machine learning; deep learning; predictive features; coronary artery disease diagnosis; health informatics
Heart disease is one of the most common diseases in middle-aged citizens. Among the vast number of heart diseases, coronary artery disease (CAD) is considered a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is medical imaging, e.g., angiography. However, angiography is known for being costly and is also associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis by selecting significant predictive features in order of their ranking. In this study, we propose an integrated method using machine learning. The machine learning methods of random trees (RTs), the C5.0 decision tree, support vector machine (SVM), and the Chi-squared automatic interaction detection (CHAID) decision tree are used in this study. The proposed method shows promising results, and the study confirms that the RTs model outperforms the other models.

Unveiling the Hidden Rules of Spherical Viruses Using Point Arrays
Subject: Life Sciences, Virology
Keywords: protruding features; spherical virus; point arrays; surface modifications; VLP; drug delivery; icosahedral; nanomedicine; ligand binding
Since its introduction, the triangulation number has been the most successful and ubiquitous scheme for classifying spherical viruses. However, despite its many successes, it fails to describe the relative angular orientations of proteins, as well as their radial mass distribution within the capsid. It also fails to provide any insight into critical sites of stability, modification or possible mutation. We show how classifying spherical viruses using icosahedral point arrays, introduced by Keef and Twarock, unveils new geometric rules and constraints for understanding virus stability and key locations for exterior and interior modifications.
We present a modified fitness measure which classifies viruses in an unambiguous and rigorous manner, irrespective of local surface chemistry, steric hindrance, solvent accessibility or triangulation number. We then utilize these arrays to explain the immutable surface loops of bacteriophage MS2, the relative reactivity of surface lysines in CPMV and the non-quasi-equivalent flexibility of the HBV dimers. We explain how sister and double arrays can function as predictive tools for site-directed modifications in other systems. This success builds on our previous work showing that viruses place their protruding features along the great circles of the asymmetric unit, demonstrating that viruses indeed adhere to these geometric constraints.

Preprint REVIEW | doi:10.20944/preprints202001.0218.v1
Patient Derived Models to Study Head and Neck Cancer Radiation Response
Pippa F. Cosper, Lindsey Abel, Yong-Syu Lee, Cristina Paz, Saakshi Kaushik, Kwangok P. Nickel, Roxana Alexandridis, Jacob G. Scott, Justine Y. Bruce, Randall J. Kimple
Subject: Medicine & Pharmacology, Oncology & Oncogenics
Keywords: head and neck cancer; radiation therapy; radiation; patient-derived models; cancer
Patient-derived model systems are important tools for studying novel anti-cancer therapies. Patient-derived xenografts (PDXs) have gained favor over the last 10 years as newer mouse strains have improved the success rate of establishing PDXs from patient biopsies. PDXs can be engrafted from head and neck cancer (HNC) samples across a wide range of cancer stages, retain the genetic features of their human source, and can be treated with both chemotherapy and radiation, allowing for clinically relevant studies. Not only do PDXs allow for the study of patient tissues in an in vivo model, they can also provide a renewable source of cancer cells for organoid cultures. Herein, we review the uses of HNC patient-derived models for radiation research, including approaches to establishing both orthotopic and heterotopic PDXs, approaches and potential pitfalls in delivering chemotherapy and radiation to these animal models, biological advantages and limitations, and alternatives to animal studies that still use patient-derived tissues.

Potential in Vitro Elicitation of Secondary Metabolites Using Sodium Azide in Tissue Culture of Nigella sativa and Its Effect on DNA Damage Inhibition
Mohammed Shariq Iqbal, Zahra Iqbal, Abeer Hashem, Elsayed Fathi Abd_Allah, Asif Jafri, Mohammad Ansari
Subject: Biology, Plant Sciences
Keywords: antioxidant; Nigella sativa; secondary metabolites; thymoquinone; DNA damage
Nigella sativa (NS) is an effective medicinal plant possessing noteworthy antioxidant properties. More than a hundred phytochemicals have been reported in NS, of which thymoquinone is the most active phytoconstituent, with strong antioxidative properties. Thymoquinone is a cyclic dione that, on reaction with sodium azide, converts into α-azido ketone analogs, which lend themselves to an extensive range of reactions. Sodium azide induces stress in plants, thereby modulating the antioxidant system. The present investigation was planned to elucidate the effect of sodium azide at different concentrations (5 µM, 10 µM, 20 µM, 50 µM, 100 µM and 200 µM) on the secondary metabolites (mainly thymoquinone) in NS callus culture extract (NSE). The results showed an effect of sodium azide on thymoquinone content and a concentration-dependent boost in antioxidant properties.
It was also observed that thymoquinone content and percent yield (analyzed by RP-HPLC; Reverse Phase-High Performance Liquid Chromatography) were at a minimum (0.033±0.006% and 0.420±0.045%, respectively) at 200 µM sodium azide, whereas antioxidant activity (analyzed by DPPH; 2,2-diphenyl-1-picrylhydrazyl) was at a maximum (3.873±0.402%) at the same dose. Further analysis of the inhibition of oxidative DNA damage at different concentrations of sodium azide on NSE showed that the maximum inhibition of DNA damage (0.243±0.017%) occurred at the 200 µM sodium azide concentration. Strong positive correlations were observed between percent yield and percent thymoquinone, and between antioxidant activity and inhibition of DNA damage, whereas strong negative correlations were observed between percent yield and antioxidant activity, between percent thymoquinone and antioxidant activity, and between percent thymoquinone and inhibition of DNA damage. The findings clearly indicate that thymoquinone content, antioxidant activity and inhibition of DNA damage were all affected by sodium azide. Dynamics of the Price Behavior in Stock Markets: A Statistical Physics Approach Hung T. Diep, Gabriel Desgranges Subject: Physical Sciences, Other Keywords: econophysics; market dynamics; market networks; price variation; Monte Carlo simulations; mean-field theory; statistical physics models We study in this paper the time evolution of stock markets using a statistical physics approach. We consider an ensemble of agents who sell or buy a good according to several factors acting on them: the majority of the neighbors, the market ambiance, the variation of the price and some specific measure applied at a given time. Each agent is represented by a spin having a number of discrete states q or continuous states, describing the tendency of the agent for buying or selling. The market ambiance is represented by a parameter T which plays the role of the temperature in physics: low T corresponds to a calm market, high T to a turbulent one. We show that there is a critical value of T, say Tc, where strong fluctuations between individual states lead to a disordered situation in which there is no majority: the numbers of sellers and buyers are equal, namely the market clears. The specific measure, by the government or by economic organizations, is parameterized by $H$, applied to the market at time t1 and removed at time t2. We have used Monte Carlo simulations to study the time evolution of the price as a function of those parameters. In particular, we show that the price strongly fluctuates near Tc and there exists a critical value Hc above which the boosting effect remains after H is removed. Our model replicates the stylized facts in finance (time-independent price variation), volatility clustering (time-dependent dynamics) and the persistent effect of a temporary shock. The second part of the paper deals with the price variation using a time-dependent mean-field theory. By supposing that the sellers and the buyers belong to two distinct communities with different intra-group and inter-group interactions, we find that the price oscillates with time. Results are shown and discussed.
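As a rough illustration of the kind of agent-based spin model described in the preceding abstract, the sketch below implements a minimal two-state (q = 2) market on a square lattice with Metropolis Monte Carlo updates and a temporary measure H switched on between two times; the lattice size, the nearest-neighbour coupling, the price-update rule and all parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Minimal illustrative sketch (assumptions, not the authors' implementation):
# agents sit on an L x L lattice, each a two-state spin (+1 = buy, -1 = sell).
# Metropolis updates at market "temperature" T; a measure H acts between
# sweeps t1 and t2; the price is driven by the net excess demand per sweep.
rng = np.random.default_rng(0)

def sweep(spins, T, H):
    """One Metropolis sweep with nearest-neighbour coupling J = 1."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * (nb + H)      # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]
    return spins

def run_market(L=24, T=2.0, H=0.5, t1=150, t2=250, steps=400, k=0.01):
    spins = rng.choice([-1, 1], size=(L, L))
    price, prices = 1.0, []
    for t in range(steps):
        h = H if t1 <= t < t2 else 0.0         # temporary market measure
        spins = sweep(spins, T, h)
        m = spins.mean()                       # excess demand: buyers minus sellers
        price *= np.exp(k * m)                 # price rises with net buying pressure
        prices.append(price)
    return np.array(prices)

if __name__ == "__main__":
    for T in (1.5, 2.27, 3.0):                 # calm, near-critical, turbulent market
        p = run_market(T=T)
        print(f"T={T}: final price {p[-1]:.3f}, volatility {np.diff(np.log(p)).std():.4f}")
```

Sweeping T through the near-critical region in such a toy model reproduces, qualitatively, the calm-to-turbulent transition and lets one check whether the effect of the temporary measure H persists after it is removed.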
Entropy, Information and Symmetry, Ordered is Symmetrical, II: System of Spins in the Magnetic Field Edward Bormashenko Subject: Physical Sciences, General & Theoretical Physics Keywords: entropy; symmetry; ordering; elementary magnets; magnetic field; j-fold symmetry Abstract: The second part of the paper develops the approach suggested in Entropy 2020, 22(1), 11 (https://doi.org/10.3390/e22010011), which relates ordering in physical systems to their symmetrizing. Entropy is frequently interpreted as a quantitative measure of "chaos" or "disorder". However, the notions of "chaos" and "disorder" are vague and subjective to a great extent. This leads to numerous misinterpretations of entropy. We propose to see disorder as an absence of symmetry and to identify "ordering" with the symmetrizing of a physical system; in other words, with introducing elements of symmetry into an initially disordered physical system. We explore an initially disordered system of elementary magnets exposed to the external magnetic field H. Imposing symmetry restrictions diminishes the entropy of the system and decreases its temperature. The general case of a system of elementary magnets demonstrating j-fold symmetry is treated. The interrelation T_j = T/j holds, where T and T_j are the temperatures of the non-symmetrized and j-fold-symmetrized systems of magnets, respectively. Acai Extract Increases the Red Blood Cell Population via Erythropoietin Upregulation in Mice Shuichi Shibuya, Toshihiko Toda, Yusuke Ozawa, Takahiko Shimizu Subject: Medicine & Pharmacology, Nutrition Keywords: acai; erythropoiesis; erythropoietin Acai (Euterpe oleracea Mart. Palmae, Arecaceae) is a palm plant native to the Brazilian Amazon. It contains many nutrients, such as polyphenols, iron, vitamin E, and unsaturated fatty acids, and in recent years many antioxidant and anti-inflammatory effects of acai have been reported. However, the effects of acai on hematopoiesis have not been investigated yet. In the present study, we administered acai extract to mice and evaluated its hematopoietic effects. Acai treatment for four days significantly increased erythrocyte, hemoglobin, and hematocrit levels compared to controls. We then examined hematopoietic-related markers following a single injection. Acai administration significantly increased the level of the hematopoietic-related hormone erythropoietin in blood compared to controls and also significantly upregulated the gene expression of Epo in the kidney. Furthermore, in the mice treated with acai extract, the kidneys were positively stained with the hypoxic probe pimonidazole in comparison to the controls. These results demonstrated that acai increases the number of blood cells through increased erythropoietin expression via hypoxic action in the kidney. Acai can be expected to improve motility through hematopoiesis. Glomerular Filtration Rate in Former Extreme Low Birth Weight Infants over the Full Pediatric Age Range: A Pooled Analysis Elise Goetschalkx, Djalila Mekahli, Elena Levtchenko, Karel Allegaert Subject: Medicine & Pharmacology, Pediatrics Keywords: glomerular filtration rate; Brenner hypothesis; extreme low birth weight infants; renal outcome Different cohort studies have documented a lower glomerular filtration rate (GFR) in former extremely low birth weight (ELBW, <1000 g) neonates throughout childhood when compared to term controls. The current aim is to pool these studies to describe the GFR pattern over the pediatric age range.
To do so, we conducted a systematic review of studies reporting GFR measurements in former ELBW cases, while GFR data of healthy age-matched controls included in these studies were co-collected. Based on 248 hits, 6 case-control and 3 cohort studies were identified, with 444 GFR measurements in 380 former ELBW cases (median age 5.3-20.7 years). The majority were small (17-78 cases) single-center studies, with heterogeneity in GFR measurement tools (inulin clearance, cystatin C- or creatinine-based estimated GFR formulae). Despite this, the median GFR (ml/min/1.73 m2) within case-control studies was consistently lower (-13%, range -8 to -25%) in cases, so that a relevant minority (15-30%) has chronic kidney disease ≥stage 2 (GFR <90 ml/min/1.73 m2). Consequently, this pooled analysis describes a pattern of reduced GFR in former ELBW cases throughout childhood. Research should focus on perinatal risk factors for impaired GFR and long-term outcome, but is hampered by single-center cohorts, study size, and heterogeneity of tools. Preprint REVIEW doi:10.20944/preprints202001.0212.v1 Safeguarding Freshwater Life Beyond 2020: Recommendations for the New Global Biodiversity Framework from the European Experience Charles B. van Rees, Kerry A. Waylen, Astrid Schmidt-Kloiber, Stephen J. Thackeray, Gregor Kalinkat, Koen Martens, Sami Domisch, Ana I. Lillebø, Virgilio Hermoso, Hans-Peter Grossart, Rafaela Schinegger, Kris Decleer, Tim Adriaens, Luc Denys, Ivan Jarić, Jan H. Janse, Michael T. Monaghan, Aaike De Wever, Ilse Geijzendorffer, Mihai C. Adamescu, Sonja C. Jähnig Subject: Keywords: climate change; sustainable development goals; wildlife; wetlands; water resources; ecosystem services The drafting of a new Global Biodiversity Framework for the Convention on Biological Diversity (CBD) and a Biodiversity Strategy for the European Union (EU) render 2020 a critical crossroads for biodiversity conservation. Freshwater biodiversity is disproportionately threatened and poorly studied relative to marine and terrestrial biota, despite providing numerous essential ecosystem services. The urgency of the mounting freshwater biodiversity crisis necessitates approaches catered to the unique ecology and threats of freshwater life, which are not adequately addressed by current strategies. We present a set of 15 special recommendations for freshwater biodiversity to guide the CBD's post-2020 framework and the 2020 EU strategy, based on European case studies, both challenges and successes. Our recommendations cover key outcomes and guiding concepts, enabling conditions and methods of implementation, planning and accountability modalities, and cross-cutting issues. They address topics including invasive species, integrated water resources management, strategic conservation planning, data management, and emerging technologies for freshwater monitoring, among others. These recommendations will enhance the ability of global and European post-2020 biodiversity agreements to halt and reverse the rapid global decline of freshwater biodiversity. Preprint CASE REPORT doi:10.20944/preprints202001.0211.v1 Megacity Wastewater Poured into a Nearby Basin: Looking for Sustainable Scenarios in a Case Study Silvia Chamizo-Checa, Elena M. Otazo-Sanchez, Alberto J. Gordillo-Martinez, Juan Suarez-Sanchez, Cesar A.
Gonzalez-Ramirez, Hipolito Muñoz-Nava Subject: Earth Sciences, Environmental Sciences Keywords: water demand; megacity wastewater; hydrological balance scenarios The megacities' sewage creates socioeconomic dependence related to water availability in the nearby zones, especially in countries with hydric stress. The present paper studies the water balance progression of realistic scenarios from 2005 to 2050 in the Mezquital Valley, which has received Mexico City's untreated sewage since 1886, allowing agricultural irrigation under unsustainable conditions. The WEAP model was used to calculate water demand and supply. Validation was performed with outflow data from the Tula River, and three scenarios were simulated: (1) a steady-state scenario based on inertial growth rates; (2) a transient scenario concerning climate change outcomes, with minor influence on surface water and hydric stress in 2050; and (3) a transient scenario perturbed by a planned 36% reduction in the imported wastewater and the start-up of a massive water treatment plant, allowing drip and sprinkler irrigation from 2030. In the 2005-2017 period, 59% of agriculture depended on flood irrigation with megacity sewage. The water balance scenarios evaluated the sectorial supply of groundwater and surface water. Drip irrigation would reduce agricultural demand by 42% but still would not meet the downstream hydroelectric requirements, which are aggravated by the loss of wastewater supply from 2030. This research warns how present policies compromise the Valley's future demands. Preprint CONCEPT PAPER doi:10.20944/preprints202001.0210.v1 The Autism Palette: Combinations of Impairments Explain the Heterogeneity in ASD Ábel Fóthi, András Lőrincz, Latha Soorya Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: neuropsychiatric disorder; cognition; social behavior; dimensionality; reward; heterogeneity; autoencoder; components; genetic causes Autism spectrum disorder (ASD) is a heterogeneous neuropsychiatric problem with a few core symptoms: weaknesses in social behavior, verbal impairments, repetitive behavior and restricted interests. Beyond the core symptoms, autism has a strong association with other disorders such as intellectual disability, epilepsy and schizophrenia, among many others. This paper outlines a theory of ASD with the capacity to connect the heterogeneous 'core' symptoms, medical and psychiatric comorbidities as well as other etiological theories of autism in a unifying cognitive framework rooted in neuroscience and genetics. Cognition is embedded into an ever-developing structure modified by experiences, including the outcomes of behaviors influenced by the environment. We introduce the hypothesis that autism is caused by deficits in component-based cognition and in the internal learning-reinforcing machinery. Specifically, we outline our Cartesian-factor-forming, autoencoder-like model that supports cognition by breaking combinatorial explosion, and we discuss the cognitive and neural processes behind our model. The high dimensionality of sensory information poses serious problems, since the brain can handle only 7±2 relevant variables at a time, making processes such as the extraction and encoding of the relevant variables and their efficient manipulation critical. These processes are influenced by previous experiences and the internal reward system.
In addition, large delays of distributed information processing should be counteracted by learned predictive models to synchronize sensory, proprioceptive, and cognitive signals and have timely and accurate model-based actions. Impairments in any of these aspects may disrupt learning and execution. Combinations of small impairments may allow the solving of low complexity tasks but may become visible if learned variables and the related metric are improper and imprecise, respectively, especially if their number is large. We claim that social interactions are amongst the most challenging cognitive tasks in terms of the number of variables involved. In turn, they are highly susceptible to combinations of small impairments. We consider impairments as the basic colors of autism, whereas the combinations of diverse impairments make the palette of autism. In turn, social processes can be spoiled in many ways and can lead to diverse comorbidities. More than a Box to Check: Research Sponsor and Clinical Investigator Perspectives on Making GCP Training Relevant Teresa Swezey, F. Hunter McGuire, Patricia Hurley, Janette Panhuis, Kathy Goldstein, Tina Chuck, Carrie Dombeck, Brian Perry, Christina Brennan, Natasha Phrsai, Amy Corneli Subject: Medicine & Pharmacology, General Medical Research Keywords: good clinical practice; clinical trials; quality; investigator training; clinical investigator Background: Good clinical practice (GCP) training is the industry standard for ensuring the quality conduct of registrational clinical trials. However, concerns have been raised about whether the current structure and delivery of GCP training sufficiently prepares clinical investigators and their delegates to conduct clinical trials. Methods: We conducted qualitative semi-structured interviews with 13 clinical investigators and 10 research sponsors to 1) examine characteristics of the quality conduct of sponsored clinical trials, including critical tasks and concerns perceived as essential for trial quality, 2) identify key knowledge and skills required to perform critical tasks, and 3) identify gaps and redundancies in GCP training and areas of improvement to ensure the quality conduct of clinical trials. We used applied thematic analysis to analyze the data. Results: The top three tasks identified as critical for the quality conduct of clinical trials were obtaining informed consent, ensuring protocol compliance, and protecting participants' health and safety. Respondents acknowledged that GCP principles address each of these critical tasks; however, they described many challenges and burdens of GCP training, including high training frequency and repetitive content. Respondents suggested moving beyond GCP training as a mere check-box activity by making it more effective, engaging, and interactive. They also emphasized that applying GCP principles in a real-world, skills-based environment would increase the relevance of GCP training to investigators and their delegates. Conclusion: Our findings indicate that although investigators and sponsors recognize that GCP training addresses critical tasks necessary to the quality conduct of clinical trials, they articulated the need for significant improvement in the design, content, and presentation of GCP training. 
A Bottom-up Approach for Pig Skeleton Extraction Using RGB Data Akif Quddus Khan, Salman Khan Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: pig; behavior analysis; hourglass; stacked dense-net; K-means sampler Animal behavior analysis is a crucial task for industrial farming. In an indoor farm setting, extracting the key joints of an animal is essential for tracking it over a long period of time. In this paper, we propose a deep network that exploits transfer learning to train for pig skeleton extraction in an end-to-end fashion. The backbone of the architecture is based on a stacked hourglass dense-net. In order to train the network, key frames are selected from the test data using a K-means sampler. In total, 9 keypoints are annotated, which allows a detailed behavior analysis in the farm setting. Extensive experiments are conducted, and the quantitative results show that the network has the potential to increase tracking performance by a substantial margin. Averaging is Probably not the Optimum Way of Aggregating Parameters in Federated Learning Peng Xiao, Samuel Cheng, Vladimir Stankovic, Dejan Vukobratovic Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: federated learning; federated averaging; mutual information; correlation Federated learning is a decentralized form of deep learning that trains a shared model on data distributed among clients (such as mobile phones and wearable devices), ensuring data privacy by never exposing raw data to the data center (server). After each client computes a new model parameter by stochastic gradient descent (SGD) based on its own local data, all locally computed parameters are aggregated at the server to generate an updated global model. Almost all current studies directly average the different client-computed parameters by default, but no one explains why averaging parameters is a good approach. In this paper, we treat each client-computed parameter as a random vector because of the stochastic properties of SGD, and we estimate the mutual information between two client-computed parameters at different training phases using two methods in two learning tasks. The results confirm the correlation between different clients and show an increasing trend of mutual information over training iterations. However, when we further compute the distance between client-computed parameters, we find that the parameters are getting more correlated while not getting closer. This phenomenon suggests that averaging parameters may not be the optimum way of aggregating trained parameters. Development and Differentiation in Monobodies Based on the Fibronectin Type 3 Domain Peter G. Chandler, Ashley M. Buckle Subject: Life Sciences, Biotechnology Keywords: adnectin; biosensor; Fibronectin; monobody; non-antibody scaffold; therapeutic As a non-antibody scaffold, monobodies based on the fibronectin type III (FN3) domain overcome antibody size and complexity while maintaining analogous binding loops. However, antibodies and their derivatives remain the gold standard for design of new therapeutics. In response, clinical therapeutic proteins based on the FN3 domain are beginning to use native fibronectin function as a point of differentiation. The small and simple structure of monomeric monobodies confers increased tissue distribution and reduced half-life, whilst the absence of disulphide bonds improves stability in cytosolic environments.
Where multi-specificity is challenging with an antibody format that is prone to mis-pairing of chains, FN3 domains in the fibronectin assembly already interact with a large number of molecules. As such, multiple monobodies engineered for interaction with therapeutic targets are being combined in a similar beads-on-a-string assembly, which improves both efficacy and pharmacokinetics. Furthermore, full-length fibronectin is able to fold into multiple conformations as part of its natural function, and a greater understanding of how mechanical forces allow for the transition between states will lead to advanced applications that truly differentiate the FN3 domain as a therapeutic scaffold. Variation of the Performance of Machine-Learning Based Image Classifier in Automated Detection of Itch-Induced Scratch Chuan Liu, Sheng-Xiang Yan, Xiao-Bo Wu, Zhi-Jun Zhang, Wei Li Subject: Behavioral Sciences, Other Keywords: itch; scratch; automated real-time detection; machine-learning based image classifier; image sharpness A 'little brother' of pain, itch is an unpleasant sensation that creates a specific urge to scratch. To date, various machine-learning based image classifiers (MBICs) have been proposed for quantitative analysis of itch-induced scratch behaviour of laboratory animals in an automated, non-invasive, inexpensive and real-time manner. In spite of MBICs' advantages, the overall performance (accuracy, sensitivity and specificity) of current MBIC approaches remains inconsistent, with values varying from ~50% to ~99%, and the underlying reasons have yet to be investigated, both computationally and experimentally. To look into the variation of the performance of MBICs in automated detection of itch-induced scratch, this article focuses on the experimental data recording step and reports for the first time that MBICs' overall performance is inextricably linked to the sharpness of the experimentally recorded video of laboratory animal scratch behaviour. This article furthermore demonstrates for the first time that a linear correlation exists between video sharpness and the overall performance (accuracy and specificity, but not sensitivity) of MBICs, and it highlights the primary role of experimental data recording in rapid, accurate and consistent quantitative assessment of laboratory animal itch. Current Advances in Allosteric Modulation of Muscarinic Receptors Jan Jakubík, Esam E. El-Fakahany Subject: Medicine & Pharmacology, Pharmacology & Toxicology Keywords: acetylcholine; muscarinic receptors; allosteric modulation Allosteric modulators are ligands that bind to a site on the receptor that is spatially separated from the orthosteric binding site for the endogenous neurotransmitter. Allosteric modulators modulate the binding affinity, potency and efficacy of orthosteric ligands. Muscarinic acetylcholine receptors are prototypical allosterically-modulated G-protein-coupled receptors. They are a potential therapeutic target for the treatment of psychiatric, neurologic and internal diseases like schizophrenia, Alzheimer's disease, Huntington disease, type 2 diabetes or chronic obstructive pulmonary disease. Here we review progress made during the last decade in our understanding of their mechanisms of binding, allosteric modulation and in vivo actions, in order to understand the translational impact of studying this important class of pharmacological agents.
We overview newly developed allosteric modulators of muscarinic receptors as well as new spin-off ideas like bitopic ligands combining allosteric and orthosteric moieties and photo-switchable ligands based on bitopic agents. Mechanisms and Regulation of Nonsense-Mediated mRNA Decay and Nonsense-Associated Altered Splicing in Lymphoid Cells Jean-Marie Lambert, Mohamad Omar Ashi, Nivine Srour, Laurent Delpy, Jérôme Saulière Subject: Life Sciences, Immunology Keywords: immunoglobulin (Ig); nonsense-mediated mRNA decay (NMD); nonsense-associated altered splicing (NAS); B lymphocytes; plasma cells The presence of premature termination codons (PTCs) in transcripts is dangerous for the cell, as they encode potentially deleterious truncated proteins that can act with dominant-negative or gain-of-function effects. To avoid synthesis of these shortened polypeptides, several RNA surveillance systems can be activated to decrease the level of PTC-containing mRNAs. Nonsense-mediated mRNA decay (NMD) ensures an accelerated degradation of mRNAs harboring PTCs by using several key NMD factors such as up-frameshift (UPF) proteins. Another pathway called nonsense-associated altered splicing (NAS) upregulates transcripts that have skipped disturbing PTCs by alternative splicing. Therefore, these RNA quality control processes eliminate abnormal PTC-containing mRNAs from the cells by using positive and negative responses. In this review, we will describe the general mechanisms of NMD and NAS and their respective involvement in the decay of aberrant immunoglobulin and TCR transcripts in lymphoid cells. Pore-Fractures of Coalbed Methane Reservoir Restricted by Coal Facies in Sanjiang-Mulinghe Coal-Bearing Basins, Northeast China Yuejian Lu, Dameng Liu, Yidong Cai, Li Qian, Qifeng Jia Subject: Earth Sciences, Geology Keywords: pore-fracture networks; coal-facies; coalbed methane reservoir; Sanjiang-Mulinghe basin Pore-fracture networks play a key role in coalbed methane (CBM) accumulation and production, while the impacts of coal facies on pore-fracture network performance are still poorly understood. In this work, the pore-fracture occurrence of 38 coals collected from the Sanjiang-Mulinghe coal-bearing basins was studied with multiple techniques, including mercury intrusion porosimetry (MIP), micro-organic quantitative analysis, and optical microscopy, and its control by coal facies was examined. The MIP curves of the 38 selected coals, indicating their pore structures, were subdivided into three typical types: type I, with predominant micropores; type Ⅱ, with predominant micropores and macropores with good connectivity; and type Ⅲ, with predominant micropores and macropores with poor connectivity. Three coal facies were distinguished using Q-cluster analysis together with the tissue preservation index - gelification index (TPI-GI) and wood index - groundwater influence index (WI-GWI) diagrams: lake-shore coastal wet forest swamp, upper delta plain wet forest swamp, and tidal flat wet forest swamp. The results show a positive relationship between the tissue preservation index (TPI), the wood index (WI) and mesopores (10^2-10^3 nm), and a negative relationship between TPI, WI and macropores/fractures. In addition, groundwater level fluctuations can control the development of type C and D fractures, and the frequency of type C and D fractures shows an ascending trend with increasing GWI, which may be caused by the mineral hydration of the coal.
Finally, from the perspective of pore-fracture occurrence in CBM reservoirs, the wet forest swamp of the upper delta plain is considered to be the optimal area for the Sanjiang-Mulinghe coal-bearing basins, based on a comparative study of the various coal facies. Biocompatibility Evaluation and Enhancement of Elastomeric Coatings Made Using Table-top Optical 3D Printer Giedrė Grigalevičiūtė, Daiva Baltriukienė, Virginija Bukelskienė, Mangirdas Malinauskas Subject: Materials Science, Polymers & Plastics Keywords: stereolithography; elastomer; biocompatibility; post-processing; UV curing; thermal treatment; optical 3D printing In this experimental report, the biocompatibility of elastomeric scaffold structures made via stereolithography employing a table-top 3D printer (Ember, Autodesk) and the commercial resin FormLabs Flexible (FormLabs) was studied. The samples were manufactured using the standard printing and development protocol, which is known to leave residual cytotoxicity due to remaining non-polymerized monomers, despite the polymerized material being fully biocompatible. Additional steps were taken to remedy this problem: the fabricated structures were soaked in isopropanol and methanol under different conditions (temperature, duration) in order to leach out the non-polymerized monomers. The printed structures were also UV exposed to ensure the maximum degree of polymerization of the material. Post-processed structures were seeded with myogenic stem cells, and the number of live cells was evaluated as an indicator of the material's biocompatibility. The straightforward post-processing protocol enhances biocompatibility 7-fold after 7 days of soaking in isopropanol and methanol, making it comparable to control (glass and polystyrene) samples. This positions the approach as a novel and simple method that is widely applicable for dramatic cytotoxicity reduction of optically 3D printed micro-/nano-scaffolds for biomedical applications. Preprint DATASET doi:10.20944/preprints202001.0200.v1 A Global Sea State Dataset from Spaceborne Synthetic Aperture Radar Wave Mode Data Xiao-Ming Li, BingQing Huang Subject: Earth Sciences, Oceanography Keywords: synthetic aperture radar (SAR); wave mode; ocean waves This dataset consists of the integral ocean wave parameters significant wave height (SWH) and mean wave period (MWP) derived from the Advanced Synthetic Aperture Radar (ASAR) on board the ENVISAT satellite over its full life cycle (2002-2012), covering the global ocean. Both parameters are calibrated and validated against buoy data. A cross-validation between the ASAR SWH and radar altimeter (RA) data is also performed to ensure that the SAR-derived wave height data are of the same quality as the RA data. These data are stored in the standard NetCDF format, with one product generated for each ASAR wave mode Level 1B file provided by the European Space Agency. This is the first time that a full sea state product has been derived from spaceborne SAR data over the global ocean on a decadal temporal scale.
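For readers who want to work with per-product NetCDF wave files of the kind described in the dataset abstract above, the following minimal Python sketch shows one way such files could be opened and pooled; the directory layout and the variable names ("lon", "lat", "swh", "mwp") are assumptions made for illustration only and must be checked against the actual product metadata (e.g. with ncdump -h), since the official naming is not given in the abstract.

```python
import glob
import numpy as np
from netCDF4 import Dataset  # pip install netCDF4

def read_wave_product(path):
    """Return longitude, latitude, significant wave height and mean wave period
    from one per-product NetCDF file (variable names are assumed, not official)."""
    with Dataset(path) as nc:
        lon = np.asarray(nc.variables["lon"][:])
        lat = np.asarray(nc.variables["lat"][:])
        swh = np.asarray(nc.variables["swh"][:])   # significant wave height [m]
        mwp = np.asarray(nc.variables["mwp"][:])   # mean wave period [s]
    return lon, lat, swh, mwp

if __name__ == "__main__":
    # Hypothetical local directory holding the downloaded wave mode products.
    for f in sorted(glob.glob("asar_wave_mode/*.nc")):
        lon, lat, swh, mwp = read_wave_product(f)
        print(f"{f}: mean SWH {np.nanmean(swh):.2f} m, mean MWP {np.nanmean(mwp):.2f} s")
```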
Role of Signal Transduction Pathways and Transcription Factors in Cartilage and Joint Diseases Riko Nishimura, Kenji Hata, Yoshifumi Takahata, Tomohiko Murakami, Eriko Nakamura, Maki Ohkawa, Lerdluck Ruengsinpinya Subject: Medicine & Pharmacology, General Medical Research Keywords: osteoarthritis; rheumatoid arthritis; fibrodysplasia ossificans progressiva; achondroplasia Osteoarthritis and rheumatoid arthritis are common cartilage and joint diseases that globally affect more than 200 and 20 million people, respectively. Several transcription factors have been implicated in the onset and progression of osteoarthritis, including Runx2, C/EBPβ, HIF2α, Sox4, and Sox11. IL-1β also leads to osteoarthritis through NF-ĸB, IκBζ, and the Zn2+-ZIP8-MTF1 axis. IL-1, IL-6, and TNFα play a major pathological role in rheumatoid arthritis through the NF-ĸB and JAK/STAT pathways. Indeed, inhibitory reagents for IL-1, IL-6, and TNFα provide clinical benefits for rheumatoid arthritis patients. Several growth factors, such as BMP, FGF, PTHrP, and Indian hedgehog, play roles in regulating chondrocyte proliferation and differentiation. Disruption or excess of these signaling pathways causes genetic disorders in cartilage and skeletal tissues. FOP, an autosomal genetic disorder characterized by ectopic ossification, is induced by mutant ACVR1. mTOR inhibitors were found to prevent the ectopic ossification caused by ACVR1 mutations. ACH and related diseases are autosomal genetic diseases which manifest as severe dwarfism. CNP is currently the most promising therapy for ACH. In these ways, investigation of cartilage and chondrocyte diseases at the molecular and cellular levels sheds light on the development of effective therapies. Thus, identification of the signaling pathways and transcription factors implicated in these diseases is important. PID Control Algorithm Based on Hydraulic Oil Viscosity for the Proportional Valve of the Planting Depth Control System Md. Abu Ayub Siddique, Wan-Soo Kim, Yeon-Soo Kim, Taek-Jin Kim, Chang-Hyun Choi, Hyo-Jai Lee, Sun-Ok Chung, Yong-Joo Kim Subject: Engineering, Other Keywords: transplanter; hydraulic oil; temperature; viscosity; proportional valve This study was conducted to develop a PID control algorithm considering viscosity for the planting depth control system of a rice transplanter using various hydraulic oils at different temperatures, to evaluate the performance of the control algorithm, and to compare the performance of the PID control algorithm with and without considering viscosity. In this study, the simulation model of the planting depth control system and a PID control algorithm were developed based on the power flow of the rice transplanter (ERP60DS). The primary PID coefficients were determined using the Ziegler–Nichols (Z–N) second method. Routh's stability criteria were applied to optimize the coefficients. The pole and double zero points of the PID controller were also applied to minimize the sustained oscillations of the responses. The performance of the PID control algorithm was evaluated for three ISO (International Organization for Standardization) standard viscosity grade (VG) hydraulic oils (VG 32, 46, and 68). The results show that the control algorithm considering viscosity is able to control the pressure of the proportional valve, which is associated with the actuator displacement, for the various types of hydraulic oils. It was noticed that the maximum pressure was 15.405 bars at 0, 20, 40, 60, 80, and 100 ℃ for all of the hydraulic oils.
The settling time and steady-state errors were 0.45 s at 100 ℃ for VG 32, and 0% for all of the conditions. The maximum overshoot was found to be 17.50% at 100 ℃ for VG 32. On the other hand, the PID control algorithm without considering viscosity could not control the planting depth, because the response was slow and did not satisfy the boundary conditions. The PID control algorithm considering viscosity could sufficiently compensate for the nonlinearity of the hydraulic system and was able to perform for any temperature-dependent viscosity of the hydraulic oils. In addition, the rice transplanter requires a faster response for accurately controlling and maintaining the planting depth. Planting depth is highly associated with actuator displacement. Finally, this control algorithm considering viscosity could be helpful in minimizing the tilting of the seedlings planted using the rice transplanter. Ultimately, it would improve the transplanter performance. Comparative Proteomic Analysis of Wild-Type Physcomitrella patens and an OPDA-Deficient Physcomitrella patens Mutant with Disrupted PpAOS1 and PpAOS2 Genes after Wounding Weifeng Luo, Setsuko Komatsu, Tatsuya Abe, Hideyuki Matsuura, Kosaku Talahashi Subject: Life Sciences, Biochemistry Keywords: Allene oxide synthase; 12-oxo-phytodienoic acid; Physcomitrella patens; proteomic analysis; wounding. Wounding is a serious environmental stress in plants. Oxylipins such as jasmonic acid play an important role in defense against wounding. Mechanisms to adapt to wounding have been investigated in vascular plants; however, those mechanisms in nonvascular plants remain elusive. To examine the response to wounding in Physcomitrella patens, a model moss, a proteomic analysis of wounded P. patens was conducted. Proteomic analysis showed that wounding increased the abundance of proteins related to protein synthesis, amino acid metabolism, protein folding, the photosystem, glycolysis, and energy synthesis. 12-Oxo-phytodienoic acid (OPDA) was induced by wounding and inhibited growth. Therefore, OPDA is considered a signaling molecule in this plant. Proteomic analysis of a P. patens mutant in which the PpAOS1 and PpAOS2 genes, which are involved in OPDA biosynthesis, are disrupted showed accumulation of proteins involved in protein synthesis in response to wounding in a similar way to the wild-type plant. In contrast, the fold-changes of the proteins in the wild-type plant were significantly different from those in the aos mutant. This study suggests that PpAOS gene expression enhances photosynthesis and effective energy utilization in response to wounding in P. patens. Preprint SHORT NOTE doi:10.20944/preprints202001.0196.v1 A Guide and Toolbox to Replicability and Open Science in Entomology Jacob Wittman, Brian Aukema Subject: Biology, Entomology Keywords: reproducibility; open access; data curation; data management; pre-print servers The ability to replicate scientific experiments is a cornerstone of the scientific method. Sharing ideas, workflows, data, and protocols facilitates testing the generalizability of results, increases the speed at which science progresses, and enhances quality control of published work. Fields of science such as medicine, the social sciences, and the physical sciences have embraced practices designed to increase replicability.
Granting agencies, for example, may require data management plans, and journals may require data and code availability statements along with the deposition of data and code in publicly available repositories. While many tools commonly used in replicable workflows, such as distributed version control systems (e.g. "git") or scripted programming languages for data cleaning and analysis, may have a steep learning curve, their adoption can increase individual efficiency and facilitate collaborations both within entomology and across disciplines. The open science movement is developing within the discipline of entomology, but practitioners of these concepts or those desiring to work more collaboratively across disciplines may be unsure where or how to embrace these initiatives. This article is meant to introduce some of the tools entomologists can incorporate into their workflows to increase the replicability and openness of their work. We describe these tools and others, recommend additional resources for learning more about these tools, and discuss the benefits to both individuals and the scientific community, as well as the potential drawbacks, associated with implementing a replicable workflow. Immune Landscape in Burkitt Lymphoma Reveals M2-Macrophage Polarization and Correlation between PD-L1 Expression and Non-Canonical EBV Latency Program Massimo Granai, Lucia Mundo, Ayse U. Akarca, Maria Chiara Siciliano, Hasan Rizvi, Ester Sorrentino, Virginia Mancini, Maha Ibrahim, Sandra Margielewska, Wenbin Wei, Michele Bibas, Noel Onyango, Joshua Nyagol, Pier Paolo Piccaluga, Leticia Quintanilla-Martinez, Falko Fend, Stefano Lazzi, Lorenzo Leoncini, Teresa Marafioti Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: Burkitt's lymphoma; Epstein-Barr Virus The Tumor Microenvironment (TME) is a complex milieu that is increasingly recognized as a key factor in multiple stages of disease progression and responses to therapy as well as escape from immune surveillance. However, the precise contribution of specific immune effector and immune suppressor components of the TME in Burkitt lymphoma (BL) remains poorly understood. In this paper, we applied the computational algorithm CIBERSORT to Gene Expression Profile (GEP) datasets of 40 BL samples to draw a map of the immune and stromal components of the TME. Furthermore, by VECTRA multispectral immunofluorescence (IF) and multiple immunohistochemistry (IHC), we investigated the TME of an additional series of 40 BL cases and evaluated the possible role of the PD-1/PD-L1 immune checkpoint axis. Our results indicated that M2-polarized macrophages are the most prominent TME component in BL. In addition, we investigated the correlation between PD-L1 and latent membrane protein-2A (LMP2A) expression on tumour cells, highlighting a subgroup of BL cases characterized by a non-canonical latency program of EBV with an activated PD-L1 pathway. In conclusion, our study analysed the TME in BL and identified a tolerogenic immune signature, highlighting new potential therapeutic targets.
Energy-Efficient Method for Wireless Sensor Networks Low-Power Radio Operation in Internet of Things Mehdi Amiri Nasab, Shahaboddin Shamshirband, Anthony Theodore Chronopoulos, Amir Mosavi, Narjes Nabipur Subject: Engineering, Electrical & Electronic Engineering Keywords: Internet of Things; IoT; Wireless Sensor Networks; ContikiMAC; Energy Efficiency; Duty-Cycles; Clear Channel Assessments; Received Signal Strength Indicator (RSSI) Radio operation in wireless sensor networks (WSNs) for Internet of Things (IoT) applications is the most common source of power consumption. Recognizing and controlling the factors affecting radio operation can therefore be valuable for managing node power consumption. ContikiMAC is a low-power radio duty-cycle protocol in Contiki OS whose WakeUp mode performs periodic clear channel assessments (CCAs) to check the radio status. The time spent checking the radio is of utmost importance for power consumption, as it can lead to false WakeUps or idle listening in radio duty cycles and ContikiMAC. This paper presents a detailed analysis of the radio WakeUp time factors of ContikiMAC. We then propose lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces the share of radio duty cycles spent in false WakeUps and idle listening by using a dynamic received signal strength indicator (RSSI) status check time. Simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR). Working Paper REVIEW The Bacteria-gut-brain Axis: Gut Bacteria as a Key Regulator of Social Stress and Stress-related Injurious Behaviors in Chickens Sha Jiang, Jiaying Hu, Heng-wei Cheng Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: laying hen; social stress; injurious behavior; microbiota; probiotic; bacillus subtilis Some management practices used in the poultry industry, such as maintaining birds at high group density, may cause birds stress, leading to injurious behaviors such as injurious pecking, aggression, and cannibalism. In addition, some management practices used to prevent severe injuries in birds may cause pain. Beak trimming (BT), removal of 1/3 to 1/2 of the beak, is a routine husbandry procedure practiced in laying hens to prevent or reduce injurious behaviors. However, BT causes tissue damage, which may increase somatosensory sensitization of the damaged nerve tissues, resulting in pain (acute, chronic or both) in the treated birds, because the beak is a complex, functional organ with an extensive nerve supply. BT has already been heavily regulated or prohibited in several European countries and, in time, this trend will impact the practice used in the United States poultry industry. With the growing public concern for poultry welfare, there is a pressing need to identify and develop alternatives to BT. Probiotics, defined as "a source of live (viable) naturally occurring microorganisms (direct-fed microbials)", have been used as dietary supplements or functional foods to target the gut microbiota (microbiome) for prevention or therapeutic treatment of mental diseases, including social stress-induced psychiatric disorders, in humans and various experimental animals. In our studies, chickens were used as an animal model to assess whether dietary supplementation with a probiotic, Bacillus subtilis, reduces injurious behaviors following social challenge.
Hens of Dekalb XL strain, an aggressive line, were used in the studies. Our results indicate that dietary supplementation of the Bacillus subtilis based probiotic reduces aggressive behaviors in chickens. These results suggest dietary probiotics could be a suitable strategy for increasing hosts' health status and welfare conditions. In situ Groundwater Remediation with Bioelectrochemical Systems (BES): A Critical Review and Future Perspectives Daniele Cecconet, Fabrizio Sabba, Matyas Devecseri, Arianna Callegari, Andrea G. Capodaglio Subject: Engineering, Other Keywords: bioelectrochemical systems; in situ treatment; groundwater remediation; bioelectroremediation; denitrification; microbial electrochemical technologies Groundwater contamination is an ever-growing environmental issue that has attracted much and undiminished attention for the past half century. Groundwater contamination may originate from both anthropogenic (e.g., hydrocarbons) and natural compounds (e.g., nitrate and arsenic); to tackle the removal of these contaminants, different technologies have been developed and implemented. Recently, bioelectrochemical systems (BES) have emerged as a potential treatment for groundwater contamination, with reported in situ applications that showed promising results. Nitrate and hydrocarbons (toluene, phenanthrene, benzene, BTEX and light PAHs) have been successfully removed, due to the interaction of microbial metabolism with poised electrodes, in addition to physical migration due to the electric field generated in a BES. The selection of proper BESs relies on several factors and problems, such as the complexity of groundwater and subsoil environment, scale-up issues, and energy requirements that need to be accounted for. Modeling efforts could help predict case scenarios and select a proper design and approach, while BES-based biosensing could help monitoring remediation processes. In this review, we critically analyze in situ BES applications for groundwater remediation, focusing in particular on different proposed setups, and we identify and discuss the existing research gaps in the field. Multivalent Lactose–Ferrocene Conjugates Based on Poly(Amido Amine) Dendrimers and Gold Nanoparticles as Electrochemical Probes for Sensing Galectin-3 Manuel C. Martos-Maldonado, Indalecio Quesada-Soriano, Luis García-Fuentes, Antonio Vargas-Berenguel Subject: Materials Science, General Materials Science Keywords: galectin-3; electrochemical probes; gold nanoparticles; ferrocene; electroactive glycodendrimers; PAMAM Galectin-3 is considered a cancer biomarker and bioindicator of fibrosis and cardiac remodeling, and, therefore, it is desirable to develop convenient methods for its detection. Herein, an approach based on the development of multivalent electrochemical probes with high galectin-3 sensing abilities is reported. The probes consist of multivalent presentations of lactose–ferrocene conjugates scaffolded on poly(amido amine) (PAMAM) dendrimers and gold nanoparticles. Such multivalent lactose–ferrocene conjugates are synthesized by coupling of azidomethylferrocene-lactose building blocks on alkyne-functionalized PAMAM, for the case of the glycodendrimers, and to disulfide‐functionalized linkers that are then used for the surface modification of citrate-stabilized gold nanoparticles. 
The binding and sensing abilities towards galectin-3 of both the ferrocene-containing lactose dendrimers and the gold nanoparticles have been evaluated by means of isothermal titration calorimetry, UV-vis spectroscopy, and differential pulse voltammetry. The highest sensitivity to galectin-3 by electrochemical methods was shown by the lactosylferrocenylated gold nanoparticles, which are able to detect the lectin at nanomolar concentrations. Synthetic Apomixis: an Old Enigma to Preserve Hybrid Vigor Sajid Fiaz, Xiukang Wang, Rizwana Maqbool, Habib Ali, Shakeel Ahmad, Adeel Riaz, Badr Alharthi Subject: Biology, Agricultural Sciences & Agronomy Keywords: hybrid vigor; flowering plants; apomixis; CRISPR/Cas9 The hybrid seeds of several important crops with supreme qualities, including yield and biotic and abiotic stress tolerance, have been cultivated for decades. Thus far, a major challenge with hybrid seed is that it does not retain the ability to produce plants with the same qualities over subsequent generations. Apomixis exists naturally as an asexual mode of reproduction in flowering plants that avoids meiosis and ultimately leads to seed production. Apomixis possesses the potential to preserve hybrid vigor over multiple generations for economically important plant genotypes. The evolution and genetics of asexual seed production are unclear, and much more effort is needed to uncover its genetic architecture. To fix hybrid vigor, synthetic apomixis has been suggested as an alternative. MiMe (Mitosis instead of Meiosis) genotypes are developed and further utilized for clonal gamete production. However, the identity and parental origin of the genes responsible for synthetic apomixis are less well known and need further understanding. Genome modification using genome editing technologies (GETs) such as clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9), a reverse genetics tool, has paved the way to utilize emerging technologies in plant molecular biology. Over the last decade, several genes in important crops have been successfully edited. The wide availability of GETs has made functional genomics studies easy to conduct in crops important for food security. The disruption of the expression of genes specific to the egg cell, MATRILINEAL (MTL) or BABY BOOM1 (BBM1), through the CRISPR/Cas genome editing system can promote haploid plants. The establishment of synthetic apomixis by engineering the MiMe genotype and by genome editing of BBM1 expression or disruption of MTL leads toward clonal seed production. In the present review, we discuss current developments in plants achieved by utilizing CRISPR/Cas9 technology and the possibility of promoting apomixis in crops to preserve hybrid vigour. In addition, the genetics, evolution, epigenetic modifications and strategy for MiMe genotype development are discussed in detail. Ranking Port State Control Detention Remarks: Professional Judgement and Spatial Overview Efe Akyrek, Pelin Bolat Subject: Engineering, Marine Engineering Keywords: Port State Control; AHP; Paris MOU; GIS; Maritime Regulations The merchant marine fleet is subject to inspections by several parties to ensure compliance with maritime regulations. One of the major influences on the implementation of International Maritime Organization (IMO) regulations is indeed Port State Control.
This article aims to analyze all Paris Memorandum of Understanding (MOU) detention remarks from 2013 to 2019 for the EU15 countries (except Luxembourg and Austria) through an approach based on the Analytical Hierarchy Process, and to present the results in a Geographic Information System (GIS) in order to guide the marine industry on detainable Port State Control remarks and country risk profiles. While the Analytical Hierarchy Process approach has been used to rank the basic maritime regulations from the perspective of port state control, GIS helps us to demonstrate the regional dispersion amongst the EU15. The data on detained vessels were taken from the public website of the Paris MOU, and each report was considered a professional judgement that caused a detention. It has been shown that almost all countries' top priorities for regulation are Safety of Life at Sea (SOLAS) and Fire Safety Systems (FSS). Consequently, the results of the study can assist Port State Control officers, ship crews, and ship owners and managers by presenting the facts of their inspections and enabling them to improve. The spatial analysis is also expected to guide ship owners and managers to focus on their vessels' deficiencies to prevent sub-standardization. Preprint ESSAY doi:10.20944/preprints202001.0189.v1 What Drives Computational Chemistry Forward: Theory or Computational Power? Subject: Chemistry, General & Theoretical Chemistry Keywords: theory; simulation; computational power; epochs; science history History is often thought to be dull and boring – where large numbers of facts are memorized for passing exams. But the past informs the present and future, especially in delineating the context surrounding specific events that, in turn, help provide a deeper understanding of their causes and implications. Scientific progress (whether incremental or breakthroughs) is built upon prior work. Chronological examination of computational chemistry's evolution reveals the existence of major "epochs" (e.g., the transition from semi-empirical methods to first-principles calculations) and the centrality of key ideas (e.g., the Schrödinger equation and the Born-Oppenheimer approximation) in potentiating progress in the field. The longstanding question of whether computing power (both capacity and speed) or theoretical insights play a more important role in advancing computational chemistry was examined by taking into account the field's development holistically. Specifically, the availability of large amounts of computing power at declining cost and the advent of graphics processing unit (GPU) powered parallel computing are enabling tools for solving hitherto intractable problems. On the other hand, this essay argues (using the Born-Oppenheimer approximation as an example) that the role of theoretical insights in unlocking problems through simple (but insightful) assumptions is often overlooked. Collectively, the essay should be useful as a primer for appreciating major development periods in computational chemistry, from which counterfactual questions illuminate the relative importance of theoretical insights and advances in computer science in moving the field forward. Loss of Elongator- and KEOPS-Dependent tRNA Modifications Leads to Severe Growth Phenotypes and Protein Aggregation in Yeast Leticia Pollo-Oliveira, Roland Klassen, Nick Davis, Akif Ciftci, Jo Marie Bacusmo, Maria Marinelli, Michael S. DeMott, Thomas J. Begley, Peter C.
Dedon, Raffael Schaffrath, Valérie de Crécy-Lagard Subject: Life Sciences, Genetics Keywords: tRNA modification; protein aggregation Modifications in the anticodon stem loop (ASL) of tRNAs play important roles in regulating translational speed and accuracy. Threonylcarbamoyl adenosine (t6A37) and 5-methoxycarbonylmethyl-thiouridine (mcm5s2U34) are critical ASL modifications that have been linked to several human diseases. The model yeast Saccharomyces cerevisiae is viable despite the absence of both modifications; however, growth is greatly impaired. The major observed consequence is a subsequent increase in protein aggregates and aberrant morphology. Proteomic analysis of the t6A-deficient strain revealed a global mistranslation leading to protein aggregation without regard to physicochemical properties or t6A-dependent or biased codon usage in parent genes. However, loss of sua5 led to increased expression of soluble proteins for mitochondrial function, protein quality processing/trafficking, oxidative stress response, and energy homeostasis. These results point to a global function for t6A in protein homeostasis very similar to that of the mcm5/s2U modifications. Detection of Spoofing Used Against the GNSS-Like Underwater Navigation Systems Tomasz Abramowski, Mateusz Bilewski, Larisa Dobryakova, Evgeny Ochin, Janusz Uriasz, Paweł Zalewski Subject: Engineering, Electrical & Electronic Engineering Keywords: antiterrorism; underwater GNSS; underwater GPS; spoofer; antispoofing; spoofing detection; underwater transport safety The purpose of this work is to study the safety of underwater positioning based on GNSS-like underwater navigation systems. In the course of the research, we used software modeling of underwater spoofing processes. The spoofing problem consists of three stages: the design of spoofers, the design of spoofing detection systems, and the design of anti-spoofing systems. This article discusses some methods of spoofing detection. We briefly describe the known methods of underwater positioning systems. Unlike GNSS, currently only an LNSS (Local Navigation Satellite System) can be considered in this case. Spoofing detection systems with one hydrophone are of great practical importance, as they allow the use of standard hydroacoustic equipment. However, detection of spoofing is not possible in static mode, that is, with the underwater vehicle at rest. In the case of two hydrophones, the detection of spoofing in static mode is possible. We discuss navigation based on the use of an acoustically passive receiver. The receiver "listens" to the buoys and solves the problem of finding its own position using the coordinates of the buoys (such systems are called GNSS-like Underwater Positioning Systems, or GNSS-like UPS). Depending on the scale of the system service area, GNSS-like UPSs are divided into global, regional, zonal and local systems. In this article, we take into account only the local class of GNSS-like UPS. The acoustic signal generator transmits a simulation of several buoy signals. If the level of the simulated signal exceeds the signal strength of the actual buoys, the UPS receiver will "lock onto" the fake signal and then calculate a false position based on it. Further research should be focused on the creation of hardware and software systems for conducting physical experiments at depths of up to 400 m. Rheological Properties and Application of Molasses Modified Bitumen in Hot Mix Asphalt (HMA) Werku Hareru, Tewodros Ghebrab Subject: Engineering, Civil Engineering Keywords: MMB; DSR; FTIR; hot mix asphalt.
The high volume of water in molasses is what motivated this study. The reason is that using molasses as a partial replacement without treatment significantly affects the rheological properties of the neat bitumen, increases the likelihood of moisture susceptibility of the hot-mix asphalt (HMA) pavement structure, and creates fractures in aggregate particles. Therefore, to use molasses as a partial replacement without affecting the structural integrity of the pavement, this study proposed a treatment method before blending it with petroleum-based bitumen. A series of experiments was conducted to accomplish the objective of this paper, including conventional tests, Fourier transform infrared (FTIR) tests, amplitude and frequency sweep tests, performance grade (PG) determination tests, and multiple stress creep recovery (MSCR) tests. The IR spectra show that the carbonyl index decreased with increasing molasses percentage. There was a PG improvement from the control grade to PG64 and PG70 when the base binder was modified with 5-20% molasses and aged in a rolling thin film oven (RTFO), respectively. At a temperature of 58 °C, the nonrecoverable creep compliance at 3.2 kPa (Jnr3.2kPa) decreased for each percent replacement. This led to an improvement in rutting resistance. Likewise, at a temperature of 64 °C, the Jnr value decreased only for the 5% replacement and then gradually increased for the remaining percent replacements. Overall, this study revealed that treated molasses can be used as a partial replacement to enhance the rheological properties of the base bitumen, and thus it can potentially be used to produce a sustainable bio-asphalt binder. β-aminobutyric Acid Pretreatment Confers Salt Stress Tolerance in Brassica napus L. by Modulating Reactive Oxygen Species Metabolism and Methylglyoxal Detoxification Jubayer Al Mahmud, Mirza Hasanuzzaman, M. Iqbal R. Khan, Kamrun Nahar, Masayuki Fujita Subject: Biology, Plant Sciences Keywords: Abiotic stress; Antioxidant defense; Glyoxalase; Ion homeostasis; Organic acid; Osmotic stress Salinity is a serious environmental hazard which limits world agricultural production by adversely affecting plant physiology and biochemistry. Hence, increasing tolerance against salt stress is very important. In this study, we explored the function of β-aminobutyric acid (BABA) in enhancing salt stress tolerance in rapeseed (Brassica napus L.). After pretreatment with BABA, seedlings were exposed to NaCl (100 mM and 150 mM) for 2 days. Salt stress increased Na content and decreased K content in shoot and root. It disrupted the antioxidant defense system by producing reactive oxygen species (ROS; H2O2 and O2•−) and methylglyoxal (MG) and causing oxidative stress. It also reduced the growth and photosynthetic pigments of seedlings but increased proline (Pro) content. However, BABA pretreatment in salt-stressed seedlings increased ascorbate (AsA) and glutathione (GSH) contents; the GSH/GSSG ratio; and the activities of ascorbate peroxidase (APX), monodehydroascorbate reductase (MDHAR), dehydroascorbate reductase (DHAR), glutathione reductase (GR), glutathione peroxidase (GPX), superoxide dismutase (SOD), catalase (CAT), glyoxalase I (Gly I), and glyoxalase II (Gly II), as well as the growth and photosynthetic pigments of plants. In addition, compared to salt stress alone, BABA increased Pro content; reduced H2O2, MDA and MG contents; decreased Na content in the root; and increased K content in the shoot and root of rapeseed seedlings.
Dynamic Changes of pStat3 are Involved in Meiotic Spindle Assembly in Mouse Oocytes Seiki Haraguchi, Mitsumi Ikeda, Satoshi Akagi, Yuji Hirao Subject: Life Sciences, Cell & Developmental Biology Keywords: Stat3; pStat3; oocyte maturation; meiosis; spindle assembly; MTOCs The signal transducer and activator of transcription 3 (Stat3) is activated in response to the phosphorylation of Y705 (pStat3) and has the dual function of signal transduction and activation of transcription. Our previous study suggested that pStat3 is functional during oocyte maturation when transcription is silenced. Therefore, we speculated that pStat3 may have another function. Immunocytochemical analysis revealed that pStat3 emerges at the microtubule asters and spindle and then localizes at the spindle poles concomitant with Pericentrin during mouse oocyte maturation. When we examined conditionally knocked out Stat3−/− oocytes, we detected Stat3 and pStat3 proteins. The localization of pStat3 was the same as that of Stat3+/+ oocytes, and oocyte maturation proceeded normally, suggesting that pStat3 was still functioning. The oocytes were treated either with the Stat3-specific inhibitors Stattic and BP-1-102 or with anti-pStat3 antibody injection. This caused significantly abnormal spindle assembly and chromosome mislocation in a dose-dependent manner, in which pStat3 was either negative or localized improperly. Moreover, development of pre-implantation stage embryos derived from inhibitor-treated oocytes was also hampered significantly after in vitro fertilization. These findings indicate a novel function of pStat3 involved in spindle assembly. Sensitivity of Radiative Fluxes to Aerosols in the ALADIN-HIRLAM Numerical Weather Prediction System Laura Rontu, Emily Gleeson, Daniel Martin Perez, Kristian Pagh Nielsen, Velle Toll Subject: Earth Sciences, Atmospheric Science Keywords: aerosols; CAMS; NWP; ALADIN-HIRLAM; MUSC; direct radiative effect The direct radiative effect of aerosols is taken into account in many limited area numerical weather prediction models using wavelength-dependent aerosol optical depths of a range of aerosol species. We study the impact of aerosol distribution and optical properties on radiative transfer, based on climatological and more realistic near real-time aerosol data. Sensitivity tests were carried out using the single column version of the ALADIN-HIRLAM numerical weather prediction system, set up to use the HLRADIA broadband radiation scheme. The tests were restricted to clear-sky cases to avoid the complication of cloud-radiation-aerosol interactions. The largest differences in radiative fluxes and heating rates were found to be due to different aerosol loads. When the loads are large, the radiative fluxes and heating rates are sensitive to the aerosol inherent optical properties and vertical distribution of the aerosol species. Impacts of aerosols on shortwave radiation dominate longwave impacts. Sensitivity experiments indicated the important effects of highly absorbing black carbon aerosols and strongly scattering desert dust. Glycolipids and a Polyunsaturated Fatty Acid Methyl Ester Isolated from the Marine Dinoflagellate Karenia mikimotoi Alain S. Leutou, Jennifer R. McCall, Robert York, Rajeshwar R. Govindapur, Andrea J.
Bourdelais Subject: Chemistry, Medicinal Chemistry Keywords: dinoflagellate; Karenia mikimotoi; glycolipids; monogalactosyldiacylglycerol; monogalactosylmonoacylglycerol; polyunsaturated fatty acid methyl ester; Staphylococcus aureus; Escherichia coli; Candida albicans; anti-inflammatory activity A new monogalactosyldiacylglycerol (MGDG), a known monogalactosylmonoacylglycerol (MGMG) and a known polyunsaturated fatty acid methyl ester (PUFAME) were isolated from the marine dinoflagellate Karenia mikimotoi. The planar structure of the glycolipids was elucidated using MS and NMR spectroscopic analyses and comparisons to the known glycolipid to confirm its structure. The isolation of PUFAME strongly supports the polyunsaturated fatty acid fragment of these glycolipids. The relative configuration of the sugar was deduced by comparisons of 3JHH values and proton chemical shifts with those of known glycolipids. All isolated compounds, MGDG, MGMG and PUFAME (1-3), were evaluated for their antimicrobial and anti-inflammatory activity. All compounds modulated macrophage responses, with compound 3 exhibiting the greatest anti-inflammatory activity. Cell Lines for Honey Bee Virus Research Ya Guo, Cynthia L. Goodman, David W. Stanley, Bryony C. Bonning Subject: Life Sciences, Virology Keywords: honey bee virus; hymenoptera; insect cell culture; cell lines; Apis mellifera; Deformed wing virus With ongoing colony losses driven in part by the Varroa mite and the associated exacerbation of virus load, there is an urgent need to protect honey bees (Apis mellifera) from fatal levels of virus infection and from nontarget effects of insecticides used in agricultural settings. A continuously replicating cell line derived from the honey bee would provide a valuable tool for study of molecular mechanisms of virus–host interaction, for screening of antiviral agents for potential use within the hive, and for assessment of the risk of current and candidate insecticides to the honey bee. However, the establishment of a continuously replicating honey bee cell line has proved challenging. Here we provide an overview of attempts to establish primary and continuously replicating hymenopteran cell lines, methods for establishing honey bee cell lines, challenges associated with the presence of latent viruses (especially Deformed wing virus) in established cell lines, and methods to establish virus-free cell lines. We also describe the potential use of honey bee cell lines in conjunction with infectious clones of honey bee viruses for examination of fundamental virology. Revisiting the Stress Concept for A Better Recipe of Patient Prognostic: Implication of Stress Granule Markers Anais Aulas, Pascal Finetti, Shawn Lyons, François Bertucci, Daniel Birnbaum, Claire Acquaviva, Emilie Mamessier Subject: Biology, Other Keywords: cancer; metastasis; stress granules; G3BP1; G3BP2; TIA-1; TIAR; CAPRIN-1; USP10; prognostic markers Stress Granule formation is a pro-survival mechanism helping cells to cope with environmental challenges. Stress Granules have been studied for two decades in fundamental research, and are now being examined in the context of human pathogenesis. Here we review studies highlighting stress granules' involvement in cancer development through translational pattern modification.
Smart Grid Technologies and the 2030 Agenda for Sustainable Development: Drivers for Innovation Giacomo Di Foggia Subject: Social Sciences, Other Keywords: smart grid; SDGs; sustainable energy; smart meters; energy access; sustainability; utilities; development Because of the significant enabling role smart meters can play in securing the transition towards sustainable energy distribution, the paper provides insights to support smart meter implementation projects. Energy utilities must propose adequate solutions to manage grid-upgrading projects and, at the same time, increase efficiency levels. Based on empirical data analysis, the paper provides insights aimed at maximizing the probability of success of smart meter projects. Results show common patterns of variables that may support project undertakers, policymakers and scholars when analyzing projects in order to maximize opportunities. For smart meter projects to succeed, regulatory stability is essential, as long-period investments in grids produce benefits for energy utilities and for society. High-Resolution Hologram Calculation Method Based on Light Field Image Rendering Xin Yang, FuYang Xu, HanLe Zhang, HongBo Zhang, Kai Huang, Yong Li, Qiong-Hua Wang Subject: Engineering, General Engineering Keywords: holographic 3D display; computer generated holography; light field image rendering; pinhole array A fast calculation method for the full parallax high-resolution hologram is proposed based on elemental light field image (EI) rendering. A 3D object located near the holographic plane is first rendered as multiple EIs with a pinhole array. Each EI is interpolated, multiplied by a divergent spherical wave, and interfered with a reference wave to form a hogel. Parallel acceleration is used to calculate the high-resolution hologram because the calculation of each hogel is independent. A high-resolution hologram with a resolution of 20,0000 × 20,0000 pixels is calculated in only 8 minutes. Full parallax high-resolution 3D displays are realized by optical reconstructions. Sole-Source LED Lighting and Fertility Impact Shoot and Root Tissue Mineral Elements in Chinese Kale (Brassica oleracea var. alboglabra) T. Casey Barickman, Dean A. Kopsell, Carl E. Sams, Robert C. Marrow Subject: Biology, Horticulture Keywords: blue light; calcium; iron; magnesium; potassium; red light The current study investigated the impacts of light quality and different levels of fertility on mineral nutrient concentrations in shoot and root tissues of Chinese kale (Brassica oleracea var. alboglabra). 'Green Lance' Chinese kale were grown under: 1) fluorescent/incandescent light; 2) 10% blue (447 ± 5 nm) / 90% red (627 ± 5 nm) LED light; 3) 20% blue / 80% red LED light; and 4) 40% blue / 60% red LED light as sole-source lighting at two different levels of fertility. All plants were harvested 30 d after seeding, and shoot and root tissues were analyzed for mineral nutrients. Lighting and fertility interacted to influence kale shoot and root mineral nutrient concentrations. Results indicate sole-source LED lighting used in production can impact mineral nutritional values of baby leafy greens now popular for the packaged market.
A Study on Reduction of Copper Smelting Slag by Carbon for Recycling into Metal Values and Cement Raw Material Urtnasan Erdenebold, Jei-Pil Wang Subject: Materials Science, Metallurgy Keywords: copper smelting slag; pig iron; fayalite; recovery Copper smelting slag is a solution of molten oxides created during the copper smelting and refining process, and about 1.5 million tons of copper slag is generated annually in Korea. Oxides in copper smelting slag include ferrous oxide (FeO), ferric oxide (Fe2O3), silica (SiO2, from flux), alumina (Al2O3), calcia (CaO) and magnesia (MgO). The main oxides in copper slag, which are iron oxide and silica, exist in the form of fayalite (2FeO·SiO2). The copper smelting slag contains a high content of iron, as well as copper and zinc. Common applications of copper smelting slag are value-added products such as abrasive tools, roofing granules, road-base construction, railroad ballast, fine aggregate in concrete, etc., and some studies have attempted to recover metal values from copper slag. This research was intended to recover Fe-Cu alloy and raw material for zinc, and to produce a reformed slag, similar to blast furnace slag, for blast furnace slag cement from copper slag. As a result, it was confirmed that reduction smelting by carbon at temperatures above 1400 °C makes it possible to recover pig iron containing copper from copper smelting slag, and that CaO additives in the reduction smelting assist in reducing iron oxide in the fayalite and change the chemical and mineralogical composition of the slag. Copper oxide in the slag is easily reduced and dissolved in the molten pig iron, and zinc oxide is also reduced by carbon to volatile zinc, which is removed from the furnace as fumes during the reduction process. When the CaO addition is above 5 wt.%, the acid slag is completely transformed into calcium silicate slag, similar to blast furnace slag. Preprint CONCEPT PAPER | doi:10.20944/preprints202001.0176.v1 Beyond Modeling: A Roadmap to Community Cyberinfrastructure for Ecological Data-Model Integration Istem Fer, Anthony K. Gardella, Alexey N. Shiklomanov, Shawn P. Serbin, Martin G. De Kauwe, Ann Raiho, Miriam R. Johnston, Ankur Desai, Toni Viskari, Tristan Quaife, David S. LeBauer, Elizabeth M. Cowdery, Rob Kooper, Joshua B. Fisher, Benjamin Poulter, Matthew J. Duveneck, Forrest M. Hoffman, William Parton, Joshua Mantooth, Eleanor E. Campbell, Katherine D. Haynes, Kevin Schaefer, Kevin R. Wilcox, Michael C. Dietze Keywords: community cyberinfrastructure; accessibility; reproducibility; interoperability; models In an era of rapid global change, our ability to understand and predict Earth's natural systems is lagging behind our ability to monitor and measure changes in the biosphere. Bottlenecks in our ability to process information have reduced our capacity to fully exploit the growing volume and variety of data. Here, we take a critical look at the information infrastructure that connects modeling and measurement efforts, and propose a roadmap that accelerates production of new knowledge. We propose that community cyberinfrastructure tools can help mend the divisions between empirical research and modeling, and accelerate the pace of discovery. A new era of data-model integration requires investment in accessible, scalable, transparent tools that integrate the expertise of the whole community, not just a clique of 'modelers'.
This roadmap focuses on five key opportunities for community tools: the underlying backbone to community cyberinfrastructure; data ingest; calibration of models to data; model-data benchmarking; and data assimilation and ecological forecasting. This community-driven approach is key to meeting the pressing needs of science and society in the 21st century. Effect of Manure and Urea Fertilization on Yield, Carbon Speciation and Greenhouse Gas Emissions from Vegetable Production Systems of Nigeria and Republic of Benin: A Phytotron Study Abimfoluwa Olaleye, Derek Peak, Akeem Shorunke, Gurbir Dhillon, Durodoluwa Oyedele, Odunayo Adebooye, P.B. Irenikatche Akponikpe Subject: Earth Sciences, Environmental Sciences Keywords: Sub-Saharan Africa; FTIR spectroscopy; fertilizer microdosing; African leafy vegetables; greenhouse gas mitigation; sustainability; tropical agriculture; soil fertility Fertility management techniques being promoted in sub-Saharan Africa (SSA) seek to grow indigenous vegetables economically and sustainably. This study was conducted in a phytotron chamber and compared yield, soil carbon (C) speciation and greenhouse gas (nitrous oxide (N2O) and carbon dioxide (CO2)) emissions from SSA soils of two ecoregions: the dry savanna (Ina, Republic of Benin) and the rainforest (Ife, Nigeria), cultivated with local amaranth (Amaranthus cruentus) under manure (5 t/ha) and/or urea (80 kg N/ha) fertilization. Vegetable yield ranged from 1753 kg/ac to 3198 kg/ac in the rainforest (RF) soils and from 1281 kg/ac to 1951 kg/ac in the dry savanna (DS) soils. Yield in the urea treatment was slightly higher compared to the manure+urea treatment, but the difference was not statistically significant. Cumulative CO2 emissions over 21 days ranged from 497.06 to 579.47 g CO2 in the RF, and 322.96 to 624.97 g CO2 in the DS, while cumulative N2O emissions ranged from 60.53 to 220.86 mg N2O in the RF, and 24.78 to 99.08 mg N2O in the DS. In the RF samples, the combined use of manure and urea reduced CO2 and N2O emissions but led to an increase in the DS samples. ATR-FTIR analysis showed that the combined use of manure and urea increased the rate of microbial degradation in the soils of the DS, but no such effect was observed in soils of the RF. We conclude that combining manure and urea fertilization has different effects on soils of the two ecoregions, and that RF farmers can reduce agricultural emissions without compromising soil productivity and yield potential. Impact of Periampullary Diverticulum on Biliary Cannulation and ERCP Outcomes: A Single-Center Prospective Study Fatema Tabak, Guo-Zhong Ji, Lin Miao Subject: Medicine & Pharmacology, Gastroenterology Keywords: endoscopic retrograde cholangiopancreatography; periampullary diverticulum; difficult cannulation; biliary cannulation; cannulation techniques; adverse events Aim: This study aimed to investigate the association between periampullary diverticulum (PAD) and difficult biliary cannulation, as well as to evaluate the impact of different types of PAD on the cannulation success rate and adverse events. Methods: A total of 636 patients who underwent endoscopic retrograde cholangiopancreatography (ERCP) during the study period were prospectively studied and divided into two groups based on the presence or absence of PAD. In group A, 126 patients had PAD, compared with 510 patients in group B without PAD.
The primary outcome measurements were ERCP procedure time, selective cannulation techniques, and cannulation difficulty, in addition to cannulation success rate and ERCP-related adverse events. Difficult cannulation was analyzed using logistic regression, considering age, co-morbidities, the presence of PAD types, and indications as independent factors. Results: The average cohort age was 65.30±16.67 years, and 52.7% were male. Significantly higher rates of choledocholithiasis, cholangitis, and biliary pancreatitis were reported in the PAD group (p<0.05). Successful selective cannulation was achieved in 97.6% in group A and 95.3% in group B (p>0.05). The cannulation time was significantly longer in the presence of PAD (5.1 min vs. 4.09 min, p<0.05). There was no significant difference in the rate of overall adverse events and post-ERCP pancreatitis (PEP). Conclusion: The presence of PAD did not affect the duration or success of the ERCP procedure. However, it was associated with longer cannulation time and an increase in cannulation difficulty, especially with PAD type 1. Hyperbolic Numbers in Modeling Genetic Phenomena Sergey Petoukhov Subject: Life Sciences, Genetics Keywords: hyperbolic numbers; matrix; eigenvectors; genetics; Punnett squares; Fibonacci numbers; phyllotaxis; music harmony; literary texts; doubly stochastic matrices The article is devoted to applications of 2-dimensional hyperbolic numbers and their algebraic 2n-dimensional extensions in modeling some genetic and cultural phenomena. Mathematical properties of hyperbolic numbers and of their bisymmetric matrix representations are described in connection with their application to analyze the following structures: alphabets of DNA nucleobases; inherited phyllotaxis phenomena; Punnett squares in Mendelian genetics; the psychophysical Weber-Fechner law; long literary Russian texts (in their special binary representations). New methods of algebraic analysis of the harmony of musical works are proposed, taking into account the innate predisposition of people to music. The hypothesis is put forward that sets of eigenvectors of matrix representations of basis units of 2n-dimensional hyperbolic numbers play an important role in transmitting biological information and that they can be considered as one of the foundations of coding information at different levels of biological organization. In addition, the hypothesis about some analogue of the Weber-Fechner law for sequences of spikes in single nerve fibers is formulated. The proposed algebraic approach is connected with the theme of a grammar of biology and applications of bisymmetric doubly stochastic matrices. Applications of hyperbolic numbers reveal hidden interrelations between structures of different biological and physical phenomena. They lead to new approaches in the mathematical modeling of genetic phenomena and innate biological structures. Preprint ARTICLE | doi:10.20944/preprints202001.0174.v1 The Potential of Computational Modeling to Predict Disease Course and Treatment Response in Patients with Relapsing Multiple Sclerosis Francesco Papparlardo, Giulia Russo, Marzio Pennisi, Giuseppe Alessandro Parasiliti Palumbo, Giuseppe Sgroi, Santo Motta, Davide Maimone Subject: Life Sciences, Immunology Keywords: computational modeling; agent-based modeling; systems biology; multiple sclerosis; immunity; degenerative disease.
As of today, 20 disease modifying drugs (DMD) have been approved for the treatment of relapsing multiple sclerosis (MS) and, based on their efficacy, they can be grouped into moderate-efficacy DMDs and high-efficacy DMDs. The choice of the drug mostly relies on the judgement and experience of neurologists, and the evaluation of therapeutic response can only be obtained by monitoring clinical and magnetic resonance imaging (MRI) status during follow-up. In an era where therapies are focused on personalization, the aim of this study is to develop a modeling infrastructure to predict the evolution of relapsing MS and the response to treatments. We built a computational modeling infrastructure named UISS (Universal Immune System Simulator) able to simulate the main features and dynamics of the immune system activities. We extended UISS to simulate all the underlying MS pathogenesis and its interaction with the host immune system. This simulator is a multi-scale, multi-organ, agent-based simulator with an attached module capable of simulating the dynamics of specific biological pathways at the molecular level. We simulated six MS patients with different relapsing-remitting courses. These patients were characterized on the basis of their age, sex, presence of oligoclonal bands, therapy and MRI lesion load at onset. The simulator framework is made freely available and can be used following the links provided in the availability section. Even though the model can be further personalized employing immunological parameters and genetic information, based on the available data we generated a few simulation scenarios for each patient, including those that matched the real clinical and MRI history. Moreover, for two patients, the simulator anticipated the timing of subsequent relapses, which really occurred, suggesting that UISS may have the potential to assist MS specialists in predicting the course of the disease and the response to treatment. Description and Genome Analysis of Methylotetracoccus aquaticus sp. nov., a Novel Tropical Wetland Methanotroph, with the Amended Description of Methylotetracoccus gen. nov. Monali Rahalkar, Kumal Khatri, Jyoti Mohite, Pranitha Pandit, Rahul Bahulikar Subject: Life Sciences, Microbiology Keywords: wetlands; methanotrophs; India; tropical; novel species; Type Ib; Methylotetracoccus We enriched and isolated a novel gammaproteobacterial methanotroph, strain FWC3, from a tropical freshwater wetland near Nagaon beach, Alibag, India. FWC3 is a coccoid, flesh-pink/peach-pigmented, non-motile methanotroph, and the cells are present in pairs and as tetracocci. The culture can grow on methane (20%) as well as on a wide range of methanol concentrations (0.02%-5%). Based on the comparison of genome data, FAME analysis, morphological characters and biochemical characters, FWC3 belongs to the tentatively and newly but not validly described genus 'Methylotetracoccus', of which only a single species has been described, Methylotetracoccus oryzae C50C1. The ANI index between the FWC3 and C50C1 strains is 94%, and the DDH value is 55.7%, less than the cut-off values of 96% and 70%, respectively. The genome size of FWC3 is smaller (3.4 Mbp) compared to that of C50C1 (4.8 Mbp). Additionally, the FAME profile of FWC3 shows differences in cell wall fatty acid profiles compared to Methylotetracoccus oryzae C50C1. Also, there are other differences at the morphological, physiological and genomic levels.
We propose FWC3 to be a member of a novel species of the genus Methylotetracoccus, for which the name Methylotetracoccus aquaticus is proposed. Also, an amended description of the genus Methylotetracoccus gen. nov. is given here. FWC3 is available in two international culture collections with the accession numbers MCC 4198 (Microbial Culture Collection, India) and JCM 33786 (Japan Collection of Microorganisms, Japan). Determination of the Rheological Properties of Red and White Bread Wheat Flours with Different Methods Pelin Dölek Ekinci, Incilay Gökbulut Subject: Biology, Agricultural Sciences & Agronomy Keywords: Red bread wheat; white bread wheat; flour; rheological properties In this study, the rheological properties of bread wheat flour dough from 6 wheat genotypes were determined. For the preparation of flour, 3 red bread (Pandas, Sagitorya, Pehlivan) and 3 white bread (Kaşifbey, Göktan, Ceyhan-99) wheat genotypes were selected. To determine the rheological properties of the wheat flour dough, farinograph, extensograph, mixolab and glutograph devices were used. According to the results of the Farinograph analysis, the average development times of the white and red wheat genotypes were 1.95 minutes and 8.96 minutes, respectively. According to the extensograph results of the flour samples, the longest stability value, 7.47 min, was determined in the Ceyhan-99 cultivar. As a result of the research, it was determined that the flour yields of the red bread varieties were higher than those of the other genotypes, while the white bread flours showed higher gas retention capacities in the extensograph application and higher dough resistance to elongation. In the Mixolab analysis, it was determined that the white bread wheat varieties had higher values in terms of kneading and gluten properties, and the red bread wheat varieties had higher values for viscosity, amylase value and starch retrogradation. Antidiabetic Therapy in the Treatment of NASH Yoshio Sumida, Masashi Yoneda, Katsutoshi Tokushige, Miwa Kawanaka, Hideki Fujii, Masato Yoneda, Kento Imajo, Hirokazu Takahashi, Yuichiro Eguchi, Masafumi Ono, Yuichi Nozaki, Hideyuki Hyogo, Masahiro Koseki, Yuichi Yoshida, Takumi Kawaguchi, Yoshihiro Kamada, Takeshi Okanoue, Atsushi Nakajima Subject: Medicine & Pharmacology, Nutrition Keywords: Dipeptidyl peptidase-4; Fibroblast growth factor; Gastrointestinal peptide; Glucagon-like peptide 1; Glucagon receptor; Peroxisome proliferator-activated receptor; Sodium glucose cotransporter Liver-related diseases are the third leading cause (9.3%) of mortality in type 2 diabetes mellitus (T2DM) in Japan. T2DM is closely associated with nonalcoholic fatty liver disease (NAFLD), which is the most prevalent chronic liver disease worldwide. Nonalcoholic steatohepatitis (NASH), a severe form of NAFLD, can lead to hepatocellular carcinoma (HCC) and hepatic failure. There are no established pharmacotherapies for NASH patients with T2DM. Though vitamin E is established as a first-line agent in NASH without T2DM, its efficacy was recently denied in NASH with T2DM. The effects of pioglitazone on NASH histology with T2DM have been extensively established, but several concerns exist, such as body weight gain, fluid retention, cancer incidence, and bone fracture. Glucagon-like peptide 1 (GLP-1) receptor agonists and sodium/glucose cotransporter 2 (SGLT2) inhibitors are expected to ameliorate NASH (LEAN study, LEAD trial, and E-LIFT study).
Among a variety of SGLT2 inhibitors, dapagliflozin has already entered phase 3 trials (DEAN study). A key clinical question is what kinds of anti-diabetic drugs are the most appropriate for the treatment of NASH to prevent progression of hepatic fibrosis resulting in HCC/liver-related mortality, without increasing the risk of cardiovascular or renal events. Combination therapies such as glucagon receptor agonist/GLP-1 or gastrointestinal peptide/GLP-1 are under development. This review focuses on antidiabetic agents and future perspectives on the treatment of NAFLD in patients with T2DM. COIIoT - An Interface between CoAP and OSGP Protocols for the Integration of Internet of Things Devices with Smart Grids Felipe Viel, Luis Augusto Silva, Valderi Leithardt, Gabriel Villarubia González, Raimundo Celeste Ghizoni Teive, Cesar Albenes Zeferino Subject: Engineering, Other Keywords: internet of things; smart grids; protocol communication; interoperability; CoAP; OSGP The evolution and miniaturization of the technologies for processing, storage, and communication have enabled computer systems to process a high volume of information and make decisions without human intervention. Within this context, several system architectures and models have gained prominence, such as the Internet of Things (IoT) and Smart Grids (SGs). SGs use communication protocols to exchange information, among which the Open Smart Grid Protocol (OSGP) stands out. However, this protocol does not have integration support for IoT systems that use already consolidated communication protocols, such as the Constrained Application Protocol (CoAP). Thus, this work develops the integration of the OSGP and CoAP protocols to allow communication between conventional IoT systems and systems dedicated to SGs. Results demonstrate the effectiveness of this integration, with minimal impact on the flow of commands and data, making possible the use of the developed CoAP-OSGP Interface for the Internet of Things (COIIoT). Residual Compressive Strength of Short Tubular Steel Columns with Local Corrosion Damage Kyra Kamille Toledo, Hyoung-Seok Kim, Young-Soo Jeong, In-Tae Kim Subject: Engineering, Civil Engineering Keywords: residual compressive strength; steel; finite element analysis; short tubular steel column; local corrosion Corrosion is considered one of the main factors in the structural performance deterioration of steel members. In this study, experimental and numerical methods were used to assess the reduction in compressive strength of short tubular steel columns with local corrosion damage. The corrosion damage was varied with different depths (0, 1.5, 2, 3, 4, 4.5, and 6 mm), heights (0, 20, 40, 60, 80, 100, 120, 140, 160, and 180 mm), circumferences (0, 90, 180, 270, and 360°), and locations along the column. A parametric numerical study was performed to establish a correlation between the residual compressive strength and the severity of corrosion damage. The results showed that as the corrosion depth, height and circumference increased, the compressive strength decreased linearly. As for the corrosion height, the residual compressive strength became constant after decreasing linearly when the corrosion height was greater than the half-wavelength of buckling of the short columns. An equation is presented to evaluate the residual compressive strength of short columns with local corrosion, wherein the volume of the corrosion damage is used as a reduction factor in calculating the compressive strength.
The percentage error using the presented equation was found to be within 11.4%. To What Extent should We Rely on Antibiotics to Reduce High Gonococcal Prevalence? Historical Insights from Mass-meningococcal Campaigns Chris Kenyon Subject: Medicine & Pharmacology, Other Keywords: Neisseria gonorrhoeae; AMR; Neisseria meningitidis; commensal Neisseria In the absence of a vaccine, current antibiotic-dependent efforts to reduce the prevalence of Neisseria gonorrhoeae in high prevalence populations have been shown to result in extremely high levels of antibiotic consumption. No randomized controlled trials have been conducted to validate this strategy and an important concern of this approach is that it may induce antimicrobial resistance. To contribute to this debate, we assessed if mass treatment in the related species, Neisseria meningitidis, was associated with the emergence of antimicrobial resistance. To this end, we conducted a historical review of the effect of mass meningococcal treatment programmes on the prevalence of N. meningitidis and the emergence of antimicrobial resistance. We found evidence that mass treatment programmes were associated with the emergence of antimicrobial resistance.
To screen or not to screen: an interactive framework for comparing costs of mass malaria treatment interventions Justin Millar ORCID: orcid.org/0000-0002-6866-7544 1,2, Kok Ben Toh 2,3 & Denis Valle 1,2 Mass drug administration and mass-screen-and-treat interventions have been used to interrupt malaria transmission and reduce burden in sub-Saharan Africa. Determining which strategy will reduce costs is an important challenge for implementers; however, model-based simulations and field studies have yet to develop consensus guidelines. Moreover, there is often no way for decision-makers to directly interact with these data and/or models, incorporate local knowledge and expertise, and re-fit parameters to guide their specific goals. We propose a general framework for comparing costs associated with mass drug administrations and mass screen and treat based on the possible outcomes of each intervention and the costs associated with each outcome. We then used publicly available data from six countries in western Africa to develop spatially explicit probabilistic models to estimate intervention costs based on baseline malaria prevalence, diagnostic performance, and sociodemographic factors (age and urbanicity). In addition to comparing specific scenarios, we also develop interactive web applications which allow managers to select data sources and model parameters, and directly input their own cost values. The regional-level models revealed substantial spatial heterogeneity in malaria prevalence and diagnostic test sensitivity and specificity, indicating that a "one-size-fits-all" approach is unlikely to maximize resource allocation. For instance, urban communities in Burkina Faso typically had lower prevalence rates compared to rural communities (0.151 versus 0.383, respectively) as well as lower diagnostic sensitivity (0.699 versus 0.862, respectively); however, there was still substantial regional variation. Adjusting the cost associated with false negative diagnostic results to include additional costs, such as delayed treatment and potential lost wages, undermined the overall cost advantage of MSAT. The observed spatial variability and dependence on specified cost values support not only the need for location-specific intervention approaches but also the need to move beyond standard modeling approaches and towards interactive tools which allow implementers to engage directly with data and models. We believe that the framework demonstrated in this article will help connect modeling efforts and stakeholders in order to promote data-driven decision-making for the effective management of malaria, as well as other diseases. Malaria continues to be a significant contributor to the global burden of disease. Mass administration of antimalarial drug treatments (MDA) to entire populations has been used as an intervention strategy for reducing the global malaria burden [1, 2], particularly during elimination efforts in the early to mid-twentieth century [3]. Recently, however, interest in MDA as a viable malaria intervention strategy has reemerged, in particular in conjunction with emergency responses to non-malarial epidemics (e.g., the 2014–2015 Ebola outbreak in West Africa) [4,5,6,7] and seasonal malaria chemoprevention in the Sahel region [8].
Contemporary MDA interventions, primarily through intermittent preventive treatment and seasonal chemoprevention campaigns, have been used to interrupt malaria transmission in low endemicity settings [9], as well as reduce malaria burden in vulnerable subpopulations, such as young children and pregnant women, in high endemicity settings [10, 11]. In traditional MDA interventions, all individuals in a population or subpopulation receive treatment regardless of symptoms or other diagnostic information. This approach ensures that all sick individuals receive treatment; however, it also leads to overtreatment, which may increase the overall cost of MDA and undermine resource allocation. Modern antimalarial drugs, such as artemisinin-based combination therapy (ACT), can be expensive and often are in limited supply; therefore, wasting these resources on malaria-negative individuals can be costly [12, 13]. In addition, the overuse of ACTs can lead to an increased risk of antimalarial resistance [14]. An alternative approach to MDA is mass screen and treat (MSAT), which consists of first screening the population with a diagnostic test and then only treating individuals with a positive test outcome. Because microscopic evaluations and molecular techniques (e.g., polymerase chain reaction) are often not a viable option in remote regions and/or at large operational scales, diagnosis is increasingly based on rapid diagnostic tests (RDTs) throughout much of sub-Saharan Africa [1, 15]. RDT screening has been repeatedly shown to be a viable, cost-effective option for diagnosing malaria [16, 17]. The widespread use of RDTs has significantly reduced the use of antimalarial drugs, helping to reduce the risk of resistance emergence [18, 19]. In addition to the use of RDTs in clinical settings, MSAT relying on RDT outcomes has been shown to be a cost-effective method for reducing malaria burden in certain contexts [20]. However, determining when and where MSAT will reduce the cost of traditional MDA is an important challenge [9]. On the one hand, despite the potential of MSAT to reduce costs and overtreatment, there have been notable failures in field studies in terms of ensuring long-term improvements in health and educational indices [21, 22]. The emergence of resistance is a particularly important concern given the growing consensus that repeated interventions are necessary for sustaining the impact of MDA and MSAT [9, 23,24,25]. A recent Cochrane review has indicated that 182 studies have assessed the impact of these types of malaria interventions (MDA and its variants, including MSAT) [2]; however, few guidelines have emerged to help decision-makers determine when MSAT is a more cost-effective strategy than MDA [26,27,28,29]. In general, MSAT is thought to be the preferred approach in high- to mid-transmission settings [30], which was supported by the study by Crowell et al. [3]. In contrast, however, Walker et al. [31] found that MDA was more cost-effective than MSAT in all but the highest transmission settings, and noted that the slight cost deficit in these areas was likely offset by the additional period of prophylaxis provided to post-intervention infected individuals [32]. Gerardin et al. [33], on the other hand, argued that in control/pre-elimination settings, the cost of overtreatment by MDA may mitigate the detection advantage (i.e., ensuring all infected individuals are treated) and therefore undermine the cost-effectiveness of MDA compared to MSAT.
Corroborating these findings with field research has been difficult, as much of the observed data on recent MDA and MSAT applications is held in gray literature and unpublished reports [2, 9]. Multiple factors can influence the costs of MSAT relative to MDA. For example, unlike MDA, inaccurate diagnostic results in MSAT can lead to both overtreatment and undertreatment. Although RDTs have high overall sensitivity (above 93%) and specificity (above 95%), a comprehensive review of field studies found substantial heterogeneity in RDT performance [34]. Additionally, the detection mechanism differs among different types of RDTs. For example, commonly used HRP2-based RDTs such as Paracheck® will fail to detect infections caused by non-Plasmodium falciparum species or by P. falciparum parasites which carry mutations to the HRP-2 gene, resulting in false negative results [35, 36]. False positive results can also be an issue, as the HRP-2 protein can persist in the host for up to 2 to 3 weeks after parasitemia has cleared [36]. Ultimately, the potential cost-savings advantage of MSAT over MDA depends on baseline likelihood of infection (i.e., prevalence), RDT sensitivity and specificity, the costs of treatment and RDTs, and the costs associated with false positive and false negative results [33, 37]. However, many of these factors can vary substantially within sub-Saharan countries [34, 37], and as a result, it can be difficult to generalize which strategy is likely to reduce implementation costs in each country/region [38]. Nevertheless, identifying the local optimal strategy is important for stakeholders and policy implementers as national malaria control programs move away from a "one-size-fits-all" approach. In this article, we outline a conceptual framework for comparing the cost of malaria intervention strategies based on the probability of their possible outcomes and the costs associated with those outcomes, focusing on the comparison between MDA and MSAT. First, we demonstrate this comparative framework using hypothetical scenarios for each of these factors. Next, we create probabilistic models for estimating malaria prevalence and RDT performance using routinely collected national-scale survey data (e.g., Demographic and Health Surveys (DHSs) and Malaria Indicator Surveys (MISs)) to present a real-world application. Finally, using these models, we build an interactive web application which allows end-users to compare the expected intervention costs in each region within each country based on the inputted economic values, thereby extending these models into decision support tools which allow implementers to interact with the data and models directly. Estimating intervention costs The expected cost per person associated with MDA and MSAT interventions can be estimated as a function of the costs of implementing these interventions, the costs associated with the potential outcomes, and the probability of those outcomes. The possible outcomes for an individual participant in an MDA campaign are either true positive or false positive, whereas an individual participant in an MSAT campaign may also be true negative or false negative (Fig. 1). Conceptual framework for costs of mass drug administration (MDA) and mass screen and treat (MSAT). Flow diagram based on the potential outcomes and associated costs for each intervention. Testing and outcome costs are shown in blue and red, respectively.
FP and FN stand for false positive and false negative, respectively. To determine the probability of each of these potential outcomes, we calculate the likelihood of malaria infection p(M = 1), the sensitivity p(RDT = 1 | M = 1), and specificity p(RDT = 0 | M = 0) of the screening diagnostic test (model variables are defined in Table 1). In this framework, testing costs refer to materials used by the intervention (e.g., RDT, treatment with an ACT), and outcome costs refer to additional costs related to outcomes from the intervention (e.g., additional healthcare costs due to a false negative RDT result). Cost items include the cost of the diagnostic test used for screening (CostRDT), the cost of antimalarial treatment (CostTrt), and the outcome costs of false negatives (CostFN) and false positives (CostFP). Notice that the outcome costs of false negative or false positive diagnostic outcomes may incorporate multiple sources of cost (i.e., lost wages due to illness, deleterious impact of side effects, increased risk of antimalarial resistance) and that these costs may occur at varying levels of the overall healthcare system (e.g., provider costs, individual costs, societal costs). The per-person expected cost of MDA is given by: $$ E\left[\mathrm{Cost}_{\mathrm{MDA}}\right]=\mathrm{Cost}_{\mathrm{Trt}}+\mathrm{Cost}_{\mathrm{FP}}\times p\left(M=0\right) $$ Table 1 Parameter definitions for expected cost equations The per-person expected cost of MSAT is given by: $$ E\left[\mathrm{Cost}_{\mathrm{MSAT}}\right]=\mathrm{Cost}_{\mathrm{RDT}}+\mathrm{Cost}_{\mathrm{Trt}}\times p\left(\mathrm{RDT}=1\right)+\mathrm{Cost}_{\mathrm{FP}}\times p\left(\mathrm{RDT}=1\mid M=0\right)+\mathrm{Cost}_{\mathrm{FN}}\times p\left(\mathrm{RDT}=0\mid M=1\right) $$ where, based on the law of total probabilities, p(RDT = 1) is given by: $$ p\left(\mathrm{RDT}=1\right)=p\left(M=1\right)\times p\left(\mathrm{RDT}=1\mid M=1\right)+p\left(M=0\right)\times p\left(\mathrm{RDT}=1\mid M=0\right) $$ This framework could be augmented through the inclusion of additional layers of complexity, such as the inclusion of overall program-level costs and/or an expanded set of possible outcomes (e.g., likelihood of developing severe malaria and the associated costs), if data on these costs and outcomes were available. We elected to develop this individual-level framework which includes productivity losses, but note that it can be readily extended to more complex data and applications such as designating societal and healthcare provider costs. We used this framework to compare the potential cost of MDA and MSAT in two contexts. In the generalized comparisons, we compare costs across all possible baseline prevalence rates. The sensitivity, specificity, and costs of RDTs were based on a recent report from the WHO [26], and we make comparisons using hypothetical scenarios based on differing costs associated with false negative and false positive outcomes (Table 2). In the context-specific comparisons, we use publicly available data to construct models to estimate baseline prevalence as well as RDT sensitivity and specificity, and apply similar scenario-based comparisons.
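To make the expected-cost equations above concrete, the short R sketch below implements them directly. It is an illustration only: the function name and the default cost values are placeholders (the defaults echo the WHO-based treatment and RDT prices quoted later in the Fig. 2 caption), not part of the published tool.

```r
# Minimal sketch: per-person expected costs of MDA and MSAT,
# following the expected-cost equations above.
# All default values are illustrative placeholders.
expected_costs <- function(prev, sens, spec,
                           cost_rdt = 0.60, cost_trt = 2.40,
                           cost_fp = 0, cost_fn = 0) {
  # Probability of a positive RDT, from the law of total probability
  p_rdt_pos <- prev * sens + (1 - prev) * (1 - spec)

  # MDA: every participant is treated; uninfected individuals are the
  # "false positives" of this strategy
  cost_mda <- cost_trt + cost_fp * (1 - prev)

  # MSAT: every participant is screened, RDT-positives are treated, and the
  # misdiagnosis terms follow the equation in the text
  cost_msat <- cost_rdt +
    cost_trt * p_rdt_pos +
    cost_fp * (1 - spec) +   # p(RDT = 1 | M = 0)
    cost_fn * (1 - sens)     # p(RDT = 0 | M = 1)

  data.frame(cost_mda   = cost_mda,
             cost_msat  = cost_msat,
             value_added = cost_mda - cost_msat)  # positive values favor MSAT
}

# Example: prevalence of 0.38 with moderately accurate RDTs and a false
# negative cost equal to the treatment cost
expected_costs(prev = 0.38, sens = 0.86, spec = 0.85, cost_fn = 2.40)
```

Because the arithmetic is vectorized over prevalence, the same call can be used to trace cost curves across the full 0 to 1 prevalence range, as in the generalized comparisons described below.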
Table 2 Diagnostic performance and costs (in USD) used for baseline comparison Modeling outcome probability based on prevalence, sensitivity, and specificity Description of the data used for modeling Information on malaria status of children 5 years old and under was sourced from Demographic and Health and Malaria Indicator Surveys (DHS and MIS, respectively). These data are freely available, use standardized sampling procedures, and contain information on a broad range of malaria indicators, such as age, urbanicity, and fever history. Recent surveys were selected for Burkina Faso [40], Cote d'Ivoire [41], Ghana [42], Guinea [43], Nigeria [44], and Togo [45]. For each survey, the data are reported at the first order civil entity below the country level (commonly referred to as "administrative area 1"). The official names for these areas vary between countries; therefore, we adopt the term "regions" throughout the article for clarity. Note that this means that "regions" are operationally defined as sub-areas within individual countries. These West African countries were selected because they each contain relatively recent standardized country-wide surveys and included information on RDT and microscopy (assumed to be the "gold standard" in this region [46, 47]). The RDTs used in these surveys are specific to Plasmodium falciparum, which accounts for the vast majority of malaria infections in this region. Individual survey sample sizes ranged from 2713 to 6112 individuals, distributed across 6 to 13 regions per country (Table 3). Table 3 Summary of data sources Modeling prevalence, sensitivity, and specificity Malaria prevalence and diagnostic performance were estimated separately for each country using Bayesian mixed-effect logistic regression models. Microscopy (M) was considered the "gold standard" for detecting malaria infections in this region [46, 47], and RDT (R) was considered the screening diagnostic test. Let \(M_{ijk}\) represent the binary infection status (as determined by microscopy) of individual i from cluster j in region k. We assume that \(M_{ijk}\) is given by: $$ M_{ijk}\mid p_{ijk}\sim \mathrm{Bernoulli}\left(p_{ijk}\right) $$ where \(p_{ijk}\) is the probability of infection (e.g., prevalence). We constructed the model using just two basic covariates that could be relevant for the development of region-specific policy, namely age in months (\(\mathrm{Age}_{ijk}\)) and a binary classification of urban/rural environment (\(\mathrm{Urban}_{jk}\)) based on the survey's definition (see references in Table 3). Using the logit link function \( g(p)=\log \left(\frac{p}{1-p}\right) \), we model infection probability as: $$ g\left(p_{ijk}\right)=a_{jk}+\beta_{0,k}+\beta_{1,k}\times \mathrm{Age}_{ijk}+\beta_{2,k}\times \mathrm{Urban}_{jk}+\beta_{3}\times \mathrm{Age}_{ijk}+\beta_{4}\times \mathrm{Urban}_{jk} $$ This equation includes a cluster-level random-effect intercept \(a_{jk}\), regional-level fixed effects (i.e., intercepts \(\beta_{0,k}\) and slopes \(\beta_{1,k}\) and \(\beta_{2,k}\)), and country-level fixed effects (i.e., country-level slopes \(\beta_{3}\) and \(\beta_{4}\) for age and urbanicity). In relation to RDT, let \(R_{ijk}\) represent the binary test outcome (1 for positive, 0 for negative) of individual i from cluster j in region k. We assume that: $$ R_{ijk}\mid M_{ijk}=1,{Sn}_{ijk}\sim \mathrm{Bernoulli}\left({Sn}_{ijk}\right) $$ $$ R_{ijk}\mid M_{ijk}=0,{Sp}_{ijk}\sim \mathrm{Bernoulli}\left(1-{Sp}_{ijk}\right) $$ where \({Sn}_{ijk}\) and \({Sp}_{ijk}\) denote the RDT sensitivity and specificity, respectively.
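To illustrate how a hierarchical model of this form can be written down in practice, the sketch below shows one possible specification of the prevalence model using the brms package (the package described in the next paragraph). The data frame and column names (survey_data, microscopy, age_months, urban, region, cluster) are hypothetical stand-ins for the DHS/MIS variables, not the authors' actual code.

```r
# A minimal sketch (not the authors' exact code) of the prevalence model.
# Column names are hypothetical stand-ins for the DHS/MIS variables.
library(brms)

prevalence_fit <- brm(
  microscopy ~ age_months + urban +               # country-level slopes
    region + region:age_months + region:urban +   # region-level intercepts and slopes
    (1 | cluster),                                 # cluster-level random intercept
  data   = survey_data,                            # one country's survey records
  family = bernoulli(link = "logit"),
  chains = 4,
  iter   = 2000, warmup = 1000                     # 1000 burn-in + 1000 sampling iterations per chain
)

# Convergence check via the potential scale reduction factor (Rhat)
summary(prevalence_fit)
```

Under this sketch, analogous sensitivity and specificity models would use the RDT result as the response, restricted to microscopy-positive and microscopy-negative children, respectively.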
Both parameters were modeled with the same set of predictor variables and link function as \(p_{ijk}\). Individual models for prevalence, sensitivity, and specificity were created for each country using the "brms" package in the open-source R statistical software [48, 49]. We adopted the recommended priors based on our model specification (flat priors for the fixed effects, and half Student's t with 3 degrees of freedom and a scaling factor of 1 for the standard deviation of the random effects) [49,50,51]. Due to the large sample size in our data, these priors play very little to no role in influencing our results. The fully specified model with the full posterior is described in Supplemental Material 1. Each model was fitted using Hamiltonian Monte Carlo with 4 independent chains, each containing a 1000-iteration burn-in phase and a 1000-iteration sampling phase, resulting in 4000 posterior samples. Parameter convergence was determined using the potential scale reduction factor (convergence at \( \hat{R}<1.05 \)) [52]. Designing the interactive framework Based on the outlined cost functions and associated outcome probabilities, we developed an interactive framework which allows users to compare the relative costs of MDA and MSAT in each region (as defined by DHS/MIS) within each country. This was done using the "shiny" package in R [53], a package that enables the creation of web-based interactive applications directly from R code (instead of HTML, CSS, or JavaScript), which can then be freely hosted and accessed on the Internet. Web applications like this can facilitate engagement with stakeholders and policymakers with limited statistical and programming backgrounds. Examples of other epidemiological interactive tools developed in Shiny can be found in [54,55,56,57]. By using probabilistic models for specifying the outcome probabilities, we are able to compare intervention scenarios while incorporating uncertainty. Aside from inputting cost values, the interface allows users to select covariate values (i.e., country, age range, and urbanicity), which then results in an update of the cost comparison in real time. We used the "leaflet" package in R to create an interactive map-based visualization of the cost comparison [58]. The code used to create this tool is available at https://github.com/justinmillar/mda-msat. We also constructed a "generalized" version of the application, which allows the user to specify a range for RDT sensitivity and specificity and compare MDA versus MSAT over all possible prevalence rates (rather than estimating these parameters from data). General comparisons Using general cost and diagnostic accuracy parameters from the WHO [26] (Table 2), MSAT is preferred in nearly all but the highest disease burden settings when the cost of false negatives is ignored (Fig. 2a). However, the costs associated with MSAT increase in higher prevalence scenarios once the cost of false negatives is assumed to be equal to the cost of treatment (e.g., an RDT-negative individual eventually receives treatment; cost of $2.40 [26]) (Fig. 2b). The individual-level cost of MSAT can be further undermined if the overall economic burden of a false negative includes additional costs.
To demonstrate this, if we assume a scenario where a person whose child has a false negative result also incurs a day of lost wages (cost of $23.95, based on the hourly rate, assuming 8 h per workday and the median monthly income in sub-Saharan Africa from World Bank estimates [39]), in order to take their child to a health facility to receive care, then screen and treat yields significantly higher costs (Fig. 2c). Note that this is just one potential scenario where the cost of a false outcome can drastically shift the associated costs. Other scenarios, such as a false negative leading to a severe malaria infection, could decrease the benefit of MSAT. Finally, the primary effect of including a cost associated with false positives is raising the cost of MDA treatment in lower burden settings, which is eventually offset by costs associated with misdiagnosis in the screen-and-treat scenario as prevalence increases (Fig. 2d). Costs of mass drug administration (MDA) and mass screen and treat (MSAT) based on malaria prevalence. Each panel depicts a different scenario relative to the costs associated with false positive (FP) and false negative (FN) outcomes, as specified in the legends. The lower estimated cost (y-axis) indicates which strategy will have lower associated costs for a given prevalence rate (x-axis). RDT sensitivity and specificity ranged from 0.82 to 0.96 and 0.80 to 0.90, respectively, and the cost of treatment and RDT were set to $2.40 and $0.60, based on a WHO report [26]. The gray-shaded region indicates overlap in expected cost, where the more favorable strategy is unclear due to the range of possible values for RDT sensitivity and specificity. In the following section, we present similar scenario-based realizations using the models fit with national survey data. These scenarios represent just a small subset of all the possible scenarios. We believe this demonstrates the utility of using interactive decision support tools, which can re-create cost comparisons based on user-defined scenarios. Also note that these models are fit using data from young children (6 to 59 months old), and therefore, the displayed results are related to this particular subpopulation. Context-specific observations based on national-scale survey data Individual models for malaria prevalence and RDT sensitivity and specificity rates for young children were fitted for all six country datasets. In each model, all parameters reached convergence based on the potential scale reduction factor (all \( \hat{R} \) values ranged between 0.999 and 1.012). Details on each model are provided in Supplemental Material 1, and the regional prevalence, sensitivity, and specificity estimates are provided in Supplemental Material 2. The following sections illustrate the influence of the regression parameter estimates, cost scenarios, and parameter uncertainty on the cost comparison using survey data from Burkina Faso as a representative example. Effect of false negatives Designating costs specifically related to incorrect diagnostic results can have profound impacts on the cost of MSAT. Consider the rural communities in Burkina Faso (Fig. 3), which fall within the mid- to high-transmission setting where MSAT is considered to be viable and potentially cost-effective relative to MDA, and where both clinical trials and mathematical studies have examined the effectiveness of MSAT [22, 24, 59]. We chose a favorable cost setting for MSAT by setting RDT cost to $0.60 and antimalarial treatment cost to $2.55.
These costs correspond to the lower and higher ends of RDT and antimalarial prices, respectively, based on a recent WHO report [26]. Figure 3a shows the estimated value added from screening per individual for rural communities in Burkina Faso assuming no cost associated with false negatives or false positives. As expected, we find that under these conditions MSAT is favored in most regions in Burkina Faso, and there are no regions that favor MDA. However, MSAT becomes relatively more costly once we attribute a cost to false negative outcomes. When the cost of false negatives is set to the cost of the antimalarial treatment, which corresponds to a scenario where all truly infected individuals eventually pay to receive treatment, MSAT becomes less favorable. MSAT is favored in only three regions (which had relatively low prevalence and higher sensitivity), and there are more regions where MDA may be more favorable or where there is little expected cost difference between MDA and MSAT (Fig. 3b). MSAT becomes relatively more costly as the cost associated with false negatives increases. Under the hypothetical scenario where a false negative also incurs the loss of 1 day's wages based on the minimum wage in Burkina Faso ($6.37 per day, based on [39]), MDA is favored in all but two regions, despite relatively high prevalence rates (ranging from 0.26 (CI 0.18–0.36) to 0.66 (CI 0.53–0.77)) (Fig. 3c).

Fig. 3 Value added from screen then treat among rural communities in Burkina Faso. Regional maps of the mean value added (i.e., MDA costs minus MSAT costs) and boxplots of value added estimates are shown in the left and right panels, respectively. Positive values (blue) indicate regions where MSAT is favored, whereas negative values (red) indicate regions where MDA is favored. Whiskers in boxplots indicate 95% credible intervals. When these intervals contain both positive and negative values, there is no significant difference in costs between strategies (gray). Costs of the diagnostic test (RDT) and treatment were set at $0.60 and $2.55, respectively. a Value added estimates ignoring any potential costs associated with false negative results. b Value added estimates when the cost associated with false negative results is set to the cost of receiving delayed treatment. c Value added estimates when the cost associated with false negative results includes the cost of receiving delayed treatment and 1 day of lost wages (based on minimum wage [39]). In all of these scenarios, we assume no cost associated with false positive outcomes.

Differences between urban and rural communities

As outlined earlier, there are many factors that influence malaria prevalence and RDT performance and may therefore influence the costs of MDA and MSAT. One factor that strongly shapes these variables is the difference between urban and rural communities. Although there is considerable regional-level variability in malaria prevalence within rural communities in Burkina Faso, rural communities consistently have higher prevalence than their urban counterparts (Fig. 4). Regional RDT sensitivity and specificity rates also consistently decline from rural to urban communities (see [34] for RDT performance at varying baseline prevalence). These differences between urban and rural communities can affect the costs of both intervention strategies. Figure 5 depicts the same cost scenarios as Fig. 3 for both rural and urban communities in Burkina Faso.
These comparisons generally indicate that higher prevalence rural communities will tend to favor MSAT while lower prevalence urban communities tend to favor MDA, although these comparisons depend greatly on the cost assumptions. The Nord region was the only region where both urban and rural communities favored MSAT in all cost scenarios. Interestingly, this was not linked to the regional prevalence rates, which were not very different from those of the other regions (0.16 and 0.47 for urban and rural communities, respectively), but was instead associated with the region having the highest sensitivity and specificity rates for both community types. This result suggests that even under a cost scenario that strongly favors one approach (in this case MDA), there was still a region where MSAT was favored, which highlights the importance of diagnostic accuracy in assessing costs at the country level.

Fig. 4 Comparison of regional-level differences in malaria prevalence and RDT sensitivity and specificity in Burkina Faso for urban and rural areas. Mean value estimates (circles) and 95% credible intervals (vertical bars) for each region are based on 1000 posterior draws from each model. Each region has a unique shade and is connected by a dotted line to depict differences in parameter estimates between urban and rural communities.

Fig. 5 Urban versus rural comparison of value added from screen then treat in Burkina Faso. Comparison of regional value added (per individual) from diagnostic screening (MDA costs minus MSAT costs) between urban (left plots) and rural (right plots) communities. Positive values (blue) favor MSAT, negative values (red) favor MDA, and 95% interval ranges that contain both positive and negative values indicate no significant difference (gray). Costs of the diagnostic test (RDT) and treatment were set at $0.60 and $2.55, respectively. a Value added estimates ignoring any potential costs associated with false negative results. b Value added estimates when the cost associated with false negative results is set to the cost of receiving delayed treatment. c Value added estimates when the cost associated with false negative results includes the cost of receiving delayed treatment and 1 day of lost wages (based on minimum wage [39]). In all of these scenarios, we assume no cost associated with false positive outcomes.

Estimating regional breakpoints for treatment and RDT costs

In addition to comparing the costs of MDA and MSAT directly based on specific values for the direct costs of treatments and diagnostic tests, it may also be valuable to identify the breakpoints for these costs. For example, under different scenarios of false positive and false negative costs, we can use the estimated regional malaria prevalence, RDT sensitivity, and RDT specificity to determine the costs of RDT and treatment for which MDA (or MSAT) would be less costly. Figure 6 illustrates this using the scenarios from the previous examples for Burkina Faso, assuming no cost associated with false negatives (Fig. 6a) and including the loss of 1 day's wages in the cost associated with false negatives (Fig. 6b). Notice that, as expected, for a given cost of RDT the cost comparison will tend to favor MSAT as the price of treatment increases. Similarly, for a given cost of treatment, MDA is favored as the RDT cost increases. This breakpoint tends to be lower in rural communities, which typically have higher prevalence rates, and increases when we include costs associated with false negative results.
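Under the simplified per-person cost structure assumed in the earlier R sketch (and with no false positive cost, as in these scenarios), such a breakpoint has a simple closed form. This is an illustration of the idea only; the authors' exact cost components are defined in their Table 1 and may differ.

\[
c_{\text{treat}}^{*} \;=\; \frac{c_{\text{RDT}} + p\,(1 - Se)\,c_{\text{FN}}}{1 - p\,Se - (1 - p)(1 - Sp)},
\]

where \(p\) is the regional prevalence, \(Se\) and \(Sp\) are the RDT sensitivity and specificity, and \(c_{\text{FN}}\) is the cost attached to a false negative. The denominator equals the expected fraction of individuals who test negative, so it is positive. Treatment costs above \(c_{\text{treat}}^{*}\) favor MSAT (cost points above the line), and attaching a larger cost to false negatives raises the breakpoint, consistent with the patterns described above.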
For example, the dashed vertical and horizontal lines demonstrate this relationship using the RDT and treatment costs from the previous example ($0.60 and $2.55, respectively). Based on these costs, MDA will be favored in most rural districts, whereas MSAT may be favored in the urban communities of some districts. These figures illustrate the critical role of the ratio of treatment cost to screening cost in comparing the costs of these interventions, and show that regional characteristics (prevalence and diagnostic test performance) strongly mediate the relationship between this cost ratio and differences in intervention costs. For example, the Hauts-Bassins region is highlighted to demonstrate how including an outcome cost associated with false negative results leads to a substantial shift towards favoring MSAT.

Fig. 6 Regional breakpoints for the costs of treatment and diagnostic test (RDT) in Burkina Faso. Cost points above the line favor MSAT in that region, whereas cost points below the line favor MDA. Rural and urban communities are depicted in the left and right columns, respectively. a No cost associated with false negative results. b A cost associated with false negative results that includes the cost of 1 day of lost wages (based on minimum wage [39]). Each line represents a different region, and the blue line emphasizes the Hauts-Bassins region. In all of these scenarios, we assume no cost associated with false positive outcomes.

Interactive applications

The application based on the "General comparisons" section is available at https://jjmillar.shinyapps.io/msat-general/. Following the conceptual framework (Fig. 1), this tool allows the user to set the costs of treatment and diagnostic test (RDT), the costs of false positive and false negative outcomes, and a range for the potential diagnostic sensitivity and specificity. The tool relies on these user-defined inputs to estimate the costs of MDA and MSAT across all possible prevalence levels, generating an output similar to the plots in Fig. 3. The application based on the data from national surveys and the modeled outcomes for prevalence, sensitivity, and specificity is available at https://jjmillar.shinyapps.io/msat-example/. This tool allows the user to select the country and setting (e.g., age range and urban or rural communities), as well as the same cost parameters as in the first tool. The outputs are similar to the comparisons shown in Figs. 4, 5, and 6.

As described in the comprehensive review by Newby et al. [9], many studies have been published on the impact and effectiveness of MDA and MSAT. Implementation of both interventions has seen both successes and failures. Moreover, simulations based on mathematical models comparing the costs of MDA and MSAT have in some cases provided conflicting recommendations [3, 31,32,33]. Rather than using mathematical models, our contribution to this important question of resource allocation focuses on using statistical models to quantify the likelihood of different outcomes associated with each intervention. In addition, we use these statistical models to power interactive decision support tools, in which users set the cost parameters and interact directly with inference from our statistical models in an open-source, cross-platform format. We have prioritized a methodology that is maximally flexible and accessible, including using familiar publicly available data and relying on relatively simple statistical models and tools developed in open-source and free software.
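As an illustration of the Shiny pattern behind tools like those described above, below is a minimal, self-contained sketch of an app in which the user adjusts cost inputs and the expected per-person costs of MDA and MSAT are re-plotted across prevalence in real time. The input names, default values, and the cost function are illustrative assumptions and are not taken from the authors' actual applications (which are available at the shinyapps.io links above).

```r
library(shiny)

# Illustrative per-person expected costs (same simplified form as sketched earlier)
expected_costs <- function(prev, sens, spec, cost_rdt, cost_treat, cost_fn) {
  mda  <- rep(cost_treat, length(prev))
  msat <- cost_rdt +
    (prev * sens + (1 - prev) * (1 - spec)) * cost_treat +
    prev * (1 - sens) * cost_fn
  list(mda = mda, msat = msat)
}

ui <- fluidPage(
  titlePanel("MDA vs MSAT: expected cost per person (sketch)"),
  sidebarLayout(
    sidebarPanel(
      sliderInput("cost_rdt",   "Cost of RDT ($)",              0, 5,  value = 0.60, step = 0.05),
      sliderInput("cost_treat", "Cost of treatment ($)",        0, 10, value = 2.40, step = 0.05),
      sliderInput("cost_fn",    "Cost of a false negative ($)", 0, 30, value = 0,    step = 0.5),
      sliderInput("sens", "RDT sensitivity", 0.5, 1, value = 0.90, step = 0.01),
      sliderInput("spec", "RDT specificity", 0.5, 1, value = 0.85, step = 0.01)
    ),
    mainPanel(plotOutput("cost_plot"))
  )
)

server <- function(input, output) {
  output$cost_plot <- renderPlot({
    prev  <- seq(0, 1, by = 0.01)
    costs <- expected_costs(prev, input$sens, input$spec,
                            input$cost_rdt, input$cost_treat, input$cost_fn)
    plot(prev, costs$msat, type = "l", col = "blue",
         ylim = range(c(costs$mda, costs$msat)),
         xlab = "Malaria prevalence", ylab = "Expected cost per person ($)")
    lines(prev, costs$mda, col = "red")
    legend("topleft", legend = c("MSAT", "MDA"), col = c("blue", "red"), lty = 1)
  })
}

shinyApp(ui = ui, server = server)
```

Such an app can be hosted on any Shiny server (or run locally), which is what makes the approach platform-independent for end users.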
There are, however, several notable limitations associated with these data. First, in order to model malaria status and diagnostic sensitivity and specificity, we assumed microscopy to be the gold standard. While microscopy is considered the gold standard in western sub-Saharan Africa [46, 47], it is well established that microscopy-based diagnoses are imperfect [47, 60, 61] and that several factors (e.g., parasite density) can influence the accuracy of microscopy [62]. Additionally, while microscopy prevalence and RDT prevalence are strongly correlated, other research on DHS data has shown that this relationship is not always linear and that there are additional factors (such as the proportion of febrile individuals) that can influence it [63]. By using microscopy as a gold standard, our analysis also ignores submicroscopic infections, which typically improve the cost-effectiveness of MDA relative to MSAT. Submicroscopic infections can be significant reservoirs for sustaining transmission; however, they are most impactful in older populations in low-transmission settings and elimination-phase interventions [24, 64, 65], rather than among young children in endemic settings. We believe that data from more precise diagnostic tests (such as polymerase chain reaction) should be used if they become available at national scales. Our methodology can easily be modified to produce better estimates of RDT sensitivity and specificity that account for submicroscopic infections, leading to improved cost comparisons. Second, the approach for estimating the costs of MDA and MSAT presented in this article is intentionally simplistic, because data that would support a more comprehensive national-level analysis for multiple West African countries were not available. For instance, in addition to the individual-level costs, there are also fixed programmatic-level costs of implementing each intervention, which are unlikely to be equivalent for MDA and MSAT (e.g., RDT-based interventions require additional training and storage). Furthermore, we discuss the individual-level cost of receiving the intervention, but do not take into account how these interventions influence morbidity/mortality metrics (e.g., disability-adjusted life-years). Including data on incremental outcomes would allow for comparing effectiveness, rather than just overall costs. In particular, determining which metric to use for the cost-effectiveness comparison is important for defining the goal of the intervention (e.g., interrupting transmission or reducing malaria burden). We also note that the simplified cost analysis relying on productivity losses may disproportionately favor MSAT in some of the hypothetical scenarios. More detailed cost analyses, including stratified cost sources, are still needed and may yield results different from those of the hypothetical scenarios used to demonstrate the framework. These could be implemented by adding further compartments to the cost components in Table 1. Finally, if national-level data on malaria prevalence and RDT performance were available for febrile patients seeking help at health facilities, tools like the one described in this article could be developed for clinical settings to determine where and when test and treat would be a better option than presumptive treatment.
In this case, one could also account for other potential outcomes, such as developing severe malaria (which may have a much higher associated cost), not adhering to a diagnostic test result, and the development of adverse side effects of treatment. Such a tool may be useful for improving adherence to national policies regarding treatment protocols [66,67,68]. The WHO recently identified a pressing need for modeling-based approaches to guide the selection of optimal interventions under different epidemiological conditions [69]. Decision support tools designed specifically for malaria control, particularly those using mapping approaches and geostatistical models, have become more prevalent in recent years as national survey data have become more broadly accessible [70,71,72,73]. Our framework aims to provide a decision support tool for stakeholders to compare the costs associated with MDA and MSAT in different regions. We have demonstrated that such tools can be created and adapted using a standard, open-source program, helping to bridge the gap between methodological advancements and real-world decision-making. This is an important extension, as traditional scientific articles are often not an effective way to communicate the practical implications of complex analyses to policymakers [74]. These decision support tools are critically important as national malaria control programs have identified the need to move away from "one-size-fits-all" interventions and require tools for identifying optimal interventions based on location-specific conditions [75]. The applications presented in this article contribute to the growing pool of decision support tools for guiding malaria control interventions. Similar to the work by Lubell et al. [70], one valuable characteristic of the support tool presented here is interactivity. Standard results presented in scientific articles are limited to the parameters or scenarios selected by the scientists, which do not necessarily match those that would have been chosen by decision-makers. Furthermore, decision-makers must often consider additional information that is unknown to or unaccounted for by the original developer, which is only possible if users can explore different scenarios within the decision support tool. However, just as statistical models can be misused when their underlying assumptions are not understood, decision support tools can be prone to misinterpretation if not carefully designed and described [76]. Furthermore, important properties of the tools presented in this article are that they rely solely on a freely available open-source program, can be hosted as web applications (which makes them platform-independent), and can be built entirely using software regularly used in epidemiological research. Finally, by using a Bayesian framework for modeling the data and enabling user-defined inputs for cost parameters, our application allows stakeholders to make informed decisions while taking into account uncertainty in the outcomes under different cost scenarios.

We present a flexible framework for comparing the costs of MDA and MSAT and use this framework, along with publicly available malaria data from national-scale surveys, to construct an interactive decision support tool. The methodology used to create this tool addresses critical issues with previous decision support tools for guiding malaria interventions (e.g., cross-platform availability, open-source implementation, real-time interactivity) and can be built using widely used open-source software.
The tool provides a platform for decision-makers (who may not have a strong statistical background) to interact with statistical models and adjust the parameters to fit their context and external knowledge in order to support data-driven decision-making. We believe that similar decision support tools, designed to fit specific malaria interventions and contexts, will be valuable assets for guiding data-driven decision-making for malaria control and elimination in a way that recognizes the inherent differences between regions.

The code used to create this analysis, including the modeling outputs and interactive applications, is available at https://github.com/justinmillar/mda-msat. The Shiny applications are available at https://jjmillar.shinyapps.io/msat-example/ and https://jjmillar.shinyapps.io/msat-general/. The GitHub repository also details how to run the applications locally in R. The data were sourced from publicly available national surveys, listed in Table 3.

DHS: Demographic and Health Survey
MIS: Malaria Indicator Survey
MSAT: Mass screen and treat

Greenwood B. The use of anti-malarial drugs to prevent malaria in the population of malaria-endemic areas. Am J Trop Med Hyg. 2004;70(1):1–7. Poirot E, Skarbinski J, Sinclair D, Kachur SP, Slutsker L, Hwang J. Mass drug administration for malaria. In: Cochrane Database of Systematic Reviews [Internet]: Wiley; 2013. Available from: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD008846.pub2/abstract. Crowell V, Briët OJ, Hardy D, Chitnis N, Maire N, Pasquale AD, et al. Modelling the cost-effectiveness of mass screening and treatment for reducing Plasmodium falciparum malaria burden. Malar J. 2013;12:4. Aregawi M, Smith SJ, Sillah-Kanu M, Seppeh J, Kamara ARY, Williams RO, et al. Impact of the mass drug administration for malaria in response to the Ebola outbreak in Sierra Leone. Malar J. 2016;15:480. Eisele TP, Bennett A, Silumbe K, Finn TP, Chalwe V, Kamuliwo M, et al. Short-term impact of mass drug administration with dihydroartemisinin plus piperaquine on malaria in southern province Zambia: a cluster-randomized controlled trial. J Infect Dis. 2016;214(12):1831–9. Tripura R, Peto TJ, Chea N, Chan D, Mukaka M, Sirithiranont P, et al. A controlled trial of mass drug administration to interrupt transmission of multidrug-resistant falciparum malaria in Cambodian villages. Clin Infect Dis. 2018;67(6):817–26. Landier J, Kajeechiwa L, Thwin MM, Parker DM, Chaumeau V, Wiladphaingern J, et al. Safety and effectiveness of mass drug administration to accelerate elimination of artemisinin-resistant falciparum malaria: a pilot trial in four villages of Eastern Myanmar. Wellcome Open Res. 2017;2 [cited 2019 Mar 4]. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5635445/. Coldiron ME, Von Seidlein L, Grais RF. Seasonal malaria chemoprevention: successes and missed opportunities. Malar J. 2017;16(1):481. Newby G, Hwang J, Koita K, Chen I, Greenwood B, von Seidlein L, et al. Review of mass drug administration for malaria and its operational challenges. Am J Trop Med Hyg. 2015;93(1):125–34. Wilson AL, on behalf of the IPTc Taskforce. A systematic review and meta-analysis of the efficacy and safety of intermittent preventive treatment of malaria in children (IPTc). PLoS One. 2011;6(2):e16976. Aponte JJ, Schellenberg D, Egan A, Breckenridge A, Carneiro I, Critchley J, et al.
Efficacy and safety of intermittent preventive treatment with sulfadoxine-pyrimethamine for malaria in African infants: a pooled analysis of six randomised, placebo-controlled trials. Lancet. 2009;374(9700):1533–42. Bell D, Wongsrichanalai C, Barnwell JW. Ensuring quality and access for malaria diagnosis: how can it be achieved? Nat Rev Microbiol Lond. 2006;4(9):S7–20. Abeku TA, Kristan M, Jones C, Beard J, Mueller DH, Okia M, et al. Determinants of the accuracy of rapid diagnostic tests in malaria case management: evidence from low and moderate transmission settings in the East African highlands. Malar J. 2008;7:202. D'Acremont V, Lengeler C, Mshinda H, Mtasiwa D, Tanner M, Genton B. Time to move from presumptive malaria treatment to laboratory-confirmed diagnosis and treatment in African children with fever. PLoS Med. 2009;6(1):e252. Maltha J, Gillet P, Jacobs J. Malaria rapid diagnostic tests in endemic settings. Clin Microbiol Infect. 2013;19(5):399–407. Lubell Y, Reyburn H, Mbakilwa H, Mwangi R, Chonya S, Whitty CJ, et al. The impact of response to the results of diagnostic tests for malaria: cost-benefit analysis. BMJ. 2008;336(7637):202–5. Uzochukwu BS, Obikeze EN, Onwujekwe OE, Onoka CA, Griffiths UK. Cost-effectiveness analysis of rapid diagnostic test, microscopy and syndromic approach in the diagnosis of malaria in Nigeria: implications for scaling-up deployment of ACT. Malar J. 2009;8:265. Ansah EK, Epokor M, Whitty CJM, Yeung S, Hansen KS. Cost-effectiveness analysis of introducing RDTs for malaria diagnosis as compared to microscopy and presumptive diagnosis in central and peripheral public health facilities in Ghana. Am J Trop Med Hyg. 2013;89(4):724–36. Thiam S, Thior M, Faye B, Ndiop M, Diouf ML, Diouf MB, et al. Major reduction in anti-malarial drug consumption in Senegal after nation-wide introduction of malaria rapid diagnostic tests. PLoS One. 2011;6(4):e18419. Silumbe K, Yukich JO, Hamainza B, Bennett A, Earle D, Kamuliwo M, et al. Costs and cost-effectiveness of a large-scale mass testing and treatment intervention for malaria in Southern Province, Zambia. Malar J. 2015;14:211. Halliday KE, Okello G, Turner EL, Njagi K, Mcharo C, Kengo J, Allen E, Dubeck MM, Jukes MCH, Brooker SJ. Impact of intermittent screening and treatment for malaria among school children in Kenya: a cluster randomized trial [Internet]: The World Bank; 2014. p. 69. [cited 2019 Sep 29]. (Policy Research Working Papers). Available from: https://elibrary.worldbank.org/doi/abs/10.1596/1813-9450-6791. Tiono AB, Ouédraogo A, Ogutu B, Diarra A, Coulibaly S, Gansané A, et al. A controlled, parallel, cluster-randomized trial of community-wide screening and treatment of asymptomatic carriers of Plasmodium falciparum in Burkina Faso. Malar J. 2013;12:79. Okella L, Slater H, Ghani A, Pemberton-Rossb P, Smith TA, Chitnis N, et al. Consensus modelling evidence to support the design of mass drug administration programmes. In: Malaria Policy Advisory Committee meeting. In; 2015. p. 16–8. Slater HC, Ross A, Ouédraogo AL, White LJ, Nguon C, Walker PGT, et al. Assessing the impact of next-generation rapid diagnostic tests on Plasmodium falciparum malaria elimination strategies [Internet]. Nature. 2015; [cited 2018 Mar 9]. Available from: https://www.nature.com/articles/nature16040. Brady OJ, Slater HC, Pemberton-Ross P, Wenger E, Maude RJ, Ghani AC, et al. Role of mass drug administration in elimination of Plasmodium falciparum malaria: a consensus modelling study. Lancet Glob Health. 2017;5(7):e680–7. 
WHO | Cost-effectiveness of malaria diagnostic methods in sub-Saharan Africa in an era of combination therapy [Internet]. WHO. [cited 2018 Jul 2]. Available from: http://www.who.int/bulletin/volumes/86/2/07-042259/en/. WHO | Meeting of the Evidence Review Group meeting on mass drug administration for malaria [Internet]. WHO. World Health Organization; [cited 2020 Mar 11]. Available from: http://www.who.int/malaria/meetings/2018/erg-mass-drug-administration/en/. Organization WH. Mass drug administration for falciparum malaria: a practical field manual. 2017. Kaehler N, Adhikari B, Cheah PY, Day NPJ, Paris DH, Tanner M, et al. The promise, problems and pitfalls of mass drug administration for malaria elimination: a qualitative study with scientists and policymakers. Int Health. 2019;11(3):166–76. World Health Organization. World malaria report 2015 [Internet]. 2016. Available from: http://apps.who.int/iris/bitstream/10665/252038/1/9789241511711-eng.pdf?ua=1. Walker PGT, Griffin JT, Ferguson NM, Ghani AC. Estimating the most efficient allocation of interventions to achieve reductions in Plasmodium falciparum malaria burden and transmission in Africa: a modelling study. Lancet Glob Health. 2016;4(7):e474–84. Okell LC, Griffin JT, Kleinschmidt I, Hollingsworth TD, Churcher TS, White MJ, et al. The potential contribution of mass treatment to the control of Plasmodium falciparum malaria. PLoS One. 2011;6(5):e20179. Gerardin J, Bever CA, Hamainza B, Miller JM, Eckhoff PA, Wenger EA. Optimal population-level infection detection strategies for malaria control and elimination in a spatial model of malaria transmission. PLoS Comput Biol. 2016;12(1):e1004707. Abba K, Deeks JJ, Olliaro PL, Naing C-M, Jackson SM, Takwoingi Y, et al. Rapid diagnostic tests for diagnosing uncomplicated P. falciparum malaria in endemic countries. In: Cochrane Database of Systematic Reviews [Internet]: Wiley; 2011. Available from: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD008122.pub2/abstract. Moody A. Rapid diagnostic tests for malaria parasites. Clin Microbiol Rev. 2002;15(1):66–78. Murray CK, Gasser RA, Magill AJ, Miller RS. Update on rapid diagnostic testing for malaria. Clin Microbiol Rev. 2008;21(1):97–110. Valle D, Millar J, Amratia P. Spatial heterogeneity can undermine the effectiveness of country-wide test and treat policy for malaria: a case study from Burkina Faso. Malar J. 2016;15:513. Drake TL, Lubell Y. Malaria and economic evaluation methods: challenges and opportunities. Appl Health Econ Health Policy. 2017;15(3):291–7. Bhorat H, Kanbur R, Stanwix B. Minimum wages in sub-Saharan Africa: a primer. World Bank Res Obs. 2017;32(1):21–74. Institut National de la Statistique et de la Démographie - INSD/Burkina Faso, ICF International. Burkina Faso Enquête Démographique et de Santé et à Indicateurs Multiples (EDSBF-MICS IV) 2010 [Internet]. Calverton: Institut National de la Statistique et de la Dmographie - INSD/Burkina Faso and ICF International; 2012. Available from: http://dhsprogram.com/pubs/pdf/FR256/FR256.pdf. Institut National de la Statistique - INS/Côte d'Ivoire, ICF International. Côte d'Ivoire Enqêute Démographique et de Santé et à Indicateurs Multiples 2011–2012 [Internet]. Calverton: INS/Côte d'Ivoire and ICF International; 2013. Available from: http://dhsprogram.com/pubs/pdf/FR272/FR272.pdf. Ghana Statistical Service. Ghana multiple Indicator cluster survey with an enhanced malaria module and biomarker 2011 [internet]. Accra: Ghana Statistical Service; 2012. 
Available from: http://dhsprogram.com/pubs/pdf/FR262/FR262.pdf. Direction Nationale de la Statistique - DNS/Guinée, ORC Macro. Guinée Enquête Démographique et de Santé 2005 [Internet]. Calverton: DNS/ Guinée and ORC Macro; 2006. Available from: http://dhsprogram.com/pubs/pdf/FR162/FR162.pdf. National Population Commission - NPC/Nigeria, ICF International. Nigeria Demographic and Health Survey 2013 [Internet]. Abuja: NPC/Nigeria and ICF International; 2014. Available from: http://dhsprogram.com/pubs/pdf/FR293/FR293.pdf. Ministère de la Planification, du Développement et de l'Aménagement du Territoire (MPDAT), Ministère de la. Santé - MS/Togo, ICF International. Togo Enquête Démographique et de Santè 2013–2014 [Internet]. Rockville: MPDAT/Togo, MS/Togo and ICF International; 2015. Available from: http://dhsprogram.com/pubs/pdf/FR301/FR301.pdf. Hamer DH, Ndhlovu M, Zurovac D, Fox M, Yeboah-Antwi K, Chanda P, et al. Improved diagnostic testing and malaria treatment practices in Zambia. JAMA. 2007;297(20):2227–31. Wongsrichanalai C, Barcus MJ, Muth S, Sutamihardja A, Wernsdorfer WH. A review of malaria diagnostic tools: microscopy and rapid diagnostic test (RDT). Am J Trop Med Hyg. 2007;77(6_Suppl):119–27. R Core Team. R: a language and environment for statistical computing [internet]. Vienna, Austria: R Foundation for Statistical Computing; 2017. Available from: https://www.R-project.org/. brms: An R Package for Bayesian Multilevel Models Using Stan | Bürkner | Journal of Statistical Software. [cited 2018 Sep 26]; Available from: https://www.jstatsoft.org/article/view/v080i01. Gelman A. Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). Bayesian Anal. 2006;1(3):515–34. Bürkner P-C. Advanced Bayesian multilevel modeling with the R package brms. R J. 2018;10(1):395–411. Gelman A, Rubin DB. Inference from iterative simulation using multiple sequences. Stat Sci. 1992;7(4):457–72. Chang W, Cheng J, Allaire JJ, Xie Y, McPherson J. shiny: web application framework for R [Internet]. 2017. Available from: https://CRAN.R-project.org/package=shiny. Smith CM, Hayward AC. DotMapper: an open source tool for creating interactive disease point maps. BMC Infect Dis. 2016;16:145. Jombart T, Aanensen DM, Baguelin M, Birrell P, Cauchemez S, Camacho A, et al. OutbreakTools: a new platform for disease outbreak analysis using the R software. Epidemics. 2014;7:28–34. Moraga P. SpatialEpiApp: a Shiny web application for the analysis of spatial and spatio-temporal disease data. Spat Spatio-Temporal Epidemiol. 2017;23:47–57. Vilalta C, Arruda AG, Tousignant SJP, Valdes-Donoso P, Muellner P, Muellner U, et al. A review of quantitative tools used to assess the epidemiology of porcine reproductive and respiratory syndrome in U.S. swine farms using Dr. Morrison's swine health monitoring program data. Front Vet Sci. 2017;4:94. Cheng J, Karambelkar B, Xie Y. leaflet: create interactive web maps with the JavaScript "Leaflet" library [Internet]. 2017. Available from: https://CRAN.R-project.org/package=leaflet. Tiono AB, Guelbeogo MW, Sagnon NF, Nébié I, Sirima SB, Mukhopadhyay A, et al. Dynamics of malaria transmission and susceptibility to clinical malaria episodes following treatment of Plasmodium falciparum asymptomatic carriers: results of a cluster-randomized study of community-wide screening and treatment, and a parallel entomology study. BMC Infect Dis. 2013;13:535. English M, Reyburn H, Goodman C, Snow RW. 
Abandoning presumptive antimalarial treatment for febrile children aged less than five years—a case of running before we can walk? PLoS Med. 2009;6(1):e1000015. Reyburn H, Mbakilwa H, Mwangi R, Mwerinde O, Olomi R, Drakeley C, et al. Rapid diagnostic tests compared with malaria microscopy for guiding outpatient treatment of febrile illness in Tanzania: randomised trial. BMJ. 2007;334(7590):403. Kilian AH, Metzger WG, Mutschelknauss EJ, Kabagambe G, Langi P, Korte R, et al. Reliability of malaria microscopy in epidemiological studies: results of quality control. Trop Med Int Health TM IH. 2000;5(1):3–8. Mappin B, Cameron E, Dalrymple U, Weiss DJ, Bisanzio D, Bhatt S, et al. Standardizing Plasmodium falciparum infection prevalence measured via microscopy versus rapid diagnostic test. Malar J. 2015;14(1):460. Okell LC, Bousema T, Griffin JT, Ouédraogo AL, Ghani AC, Drakeley CJ. Factors determining the occurrence of submicroscopic malaria infections and their relevance for control. Nat Commun. 2012;3(1):1–9. Gerardin J, Ouédraogo AL, McCarthy KA, Eckhoff PA, Wenger EA. Characterization of the infectious reservoir of malaria with an agent-based model calibrated to age-stratified parasite densities and infectiousness. Malar J. 2015;14(1):231. Orish VN, Ansong JY, Onyeabor OS, Sanyaolu AO, Oyibo WA, Iriemenam NC. Overdiagnosis and overtreatment of malaria in children in a secondary healthcare centre in Sekondi-Takoradi, Ghana. Trop Doct. 2016;46(4):191–8. Nyangena O. Malaria overtreatment still very common in Kenya despite negative malaria tests. Am J Clin Pathol. 2018;150(suppl_1):S126. Salomão CA, Sacarlal J, Chilundo B, Gudo ES. Prescription practices for malaria in Mozambique: poor adherence to the national protocols for malaria treatment in 22 public health facilities. Malar J. 2015;14(1):483. Rabinovich RN, Drakeley C, Djimde AA, Hall BF, Hay SI, Hemingway J, et al. malERA: an updated research agenda for malaria elimination and eradication. PLoS Med. 2017;14((11):e1002456. Lubell Y, Hopkins H, Whitty CJ, Staedke SG, Mills A. An interactive model for the assessment of the economic costs and benefits of different rapid diagnostic tests for malaria. Malar J. 2008;7:21. Kelly GC, Tanner M, Vallely A, Clements A. Malaria elimination: moving forward with spatial decision support systems. Trends Parasitol. 2012 Jul 1;28(7):297–304. Kelly GC, Seng CM, Donald W, Taleo G, Nausien J, Batarii W, et al. A spatial decision support system for guiding focal indoor residual spraying interventions in a malaria elimination zone. Geospat Health. 2011;1:21–31. Wangdi K, Banwell C, Gatton ML, Kelly GC, Namgay R, Clements AC. Development and evaluation of a spatial decision support system for malaria elimination in Bhutan. Malar J. 2016;15(1):180. Bainbridge I. PRACTITIONER'S PERSPECTIVE: how can ecologists make conservation policy more evidence based? Ideas and examples from a devolved perspective. J Appl Ecol. 2014;51(5):1153–8. Ghana Statistical Service - GSS, Ghana Health Service - GHS, ICF International. Ghana Demographic and Health Survey 2014 [Internet]. Rockville: GSS, GHS, and ICF International; 2015. Available from: http://dhsprogram.com/pubs/pdf/FR307/FR307.pdf. Valle D, Toh KB, Millar J. Rapid prototyping of decision-support tools for conservation. Conserv Biol. 2019; [cited 2019 Sep 25]; Available from: https://conbio.onlinelibrary.wiley.com/doi/10.1111/cobi.13305. 
We would like to thank Paul Psychas, Damian Adams, Ethan White, and Gregory Glass for providing comments on an initial draft of this manuscript. Funding for this work was provided through a graduate research assistantship to JM from the University of Florida.

Author affiliations:
School of Forest Resources and Conservation, University of Florida, Gainesville, USA: Justin Millar & Denis Valle
Emerging Pathogens Institute, University of Florida, Gainesville, USA: Justin Millar, Kok Ben Toh & Denis Valle
School of Natural Resources and Environment, University of Florida, Gainesville, USA: Justin Millar

Author contributions: JM conducted the analysis and wrote the primary draft. KBT collected the survey data and contributed to developing the statistical models and interactive applications. DV contributed to developing the statistical models, designing the interactive tools, providing substantial feedback, and editing the manuscript. All authors read and approved the final manuscript. Correspondence to Justin Millar.

All data used in this article were sourced from publicly available national surveys. Individual surveys are cited in Table 3.

Citation: Millar, J., Toh, K.B. & Valle, D. To screen or not to screen: an interactive framework for comparing costs of mass malaria treatment interventions. BMC Med 18, 149 (2020). https://doi.org/10.1186/s12916-020-01609-7

Keywords: Decision support; Data-driven decision-making
SYMMETRIC GROUP OF ORDER

In the theory of Coxeter groups, the symmetric group is the Coxeter group of type $A_n$ and occurs as the Weyl group of the general linear group. In combinatorics, the symmetric groups, their elements (permutations), and their representations provide a rich source of problems involving Young tableaux, plactic monoids, and the Bruhat order.

Symmetry was taught to humans by nature itself. A lot of flowers and most animals are symmetric in nature. Inspired by this, humans learned to build their architecture with symmetric aspects that made buildings balanced and proportionate in their foundations, like the pyramids of Egypt! We can observe symmetry around us in many forms.

Origin symmetry is when every part has a matching part: the same distance from the central point, but in the opposite direction. Check to see if the equation is the same when we replace both x with −x and y with −y.

The notion that group theory captures the idea of "symmetry" derives from the notion of the symmetric group, and the very important theorem due to Cayley: every group of order n is isomorphic to a subgroup of $S_n$. Proof sketch: suppose G is a group of order n and let G operate on itself by left multiplication; this yields an injective homomorphism from G into the group of permutations of its own n elements, which is isomorphic to $S_n$. One can also use the definition of a semidirect product to show that the symmetric group is a semidirect product of the alternating group and a subgroup of order 2.

symmetric – having similarity in size, shape, and relative position of corresponding parts (symmetrical); parallel – being everywhere equidistant and not intersecting: "parallel lines never converge"; "concentric circles are parallel"; "dancers in two parallel rows".

The symmetric group is important in many different areas of mathematics, including combinatorics, Galois theory, and the definition of the determinant of a matrix. It is also a key object in group theory itself; in fact, every finite group is a subgroup of $S_n$ for some n, so understanding the subgroups of $S_n$ is equivalent to understanding every finite group.

The symmetric homology of group rings is related to stable homotopy theory. Two chain complexes are constructed that compute symmetric homology, as well as two …

The symmetric group $(S_3, \circ)$ has order 6. $(\mathbb{Z}, +)$ is a group of infinite order. Types of groups: depending upon the order of a group, we can classify groups as follows …

Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can …

symmetrical (sɪˈmɛtrɪkəl) adj. 1. possessing or displaying symmetry. Compare asymmetric. 2. (Mathematics) maths. a.
(of two points) capable of being joined by a line that is bisected by a given point or bisected perpendicularly by a given line or plane: the points (x, y) and (−x, −y) are symmetrical about the origin.

Symmetry is defined as a proportionate and balanced similarity found in two halves of an object; that is, one half is the mirror image of the other half. For example, shapes like squares, rectangles, and circles are symmetric along their respective lines of symmetry. What is a symmetrical shape?

The maximum order of an element of a finite symmetric group is treated by William Miller in the American Mathematical Monthly.

Thus symmetric groups can be considered universal with respect to subgroups, just as free groups can be considered universal with respect to quotient groups.

Symmetric and alternating groups. Recall that the symmetric group on n letters is the group $S_n = \mathrm{Perm}(\{1, \ldots, n\})$. Theorem (Cayley). If G is a group of order n, then G is isomorphic to a subgroup of $S_n$. Proof. Let S be the set of all elements of G and consider the action of G on S by left multiplication, $G \times S \to S$, $(a, b) \mapsto ab$. This action defines a homomorphism $\varrho: G \to \mathrm{Perm}(S)$; one checks that this homomorphism is injective, so G is isomorphic to a subgroup of $\mathrm{Perm}(S) \cong S_n$.

A useful fact: if $\sigma = c_1 c_2 \cdots c_m$ is a product of disjoint cycles $c_i$ in $S_n$ (disjoint cycles commute pairwise), then the order of $\sigma$ is the lowest common multiple of the orders of the $c_i$. The pairwise commuting of the factors, here guaranteed by disjointness, is essential.

A vertical line that divides an object into two identical halves is called a vertical line of symmetry. That means that the vertical line goes from top to bottom (or vice versa) in an object and divides it into its mirror halves. For example, a star shape shows a vertical line of symmetry. The horizontal line of symmetry …

symmetrical, adjective (sym·met·ri·cal, sə-ˈme-tri-kəl; variant: symmetric): having, involving, or exhibiting symmetry: as a: affecting corresponding parts simultaneously and …

The permutations of a set X form a group, $S_X$, under composition. This is especially clear if one thinks of a permutation as a bijection on X, where the …

The symmetric group $S_n$ is the group of permutations on n objects. Usually the objects are labeled $\{1, 2, \ldots, n\}$.

… returns a permutation group generated by (1,2,3); as expected, this is a group of order 3. Notice that we do not get back a group of the actual cosets, but …

The symmetric group on four letters, $S_4$, contains the following permutations (originally presented in a table with columns "permutations", "type", "order", and "isomorphic to"): the transpositions (12), (13), (14), (23), (24), (34), … A full tally of $S_4$ by cycle type is given below.
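For completeness, here is the standard tally of the 24 elements of $S_4$ by cycle type (added here as a supplement; it is not recovered from the truncated table above):

- the identity: 1 element, order 1
- transpositions $(a\,b)$: 6 elements, order 2
- double transpositions $(a\,b)(c\,d)$: 3 elements, order 2
- 3-cycles $(a\,b\,c)$: 8 elements, order 3
- 4-cycles $(a\,b\,c\,d)$: 6 elements, order 4

In total, $1 + 6 + 3 + 8 + 6 = 24 = 4!$.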
Moreover, $\langle (1\,2\,3) \rangle$ has order 3 and is thus distinct from the other cyclic subgroups, which have order 2. Finally, the order-2 cyclic subgroups are …

Notes on the symmetric group. Computations in the symmetric group: recall that, given a set X, the set $S_X$ of all bijections from X to itself (or, more briefly, permutations of X) is a group under function composition. In particular, for each $n \in \mathbb{N}$, the symmetric group $S_n$ is the group of permutations of the set $\{1, \ldots, n\}$, with the group operation equal to function composition.

Let G be the group of automorphisms, and X the set of 2-cycles. We note that an automorphism must send order-2 elements to order-2 elements, and that the … In general, $\pi\sigma \neq \sigma\pi$; i.e., multiplication of permutations is not commutative. Cycles: a permutation $\pi$ of a set X is called a cycle …

Order of Symmetric Group (ProofWiki). Contents: Theorem; Proof; Examples (3rd Symmetric Group); Sources. Theorem: Let S be a finite …

A geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion. This means that an object is symmetric if there is a transformation that moves individual pieces of …

Subgroups generated by a rotation: rotation by $2\pi/n$ generates a subgroup isomorphic to $C_n$, the cyclic group of order n. Note that $C_n$ is the symmetry group of …

A(S) is the set of mappings of S onto itself. If S is a finite set with, say, n elements, then A(S) is called the symmetric group of degree n, denoted $S_n$; it has order n!.

The order of a group is not the same thing as the order of an element. There is a connection, but don't worry about that yet. The order of a group, written $|G|$, is the number of elements it has. The order of an element in a group, say $g \in G$, is the smallest positive integer $n$ such that $g^n = e$, where $e$ is the identity element of the group $G$. Each permutation of $S_4$ can be written as a composition of disjoint cycles.
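As a short worked illustration of the preceding facts (added here as an example): the order of a permutation written as a product of disjoint cycles is the least common multiple of the cycle lengths. For instance, in $S_4$ the permutation $(1\,2)(3\,4)$ has order $\operatorname{lcm}(2,2)=2$ and the 4-cycle $(1\,2\,3\,4)$ has order 4, while in $S_5$ the permutation $\sigma=(1\,2\,3)(4\,5)$ has order $\operatorname{lcm}(3,2)=6$, which is in fact the maximum order of an element of $S_5$.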
… (5 points) Let G be a group, and let a be an element of order … Every permutation in $S_n$ has a cycle decomposition that is unique up to the ordering of the cycles and up to a cyclic permutation of the elements within each cycle.
ArXiv-Quant Pandemic, Shutdown and Consumer Spending: Lessons from Scandinavian Policy Responses to COVID-19 Asger Lau Andersen, Emil Toft Hansen, Niels Johannesen, Adam Sheridan econ.GN q-fin.EC This paper uses transaction data from a large bank in Scandinavia to estimate the effect of social distancing laws on consumer spending in the COVID-19 pandemic. The analysis exploits a natural experiment to disentangle the effects of the virus and the laws aiming to contain it: Denmark and Sweden were similarly exposed to the pandemic but only Denmark imposed significant restrictions on social and economic activities. We estimate that aggregate spending dropped by around 25 percent in Sweden and, as a result of the shutdown, by 4 additional percentage points in Denmark. This implies that most of the economic contraction is caused by the virus itself and occurs regardless of social distancing laws. The age gradient in the estimates suggest that social distancing reinforces the virus-induced drop in spending for low health-risk individuals but attenuates it for high-risk individuals by lowering the overall prevalence of the virus in the society. Regression to the Tail: Why the Olympics Blow Up Bent Flyvbjerg, Alexander Budzier, Daniel Lunn q-fin.GN q-fin.RM The Olympic Games are the largest, highest-profile, and most expensive megaevent hosted by cities and nations. Average sports-related costs of hosting are $12.0 billion. Non-sports-related costs are typically several times that. Every Olympics since 1960 has run over budget, at an average of 172 percent in real terms, the highest overrun on record for any type of megaproject. The paper tests theoretical statistical distributions against empirical data for the costs of the Games, in order to explain the cost risks faced by host cities and nations. It is documented, for the first time, that cost and cost overrun for the Games follow a power-law distribution. Olympic costs are subject to infinite mean and variance, with dire consequences for predictability and planning. We name this phenomenon "regression to the tail": it is only a matter of time until a new extreme event occurs, with an overrun larger than the largest so far, and thus more disruptive and less plannable. The generative mechanism for the Olympic power law is identified as strong convexity prompted by six causal drivers: irreversibility, fixed deadlines, the Blank Check Syndrome, tight coupling, long planning horizons, and an Eternal Beginner Syndrome. The power law explains why the Games are so difficult to plan and manage successfully, and why cities and nations should think twice before bidding to host. Based on the power law, two heuristics are identified for better decision making on hosting. Finally, the paper develops measures for good practice in planning and managing the Games, including how to mitigate the extreme risks of the Olympic power law. It's a Trap: Emperor Palpatine's Poison Pill Zachary Feinstein q-fin.GN In this paper we study the financial repercussions of the destruction of two fully armed and operational moon-sized battle stations ("Death Stars") in a 4-year period and the dissolution of the galactic government in Star Wars. The emphasis of this work is to calibrate and simulate a model of the banking and financial systems within the galaxy. Along these lines, we measure the level of systemic risk that may have been generated by the death of Emperor Palpatine and the destruction of the second Death Star. 
We conclude by finding the economic resources the Rebel Alliance would need to have in reserve in order to prevent a financial crisis from gripping the galaxy through an optimally allocated banking bailout. Racial Disparities in Voting Wait Times: Evidence from Smartphone Data M. Keith Chen, Kareem Haggag, Devin G. Pope, Ryne Rohla econ.GN q-fin.EC stat.AP Equal access to voting is a core feature of democratic government. Using data from millions of smartphone users, we quantify a racial disparity in voting wait times across a nationwide sample of polling places during the 2016 U.S. presidential election. Relative to entirely-white neighborhoods, residents of entirely-black neighborhoods waited 29% longer to vote and were 74% more likely to spend more than 30 minutes at their polling place. This disparity holds when comparing predominantly white and black polling places within the same states and counties, and survives numerous robustness and placebo tests. We shed light on the mechanism for these results and discuss how geospatial data can be an effective tool to both measure and monitor these disparities going forward. Turing's Children: Representation of Sexual Minorities in STEM Dario Sansone, Christopher S. Carpenter We provide the first nationally representative estimates of sexual minority representation in STEM fields by studying 142,641 men and women in same-sex couples from the 2009-2018 American Community Surveys. These data indicate that men in same-sex couples are 12 percentage points less likely to have completed a bachelor's degree in a STEM field compared to men in different-sex couples; there is no gap observed for women in same-sex couples compared to women in different-sex couples. The STEM gap between men in same-sex and different-sex couples is larger than the STEM gap between white and black men but is smaller than the gender STEM gap. We also document a gap in STEM occupations between men in same-sex and different-sex couples, and we replicate this finding using independently drawn data from the 2013-2018 National Health Interview Surveys. These differences persist after controlling for demographic characteristics, location, and fertility. Our findings further the call for interventions designed at increasing representation of sexual minorities in STEM. Election Predictions as Martingales: An Arbitrage Approach q-fin.PR physics.soc-ph We consider the estimation of binary election outcomes as martingales and propose an arbitrage pricing when one continuously updates estimates. We argue that the estimator needs to be priced as a binary option as the arbitrage valuation minimizes the conventionally used Brier score for tracking the accuracy of probability assessors. We create a dual martingale process $Y$, in $[L,H]$ from the standard arithmetic Brownian motion, $X$ in $(-\infty, \infty)$ and price elections accordingly. The dual process $Y$ can represent the numerical votes needed for success. We show the relationship between the volatility of the estimator in relation to that of the underlying variable. When there is a high uncertainty about the final outcome, 1) the arbitrage value of the binary gets closer to 50\%, 2) the estimate should not undergo large changes even if polls or other bases show significant variations. There are arbitrage relationships between 1) the binary value, 2) the estimation of $Y$, 3) the volatility of the estimation of $Y$ over the remaining time to expiration. 
We note that these arbitrage relationships were often violated by the various forecasting groups in the U.S. presidential elections of 2016, as well as the notion that all intermediate assessments of the success of a candidate need to be considered, not just the final one. The Strength of Absent Ties: Social Integration via Online Dating Josue Ortega, Philipp Hergovich physics.soc-ph cs.SI q-fin.EC We used to marry people to whom we were somehow connected. Since we were more connected to people similar to us, we were also likely to marry someone from our own race. However, online dating has changed this pattern; people who meet online tend to be complete strangers. We investigate the effects of those previously absent ties on the diversity of modern societies. We find that social integration occurs rapidly when a society benefits from new connections. Our analysis of state-level data on interracial marriage and broadband adoption (proxy for online dating) suggests that this integration process is significant and ongoing. On Single Point Forecasts for Fat-Tailed Variables Nassim Nicholas Taleb, Yaneer Bar-Yam, Pasquale Cirillo physics.soc-ph econ.GN q-fin.EC stat.AP stat.ME We discuss common errors and fallacies when using naive "evidence based" empiricism and point forecasts for fat-tailed variables, as well as the insufficiency of using naive first-order scientific methods for tail risk management. We use the COVID-19 pandemic as the background for the discussion and as an example of a phenomenon characterized by a multiplicative nature, and what mitigating policies must result from the statistical properties and associated risks. In doing so, we also respond to the points raised by Ioannidis et al. (2020). How predictable is technological progress? J. Doyne Farmer, Francois Lafond q-fin.EC physics.soc-ph Recently it has become clear that many technologies follow a generalized version of Moore's law, i.e. costs tend to drop exponentially, at different rates that depend on the technology. Here we formulate Moore's law as a correlated geometric random walk with drift, and apply it to historical data on 53 technologies. We derive a closed form expression approximating the distribution of forecast errors as a function of time. Based on hind-casting experiments we show that this works well, making it possible to collapse the forecast errors for many different technologies at different time horizons onto the same universal distribution. This is valuable because it allows us to make forecasts for any given technology with a clear understanding of the quality of the forecasts. As a practical demonstration we make distributional forecasts at different time horizons for solar photovoltaic modules, and show how our method can be used to estimate the probability that a given technology will outperform another technology at a given point in the future. Global labor flow network reveals the hierarchical organization and dynamics of geo-industrial clusters in the world economy Jaehyuk Park, Ian Wood, Elise Jing, Azadeh Nematzadeh, Souvik Ghosh, Michael Conover, Yong-Yeol Ahn cs.SI physics.soc-ph q-fin.GN Groups of firms often achieve a competitive advantage through the formation of geo-industrial clusters. Although many exemplary clusters, such as Hollywood or Silicon Valley, have been frequently studied, systematic approaches to identify and analyze the hierarchical structure of the geo-industrial clusters at the global scale are rare. 
In this work, we use LinkedIn's employment histories of more than 500 million users over 25 years to construct a labor flow network of over 4 million firms across the world and apply a recursive network community detection algorithm to reveal the hierarchical structure of geo-industrial clusters. We show that the resulting geo-industrial clusters exhibit a stronger association between the influx of educated-workers and financial performance, compared to existing aggregation units. Furthermore, our additional analysis of the skill sets of educated-workers supplements the relationship between the labor flow of educated-workers and productivity growth. We argue that geo-industrial clusters defined by labor flow provide better insights into the growth and the decline of the economy than other common economic units. The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies Stephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C. Parkes, Richard Socher econ.GN cs.LG q-fin.EC stat.ML Tackling real-world socio-economic challenges requires designing and testing economic policies. However, this is hard in practice, due to a lack of appropriate (micro-level) economic data and limited opportunity to experiment. In this work, we train social planners that discover tax policies in dynamic economies that can effectively trade-off economic equality and productivity. We propose a two-level deep reinforcement learning approach to learn dynamic tax policies, based on economic simulations in which both agents and a government learn and adapt. Our data-driven approach does not make use of economic modeling assumptions, and learns from observational data alone. We make four main contributions. First, we present an economic simulation environment that features competitive pressures and market dynamics. We validate the simulation by showing that baseline tax systems perform in a way that is consistent with economic theory, including in regard to learned agent behaviors and specializations. Second, we show that AI-driven tax policies improve the trade-off between equality and productivity by 16% over baseline policies, including the prominent Saez tax framework. Third, we showcase several emergent features: AI-driven tax policies are qualitatively different from baselines, setting a higher top tax rate and higher net subsidies for low incomes. Moreover, AI-driven tax policies perform strongly in the face of emergent tax-gaming strategies learned by AI agents. Lastly, AI-driven tax policies are also effective when used in experiments with human participants. In experiments conducted on MTurk, an AI tax policy provides an equality-productivity trade-off that is similar to that provided by the Saez framework along with higher inverse-income weighted social welfare. Uncovering Offshore Financial Centers: Conduits and Sinks in the Global Corporate Ownership Network Javier Garcia-Bernardo, Jan Fichtner, Eelke M. Heemskerk, Frank W. Takes physics.soc-ph q-fin.GN Multinational corporations use highly complex structures of parents and subsidiaries to organize their operations and ownership. Offshore Financial Centers (OFCs) facilitate these structures through low taxation and lenient regulation, but are increasingly under scrutiny, for instance for enabling tax avoidance. Therefore, the identification of OFC jurisdictions has become a politicized and contested issue. 
We introduce a novel data-driven approach for identifying OFCs based on the global corporate ownership network, in which over 98 million firms (nodes) are connected through 71 million ownership relations. This granular firm-level network data uniquely allows identifying both sink-OFCs and conduit-OFCs. Sink-OFCs attract and retain foreign capital while conduit-OFCs are attractive intermediate destinations in the routing of international investments and enable the transfer of capital without taxation. We identify 24 sink-OFCs. In addition, a small set of five countries -- the Netherlands, the United Kingdom, Ireland, Singapore and Switzerland -- canalize the majority of corporate offshore investment as conduit-OFCs. Each conduit jurisdiction is specialized in a geographical area and there is significant specialization based on industrial sectors. Against the idea of OFCs as exotic small islands that cannot be regulated, we show that many sink and conduit-OFCs are highly developed countries. Valid t-ratio Inference for IV David S. Lee, Justin McCrary, Marcelo J. Moreira, Jack Porter econ.EM econ.GN q-fin.EC In the single IV model, current practice relies on the first-stage F exceeding some threshold (e.g., 10) as a criterion for trusting t-ratio inferences, even though this yields an anti-conservative test. We show that a true 5 percent test instead requires an F greater than 104.7. Maintaining 10 as a threshold requires replacing the critical value 1.96 with 3.43. We re-examine 57 AER papers and find that corrected inference causes half of the initially presumed statistically significant results to be insignificant. We introduce a more powerful test, the tF procedure, which provides F-dependent adjusted t-ratio critical values. Curse of Democracy: Evidence from the 21st Century Yusuke Narita, Ayumi Sudo Democracy is widely believed to contribute to economic growth and public health in the 20th and earlier centuries. We find that this conventional wisdom is reversed in this century, i.e., democracy has persistent negative impacts on GDP growth during 2001-2020. This finding emerges from five different instrumental variable strategies. Our analysis suggests that democracies cause slower growth through less investment and trade. For 2020, democracy is also found to cause more deaths from Covid-19. Does Infrastructure Investment Lead to Economic Growth or Economic Fragility? Evidence from China Atif Ansar, Bent Flyvbjerg, Alexander Budzier, Daniel Lunn q-fin.GN q-fin.EC The prevalent view in the economics literature is that a high level of infrastructure investment is a precursor to economic growth. China is especially held up as a model to emulate. Based on the largest dataset of its kind, this paper punctures the twin myths that, first, infrastructure creates economic value, and, second, China has a distinct advantage in its delivery. Far from being an engine of economic growth, the typical infrastructure investment fails to deliver a positive risk adjusted return. Moreover, China's track record in delivering infrastructure is no better than that of rich democracies. Where investments are debt-financed, overinvesting in unproductive projects results in the buildup of debt, monetary expansion, instability in financial markets, and economic fragility, exactly as we see in China today. We conclude that poorly managed infrastructure investments are a main explanation of surfacing economic and financial problems in China. 
We predict that, unless China shifts to a lower level of higher-quality infrastructure investments, the country is headed for an infrastructure-led national financial and economic crisis, which is likely also to be a crisis for the international economy. China's infrastructure investment model is not one to follow for other countries but one to avoid. Evaluating gambles using dynamics https://arxiv.org/abs/1405.0585 Ole Peters, Murray Gell-Mann q-fin.EC cond-mat.stat-mech q-fin.GN Gambles are random variables that model possible changes in monetary wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. These invalidate a commonly cited argument for bounded utility functions. Statistical Basis for Predicting Technological Progress Bela Nagy, J. Doyne Farmer, Quan M. Bui, Jessika E. Trancik physics.soc-ph q-fin.GN stat.AP Forecasting technological progress is of great interest to engineers, policy makers, and private investors. Several models have been proposed for predicting technological improvement, but how well do these models perform? An early hypothesis made by Theodore Wright in 1936 is that cost decreases as a power law of cumulative production. An alternative hypothesis is Moore's law, which can be generalized to say that technologies improve exponentially with time. Other alternatives were proposed by Goddard, Sinclair et al., and Nordhaus. These hypotheses have not previously been rigorously tested. Using a new database on the cost and production of 62 different technologies, which is the most expansive of its kind, we test the ability of six different postulated laws to predict future costs. Our approach involves hindcasting and developing a statistical model to rank the performance of the postulated laws. Wright's law produces the best forecasts, but Moore's law is not far behind. We discover a previously unobserved regularity that production tends to increase exponentially. A combination of an exponential decrease in cost and an exponential increase in production would make Moore's law and Wright's law indistinguishable, as originally pointed out by Sahal. We show for the first time that these regularities are observed in data to such a degree that the performance of these two laws is nearly tied. 
Our results show that technological progress is forecastable, with the square root of the logarithmic error growing linearly with the forecasting horizon at a typical rate of 2.5% per year. These results have implications for theories of technological change, and assessments of candidate technologies and policies for climate change mitigation. On the Statistical Differences between Binary Forecasts and Real World Payoffs q-fin.GN physics.soc-ph q-fin.RM What do binary (or probabilistic) forecasting abilities have to do with overall performance? We map the difference between (univariate) binary predictions, bets and "beliefs" (expressed as a specific "event" will happen/will not happen) and real-world continuous payoffs (numerical benefits or harm from an event) and show the effect of their conflation and mischaracterization in the decision-science literature. We also examine the differences under thin and fat tails. The effects are: A- Spuriousness of many psychological results, particularly those documenting that humans overestimate tail probabilities and rare events, or that they overreact to fears of market crashes, ecological calamities, etc. Many perceived "biases" are just mischaracterizations by psychologists. There is also a misuse of Hayekian arguments in promoting prediction markets. We quantify such conflations with a metric for "pseudo-overestimation". B- Being a "good forecaster" in binary space doesn't lead to having a good actual performance, and vice versa, especially under nonlinearities. A binary forecasting record is likely to be a reverse indicator under some classes of distributions. Deeper uncertainty or more complicated and realistic probability distributions worsen the conflation. C- Machine Learning: Some nonlinear payoff functions, while not lending themselves to verbalistic expressions and "forecasts", are well captured by ML or expressed in option contracts. D- Fattailedness: The difference is exacerbated in the power law classes of probability distributions. Bitcoin, Currencies, and Fragility econ.GN physics.soc-ph q-fin.EC q-fin.GN This discussion applies quantitative finance methods and economic arguments to cryptocurrencies in general and bitcoin in particular -- as there are about $10,000$ cryptocurrencies, we focus (unless otherwise specified) on the most discussed crypto of those that claim to hew to the original protocol (Nakamoto 2009) and the one with, by far, the largest market capitalization. In its current version, in spite of the hype, bitcoin failed to satisfy the notion of "currency without government" (it proved to not even be a currency at all), can be neither a short nor long term store of value (its expected value is no higher than $0$), cannot operate as a reliable inflation hedge, and, worst of all, does not constitute, not even remotely, a safe haven for one's investments, a shield against government tyranny, or a tail protection vehicle for catastrophic episodes. Furthermore, bitcoin promoters appear to conflate the success of a payment mechanism (as a decentralized mode of exchange), which so far has failed, with the speculative variations in the price of a zero-sum maximally fragile asset with massive negative externalities. Going through monetary history, we show how a true numeraire must be one of minimum variance with respect to an arbitrary basket of goods and services, how gold and silver lost their inflation hedge status during the Hunt brothers squeeze in the late 1970s and what would be required from a true inflation-hedged store of value. Statistical Consequences of Fat Tails: Real World Preasymptotics,... http://dx.doi.org/10.48550/arxiv.2001.10488 stat.OT The monograph investigates the misapplication of conventional statistical techniques to fat tailed distributions and looks for remedies, when possible. Switching from thin tailed to fat tailed distributions requires more than "changing the color of the dress". Traditional asymptotics deal mainly with either $n=1$ or $n=\infty$, and the real world is in between, under the "laws of the medium numbers", which vary widely across specific distributions. Both the law of large numbers and the generalized central limit mechanisms operate in highly idiosyncratic ways outside the standard Gaussian or Levy-Stable basins of convergence. A few examples: + The sample mean is rarely in line with the population mean, with effect on "naive empiricism", but can sometimes be estimated via parametric methods. + The "empirical distribution" is rarely empirical. + Parameter uncertainty has compounding effects on statistical metrics. + Dimension reduction (principal components) fails. + Inequality estimators (GINI or quantile contributions) are not additive and produce wrong results. + Many "biases" found in psychology become entirely rational under more sophisticated probability distributions. + Most of the failures of financial economics, econometrics, and behavioral economics can be attributed to using the wrong distributions. This book, the first volume of the Technical Incerto, weaves a narrative around published journal articles. Quantum attacks on Bitcoin, and how to protect against them Divesh Aggarwal, Gavin K. Brennen, Troy Lee, Miklos Santha, Marco Tomamichel quant-ph q-fin.GN The key cryptographic protocols used to secure the internet and financial transactions of today are all susceptible to attack by the development of a sufficiently large quantum computer. One particular area at risk is cryptocurrencies, a market currently worth over 150 billion USD. We investigate the risk of Bitcoin, and other cryptocurrencies, to attacks by quantum computers. We find that the proof-of-work used by Bitcoin is relatively resistant to substantial speedup by quantum computers in the next 10 years, mainly because specialized ASIC miners are extremely fast compared to the estimated clock speed of near-term quantum computers. On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates. We analyze an alternative proof-of-work called Momentum, based on finding collisions in a hash function, that is even more resistant to speedup by a quantum computer. We also review the available post-quantum signature schemes to see which one would best meet the security and efficiency requirements of blockchain applications. A Short Note on P-Value Hacking stat.AP q-fin.ST We present the expected values from p-value hacking as a choice of the minimum p-value among $m$ independent tests, which can be considerably lower than the "true" p-value, even with a single trial, owing to the extreme skewness of the meta-distribution. We first present an exact probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena. We derive the distribution for small samples $2<n \leq n^*\approx 30$ as well as the limiting one as the sample size $n$ becomes large. We also look at the properties of the "power" of a test through the distribution of its inverse for a given p-value and parametrization. The formulas allow the investigation of the stability of the reproduction of results and "p-hacking" and other aspects of meta-analysis. P-values are shown to be extremely skewed and volatile, regardless of the sample size $n$, and vary greatly across repetitions of exactly the same protocols under identical stochastic copies of the phenomenon; such volatility makes the minimum $p$-value diverge significantly from the "true" one. Setting the power is shown to offer little remedy unless sample size is increased markedly or the p-value is lowered by at least one order of magnitude. Managing COVID-19 Pandemic without Destructing the Economy David Gershon, Alexander Lipton, Hagai Levine q-bio.PE q-fin.MF We analyze an approach to managing the COVID-19 pandemic without shutting down the economy while staying within the capacity of the healthcare system. We base our analysis on a detailed heterogeneous epidemiological model, which takes into account different population groups and phases of the disease, including incubation, infection period, hospitalization, and treatment in the intensive care unit (ICU). We model the healthcare capacity as the total number of hospital and ICU beds for the whole country. We calibrate the model parameters to data reported in several recent research papers. For high- and low-risk population groups, we calculate the number of total and intensive care hospitalizations, and deaths as functions of time. The main conclusion is that countries which enforce reasonable hygienic measures in time can avoid lockdowns throughout the pandemic, provided that the number of spare ICU beds per million is above the threshold of about 100. In countries where the total number of ICU beds is below this threshold, a limited-period quarantine of specific high-risk groups of the population suffices. Furthermore, in the case of an inadequate capacity of the healthcare system, we incorporate a feedback loop and demonstrate the quantitative impact of the lack of ICU units on the death curve. In the case of inadequate ICU beds, full- and partial-quarantine scenario outcomes are almost identical, making it unnecessary to shut down the whole economy. We conclude that only a limited-time quarantine of the high-risk group might be necessary, while the rest of the economy can remain operational. The Oxford Olympics Study 2016: Cost and Cost Overrun at the Games Bent Flyvbjerg, Allison Stewart, Alexander Budzier q-fin.EC Given that Olympic Games held over the past decade each have cost USD 8.9 billion on average, the size and financial risks of the Games warrant study. The objectives of the Oxford Olympics study are to (1) establish the actual outturn costs of previous Olympic Games in a manner where cost can consistently be compared across Games; (2) establish cost overruns for previous Games, i.e., the degree to which final outturn costs reflect projected budgets at the bid stage, again in a way that allows comparison across Games; (3) test whether the Olympic Games Knowledge Management Program has reduced cost risk for the Games, and, finally, (4) benchmark cost and cost overrun for the Rio 2016 Olympics against previous Games. The main contribution of the Oxford study is to establish a phenomenology of cost and cost overrun at the Olympics, which allows consistent and systematic comparison across Games. This has not been done before. The study concludes that for a city and nation to decide to stage the Olympic Games is to decide to take on one of the most costly and financially most risky types of megaproject that exist, something that many cities and nations have learned to their peril. Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications stat.OT q-fin.RM stat.AP stat.ME Monotonicity for AI ethics and society: An empirical study of the... Algorithm fairness in the application of artificial intelligence (AI) is essential for a better society. As the foundational axiom of social mechanisms, fairness consists of multiple facets. Although the machine learning (ML) community has focused on intersectionality as a matter of statistical parity, especially in discrimination issues, an emerging body of literature addresses another facet: monotonicity. Based on domain expertise, monotonicity plays a vital role in numerous fairness-related areas, where violations could misguide human decisions and lead to disastrous consequences. In this paper, we first systematically evaluate the significance of applying monotonic neural additive models (MNAMs), which use a fairness-aware ML algorithm to enforce both individual and pairwise monotonicity principles, for the fairness of AI ethics and society. We have found, through a hybrid method of theoretical reasoning, simulation, and extensive empirical analysis, that considering monotonicity axioms is essential in all areas of fairness, including criminology, education, health care, and finance. Our research contributes to the interdisciplinary research at the interface of AI ethics, explainable AI (XAI), and human-computer interactions (HCIs). By evidencing the catastrophic consequences if monotonicity is not met, we address the significance of monotonicity requirements in AI applications. Furthermore, we demonstrate that MNAMs are an effective fairness-aware ML approach by imposing monotonicity restrictions that integrate human intelligence. Physics-Informed Convolutional Transformer for Predicting... Predicting volatility is important for asset pricing, option pricing and hedging strategies because it cannot be directly observed in the financial market. The Black-Scholes option pricing model is one of the most widely used models by market participants. Notwithstanding, the Black-Scholes model is based on heavily criticized theoretical premises, one of which is the constant volatility assumption. The dynamics of the volatility surface is difficult to estimate. In this paper, we establish a novel architecture based on physics-informed neural networks and convolutional transformers. The performance of the new architecture is directly compared to other well-known deep-learning architectures, such as standard physics-informed neural networks, convolutional long short-term memory (ConvLSTM), and self-attention ConvLSTM. Numerical evidence indicates that the proposed physics-informed convolutional transformer network achieves superior performance compared with the other methods. Quant 4.0: Engineering Quantitative Investment with Automated,... Quantitative investment (``quant'') is an interdisciplinary field combining financial engineering, computer science, mathematics, statistics, etc. Quant has become one of the mainstream investment methodologies over the past decades, and has experienced three generations: Quant 1.0, trading by mathematical modeling to discover mis-priced assets in markets; Quant 2.0, shifting the quant research pipeline from small ``strategy workshops'' to large ``alpha factories''; Quant 3.0, applying deep learning techniques to discover complex nonlinear pricing rules. Despite its advantage in prediction, deep learning relies on extremely large data volume and labor-intensive tuning of ``black-box'' neural network models. To address these limitations, in this paper, we introduce Quant 4.0 and provide an engineering perspective for next-generation quant. Quant 4.0 has three key differentiating components. First, automated AI changes the quant pipeline from traditional hand-crafted modeling to state-of-the-art automated modeling, practicing the philosophy of ``algorithm produces algorithm, model builds model, and eventually AI creates AI''. Second, explainable AI develops new techniques to better understand and interpret investment decisions made by machine learning black-boxes, and explains complicated and hidden risk exposures. Third, knowledge-driven AI is a supplement to data-driven AI such as deep learning, and it incorporates prior knowledge into modeling to improve investment decisions, in particular for quantitative value investing. Moreover, we discuss how to build a system that practices the Quant 4.0 concept. Finally, we propose ten challenging research problems for quant technology, and discuss potential solutions, research directions, and future trends. Hierarchical Deep Reinforcement Learning for VWAP Strategy Optimization Designing an intelligent volume-weighted average price (VWAP) strategy is a critical concern for brokers, since traditional rule-based strategies are relatively static and cannot achieve a lower transaction cost in a dynamic market. Many studies have tried to minimize the cost via reinforcement learning, but there are bottlenecks in improvement, especially for long-duration strategies such as the VWAP strategy. To address this issue, we propose a joint deep learning and hierarchical reinforcement learning architecture termed Macro-Meta-Micro Trader (M3T) to capture market patterns and execute orders at different temporal scales. The Macro Trader first allocates a parent order into tranches based on volume profiles as the traditional VWAP strategy does, but a long short-term memory neural network is used to improve the forecasting accuracy. Then the Meta Trader selects a short-term subgoal appropriate to instant liquidity within each tranche to form a mini-tranche. The Micro Trader consequently extracts the instant market state and fulfils the subgoal with the lowest transaction cost. Our experiments over stocks listed on the Shanghai stock exchange demonstrate that our approach outperforms baselines in terms of VWAP slippage, with an average cost saving of 1.16 basis points compared to the optimal baseline. fintech-kMC: Agent based simulations of financial platforms for... We discuss our simulation tool, fintech-kMC, which is designed to generate synthetic data for machine learning model development and testing. fintech-kMC is an agent-based model driven by a kinetic Monte Carlo (a.k.a. continuous time Monte Carlo) engine which simulates the behaviour of customers using an online digital financial platform. The tool provides an interpretable, reproducible, and realistic way of generating synthetic data which can be used to validate and test AI/ML models and pipelines to be used in real-world customer-facing financial applications. Robust machine learning pipelines for trading market-neutral stock... The application of deep learning algorithms to financial data is difficult due to heavy non-stationarities which can lead to over-fitted models that underperform under regime changes. Using the Numerai tournament data set as a motivating example, we propose a machine learning pipeline for trading market-neutral stock portfolios based on tabular data which is robust under changes in market conditions. We evaluate various machine-learning models, including Gradient Boosting Decision Trees (GBDTs) and Neural Networks with and without simple feature engineering, as the building blocks for the pipeline. We find that GBDT models with dropout display high performance, robustness and generalisability with relatively low complexity and reduced computational cost. We then show that online learning techniques can be used in post-prediction processing to enhance the results. In particular, dynamic feature neutralisation, an efficient procedure that requires no retraining of models and can be applied post-prediction to any machine learning model, improves robustness by reducing drawdown in volatile market conditions. Furthermore, we demonstrate that the creation of model ensembles through dynamic model selection based on recent model performance leads to improved performance over baseline by improving the Sharpe and Calmar ratios. We also evaluate the robustness of our pipeline across different data splits and random seeds with good reproducibility of results. Monotonicity for AI ethics and society: An empirical study of the monotonic neural additive model in criminology, education, health care, and finance. (arXiv:2301.07060v1 [cs.LG]) Dangxing Chen, Luyao Zhang Efficient Risk Estimation for the Credit Valuation Adjustment Michael B. Giles, Abdul-Lateef Haji-Ali, Jonathan Spence q-fin.CP The valuation of over-the-counter derivatives is subject to a series of valuation adjustments known as xVA, which pose additional risks for financial institutions. Associated risk measures, such as the value-at-risk of an underlying valuation adjustment, play an important role in managing these risks. Monte Carlo methods are often regarded as inefficient for computing such measures. As an example, we consider the value-at-risk of the Credit Valuation Adjustment (CVA-VaR), which can be expressed using a triple nested expectation. Traditional Monte Carlo methods are often inefficient at handling several nested expectations. Utilising recent developments in multilevel nested simulation for probabilities, we construct a hierarchical estimator of the CVA-VaR which reduces the computational complexity by 3 orders of magnitude compared to standard Monte Carlo. Deep Reinforcement Learning for Asset Allocation: Reward Clipping Jiwon Kim, Moon-Ju Kang, KangHun Lee, HyungJun Moon, Bo-Kwan Jeon Recently, there have been many attempts to apply reinforcement learning to asset allocation in order to earn more stable profits. In this paper, we compare the performance of several reinforcement learning algorithms: actor-only, actor-critic and PPO models. Furthermore, we analyze each model's characteristics and then introduce an improved algorithm, the so-called Reward Clipping model. It seems that the Reward Clipping model is better than other existing models in the finance domain, especially for portfolio optimization: it is strong in both bull and bear markets. Finally, we compare the performance of these models with traditional investment strategies during decreasing and increasing markets. Robust Distortion Risk Measures Carole Bernard, Silvana M. Pesenti, Steven Vanduffel q-fin.RM The robustness of risk measures to changes in underlying loss distributions (distributional uncertainty) is of crucial importance in making well-informed decisions. In this paper, we quantify, for the class of distortion risk measures with an absolutely continuous distortion function, its robustness to distributional uncertainty by deriving its largest (smallest) value when the underlying loss distribution has a known mean and variance and, furthermore, lies within a ball (specified through the Wasserstein distance) around a reference distribution. We employ the technique of isotonic projections to provide for these distortion risk measures a complete characterisation of sharp bounds on their value, and we obtain quasi-explicit bounds in the case of Value-at-Risk and Range-Value-at-Risk. We extend our results to account for uncertainty in the first two moments and provide applications to portfolio optimisation and to model risk assessment. Optimal randomized multilevel Monte Carlo for repeatedly nested... Yasa Syed, Guanyang Wang stat.CO The estimation of repeatedly nested expectations is a challenging problem that arises in many real-world systems. However, existing methods generally suffer from high computational costs when the number of nestings becomes large. Fix any non-negative integer $D$ for the total number of nestings. Standard Monte Carlo methods typically cost at least $\mathcal{O}(\varepsilon^{-(2+D)})$ and sometimes $\mathcal{O}(\varepsilon^{-2(1+D)})$ to obtain an estimator up to $\varepsilon$-error. More advanced methods, such as multilevel Monte Carlo, currently only exist for $D = 1$. In this paper, we propose a novel Monte Carlo estimator called $\mathsf{READ}$, which stands for "Recursive Estimator for Arbitrary Depth". Our estimator has an optimal computational cost of $\mathcal{O}(\varepsilon^{-2})$ for every fixed $D$ under suitable assumptions, and a nearly optimal computational cost of $\mathcal{O}(\varepsilon^{-2(1 + \delta)})$ for any $0 < \delta < \frac12$ under much more general assumptions. Our estimator is also unbiased, which makes it easy to parallelize. The key ingredients in our construction are an observation of the problem's recursive structure and the recursive use of the randomized multilevel Monte Carlo method. Jian Guo, Saizhuo Wang, Lionel M.
Ni, Heung-Yeung Shum Calibrating distribution models from PELVE Hirbod Assa, Liyuan Lin, Ruodu Wang The Value-at-Risk (VaR) and the Expected Shortfall (ES) are the two most popular risk measures in banking and insurance regulation. To bridge between the two regulatory risk measures, the Probability Equivalent Level of VaR-ES (PELVE) was recently proposed to convert a level of VaR to that of ES. It is straightforward to compute the value of PELVE for a given distribution model. In this paper, we study the converse problem of PELVE calibration, that is, to find a distribution model that yields a given PELVE, which may either be obtained from data or from expert opinion. We discuss separately the cases when one-point, two-point, n-point and curve constraints are given. In the most complicated case of a curve constraint, we convert the calibration problem to that of an advanced differential equation. We apply the model calibration techniques to estimation and simulation for datasets used in insurance. We further study some technical properties of PELVE by offering a few new results on monotonicity and convergence. Diversification quotients based on VaR and ES Xia Han, Liyuan Lin, Ruodu Wang The diversification quotient (DQ) is a recently introduced tool for quantifying the degree of diversification of a stochastic portfolio model. It has an axiomatic foundation and can be defined through a parametric class of risk measures. Since the Value-at-Risk (VaR) and the Expected Shortfall (ES) are the most prominent risk measures widely used in both banking and insurance, we investigate DQ constructed from VaR and ES in this paper. In particular, for the popular models of multivariate elliptical and multivariate regular varying (MRV) distributions, explicit formulas are available. The portfolio optimization problems for the elliptical and MRV models are also studied. Our results further reveal favourable features of DQ, both theoretically and practically, compared to traditional diversification indices based on a single risk measure. Simulation schemes for the Heston model with Poisson conditioning Jaehyuk Choi, Yue Kuen Kwok q-fin.MF Exact simulation schemes under the Heston stochastic volatility model (e.g., Broadie-Kaya and Glasserman-Kim) suffer from computationally expensive Bessel function evaluations. We propose a new exact simulation scheme without the Bessel function, based on the observation that the conditional integrated variance can be simplified when conditioned by the Poisson variate used for simulating the terminal variance. Our approach also enhances low-bias and time discretization schemes, which are suitable for derivatives with frequent monitoring. Extensive numerical tests reveal the good performance of the new simulation schemes in terms of accuracy, efficiency, and reliability when compared with existing methods.
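As a concrete illustration of the Poisson conditioning mentioned in the last abstract: under the CIR variance process of the Heston model, the terminal variance follows a scaled noncentral chi-square law, which can be drawn exactly as a Poisson mixture of central gamma variates. The Python sketch below shows only that textbook sampling step, not the paper's full scheme (which also treats the conditional integrated variance); the function name and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sample_terminal_variance(v0, kappa, theta, sigma, t, rng, n_paths=10_000):
    """Draw V_t | V_0 exactly for the CIR variance process of the Heston model.

    The transition law is a scaled noncentral chi-square, sampled here as a
    Poisson mixture of central gamma variates; the Poisson draw N is the
    conditioning variate referred to in the abstract above.
    """
    c = sigma**2 * (1.0 - np.exp(-kappa * t)) / (4.0 * kappa)  # scale factor
    d = 4.0 * kappa * theta / sigma**2                         # degrees of freedom
    lam = v0 * np.exp(-kappa * t) / c                          # noncentrality parameter
    n = rng.poisson(lam / 2.0, size=n_paths)                   # Poisson conditioning variate
    chi2 = rng.gamma(shape=d / 2.0 + n, scale=2.0)             # central chi-square with d + 2N dof
    return c * chi2, n

rng = np.random.default_rng(0)
v_t, _ = sample_terminal_variance(v0=0.04, kappa=1.5, theta=0.04, sigma=0.3, t=1.0, rng=rng)
print(v_t.mean())  # should be close to theta + (v0 - theta) * exp(-kappa * t) = 0.04
```

According to the abstract, conditioning on this Poisson draw is what simplifies the conditional integrated variance and removes the Bessel-function evaluations of the Broadie-Kaya approach.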
CommonCrawl
Do there exist Graeco-Latin Square style puzzles? Sudoku, KenKen, and Skyscrapers are three examples of Latin square puzzles. In all three, players are challenged to complete a Latin Square, an NxN grid (typically 9x9) in which the numbers 1-N appear in each row and each column exactly once. Different Latin square puzzles are distinguished by the extra restrictions placed on the placement of numbers. For instance, in Sudoku, the grid is subdivided into 9 3x3 grids, each of which must contain all numbers 1-9. Presumably, many (infinitely many?) Latin square puzzles may be constructed by coming up with different extra restrictions on the placement of the numbers. A closely related type of array is a Graeco-Latin square. You can think of it as two Latin squares overlaid on each other, such that no two cells in the resulting grid contain the same pair of numbers. For example, the following is a 3x3 Graeco-Latin square. $$ \begin{matrix} 1,4 && 2,6 && 3,5 \\ 2,5 && 3,4 && 1,6 \\ 3,6 && 1,5 && 2,4 \end{matrix} $$ Do there exist Graeco-Latin square puzzles? That is, are there puzzles whose goal is to construct a Graeco-Latin square, under some external constraint? sudoku latin-square Nathan $\begingroup$ Claim: there are finitely many reasonable Latin square puzzles. Proof: a reasonable puzzle should have a description which doesn't take more than, say, one million characters (else it becomes overly convoluted), and there are only finitely many possibilities for each character. So we can only make finitely many reasonable puzzles. $\endgroup$ – boboquack Apr 27 '18 at 0:07 I do think I've seen this sort of puzzle in some newspaper, namely a simple one where you are given a 5x5 grid and have to fill in a Graeco-Latin square over a set of 5 letters and 5 numbers, given some full or partial cells. A quick google search for graeco-latin sudoku does provide some fruit supporting this assertion. Among the results are a Graeco-Latin Sudoku Magazine, containing 100 different 5x5 sudokus such as the one publicly displayed on their site. Another website, produced by a company called Clarity Media, also contains some Graeco-Latin Sudoku puzzles; this company seems to be a reasonably-sized media outlet, however the website looks outdated/unmaintained, so those puzzles may not be published any more. Hope this helps you find and design your own, or with whatever endeavours you are attempting! NB: Regarding your comments on the small solution space, I would think even with purely the $(5!)^2=14400$ possible permutations of letters and numbers there would be enough to prevent one from simply recognising part of a known configuration. boboquack I've been looking for a while now, and it appears that such puzzles do not exist. However, evidence of absence etc., etc. So, rather than settle for that kind of argument, how about we try to show why such games would be unlikely. The Online Encyclopedia of Integer Sequences is a repository of famous sequences of integers, such as the sequence of Fibonacci numbers or the sequence of primes. Relevant for this question are the entries for the number of Latin squares of size n and the number of Graeco-Latin squares of size n. As you can see, while the number of Latin squares of size n grows regularly and quickly, the sequence for Graeco-Latin squares is fitful and slow. For instance, there are only 2 possible Graeco-Latin squares of size 3, compared to 12 Latin squares of size 3. The difference grows with n: while the number of possible Sudoku grids is massive (roughly on the order of $10^{27}$), there are a measly 8(!) Graeco-Latin squares of size 9. It should be clear why this makes for bad puzzling: once you've played through a few Graeco-Latin square puzzles, you've likely seen most of the possible grids. That is, the possible space of puzzles is rather small. All of this doesn't prove that such puzzles can't exist; it just suggests that such puzzles would be unsatisfying. ffao Well, the thirty-six officers problem was a 6x6 Graeco-Latin square puzzle which had been puzzling mathematicians for more than a hundred years, until Gaston Tarry in 1901 proved it had no solution. It has been featured on Puzzling Stack Exchange as well. Glorfindel There seem to be no sudoku-type puzzles that use a Graeco-Latin square, probably for the reason you found, the rarity of those squares. There are, however, some physical puzzles that use them. One rather clever puzzle is the 36 cube, which was designed by Derrick Niederman and made by ThinkFun. It seems that the puzzle requires you to complete a 6x6 Graeco-Latin square by using tubes which vary in length and colour. The base has rods of varying lengths onto which the tubes must be placed, so you are essentially given one of the Latin squares that form the Graeco-Latin square. As Glorfindel already said, no 6x6 G-L square actually exists, but the puzzle can be solved due to a bit of trickery. Jaap Scherphuis
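To make the orthogonality condition discussed above concrete, here is a minimal Python sketch (the function and variable names are my own, not taken from any of the answers) that checks whether two grids form a Graeco-Latin square; it verifies the 3x3 example from the question, and says nothing about how to build an actual puzzle around such a square.

```python
from itertools import product

def is_latin(square):
    # Every row and every column contains each of the n symbols exactly once.
    n = len(square)
    symbols = set(square[0])
    rows_ok = all(len(row) == n and set(row) == symbols for row in square)
    cols_ok = all(set(col) == symbols for col in zip(*square))
    return len(symbols) == n and rows_ok and cols_ok

def is_graeco_latin(a, b):
    # Both component squares are Latin and every ordered pair (a[i][j], b[i][j]) is distinct.
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i, j in product(range(n), repeat=2)}
    return is_latin(a) and is_latin(b) and len(pairs) == n * n

# The 3x3 example from the question, split into its two component squares.
latin_part = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]
greek_part = [[4, 6, 5], [5, 4, 6], [6, 5, 4]]
print(is_graeco_latin(latin_part, greek_part))  # True
```

Brute-forcing pairs of small Latin squares with a check like this is also one way to see how quickly orthogonal pairs thin out as n grows, which is the point of the OEIS comparison above.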
CommonCrawl
https://akjournals.com/search?f_0=author&q_0=J.+Yang Author or Editor: J. Yang Preconcentration of inorganic mercury and organic mercury with solvent extraction for neutron activation analysis Journal of Radioanalytical and Nuclear Chemistry Authors: J. Lo and J. Yang A preconcentration method combined with a neutron activation technique for the analysis of organic and inorganic mercury in wastewater samples at ppb levels is presented. The inorganic mercury is extracted into CCl4 solution with lead diethyldithiocarbamate reagent and the organic mercury is extracted into C6H6 solution. Interfering activities of sodium and bromine are removed from the irradiated samples by this procedure. Two different solvent extraction procedures are also described in detail. Determination of icariin in bushenqiangshen capsule in high-performance liquid chromatography with fluorescence detection by precolumn chelation with aluminum Acta Chromatographica https://doi.org/10.1556/achrom.24.2012.2.6 Authors: G. J. Yang and L. Lv A simple and sensitive method of high-performance liquid chromatography with fluorescence detection (HPLC-FLD) was developed for the determination of icariin in capsules by precolumn chelation with aluminum. In order to obtain a stable fluorescence signal, the reaction conditions of the fluorescent chelation complex between icariin and aluminum were investigated in detail. Chromatography was carried out on an Agilent Zorbax Extend C18 column (150 mm × 4.6 mm, 5.0 μm) using methanol as the mobile phase at a flow rate of 1.0 mL min−1. The excitation and emission wavelengths were set at 430 and 480 nm, respectively. At optimum conditions, the calibration curve was linear in the concentration range from 0.010 to 100.0 μg mL−1 with a limit of detection of 3.5 ng mL−1 (S/N = 3). The method was comprehensively validated for precision and accuracy. The method described here has been successfully applied for the determination of the icariin content in a capsule with satisfactory results. Positive Solutions of a Nonlinear Neutral Equation with Positive and Negative Coefficients Acta Mathematica Hungarica Authors: S. Cheng, X.-P. Guan, and J. Yang Synergistic extraction of uranium(VI) from nitric acid solution by mixture of DMHMP and DOSO in benzene Journal of Radioanalytical and Nuclear Chemistry Authors: Y. Xia, B. Bao, and J. Yang Solvent extraction of uranyl nitrate in nitric acid medium with the binary system of DMHMP and DOSO has been investigated. It was found that a synergistic effect occurs during the extraction of uranyl nitrate with a benzene solution of DMHMP and DOSO, and the extracted binary species UO2(NO3)2·DMHMP·DOSO has been confirmed. From the data the equilibrium constants have been determined. Quantitative trait loci for morphometric body measurements of the hybrids of silver carp (Hypophthalmichthys molitrix) and bighead carp (H. nobilis) Acta Biologica Hungarica https://doi.org/10.1556/abiol.64.2013.2.4 Authors: J. Wang, G. Yang, and G. Zhou Quantitative trait loci (QTL) for 11 morphometric body measurements of the hybrids of silver (Hypophthalmichthys molitrix) and bighead carp (H. nobilis), including body weight (BW), standard length (SL), body depth (BD), body thickness (BT), head length (HL), head depth (HD), length of ventral keel (LVK), length of pectoral fin (Lpec), length of pelvic fin (Lpel), length of caudal fin (Lcau) and space between pectoral and pelvic fins (SPP), were located on the sex-average microsatellite linkage map constructed using the hybrids of a female bighead and a male silver carp, on which 15 microsatellites were newly mapped. One locus each was found to be responsible for BW, LVK and SPP. As many as 6 loci were found to be responsible for HD. The variances of the remaining traits were partitioned by different numbers of loci varying between 2 and 5. The variance explained by each locus ranged from 9.1% to 23.8% of the total. The variance explained by all loci responsible for each measurement ranged from 17.7% to 75.1%. It was noted that multiple measurements were mapped on the same locus. For example, a region bounded by Hym435 and Hym145 was found to be responsible for all the measurements analyzed. Study of kinetics of thermal decomposition of uranyl nitrate complexes with N-alkylcaprolactams by means of non-isothermal gravimetry Authors: Z. Lu, L. Yang, and J. Sun The kinetics of thermal decomposition of a series of uranyl nitrate complexes with N-alkylcaprolactams (alkyl=C2H5, C4H9, C6H13, C8H17, C10H21 or C12H25) was studied by means of non-isothermal gravimetry under a nitrogen atmosphere. From the TG-DTG curves, the kinetic parameters relating to the loss of two molecules of coordinated ligand were obtained by employing two groups of methods: (I) a group of conventional methods involving the Coats-Redfern, Freeman-Carroll, Horowitz-Metzger, Dharwadkar-Karkhanavala and Doyle (modified by Zsakó) equations; (II) a new method suggested by J. Málek et al. The results obtained using the two types of methods were compared, and it emerged that the results of method II were much more meaningful and reasonable in this work. Additionally, the effects of the molecular structure of the ligands on the kinetic data and models were studied and are discussed. Two novel procedures of preparation for [Tc(CO)2(NO)]2+ labeled by EHIDA and its biodistribution Authors: Y. Yang, J. Zhang, J. Wang, and L. Zhu To develop potential new Tc radiopharmaceuticals, a novel compound [99mTc(CO)2(NO)(EHIDA)]0 (EHIDA: 2,6-diethylphenylcarbamoylmethyliminodiacetic acid) has been prepared by reacting [99mTc(CO)3(EHIDA)]− with NOBF4 both in water and acetonitrile. The conversion of [99mTc(CO)3(EHIDA)]− to [99mTc(CO)2(NO)(EHIDA)]0 was supported by TLC, HPLC and electrophoresis. The radiochemical purity (more than 99%) was proved by TLC and HPLC. The biodistribution in mice demonstrated that [Tc(CO)2(NO)(EHIDA)]0 showed higher uptake in blood, kidney and lung (15 min, blood: 19.24±2.95; kidney: 13.61±3.49; lung: 10.81±1.09) but a lower uptake in liver (15 min, 5.73±0.74). Slower clearances (120 min, blood: 12.75±1.34; kidney: 13.61±3.49) from blood and kidney were also found. This research describes two methods for the conversion of [99mTc(CO)3]+ into [99mTc(CO)2(NO)]2+ by using NOBF4 as the source of NO+ both in organic solvent and water. The latter method offers the possibility to introduce the NO-group in high yield in water. Studies on thermochemical properties of ionic liquids based on transition metal Authors: W. Guan, L. Li, H. Wang, J. Tong, and J. Yang A brown and transparent ionic liquid (IL), [C4mim][FeCl4], was prepared by mixing anhydrous FeCl3 with 1-butyl-3-methylimidazolium chloride ([C4mim][Cl]) in a 1:1 molar ratio under stirring in a glove box filled with dry argon. The molar enthalpies of solution, $\Delta_s H_m$, of [C4mim][FeCl4] in water at various molalities were determined with a solution-reaction isoperibol calorimeter at 298.15 K. Considering the hydrolysis of the [FeCl4]− anion during the dissolution process of the IL, a new method of determining the standard molar enthalpy of solution, $\Delta_s H_m^0$, was put forward on the basis of the Pitzer solution theory of mixed electrolytes. The values of $\Delta_s H_m^0$ and of the sums of Pitzer parameters $(4\beta_{Fe,Cl}^{(0)L} + 4\beta_{C_4mim,Cl}^{(0)L} + \Phi_{Fe,C_4mim}^{L})$ and $(\beta_{Fe,Cl}^{(1)L} + \beta_{C_4mim,Cl}^{(1)L})$ were obtained, respectively. In terms of a thermodynamic cycle and the lattice energy of the IL calculated by Glasser's lattice energy theory of ILs, the dissociation enthalpy of the [FeCl4]− anion, $\Delta H_{dis} \approx 5650$ kJ mol−1, for the reaction [FeCl4]−(g)→Fe3+(g)+4Cl−(g) was estimated. It is shown that the large hydration enthalpies of the ions are compensated by the large dissociation enthalpy of the [FeCl4]− anion, $\Delta_d H_m$, in the dissolution process of the IL. Production of monoenergetic MeV-range neutrons by 3H(p,n)3He reaction Authors: G. Kim, H. Woo, J. Kim, T. Yang, and J. Chang A monochromatic MeV-energy neutron source for secondary reactions was developed utilizing a tritium-embedded titanium (Ti-3H) thin film via the 3H(p,n)3He reaction. We have measured the neutron energies and the energy spread by resonance reactions of 12C(n,tot) and 28Si(n,tot). The available energy was within the range from 0.6 to 2.6 MeV. The energy spread was 1.6% at an energy of 2.077 MeV. The flux in the beam direction was determined to be $3.76\cdot10^{7}$ n/s/sr by irradiating 197Au with about 2 MeV neutrons. This source was shown to be useful for measurements of nuclear data by measuring the total cross sections of neutrons on Fe and comparing these data to the data of ENDF-6. Preparation and biodistribution in mice of 99mTc-DTPA-b-cyanocobalamin Authors: J. Q. Yang, Y. Li, J. Lu, and X. B. Wang Cyanocobalamin (CNCbl), a kind of vitamin B12 (cobalamin, Cbl) which has a special binding capability to rapidly dividing cells and proliferating tissue, especially tumors, has been modified and labeled by 99mTc. The optimal labeling condition was determined, and the biodistribution of 99mTc-DTPA-b-CNCbl both in normal mice and in TA2 mice bearing MA891 mammary tumors was studied. 99mTc-DTPA-b-CNCbl showed low uptake and rapid clearance in nontarget tissues, and renal excretion. About 40% of the uptake at 1 hour remained in the tumor at 12 hours p.i. A satisfactory tumor-to-nontarget (T/NT) ratio was obtained at 6 hours p.i.
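The thermal-decomposition abstract above lists the Coats-Redfern equation among the conventional non-isothermal methods. As a rough illustration only: for a first-order model the method reduces to a straight-line fit of ln[-ln(1-α)/T²] against 1/T, with slope -Ea/R. The Python sketch below uses invented TG data and a single reaction model; the actual paper compares several methods and reaction models, so this is not a reproduction of its analysis.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def coats_redfern_first_order(T, alpha, beta):
    """Estimate Ea and A from non-isothermal TG data, assuming first-order kinetics.

    Fits ln[-ln(1 - alpha) / T^2] against 1/T; the slope is -Ea/R and the
    intercept is approximately ln(A R / (beta Ea)).
    """
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    Ea = -slope * R                        # activation energy, J/mol
    A = beta * Ea / R * np.exp(intercept)  # pre-exponential factor (units follow beta)
    return Ea, A

# Invented data: temperatures in K, conversion fractions, heating rate 10 K/min.
T = np.array([460.0, 480.0, 500.0, 520.0, 540.0])
alpha = np.array([0.08, 0.18, 0.35, 0.58, 0.80])
print(coats_redfern_first_order(T, alpha, beta=10.0 / 60.0))
```

Broadly speaking, the other methods named in the abstract (Freeman-Carroll, Horowitz-Metzger, and so on) differ mainly in how they linearize the same Arrhenius rate law from a single TG curve.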
CommonCrawl
math3ma: On Connectedness, Intuitively. Today's post is a bit of a ramble, but my goal is to uncover the intuition behind one of the definitions of a connected topological space. Ideally, this is just a little tidbit I'd like to stash in The Back Pocket. But as you can tell already, the length of this post isn't so "little"! Oh well, here we go! We say a topological space $X$ is connected if it is not disconnected, i.e. if it cannot be written as $X=A\cup B$ where $A$ and $B$ are disjoint, nonempty open subsets of $X$. This seems intuitive enough, right? We can easily visualize a disconnected space. But there's also an equivalent definition: $X$ is connected if every continuous function $f:X\to\{0,1\}$ is constant, where $\{0,1\}$ has the discrete topology.* At first glance, you might think, How in the world are these equivalent? Although the proof isn't too hard, it may be difficult to see the connection (har har) intuitively. But I claim that this second definition of connectedness is just as intuitive as the first! Hopefully by the end of this post, you'll agree. What's Really Going on Here? To say "there is a continuous function from $X$ to $\{0,1\}$" is the mathematician's way of saying "$X$ and $\{0,1\}$ share the same structure." Indeed, what's characteristic about a disconnected space? It's made up of two pieces! (Loosely speaking.) And what's the simplest example of a set which is made up of two pieces? $\{0,1\}$ of course! (Why? Because, well, it's made up of two pieces.) So if we can find a structure-preserving map $f$ (a continuous function) from $X$ to $\{0,1\}$ which is not constant, then some elements of $X$ are mapped to $0$ while others are mapped to $1$. Hence we can separate the elements of $X$ into two subsets by putting a label, either a $0$ or a $1$, on each element. That's what a function does! If you like, imagine that $f$ paints all the elements that map to $0$ red and all the elements that map to $1$ blue. Then we'll have this scenario: the red points and the blue points split $X$ into two nonempty, disjoint open pieces, and this is precisely what it means for $X$ to be disconnected! On the other hand, if every continuous function assigns to all elements in $X$ a single value** in $\{0,1\}$ (or a single color, either red or blue), then it must be that $X$ itself is intrinsically a single unit, that is, $X$ is connected. This observation is really a consequence of the fact that continuity preserves connectedness. You know the theorem: If $f:X\to Y$ is a continuous map between topological spaces $X$ and $Y$, and if $X$ is connected, then $f(X)$ is connected. In particular, consider the special case when $Y=\{0,1\}$. If we can find a continuous function $f:X\to\{0,1\}$ such that $f(X)\subset\{0,1\}$ is not connected, then by the above we may conclude that $X$ is not connected either. But what does it mean for $f(X)\subset\{0,1\}$ to be "not connected"? It means that $f(X)$ can be written as a union of two nonempty, disjoint, open subsets of $\{0,1\}$. The only* possibility for this is if $f(X)=\{0\}\cup\{1\}$! And this is precisely what it means to say $f:X\to\{0,1\}$ is not constant, which ties back to our illustration above. To put it succinctly, the connectedness (or disconnectedness) of $X$ is reflected in the connectedness (or disconnectedness) of its image, $f(X)$. Some Final Remarks This idea of having a map between two objects which share similar properties is a recurring notion in math. Take group theory, for example. We expect to find an isomorphism between $\mathbb{Z}_2\times\mathbb{Z}_2$ and the Klein four group because they share the same structure. We also expect to find an isomorphism between $S_3$ and $\mathbb{Z}_3\rtimes\mathbb{Z}_2$ because they share the same structure. Similarly we expect there to be an isomorphism between $\mathbb{Z}$ and any infinite cyclic group because, well, they have the same structure! So you see? Continuous functions, just like homomorphisms, are simply tools which allow us to determine when two objects share the same properties. (And by the way, this isn't the first time we've seen an analogy between algebra and topology!) Now this observation may incline you to think, "Aha! So continuous functions between topological spaces are the analogue of isomorphisms between algebraic structures?" And the answer is "Almost." The observation is correct if you replace "continuous functions" by "homeomorphisms." (A homeomorphism is a bijective continuous function whose inverse is also continuous.) Here's the cool thing: the analogy between these two types of maps is no mere coincidence; it's all solidified via the language of category theory! But perhaps we'll save that discussion for another day. * To say "$\{0,1\}$ has the discrete topology" means that the open sets of $\{0,1\}$ are $\varnothing,\{0\},\{1\},$ and $\{0,1\}$. ** This is what it means for $f:X\to\{0,1\}$ to be constant: either all of $X$ gets sent to $0$ or all of $X$ gets sent to $1$.
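Since the post alludes to the proof of the equivalence without writing it down, here is the short argument, sketched in LaTeX. It is the standard textbook argument rather than anything specific to this post, and it uses only the definitions stated above.

```latex
\begin{proof}[Sketch of the equivalence]
If $X = A \cup B$ with $A, B$ open, disjoint and nonempty, define $f : X \to \{0,1\}$
by $f \equiv 0$ on $A$ and $f \equiv 1$ on $B$. Every open subset of $\{0,1\}$
(namely $\varnothing, \{0\}, \{1\}, \{0,1\}$) pulls back to one of
$\varnothing, A, B, X$, all open, so $f$ is continuous and non-constant.

Conversely, if $f : X \to \{0,1\}$ is continuous and non-constant, set
$A = f^{-1}(\{0\})$ and $B = f^{-1}(\{1\})$. These are open as preimages of open
sets, disjoint, nonempty (since $f$ is non-constant), and $A \cup B = X$,
so $X$ is disconnected.
\end{proof}
```

So a non-constant continuous map to $\{0,1\}$ and a separation of $X$ are literally the same data, which is the point of the coloring picture above.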
May 2018, 38(5): 2375-2393. doi: 10.3934/dcds.2018098 Hausdorff dimension of certain sets arising in Engel continued fractions Lulu Fang 1, and Min Wu 2, 1. School of Mathematics, Sun Yat-sen University, Guangzhou, GD 510275, China 2. Department of Mathematics, South China University of Technology, Guangzhou, GD 510640, China * Corresponding author: Lulu Fang Received July 2017 Revised November 2017 Published March 2018 In the present paper, we are concerned with the Hausdorff dimension of certain sets arising in Engel continued fractions. In particular, the Hausdorff dimensions of the sets $\big\{x \in [0,1): b_n(x) \geq \phi(n)~i.m.~n \in \mathbb{N}\big\}\ \ \text{and}\ \ \big\{x \in [0,1): b_n(x) \geq \phi(n),\ \forall n \geq 1\big\}$ are completely determined, where $i.m.$ means infinitely many, $\{b_n(x)\}_{n \geq 1}$ is the sequence of partial quotients of the Engel continued fraction expansion of $x$, and $\phi$ is a positive function defined on the natural numbers. Keywords: Engel continued fractions, growth rate of partial quotients, Hausdorff dimension. Mathematics Subject Classification: Primary: 11K50, 37E05; Secondary: 28A80. Citation: Lulu Fang, Min Wu. Hausdorff dimension of certain sets arising in Engel continued fractions. Discrete & Continuous Dynamical Systems - A, 2018, 38 (5) : 2375-2393. doi: 10.3934/dcds.2018098 Figure 1. RCF-map and ECF-map
Local Measurements of the Mean Interstellar Polarization at High Galactic Latitudes (1802.04305) R. Skalidis, G. V. Panopoulou, K. Tassis, V. Pavlidou, D. Blinov, I. Komis, I. Liodakis Feb. 12, 2018 astro-ph.CO, astro-ph.GA, astro-ph.SR We conduct a small-scale pathfinding survey designed to identify the average polarization properties of the diffuse ISM locally at the lowest dust content regions. We perform deep optopolarimetric surveys within three $\sim 15' \times 15'$ regions located at $b > 48^\circ$, using the RoboPol instrument. The observed samples of stars are photometrically complete to $\sim$16 mag in the R-band. The selected regions exhibit low dust emission at 353 GHz and low total reddening compared to the majority of high-latitude sightlines. We measure the level of systematic uncertainty for all observing epochs and find it to be 0.1\% in fractional linear polarization, $p$. The majority of individual stellar measurements are non-detections. However, our survey strategy enables us to locate the mean fractional linear polarization $p_{mean}$ in each of the three regions. The region with lowest dust content yields $p_{mean}=(0.054 \pm 0.038) \%$, not significantly different from zero. We find significant detections for the remaining two regions of: $p_{mean}=(0.113 \pm 0.036) \%$ and $p_{mean}=(0.208 \pm 0.044) \%$. Using a Bayesian approach we provide upper limits on the intrinsic spread of the small-scale distributions of $q$ and $u$. At the detected $p_{mean}$ levels, the determination of the systematic uncertainty is critical for the reliability of the measurements. We verify the significance of our detections with statistical tests, accounting for all sources of uncertainty. Using publicly available HI emission data, we identify the velocity components that most likely account for the observed $p_{mean}$ and find their morphologies to be misaligned with the orientation of the mean plane-of-sky magnetic field at a spatial resolution of 10$\arcmin$. We find indications that the standard upper envelope of $p$ with reddening underestimates the maximum $p$ at very low E(B-V) ($\leq 0.01$ mag). A closer look at the "characteristic" width of molecular cloud filaments (1611.07532) G. V. Panopoulou, I. Psaradaki, R. Skalidis, K. Tassis, J. J. Andrews Nov. 22, 2016 astro-ph.GA, astro-ph.SR Filaments in Herschel molecular cloud images are found to exhibit a "characteristic width". This finding is in tension with spatial power spectra of the data, which show no indication of this characteristic scale. We demonstrate that this discrepancy is a result of the methodology adopted for measuring filament widths. First, we perform the previously used analysis technique on artificial scale-free data, and obtain a peaked width distribution of filament-like structures. Next, we repeat the analysis on three Herschel maps and reproduce the narrow distribution of widths found in previous studies $-$ when considering the average width of each filament. However, the distribution of widths measured at all points along a filament spine is broader than the distribution of mean filament widths, indicating that the narrow spread (interpreted as a "characteristic" width) results from averaging. Furthermore, the width is found to vary significantly from one end of a filament to the other. Therefore, the previously identified peak at 0.1 pc cannot be understood as representing the typical width of filaments. 
We find an alternative explanation by modelling the observed width distribution as a truncated power-law distribution, sampled with uncertainties. The position of the peak is connected to the lower truncation scale and is likely set by the choice of parameters used in measuring filament widths. We conclude that a "characteristic" width of filaments is not supported by the available data. RoboPol: The optical polarization of gamma-ray--loud and gamma-ray--quiet blazars (1609.00640) E. Angelakis, T. Hovatta, D. Blinov, V. Pavlidou, S. Kiehlmann, I. Myserlis, M. Boettcher, P. Mao, G. V. Panopoulou, I. Liodakis, O. G. King, M. Balokovic, A. Kus, N. Kylafis, A. Mahabal, A. Marecki, E. Paleologou, I. Papadakis, I. Papamastorakis, E. Pazderski, T. J. Pearson, S. Prabhudesai, A. N. Ramaprakash, A. C. S. Readhead, P. Reig, K. Tassis, M. Urry, J. A. Zensus Sept. 1, 2016 astro-ph.CO, astro-ph.HE We present average R-band optopolarimetric data, as well as variability parameters, from the first and second RoboPol observing season. We investigate whether gamma-ray--loud and gamma-ray--quiet blazars exhibit systematic differences in their optical polarization properties. We find that gamma-ray--loud blazars have a systematically higher polarization fraction (0.092) than gamma-ray--quiet blazars (0.031), with the hypothesis of the two samples being drawn from the same distribution of polarization fractions being rejected at the $3\sigma$ level. We have not found any evidence that this discrepancy is related to differences in the redshift distribution, rest-frame R-band luminosity density, or the source classification. The median polarization fraction versus synchrotron-peak-frequency plot shows an envelope implying that high synchrotron-peaked sources have a smaller range of median polarization fractions concentrated around lower values. Our gamma-ray--quiet sources show similar median polarization fractions although they are all low synchrotron-peaked. We also find that the randomness of the polarization angle depends on the synchrotron peak frequency. For high synchrotron-peaked sources it tends to concentrate around preferred directions while for low synchrotron-peaked sources it is more variable and less likely to have a preferred direction. We propose a scenario which mediates efficient particle acceleration in shocks and increases the helical B-field component immediately downstream of the shock. Optical polarization of high-energy BL Lac objects (1608.08440) T. Hovatta, E. Lindfors, D. Blinov, V. Pavlidou, K. Nilsson, S. Kiehlmann, E. Angelakis, V. Fallah Ramazani, I. Liodakis, I. Myserlis, G. V. Panopoulou, T. Pursimo Aug. 30, 2016 astro-ph.CO, astro-ph.HE We investigate the optical polarization properties of high-energy BL Lac objects using data from the RoboPol blazar monitoring program and the Nordic Optical Telescope. We wish to understand if there are differences in the BL Lac objects that are detected with the current-generation TeV instruments compared to those that have not yet been detected. The mean polarization fraction of the TeV-detected BL Lacs is 5% while the non-TeV sources show a higher mean polarization fraction of 7%. This difference in polarization fraction disappears when the dilution by the unpolarized light of the host galaxy is accounted for. The TeV sources show somewhat lower fractional polarization variability amplitudes than the non-TeV sources.
Also the fraction of sources with a smaller spread in the Q/I - U/I -plane and a clumped distribution of points away from the origin, possibly indicating a preferred polarization angle, is larger in the TeV than in the non-TeV sources. These differences between TeV and non-TeV samples seems to arise from differences between intermediate and high spectral peaking sources instead of the TeV detection. When the EVPA variations are studied, the rate of EVPA change is similar in both samples. We detect significant EVPA rotations in both TeV and non-TeV sources, showing that rotations can occur in high spectral peaking BL Lac objects when the monitoring cadence is dense enough. Our simulations show that we cannot exclude a random walk origin for these rotations. These results indicate that there are no intrinsic differences in the polarization properties of the TeV-detected and non-TeV-detected high-energy BL Lac objects. This suggests that the polarization properties are not directly related to the TeV-detection, but instead the TeV loudness is connected to the general flaring activity, redshift, and the synchrotron peak location. (Abridged) RoboPol: Do optical polarization rotations occur in all blazars? (1607.04292) D. Blinov, V. Pavlidou, I. Papadakis, S. Kiehlmann, I. Liodakis, G. V. Panopoulou, T. J. Pearson, E. Angelakis, M. Baloković, T. Hovatta, V. Joshi, O. G. King, A. Kus, N. Kylafis, A. Mahabal, A. Marecki, I. Myserlis, E. Paleologou, I. Papamastorakis, E. Pazderski, S. Prabhudesai, A. Ramaprakash, A. C. S. Readhead, P. Reig, K. Tassis, J. A. Zensus July 14, 2016 astro-ph.HE We present a new set of optical polarization plane rotations in blazars, observed during the third year of operation of RoboPol. The entire set of rotation events discovered during three years of observations is analysed with the aim of determining whether these events are inherent in all blazars. It is found that the frequency of the polarization plane rotations varies widely among blazars. This variation cannot be explained either by a difference in the relativistic boosting or by selection effects caused by a difference in the average fractional polarization. We conclude that the rotations are characteristic of a subset of blazars and that they occur as a consequence of their intrinsic properties. The magnetic field and dust filaments in the Polaris Flare (1607.00005) G. V. Panopoulou, I. Psaradaki, K. Tassis July 10, 2016 astro-ph.GA, astro-ph.SR In diffuse molecular clouds, possible precursors of star-forming clouds, the effect of the magnetic field is unclear. In this work we compare the orientations of filamentary structures in the Polaris Flare, as seen through dust emission by Herschel, to the plane-of-the-sky magnetic field orientation ($\rm B_{pos}$) as revealed by stellar optical polarimetry with RoboPol. Dust structures in this translucent cloud show a strong preference for alignment with $\rm B_{pos}$. 70 % of field orientations are consistent with those of the filaments (within 30$^\circ$). We explore the spatial variation of the relative orientations and find it to be uncorrelated with the dust emission intensity and correlated to the dispersion of polarization angles. Concentrating in the area around the highest column density filament, and in the region with the most uniform field, we infer the $\rm B_{pos}$ strength to be 24 $-$ 120 $\mu$G. 
Assuming that the magnetic field can be decomposed into a turbulent and an ordered component, we find a turbulent-to-ordered ratio of 0.2 $-$ 0.8, implying that the magnetic field is dynamically important, at least in these two areas. We discuss implications on the 3D field properties, as well as on the distance estimate of the cloud. $\rm^{13}CO$ Filaments in the Taurus Molecular Cloud (1408.4809) G. V. Panopoulou, P. F. Goldsmith Feb. 3, 2016 astro-ph.GA, astro-ph.SR We have carried out a search for filamentary structures in the Taurus molecular cloud using $\rm^{13}CO$ line emission data from the FCRAO survey of $\rm \sim100 \, deg^2$. We have used the topological analysis tool, DisPerSe, and post-processed its results to include a more strict definition of filaments that requires an aspect ratio of at least 3:1 and cross section intensity profiles peaked on the spine of the filament. In the velocity-integrated intensity map only 10 of the hundreds of filamentary structures identified by DisPerSe comply with our criteria. Unlike Herschel analyses, which find a characteristic width for filaments of $\rm \sim0.1 \, pc$, we find a much broader distribution of profile widths in our structures, with a peak at 0.4 pc. Furthermore, even if the identified filaments are cylindrical objects, their complicated velocity structure and velocity dispersions imply that they are probably gravitationally unbound. Analysis of velocity channel maps reveals the existence of hundreds of `velocity-coherent' filaments. The distribution of their widths is peaked at lower values (0.2 pc) while the fluctuation of their peak intensities is indicative of stochastic origin. These filaments are suppressed in the integrated intensity map due to the blending of diffuse emission from different velocities. Conversely, integration over velocities can cause filamentary structures to appear. Such apparent filaments can also be traced, using the same methodology, in simple simulated maps consisting of randomly placed cores. They have profile shapes similar to observed filaments and contain most of the simulated cores. RoboPol: optical polarization-plane rotations and flaring activity in blazars (1601.03392) D. Blinov, V. Pavlidou, I. E. Papadakis, T. Hovatta, T. J. Pearson, I. Liodakis, G. V. Panopoulou, E. Angelakis, M. Baloković, H. Das, P. Khodade, S. Kiehlmann, O. G. King, A. Kus, N. Kylafis, A. Mahabal, A. Marecki, D. Modi, I. Myserlis, E. Paleologou, I. Papamastorakis, B. Pazderska, E. Pazderski, C. Rajarshi, A. Ramaprakash, A. C. S. Readhead, P. Reig, K. Tassis, J. A. Zensus Jan. 15, 2016 astro-ph.HE We present measurements of rotations of the optical polarization of blazars during the second year of operation of RoboPol, a monitoring programme of an unbiased sample of gamma-ray bright blazars specially designed for effective detection of such events, and we analyse the large set of rotation events discovered in two years of observation. We investigate patterns of variability in the polarization parameters and total flux density during the rotation events and compare them to the behaviour in a non-rotating state. 
We have searched for possible correlations between average parameters of the polarization-plane rotations and average parameters of polarization, with the following results: (1) there is no statistical association of the rotations with contemporaneous optical flares; (2) the average fractional polarization during the rotations tends to be lower than that in a non-rotating state; (3) the average fractional polarization during rotations is correlated with the rotation rate of the polarization plane in the jet rest frame; (4) it is likely that distributions of amplitudes and durations of the rotations have physical upper bounds, so arbitrarily long rotations are not realised in nature.
Volume 21 Supplement 17 Selected papers from the 3rd International Workshop on Computational Methods for the Immune System Function (CMISF 2019) Validation of a yellow fever vaccine model using data from primary vaccination in children and adults, re-vaccination and dose-response in adults and studies with immunocompromised individuals Carla Rezende Barbosa Bonin1, Guilherme Côrtes Fernandes2, Reinaldo de Menezes Martins3, Luiz Antonio Bastos Camacho4, Andréa Teixeira-Carvalho6, Licia Maria Henrique da Mota5, Sheila Maria Barbosa de Lima3, Ana Carolina Campi-Azevedo6, Olindo Assis Martins-Filho6, Rodrigo Weber dos Santos7, Marcelo Lobosco ORCID: orcid.org/0000-0002-7205-95097 & Collaborative Group for Studies of Yellow Fever Vaccine An effective yellow fever (YF) vaccine has been available since 1937. Nevertheless, questions regarding its use remain poorly understood, such as the ideal dose to confer immunity against the disease, the need for a booster dose, the optimal immunisation schedule for immunocompetent, immunosuppressed, and pediatric populations, among other issues. This work aims to demonstrate that computational tools can be used to simulate different scenarios regarding YF vaccination and the immune response of individuals to this vaccine, thus assisting the response of some of these open questions. This work presents the computational results obtained by a mathematical model of the human immune response to vaccination against YF. Five scenarios were simulated: primovaccination in adults and children, booster dose in adult individuals, vaccination of individuals with autoimmune diseases under immunomodulatory therapy, and the immune response to different vaccine doses. Where data were available, the model was able to quantitatively replicate the levels of antibodies obtained experimentally. In addition, for those scenarios where data were not available, it was possible to qualitatively reproduce the immune response behaviours described in the literature. Our simulations show that the minimum dose to confer immunity against YF is half of the reference dose. The results also suggest that immunological immaturity in children limits the induction and persistence of long-lived plasma cells are related to the antibody decay observed experimentally. Finally, the decay observed in the antibody level after ten years suggests that a booster dose is necessary to keep immunity against YF. At the time this paper was written, a significant global outbreak of COVID-19 was in course. This pandemic clearly illustrates the need for new tools to assist the fast development of vaccines against emerging or unknown diseases. Even vaccines developed decades ago, such as the yellow fever vaccine (YFV), could benefit from new tools. Although YFV is considered safe, there are rare but serious adverse effects that need to be reassessed, such as viscerotropic and neurotropic events [1]. There are also questions regarding the safety of vaccinating specific populations such as the elderly, people living with Human Immunodeficiency Virus (HIV)/AIDS, and other immunocompromised populations. Studies suggest that the immunological immaturity of infants and young children limits the induction/persistence of long-lived plasma cells [2] and, for this reason, a booster dose is needed. The same occurs with the elderly due to immunosenescence.Footnote 1 In the vaccinology field, computer tools have been used to assist the vaccine development process [4,5,6,7,8,9,10,11,12,13]. 
Several computational modelling techniques can be used to achieve this objective [14]. Most of them focus on non-clinical trials. In previous work, we proposed a novel application of computer tools to vaccinology in the clinical development stage [15]. With mathematical and computational models, it is possible to evaluate in silico different scenarios related to vaccination and answer important questions which remain open, such as the minimal dose that confers immunity and immunity duration. The idea of using computer tools during the clinical development stage was then applied to model the immune response to the YFV [1, 16]. Results showed that mathematical models could capture the immune response to the YFV, and in subsequent work the model was validated quantitatively [17] This work presents new numerical experiments showing that our model can reproduce experimental data from scenarios such as booster dose, immune response in individuals under immunomodulatory therapy, and primovaccination in children. We also discuss, in more detail, simulations for primovaccination in adults and dose-response, extending the initial results obtained in a previous work [17]. Another work in the literature also uses an ODE-based approach to model the human immune response to vaccination against both YF and smallpox [18] using distinct data and equations sets, one for each disease. The authors aimed to primarily evaluate the dynamics of CD8+ T cells, while our work evaluates the immune response as a whole. The model proposed here differs from that presented by Le et al. [18], since it considers important populations at each stage of the immune response to YF vaccination, from virus inoculation to antigen presentation and consequent activation of lymphocytes, generation of antibodies, and memory cells. Furthermore, our validated model is a great tool to assist specialists in answering some open questions regarding YFV, which were not taken into account by Le et al. [18]. This section presents the predictions of the mathematical model presented in "Methods" section, comparing them with experimental data from several studies conducted by the Immunobiological Technology Institute (Bio-Manguinhos)/Oswaldo Cruz Foundation (Bio-Manguinhos/FIOCRUZ), René Rachou Research Center/Oswaldo Cruz Foundation (FIOCRUZ/Minas) and University of Brasilia (UnB) on human YFV [3, 19,20,21,22,23,24,25], such as viremia and antibody titers,Footnote 2 for distinct scenarios. For all experimental data, we present antibody interquartile range, lower limit, and upper limit. In order to facilitate comparison with numerical results, we also present the geometric mean of the experimental antibody titers (GMT—Geometric Mean Titers). The first scenario simulates an adult individual being vaccinated for the first time with the full dose of the vaccine against YF. The second scenario represents the revaccination of adult individuals. There are situations in which some individuals' immune response differs from the response usually obtained by vaccination in immunocompetent adults. This is the case of children and individuals with autoimmune diseases, respectively, the third and fourth scenarios. Finally, the fifth scenario evaluates the use of different doses of the vaccine against YF, all below the full dose. Numerical results are presented and compared to experimental data [3, 19,20,21,22,23,24,25]. 
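As a quick point of reference, the GMT of a set of titers is simply the exponential of the mean log-titer; the computation is sketched below. The sample values are invented purely for illustration.

```python
# Geometric mean titer (GMT) of a small set of antibody titers.
# The sample values are invented, only to illustrate the computation.
import numpy as np

titers = np.array([1200.0, 850.0, 2300.0, 640.0, 1500.0])  # e.g. mIU/mL
gmt = np.exp(np.log(titers).mean())                         # equivalent to scipy.stats.gmean
print(f"GMT = {gmt:.1f} mIU/mL")
```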
More specifically, experimental results from primary vaccination in adults, booster dose in adults, primovaccination in children and individuals using immunomodulatory therapy, and dose-response studies are used to qualitatively and quantitatively validate the numerical results obtained by the mathematical model. A quantitative comparison was performed when experimental and numerical results were in the same unit. However, in some scenarios, the results generated by the model and the experimental data are in different units, mIU/mL, and reciprocal dilution, respectively. This is due to the experimental method used. Neutralising antibody levels in serum was measured by the Plaque Reduction Neutralisation Test (PRNT), either in reciprocal dilution or in International Units. If the standard serum for quantification in International Units is available, this unit's values are also obtained. What often occurs is the lack of this serum and, consequently, the lack of values in the mIU/mL unit, which precludes a quantitative comparison. For these cases, graphics are constructed with two Y-axes, each representing a unit. The experimental data in reciprocal dilution will be represented by the Y-axis on the left, while the results obtained by the model simulation, in mIU/mL, will be represented by the Y-axis on the right. This allows for a qualitative assessment of the model's results, by comparing the trends it predicts with those observed experimentally. First scenario: primovaccination in adults The first scenario was used to calibrate the model. In other words, its parameters and initial conditions were chosen to reproduce the experimental results of an individual vaccinated for the first time against YF using the full dose of the vaccine developed by Bio-Manguinhos/Fiocruz (17DD-YFV). After the model was calibrated, most of the parameters and initial conditions values found were kept for the experiments presented in the next sections. After vaccination, the antibody levels of the subjects who participated in the experiment were measured at different times. These samples were grouped in the following way: NV (day 0): Naïve (NV), immediately before vaccination; PV (30–45 days): primo-vaccinated (PV), 30–45 days after vaccination; PV (1–5 years): 1–5 years after vaccination; PV (> 5–9 years): 5–9 years after vaccination; PV (10 years): 10 years or more after vaccination. These groups, in general, will also be used for other studies that will be described in the following sections. Figure 1 shows the comparison between the levels of antibodies obtained experimentally and numerically, after calibration. These are cross-sectional data so that different individuals will be represented in the categories of post-vaccination time described above and the same categories are used to present numerical results. A pattern of marked increase in antibody levels 30–45 days after vaccination and a reduction, which was more pronounced after 1–5 years but was sustained for 10 years after vaccination. First scenario: primovaccination in adults. interquartile range (rectangles), median (black line), lower limit and upper limit (black stems), and geometric means (observed and estimated by model) of antibody titers for YF according to post-vaccination time. "GMT Data" ( ) refers to the geometric mean of the experimental data and "GMT Model" ( ) refers to the geometric mean of the numerical results The model errors were computed for each post-vaccination intervals and the results obtained are shown in Table 1. 
The model errors were small, evidence that the model is likely suitable and has been successfully calibrated. Table 1 Model error for each post-vaccination time interval Figure 2 presents experimental data and numerical results for the entire time simulated, which was 5000 days. Although experimental data for this scenario were used to adjust the model, as one can observe, experimental data are restricted to some days. Due to the total simulation time, it is not possible, in the main graph, to observe the model results and experimental data for the two initial groups, NV and PV (30–45 days). To facilitate the visualisation of the curve in the first days of simulation, a secondary graphic is presented in the same figure, which presents experimental data only for the first 100 days after vaccination, as well as the numerical results. First scenario: primovaccination in adults. Comparison between the antibody curve ( ) generated by the model for 5000 days of simulation and experimental data ( ) obtained from primo-vaccinated individuals in the same period. The zoom in the figure shows in more detail the experimental and numerical results for the first 100 days after vaccination. It is possible to notice in Fig. 2 that, between days 1 and 41, no experimental data were obtained. Thus, it was not possible to make the adjustment or even evaluate the quality of the curve generated by the model in this interval. First scenario: primovaccination in adults. Antibody curve ( ) generated by the model for 10,000 days of simulation and experimental data ( ) obtained from primo-vaccinated individuals in the same period. The red dashed line ( ) presents the protective level. A booster dose is required if the antibody level is below the seropositivity threshold. Figure 3 presents the simulation for 10,000 days after the first vaccination. As one can observe, the curve generated by the model for a single dose suggests that the amount of antibodies is below the protective level about 10,000 days after vaccination, thus indicating the need for administration of a booster dose. Second scenario: booster dose in adults In addition to assessing the immune response to the first dose of the YFV, the Fiocruz research group also collected experimental data from revaccinated (RV) individuals. In this study, the antibody levels of the subjects who participated in the experiment were measured at different times, and these samples were grouped in the following way: PV (> 5–9 years): 5–9 years after first vaccination; PV (10 years): 10 years or more after first vaccination; RV (30–45 days): 30–45 days after booster dose; RV (1–5 years): 1–5 years after booster dose; RV (> 5–9 years): 5–9 years after booster dose; RV (10 years): 10 years or more after booster dose. Data obtained for this scenario were used to validate the model, without changing or adjusting the parameters and initial values found during calibration. For this purpose, the following method was adopted. Initially, a simulation of a primo-vaccinated individual was performed. After simulating the equivalent of 5500 days since the application of the vaccine, the simulation was paused, the current values for all populations of the model were saved and only the value associated with the virus population was modified, from its current value, zero, to the adjusted full vaccine dose. The simulation was then resumed, and 4500 additional days were simulated to reach 10,000 days.
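The pause/re-seed/resume procedure just described maps directly onto an ODE solver: integrate to day 5500, overwrite only the virus compartment with the adjusted full dose, and integrate on to day 10,000. The sketch below shows this pattern; the two-compartment right-hand side is a deliberately simplified stand-in for Eqs. (1)-(12) (a fuller sketch of the model appears at the end of the "Mathematical model" section), and all rates are illustrative.

```python
# Two-stage (primovaccination + booster) simulation pattern.
# toy_rhs is NOT the paper's model: it is a minimal stand-in (virus V, antibodies A)
# that keeps the example runnable end to end.
from scipy.integrate import solve_ivp

def toy_rhs(t, y):
    V, A = y
    dV = -0.5 * V               # viral clearance (toy rate)
    dA = 2.0 * V - 1e-3 * A     # antibody production driven by V, slow natural decay
    return [dV, dA]

V0 = 1.0  # adjusted full vaccine dose (arbitrary units in this toy example)

# Stage 1: primovaccination, day 0 to day 5500.
stage1 = solve_ivp(toy_rhs, (0, 5500), [V0, 0.0], max_step=1.0)
state_5500 = stage1.y[:, -1].copy()

# Booster dose: only the virus compartment is re-seeded with the full dose;
# every other population keeps its day-5500 value.
state_5500[0] = V0

# Stage 2: resume the simulation for 4500 further days, reaching day 10,000.
stage2 = solve_ivp(toy_rhs, (5500, 10000), state_5500, max_step=1.0)
```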
These specific numbers of days after vaccination, 5500 and 10,000, were chosen based on experimental data available. The PV group (10 years), that is, adult individuals 10 years after the first vaccine dose, had samples collected up to 5475 days (15 years) after vaccination. For this reason, the booster dose was simulated in the model 5500 days after the first dose. Likewise, in the RV group (10 years), individuals 10 years after the booster dose, had samples collected up to 3650 days after the second dose, and consequently up to 9125 days after the first dose. For this reason, the simulation was for 10,000 days after application of the first dose. Second scenario: booster dose in adults. Interquartile range (rectangles), median (black line), lower limit and upper limit (black stems) and geometric means (observed and estimated by model) of antibody titers for YF according to post-vaccination time, first dose and booster dose. "GMT Data" ( ) refers to the geometric mean of the experimental data and "GMT Model" ( ) refers to the geometric mean of the numerical results. Experimental data expressed in reciprocal dilution, and numerical ones in PRNT mIU/mL. The blue dashed line ( ) presents the protective level expressed in mIU/mL Figure 4 presents experimental data and numerical results for the booster dose scenario. Units for model predictions ( ) and the experimental data differ, as the latter was only available in terms of reciprocal dilution. Third scenario: primovaccination in children As mentioned in "Background" section, immune response in child is less pronounced than in adults. Some hypotheses explain these differences: immunological immaturity limits the induction and persistence of long-lived plasma cells [27]. Long-lived plasma cells are largely responsible for long-term secretion of antibodies [28]. In this scenario, these two possibilities (limitation of induction and persistence) were evaluated numerically. For this purpose, changes were made only to the values of parameters related to these hypotheses, without any further modification, except for the weight of the individual being simulated and the initial condition for the antibody population. Table 10 shows the weight, and percentage of fluids in the body that was used as a basis for calculating the initial condition of the virus that would be used in simulations of adults and children, as well as the initial amount of antibodies used for each population. The hypothesis that immunological immaturity limits the persistence of long-lived plasma cells was evaluated increasing the natural death rate for this type of cells (represented by parameter \(\delta _{pl}\)). However the simulations showed that changes in this value had no significant effect on the antibodies curve and, for this reason, this result was omitted. The hypothesis that the immunological immaturity limits the induction of long-lived plasma cells was tested reducing the rate of differentiation of B cells into long-lived plasma cells (\(\beta _{pl}\)). There was a noticeable reduction in the lifelong memory by changing only this parameter. The simultaneous alteration of \(\beta _{pl}\) and \(\delta _{pl}\) was also evaluated. Although the change in \(\delta _{pl}\) alone did not produce a significant reduction in antibodies, when combined with changes in \(\beta _{pl}\) value, the results produced the best fit to reproduce experimental data, which are described in this section. A range of values were tested for \(\beta _{pl}\) and \(\delta _{pl}\). 
The simulation using a reduction of approximately 70% of the value associated to the parameter \(\beta _{pl}\) and an increase of 100% of the value associated to the parameter \(\delta _{pl}\) used for adults produced the best fit to reproduce qualitatively the immune response of children. The values of \(\beta _{pl}\) in the model were \(1.68\times 10^{-6}\) and \(5.61\times 10^{-6}\) to simulate children and adults, respectively. The values of \(\delta _{pl}\) in the model were \(2.4\times 10^{-4}\) and \(4.8\times 10^{-4}\) to simulate children and adults, respectively. Figure 5 shows that numerical results were able to reproduce the same behaviour observed in experimental data: an initial rapid increase in the amount of antibodies is followed by a decrease over the course of time. It should also be noted that these are cross-sectional data, so that antibody levels in post-vaccination times are from different children. Third scenario: primovaccination in children. Interquartile range (rectangles), median (black line), lower limit and upper limit (black stems) and geometric means (observed and estimated by the model) of antibody titers for YF after vaccination in children. "GMT Experimental Data" ( ) refers to the geometric mean of children's vaccination data and "GMT Model Children" ( ) to the geometric mean of numerical results after parameters has been adjusted to represent the immune response of children. Experimental data expressed in reciprocal dilution, and numerical ones in PRNT mIU/mL Third scenario: primovaccination in children. Antibody levels generated by the model (for adults and children) and experimental data for adults and children. "GMT Data" ( ) refers to the geometric mean of data (for adults and children) and "GMT Model" ( ) to the geometric mean of numerical results (for adults and children). Experimental data expressed in reciprocal dilution, and numerical ones in PRNT mIU/mL To facilitate the comparison between the immune response of adults and children when vaccinated against YF, experimental data and numerical results for these two groups are shown in Fig. 6. For adults and children, the experimental data in reciprocal dilution (Y-axis on the left) allow direct comparison. Conversely, results from the numerical experiment in mIU/mL (Y-axis on the right), allow comparisons of patterns only. Fourth scenario: immune response in individuals using immunomodulatory therapy A study found in the literature [25] reports differences in the immune response to the YFV in groups of individuals using different types of immunomodulatory therapies. The therapies covered in the study are divided into two main groups, those that use only synthetic DMARDs (disease-modifying antirheumatic drugs) and those that use a combination of biological and synthetic drugs. According to the study [25], DMARDs have the ability to modify or affect the pre-existing protective immunity induced by the vaccine, including the function of memory T and B cells and, as a consequence, the neutralising antibody levels specific to YF. The biggest difference was found when comparing the control group, that is, individuals without any autoimmune disease, with the group using combination therapy. The hypotheses found in the literature to explain how DMARDs affects the pre-existing protective immunity induced by the vaccine [25] were evaluated using the model. Again, changes were made only in the parameters related to the hypotheses, keeping the other values found during calibration. 
Simulations of individuals in two conditions, control and under the use of combination therapy, were carried out. For simulating individuals under use of combination therapy, changes were tested in the values of all parameters of the equation that describes the dynamics of B cells, as well as in their initial conditions. The following alterations were able to reproduce the antibody levels of individuals using combination immunomodulatory therapy: \(50\%\) reduction in \(\alpha _b\) parameter (B cell homeostasis rate); \(25\%\) reduction in parameter \(\beta _{pl}\) (B cell differentiation rate in long-lived plasma cells); \(25\%\) reduction in B cell initial condition. These three alterations, reduction in \(\alpha _b\), reduction in \(\beta _{pl}\), and reduction in B cell initial condition, produced very similar results: all of them reproduced the immune response of an individual with autoimmune disease using combination immunomodulatory therapy. For this reason, only one of the results is presented in this section, the one which reduces the initial condition of B cells by 25%. The reduction percentages were chosen after carrying out several tests with distinct values for the parameters and initial condition of B cells. The values that produced the best adjustments in the levels of antibodies generated by the model to the experimental data obtained for the individuals in use of combination immunomodulatory therapy were chosen and are shown in Table 2. Table 2 Values of parameters αb, βpl and B0 used in the model to simulate control subjects and those using combination immunomodulatory therapy Figure 7 presents experimental and numerical data for control individuals and those using immunomodulatory therapy. In this figure, the numerical results modify only the value associated to the initial condition of B cells (B0). Fourth scenario: immune response in individuals using immunomodulatory therapy. Interquartile range (rectangles), median (black line), lower limit and upper limit (black stems) and geometric means (observed and estimated by the model) of YF antibody titers according to post-vaccination time, for control individuals and those under use of combination immunomodulatory therapy. "GMT Control Data" ( ) refers to the geometric mean of experimental data obtained from control individuals and "GMT Control Model" ( ) to the geometric mean of numerical results obtained when using the original parameters values. "GMT Immunomodulatory Data" ( ) refers to the geometric mean of experimental data obtained from individuals using the therapy and "GMT Immunomodulatory Model" ( ) to the geometric mean of the numerical results obtained when using the adjusted parameters. Experimental data expressed in reciprocal dilution, and numerical ones in PRNT mIU/mL Fifth scenario: dose-response The literature [3] reported that doses from 27,476 IU to 587 IU of the YFV induced seroconversion rates and similar GMT in the participants of the experiment. Based on that study, we simulated the immune response after the administration of different doses of the YFV. This was done changing the values used as the initial virus condition in the model to be the same described in the literature [3]. These values adopted as initial condition were computed considering the dilution of the vaccine in the body, as well as the conversion of the units, as presented in "Experimental data" section. The values of all other parameters were kept the same. 
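To make the dose-to-initial-condition step concrete, the sketch below dilutes each dose of the dose-response study in an assumed body-fluid volume. The body weight and fluid fraction are illustrative assumptions only; the values actually used are given in Table 10, which is not reproduced in this excerpt.

```python
# Illustrative conversion of vaccine doses (IU) into an initial virus condition V(0)
# by dilution in the body fluids. Weight and fluid fraction are assumptions here,
# not the values from Table 10.
doses_iu = [31, 158, 587, 3013, 10447, 27476]

weight_kg = 70.0                    # assumed adult body weight
fluid_fraction = 0.6                # assumed fraction of body weight that is fluid
fluid_volume_ml = weight_kg * fluid_fraction * 1000.0

for dose in doses_iu:
    v0 = dose / fluid_volume_ml     # V(0) as a concentration in IU/mL
    print(f"{dose:>6} IU -> V(0) = {v0:.5f} IU/mL")
```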
Mean antibody titers 30–45 days after vaccination generated by model simulation approximated the actual data in the dose-response study, which also used International Units (Fig. 8). The data showed that antibody levels increased with vaccine doses up to 587 IU, above which no further increase in antibody levels was achieved. Fifth scenario: dose-response. Interquartile range (rectangles), median (red line), lower limit and upper limit (black stems) and GMT (observed and estimated by model) for different doses of vaccine (31 IU, 158 IU, 587 IU, 3,013 IU, 10,447 IU and 27,476 IU) within 30–45 days after vaccination. "GMT Data" ( ) refers to the geometric mean of the experimental data and "GMT Model" ( ) refers to the geometric mean of the numerical results, both obtained for each dose. Fifth scenario: dose-response. Simulation of antibody level curves until 1000 days after vaccination, considering distinct vaccine doses (31 IU, 158 IU, 587 IU, 3,013 IU, 10,447 IU and 27,476 IU). Antibody levels generated by the model (Fig. 9) showed a pattern of marked increase with a peak within 20 days of vaccination, somewhat earlier and much lower for vaccine doses 31 IU and 158 IU. According to the model, vaccine doses 587 IU and above induced and sustained similar antibody levels for 1000 days. The main graphic presents the antibody level curves until 1000 days after vaccination. To better observe the curve behaviour in the first days after vaccination, a secondary graphic on the upper right presents the same result for the first 100 days after vaccination. Fifth scenario: dose-response. Simulation of viremia curves considering distinct vaccine doses (31 IU, 158 IU, 587 IU, 3,013 IU, 10,447 IU and 27,476 IU). Figure 10 presents the numerical results for viremia curves, considering distinct vaccine doses. The main graphic does not allow a detailed observation of the curves for the smaller doses and, for this reason, a secondary graphic on the upper right presents a zoom in this figure, allowing one to observe that viremia for some of the smaller doses is not equal to zero. The immune response to vaccination was successfully modelled in several of its relevant components presented in different scenarios. In general, the immune response described by the model provided a reasonable approximation of empirical data, showing that it was built on sound mathematical relations of the key parameters. The first scenario, primovaccination in adults, was used to adjust model parameters and initial conditions. As one could expect, the observed error presented in Table 1 was very low, below 3%. For the fifth scenario, the dose-response experiment, the errors were below 2.5% for all doses except the lowest, whose error was about 13%, as Table 3 reveals. This result showed that the model, which was adjusted using only data from individuals vaccinated with the full dose, was able to satisfactorily reproduce the immune response obtained with vaccination using doses lower than the full one. For all other simulated scenarios, it was not possible to make a similar quantitative analysis either because data were not available, or because units were different. Experimental data available for antibody levels use reciprocal dilution as unit, while the model uses mIU/mL, and one unit cannot be converted into the other one with available data. Thus, it is not possible to say, for example, that an increase of 50% in the level of antibodies expressed in reciprocal dilution means an increase of the same 50% expressed in mIU/mL.
Table 3 Model error for each dose between 30 and 45 days post-vaccination A similar pattern in experimental data and model outputs was observed in Fig. 4, in "Second scenario: booster dose in adults" section. Reduced antibody levels in individuals vaccinated 5–9 and 10 years before were followed by a pronounced rise after a booster dose and a marked reduction after 1–5 years. Antibody levels 10 years after revaccination were almost as low as those before the booster dose. Despite the difference in the units adopted, it was possible to notice in Fig. 5, in "Third scenario: primovaccination in children" section, that the numerical results were able to reproduce the same behaviour observed in experimental data: an initial rapid increase in the amount of antibodies is followed by a decrease over time. As presented in Fig. 6, also in "Third scenario: primovaccination in children" section, experimental data showed that there is an evident difference in the levels of antibodies produced by adults and children. It should be noted that the model was adjusted using data from adults expressed in mIU/mL, and therefore it is not the same unit used in experimental data, which are expressed in reciprocal dilution. Still, in a qualitative way, the numerical results were able to capture this behaviour: a lower level of antibodies in children than in adults. Three reductions in constant/initial condition values (\(\alpha _b\), \(\beta _{pl}\), and B cell initial condition) numerically evaluated in this work could explain the immune response in individuals using immunomodulatory therapy. These results could change if other aspects of the way DMARDs work in the body, and their mechanisms of interaction with each type of cell, were also considered. Some mechanisms involve the production and/or inhibition of cytokines that were not yet considered in the model. The model was able to reproduce distinct scenarios related to the immune response to vaccination against YF. For this reason, we decided to use it to obtain some clues about the questions that remain unanswered or poorly understood about the vaccine. The first clue is that, among all evaluated doses, the lowest dose capable of conferring immunity against YF is half of the minimum recommended by the WHO, as the numerical experiments in "Fifth scenario: dose-response" section show. The results presented in Fig. 9 show that the antibody curve is almost the same for all doses above 587 IU; these results are similar to those presented in the literature [3]. The second clue is related to the hypothesis that immunological immaturity in children limits the induction and persistence of long-lived plasma cells. Numerical results confirmed that both are responsible for the differences observed in experimental results of adults and children, and that the persistence of long-lived plasma cells had no significant effect on the antibody curves alone. The third clue is related to the need for a booster dose. A single dose apparently (as suggested by the results in Fig. 3) did not provide long-lasting protection. The decay rate in the antibody level suggests that the booster dose is needed to maintain protection. In fact, about 10,000 days after vaccination, the level of antibodies in an adult is below the protective level if a single dose is given. If a booster dose is given, the protection level is improved, as depicted in Fig. 4.
Moreover, the single dose is usually given to infants or children, which induces a lower amount of long-lived plasma cells than in adults and reinforces the need for booster doses throughout life. Some considerations and limitations of the model used in this study should be highlighted. The model was adjusted to reflect the geometric mean of the experiments. In this sense, conclusions reflect the typical immune response of the average individual described in Tables 10 and 11. Some individuals with distinct characteristics, such as the immunological immaturity of children or a compromised immune system due to some disease, should, for example, receive a booster dose of the YFV in a shorter period of time. Furthermore, the extrapolation done to predict the antibody level after 10,000 days may suffer from the classical overfitting problem, where the model can replicate the data it is adjusted to but fails on any attempt at extrapolation or forecasting. Finally, other aspects that may influence the minimal dose to confer immunity against YF were not taken into account, such as problems with virus die-off during transport. This work presented the quantitative and qualitative validation of a mathematical-computational model to represent the immune response to the YFV using five distinct scenarios. The first one simulates the immune response to the administration of the full dose of the 17DD-YFV for the first time. The second one simulates the immune response to distinct doses of vaccine. The third scenario simulates the administration of a booster dose ten years after the first dose. The fourth simulates the vaccination in individuals under immunomodulatory therapy. Finally, the last one simulates the primary vaccination in children. The numerical results were collected and compared to experimental data. Some results could be compared directly, and errors below 10% were observed. For other results that could not be compared directly, because distinct units were used, it was observed that the numerical results obtained by the computational model satisfactorily reproduced the behaviour observed in experimental data. The numerical experiments show that among all vaccine doses evaluated, the lowest one capable of conferring immunity against YF is about half of the reference dose, 587 IU. The results also suggest that the hypothesis that the immunological immaturity in children limits long-lived plasma cells' persistence is not related to the antibody decay observed experimentally. The numerical experiments show that this phenomenon is due to the lower induction of long-lived plasma cells. Finally, the antibody level's decay within the ten years following vaccination suggests that a booster dose is necessary to keep immunity against YF. Although the model presented in this work focuses on the YFV, it could be used to gain new insights into the immune response to vaccine candidates, such as those for COVID-19. We also plan, as future work, to refine the model to guide future empirical studies: (1) to determine the optimal number of doses to ensure protection against YF; (2) to determine the duration of immunity with two vaccine doses in infants; (3) to determine the interval between these two doses given to infants to maximise the duration of immunity, and (4) to conduct a dose-response study in infants. Mathematical model The model used in this work consists of a system of ordinary differential equations (ODEs), which were originally proposed in a previous work [1, 17] and are reproduced here.
These equations represent the main populations involved in the immune response to the vaccination, as well as the virus itself. They are the yellow fever vaccine virus, APCs (antigen-presenting cells), CD4\(+\) T cells, CD8\(+\) T cells, short and long-lived plasma cells, B cells, memory B cells, and antibodies. The initial conditions and acronyms of these populations, as well as the parameters and their meanings, are presented in Tables 11 and 12, respectively, in the "Appendix". Equation (1) represents the vaccine virus (V): $$\begin{aligned} \frac{d}{dt}V= \pi _v V - \frac{c_{v1} V}{c_{v2} + V} - k_{v1} V A - k_{v2} V T_{ke}. \end{aligned}$$ The virus cannot proliferate by itself. It needs to infect a cell and use it as a factory for new viruses. This mechanism is implicitly considered in the term \(\pi _v V\), which represents the multiplication of the virus in the body, with a production rate \(\pi _v\). The term \(\frac{c_{v1} V}{c_{v2} + V}\) denotes a non-specific viral clearance made by the innate immune system; this functional form introduces a saturation effect [29]. The term \(k_{v1} V A\) denotes specific viral clearance due to antibody signalling, where \(k_{v1}\) is the clearance rate. The term \(k_{v2} V T_{ke}\) denotes specific viral clearance due to the induction of apoptosis of cells infected by the YF virus, where \(k_{v2}\) is the clearance rate. APCs are all cells that display antigen complexes on their surfaces, such as dendritic cells and macrophages. Two stages of APCs were considered: immature and mature. The first stage, immature APCs (\(A_{p}\)), is described by Eq. (2): $$\begin{aligned} \frac{d}{dt}A_{p}= \alpha _{ap} (A_{p0} - A_p) - \beta _{ap} A_p \frac{c_{ap1} V}{c_{ap2} + V}. \end{aligned}$$ The term \(\alpha _{ap} (A_{p0} - A_p)\) describes the homeostasis of APCs, where \(\alpha _{ap}\) is the homeostasis rate. The term \(\beta _{ap} A_p \frac{c_{ap1} V}{c_{ap2} + V}\) denotes the conversion of immature APCs into mature ones; the same term therefore appears in Eq. (3) with a positive sign. Equation (3) represents the mature APCs (\(A_{pm}\)): $$\begin{aligned} \frac{d}{dt}A_{pm}= \beta _{ap} A_p \frac{c_{ap1} V}{c_{ap2} + V} - \delta _{apm}A_{pm}. \end{aligned}$$ The first term, as just explained, denotes the dynamics of APC maturation. The second term, \(\delta _{apm}A_{pm}\), represents the natural decay of the mature APCs, where \(\delta _{apm}\) is the decay rate. Equation (4) represents the population of naïve CD4+ T cells (\(T_{hn}\)): $$\begin{aligned} \frac{d}{dt}T_{hn}= \alpha _{th} (T_{hn0} - T_{hn}) - \beta _{th} A_{pm} T_{hn}. \end{aligned}$$ The term \(\alpha _{th} (T_{hn0} - T_{hn})\) represents the homeostasis of CD4+ T cells, where \(\alpha _{th}\) is the homeostasis rate. The term \(\beta _{th} A_{pm} T_{hn}\) denotes the activation of naïve CD4+ T cells, where \(\beta _{th}\) is the activation rate. Equation (5) represents the effector CD4+ T cell population (\(T_{he}\)): $$\begin{aligned} \frac{d}{dt}T_{he}= \beta _{th} A_{pm} T_{hn} + \pi _{th} A_{pm} T_{he} - \delta _{th} T_{he}. \end{aligned}$$ The term \(\pi _{th} A_{pm} T_{he}\) represents the proliferation of effector CD4+ T cells. The term \(\delta _{th} T_{he}\) represents the natural death of these cells, with \(\delta _{th}\) representing the decay rate. The mechanisms used to represent CD4+ T cells were also used to model the dynamics of CD8+ T cells.
Equations (6) and (7) represent the populations of naïve (\(T_{kn}\)) and effector (\(T_{ke}\)) CD8+ T cells: $$\begin{aligned} \frac{d}{dt}T_{kn}= \alpha _{tk} (T_{kn0} - T_{kn}) - \beta _{tk} A_{pm} T_{kn}, \text { and} \end{aligned}$$ $$\begin{aligned} \frac{d}{dt}T_{ke}= \beta _{tk} A_{pm} T_{kn} + \pi _{tk} A_{pm} T_{ke} - \delta _{tk} T_{ke}. \end{aligned}$$ The term \(\alpha _{tk} (T_{kn0} - T_{kn})\) represents the homeostasis of CD8+ T cells, where \(\alpha _{tk}\) is the homeostasis rate. The term \(\beta _{tk} A_{pm} T_{kn}\) denotes the activation of naïve CD8+ T cells, where \(\beta _{tk}\) is the activation rate. The term \(\pi _{tk} A_{pm} T_{ke}\) represents the proliferation of effector CD8+ T cells, where \(\pi _{tk}\) is the proliferation rate. The term \(\delta _{tk} T_{ke}\) represents the natural death of these cells, with \(\delta _{tk}\) representing the decay rate. Equation (8) represents B cells (B), both naïve and effector ones; these populations were not considered separately in order to simplify the model. $$\begin{aligned} \begin{aligned} \frac{d}{dt}B&= \alpha _b (B_0 - B) + \pi _{b1} V B + \pi _{b2} T_{he} B- \beta _{ps} A_{pm} B \\&\quad - \beta _{pl} T_{he} B - \beta _{bm} T_{he} B. \end{aligned} \end{aligned}$$ The term \(\alpha _b (B_0 - B)\) represents B cell homeostasis, where \(\alpha _b\) is the homeostasis rate. The terms \(\pi _{b1} V B\) and \(\pi _{b2} T_{he} B\) represent the proliferation of B cells activated by the T-cell independent and T-cell dependent mechanisms, respectively. The terms \(\beta _{ps} A_{pm} B\), \(\beta _{pl} T_{he} B\) and \(\beta _{bm} T_{he} B\) denote the differentiation of active B cells into short-lived plasma cells, long-lived plasma cells and memory B cells, respectively. The differentiation rates are respectively given by \(\beta _{ps}\), \(\beta _{pl}\) and \(\beta _{bm}\). Equation (9) represents the short-lived plasma cells (\(P_s\)): $$\begin{aligned} \frac{d}{dt}P_s= \beta _{ps} A_{pm} B - \delta _{ps} P_s. \end{aligned}$$ The term \(\delta _{ps} P_s\) denotes the natural decay of short-lived plasma cells, where \(\delta _{ps}\) is the decay rate. Equation (10) represents the long-lived plasma cells (\(P_l\)): $$\begin{aligned} \frac{d}{dt}P_l= \beta _{pl} T_{he} B - \delta _{pl} P_l + \gamma _{bm} B_m. \end{aligned}$$ The term \(\delta _{pl} P_l\) denotes the natural decay of long-lived plasma cells, with \(\delta _{pl}\) representing the decay rate. The term \(\gamma _{bm} B_m\) represents the resupply of these cells by memory B cells, where \(\gamma _{bm}\) is the production rate. Equation (11) corresponds to memory B cells (\(B_m\)): $$\begin{aligned} \frac{d}{dt}B_m= \beta _{bm} T_{he} B + \pi _{bm1} B_m\left( 1 - \frac{B_m}{\pi _{bm2}}\right) - \gamma _{bm} B_m. \end{aligned}$$ The term \(\pi _{bm1} B_m\left( 1 - \frac{B_m}{\pi _{bm2}}\right)\) represents the logistic growth of memory B cells, i.e., there is a limit to this growth; \(\pi _{bm1}\) represents the growth rate, and \(\pi _{bm2}\) limits the growth. Finally, Eq. (12) represents the antibodies (A): $$\begin{aligned} \frac{d}{dt}A= \pi _{ps} P_s + \pi _{pl}P_l - \delta _a A. \end{aligned}$$ The terms \(\pi _{ps} P_s\) and \(\pi _{pl}P_l\) represent the production of antibodies by the short-lived and long-lived plasma cells, respectively. The production rates are given by \(\pi _{ps}\) and \(\pi _{pl}\), respectively. The term \(\delta _a A\) denotes the natural decay of antibodies, where \(\delta _a\) is the decay rate.
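To make the structure of Eqs. (1)-(12) concrete, the sketch below collects the right-hand sides of the system in a single Python function, in the form expected by an ODE integrator. This is an illustration added here rather than the authors' code: the parameter names mirror the symbols used in the equations, and the dictionary `p` is assumed to hold the fitted values listed in Table 12.

```python
def rhs(y, t, p):
    """Right-hand side of the model, Eqs. (1)-(12).
    State order: V, Ap, Apm, Thn, The, Tkn, Tke, B, Ps, Pl, Bm, A."""
    V, Ap, Apm, Thn, The, Tkn, Tke, B, Ps, Pl, Bm, A = y

    dV   = p['pi_v']*V - p['c_v1']*V/(p['c_v2'] + V) - p['k_v1']*V*A - p['k_v2']*V*Tke    # Eq. (1)
    dAp  = p['alpha_ap']*(p['Ap0'] - Ap) - p['beta_ap']*Ap*p['c_ap1']*V/(p['c_ap2'] + V)  # Eq. (2)
    dApm = p['beta_ap']*Ap*p['c_ap1']*V/(p['c_ap2'] + V) - p['delta_apm']*Apm             # Eq. (3)
    dThn = p['alpha_th']*(p['Thn0'] - Thn) - p['beta_th']*Apm*Thn                         # Eq. (4)
    dThe = p['beta_th']*Apm*Thn + p['pi_th']*Apm*The - p['delta_th']*The                  # Eq. (5)
    dTkn = p['alpha_tk']*(p['Tkn0'] - Tkn) - p['beta_tk']*Apm*Tkn                         # Eq. (6)
    dTke = p['beta_tk']*Apm*Tkn + p['pi_tk']*Apm*Tke - p['delta_tk']*Tke                  # Eq. (7)
    dB   = (p['alpha_b']*(p['B0'] - B) + p['pi_b1']*V*B + p['pi_b2']*The*B
            - p['beta_ps']*Apm*B - p['beta_pl']*The*B - p['beta_bm']*The*B)               # Eq. (8)
    dPs  = p['beta_ps']*Apm*B - p['delta_ps']*Ps                                          # Eq. (9)
    dPl  = p['beta_pl']*The*B - p['delta_pl']*Pl + p['gamma_bm']*Bm                       # Eq. (10)
    dBm  = p['beta_bm']*The*B + p['pi_bm1']*Bm*(1 - Bm/p['pi_bm2']) - p['gamma_bm']*Bm    # Eq. (11)
    dA   = p['pi_ps']*Ps + p['pi_pl']*Pl - p['delta_a']*A                                 # Eq. (12)

    return [dV, dAp, dApm, dThn, dThe, dTkn, dTke, dB, dPs, dPl, dBm, dA]
```

The state vector defined here is then integrated forward in time by the solver described in the "Computational model" section.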
Computational model The model was implemented in the Python programming language. Numerical solution of the system of ODEs was performed by the odeint function, a member of the integrate package in the scipy library [30]. This function uses the characteristics of the ODE system to select the numerical method, with adaptivity in both the timestep and the convergence order. The experiments were performed with Python version 3.7.5 in the Spyder integrated development environment (IDE). The execution environment was composed of an Intel Core i5 1.6 GHz processor with 8 GB of RAM, running macOS Mojave version 10.14.6. Experimental data The first set of experimental data used was the one that presents markers of the immunological response to the vaccine against YF in adults who were primed and revaccinated. Tables 4 and 5 present a summary of the data used for the primed individuals [31], and Table 6 for revaccinated individuals [24]. The antibody data presented in the tables represent the geometric mean of the antibody titers (GMT - Geometric Mean Titers) of all individuals in each group. Table 4 Single dose adults—antibodies (log10 mIU/mL)—by time interval Table 5 Single dose adults—antibodies (reciprocal dilution)—by time interval Table 6 Revaccinated—antibodies (reciprocal dilution)—by time interval Tables 7 and 8 present a summary of the data used on vaccination against YF in children [19, 20] and individuals using immunomodulatory therapy [25], respectively. Table 9 summarises data on antibody levels from the study evaluating dose versus response [3, 22, 23]. Table 7 Children—antibodies (reciprocal dilution)—by time interval Table 8 Use of immunomodulatory therapy—antibodies (reciprocal dilution)—by time interval Table 9 Dose response—antibodies (log10 mIU/mL)—by time interval It is possible to observe in these tables a difference in the unit of the antibody titers (mIU/mL and reciprocal dilution). The test that is normally performed for the quantification of antibodies, PRNT, generates results in reciprocal dilution. When, at the time of testing, the standard serum for quantification in International Units was available, the value in this unit was also obtained. What often occurred was the lack of this serum and consequently the lack of values in mIU/mL. Thus, in some experiments, the levels of antibodies were expressed in mIU/mL while in others they were expressed in reciprocal dilution. The unit adopted by the model for the concentration of antibodies is mIU/mL, and data in that unit were used for a quantitative validation of the model. However, data expressed in reciprocal dilution were also used in the validation of the model, but in a qualitative way. Experimental data versus numerical results One of the main changes made to our previous work [1], after access to the experimental data, was to adjust the units used in the model. The amount of vaccine virus used as the model's initial condition was one of these changes. Previously [1], the value 27,476 IU was used, which is the average amount of virus present in the full dose, that is, in 0.5 mL. Variations in the amount of virus across vaccine lots were not considered. But now it is considered that, from the moment the vaccine is injected into an individual, it is diluted in the volume of fluids that the individual has in the body, around 65% of the body weight (Table 10 shows the weight and percentage of fluids in the body used for adults and children).
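Building on the right-hand-side sketch above, the fragment below shows how the system might be integrated with scipy.integrate.odeint, the LSODA-based routine mentioned in the text, which adapts both the step size and the method order. All numerical values here are placeholders: the real initial conditions and parameters are those of Tables 11 and 12, and the 70 kg body weight is only an assumed example for the dilution of the dose in the body fluids.

```python
import numpy as np
from scipy.integrate import odeint

# Placeholder parameter dictionary: every rate set to a dummy value so the call runs.
# The fitted values actually used by the authors are those of Table 12.
p = {k: 1.0e-4 for k in (
    'pi_v c_v1 c_v2 k_v1 k_v2 alpha_ap Ap0 beta_ap c_ap1 c_ap2 delta_apm '
    'alpha_th Thn0 beta_th pi_th delta_th alpha_tk Tkn0 beta_tk pi_tk delta_tk '
    'alpha_b B0 pi_b1 pi_b2 beta_ps beta_pl beta_bm delta_ps delta_pl gamma_bm '
    'pi_bm1 pi_bm2 pi_ps pi_pl delta_a').split()}

# Hypothetical body-fluid volume for a 70 kg adult (65% of body weight, ~1000 mL per kg).
fluid_mL = 0.65 * 70.0 * 1000.0

# Placeholder initial conditions in the state order used by rhs():
# V, Ap, Apm, Thn, The, Tkn, Tke, B, Ps, Pl, Bm, A.  Real values are in Table 11.
y0 = [27476.0 / fluid_mL,   # full dose of 27,476 IU diluted in the body fluids
      1.0e6, 0.0, 1.0e6, 0.0, 5.0e5, 0.0, 2.5e5, 0.0, 0.0, 0.0, 150.0]

t_days = np.linspace(0.0, 10000.0, 10001)   # roughly 27 years post-vaccination

sol = odeint(rhs, y0, t_days, args=(p,))    # rhs() is the function sketched above
antibody_mIU_per_mL = sol[:, -1]            # last state variable is A, Eq. (12)
```

With the fitted parameters in place of the dummy values, the last column of `sol` is the simulated antibody curve that is compared with the experimental GMTs below.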
In addition, in this paper, a comparison between experimental data and the viremia curve generated by the model is made. The unit used in the experimental viremia data is copies/mL, that is, the number of viral particles per millilitre. To compare numerical and experimental data, both results must be expressed in the same unit. It is then necessary to convert from IU/dose (27,476 IU in 0.5 mL of the dose) to IU/mL of liquid in the body. After that, the value found has to be converted to PFU/mL using the relationship 1 IU = 1.91 PFU [3]. To convert from PFU/mL to copies/mL, a relationship found in the literature [32], presented in Eq. (13), was used: $$\begin{aligned} \log _{10} \text {PFU/mL} = [0.974 \log _{10} \text {copies/mL}] - 2.807. \end{aligned}$$ With respect to the other populations, except for antibodies, the values used in our previous work [1] were the number of cells found in 1 \(\upmu\)L, and for this reason they were multiplied by \(10^3\) to be converted to mL. These changes in initial conditions forced us to also readjust the model parameters. Table 10 Values used for simulating adults and children Finally, it was observed in the experimental data that, even before adults were vaccinated, some of them already had antibodies against YF. There are some hypotheses to justify the presence of antibodies prior to vaccination; one of them is cross-protection caused by contact with other flaviviruses, such as the dengue virus [33]. But perhaps the most likely is a previous unrecorded vaccination. Due to this observation, the initial condition of the model that represents the antibody concentration in adults was set to a value similar to that observed experimentally, around 150 mIU/mL. For children, this value was defined as zero. It is necessary to clarify how comparisons between experimental and numerical data were done. For all scenarios (first vaccination in adults, booster dose in adults, primovaccination in children and individuals using immunomodulatory therapy, and dose-response), regardless of the units, data from several individuals exist at different post-vaccination time intervals. For the same day there may be one or more individuals, and the values found may vary due to differences in immune responses that can be caused by numerous factors such as medication use, genetic inheritance, habits and many others. Experimental data had already been grouped by post-vaccination time interval, and this division was kept. For each group there are individuals spread over the entire time-interval range, which makes it difficult to identify trends in the data sets. For this reason, it was decided to present the data in a box diagram format (boxplot), thus facilitating a summary view of the data for each of the intervals. To each boxplot was added the geometric mean of the antibody titers (GMT), defined as the nth root of the product of n terms and calculated using the formula presented in Eq. (14): $$\begin{aligned} \left( \prod _{i = 1}^{N} x_i\right) ^{\frac{1}{N}} = \sqrt[N]{x_1 x_2 \ldots x_N}. \end{aligned}$$ To compare experimental and numerical data, not only was the geometric mean of the antibody titers computed for the experimental data, but it was also necessary to compute it for the numerical results. This was done as follows. For the same days where experimental antibody titers exist, the numerical results were estimated. Then we computed the GMT of the numerical values found for these days.
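The two auxiliary calculations described above, the IU to copies/mL conversion of Eq. (13) and the geometric mean titer of Eq. (14), can be written compactly as follows. This is an illustrative sketch added here; the body weight and the example titers are made-up numbers, not values from the study.

```python
import numpy as np

def iu_dose_to_copies_per_mL(dose_iu=27476.0, body_kg=70.0, fluid_frac=0.65):
    """Convert a vaccine dose in IU to copies/mL of body fluid via Eq. (13)."""
    fluid_mL = fluid_frac * body_kg * 1000.0        # assume 1 kg of fluid ~ 1000 mL
    iu_per_mL = dose_iu / fluid_mL                  # dose diluted in the body fluids
    pfu_per_mL = 1.91 * iu_per_mL                   # 1 IU = 1.91 PFU [3]
    # Eq. (13): log10(PFU/mL) = 0.974*log10(copies/mL) - 2.807, solved for copies/mL
    log10_copies = (np.log10(pfu_per_mL) + 2.807) / 0.974
    return 10.0 ** log10_copies

def gmt(titers):
    """Geometric mean titer, Eq. (14): the N-th root of the product of N titers."""
    titers = np.asarray(titers, dtype=float)
    return np.exp(np.mean(np.log(titers)))          # equivalent to prod()**(1/N), but stabler

print(iu_dose_to_copies_per_mL())                   # viraemia-equivalent of the full dose
print(gmt([820.0, 640.0, 1280.0, 905.0, 760.0]))    # GMT of five made-up titers
```

The same `gmt()` computation, applied to the model output on the days where experimental titers exist, gives the numerical GMT used in the comparison described next.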
For example, suppose that a given group (30–45 days) has five individuals with antibody titers obtained on days 32, 35, 41, 43, and 45. The geometric mean was computed using the levels of antibodies estimated by the model in those same days. Model parameters and initial conditions used in simulations are included in this published article. Code to solve the mathematical model can be made available upon request to the authors. State of unregulated immune function in the elderly, which contributes to increased susceptibility to infections, cancer and autoimmunity, and reduced vaccine response. Other controversial issues are: (a) the need for a booster dose also for immunocompetent adults, and (b) the lower vaccine dose that is as immunogenic and safe as the current formulation [3]. Antibody titer indicates the level of antibodies in a blood sample, defined as the largest dilution of the blood sample with a dilution agent in which an assay, such as ELISA, still produces a positive result [26]. Plaque-Forming Unit is a measure of the number of particles, such as viral particles, capable of forming plaques (visible structures formed within a cell culture) per unit of volume. YF: YFV: HIV: Human Immunodeficiency Virus GMT: Geometric mean titers PRNT: Plaque Reduction Neutralisation Test NV: Naïve volunteers PV: Primary vaccinated volunteers RV: Revaccinated volunteers DMARDs: Disease-modifying antirheumatic drugs ODEs: APCs: Antigen-presenting cells IDE: Integrated development environment Bonin CR, Fernandes GC, dos Santos RW, Lobosco M. A qualitatively validated mathematical-computational model of the immune response to the yellow fever vaccine. BMC Immunol. 2018;19(1):15. Siegrist C-A. Vaccine immunology. In: Plotkin's vaccines. Amsterdam: Elsevier; 2018. p. 16–34. Martins RM, Maia MdLS, Farias RHG, Camacho LAB, Freire MS, Galler R, Yamamura AMY, Almeida LFC, Lima SMB, Nogueira RMR, et al. 17dd yellow fever vaccine: a double blind, randomized clinical trial of immunogenicity and safety on a dose-response study. Hum Vaccines Immunother. 2013;9(4):879–88. Kumar N, Hendriks BS, Janes KA, de Graaf D, Lauffenburger DA. Applying computational modeling to drug discovery and development. Drug Discov Today. 2006;11(17):806–11. https://doi.org/10.1016/j.drudis.2006.07.010. Groot ASD, Bosma A, Chinai N, Frost J, Jesdale BM, Gonzalez MA, Martin W, Saint-Aubin C. From genome to vaccine: in silico predictions, ex vivo verification. Vaccine. 2001;19(31):4385–95. https://doi.org/10.1016/S0264-410X(01)00145-1. Parvizpour S, Pourseif MM, Razmara J, Rafi MA, Omidi Y. Epitope-based vaccine design: a comprehensive overview of bioinformatics approaches. Drug Discov Today. 2020;. https://doi.org/10.1016/j.drudis.2020.03.006. Tanwer P, Kolora SRR, Babbar A, Saluja D, Chaudhry U. Identification of potential therapeutic targets in neisseria gonorrhoeae by an in-silico approach. J Theor Biol. 2020;490:110172. https://doi.org/10.1016/j.jtbi.2020.110172. Pennisi M, Russo G, Sgroi G, Bonaccorso A, Parasiliti PG, Fichera E, Mitra D, Walker K, Cardona P-J, Amat M, Viceconti M, Pappalardo F. Predicting the artificial immunity induced by ruti® vaccine against tuberculosis using universal immune system simulator (UISS). BMC Bioinform. 2019;20:1–10. https://doi.org/10.1186/s12859-019-3045-5. Ibrahim EH, Taha R, Ghramh HA, Kilany M. Development of Rift Valley fever (RVF) vaccine by genetic joining of the RVF-glycoprotein Gn with the strong adjuvant subunit B of cholera toxin (CTB) and expression in bacterial system. 
Saudi J Biol Sci. 2019;26(7):1676–81. https://doi.org/10.1016/j.sjbs.2018.08.019. Pritam M, Singh G, Swaroop S, Singh AK, Pandey B, Singh SP. A cutting-edge immunoinformatics approach for design of multi-epitope oral vaccine against dreadful human malaria. Int J Biol Macromol. 2020;. https://doi.org/10.1016/j.ijbiomac.2020.04.191. De Groot AS, Moise L, Terry F, Gutierrez AH, Hindocha P, Richard G, Hoft DF, Ross TM, Noe AR, Takahashi Y, Kotraiah V, Silk SE, Nielsen CM, Minassian AM, Ashfield R, Ardito M, Draper SJ, Martin WD. Better epitope discovery, precision immune engineering, and accelerated vaccine design using immunoinformatics tools. Front Immunol. 2020;11:442. https://doi.org/10.3389/fimmu.2020.00442. Bhattacharya M, Sharma AR, Sharma G, Patra P, Mondal N, Patra BC, Lee S-S, Chakraborty C. Computer aided novel antigenic epitopes selection from the outer membrane protein sequences of aeromonas hydrophila and its analyses. Infect Genet Evol. 2020;82:104320. https://doi.org/10.1016/j.meegid.2020.104320. Khan MAA, Ami JQ, Faisal K, Chowdhury R, Ghosh P, Hossain F, Abd El Wahed A, Mondal D. An immunoinformatic approach driven by experimental proteomics: in silico design of a subunit candidate vaccine targeting secretory proteins of Leishmania donovani amastigotes. Parasit Vectors. 2020;13(1):196. Pappalardo F, Flower D, Russo G, Pennisi M, Motta S. Computational modelling approaches to vaccinology. Pharmacol Res. 2015;92:40–5. Bonin CRB, Fernandes GC, dos Santos RW, Lobosco M. Mathematical modeling based on ordinary differential equations: a promising approach to vaccinology. Hum Vaccines Immunother. 2017;13(2):484–9. Bonin CB, Fernandes GC, dos Santos RW, Lobosco M. A simplified mathematical-computational model of the immune response to the yellow fever vaccine. In: 2017 IEEE international conference on bioinformatics and biomedicine (BIBM). Los Alamitos: IEEE Computer Society; 2017. p. 1425–32. https://doi.org/10.1109/BIBM.2017.8217872. Bonin CRB, Fernandes GC, Menezes RM, Camacho LAB, da Mota LMH, de Lima SMB, Campi-Azevedo AC, Martins-Filho OA, dos Santos RW, Lobosco M. Quantitative validation of a yellow fever vaccine model. In: 2019 IEEE international conference on bioinformatics and biomedicine (BIBM); 2019. p. 2113–20. Le D, Miller JD, Ganusov VV. Mathematical modeling provides kinetic details of the human immune response to vaccination. Front Cell Infect Microbiol. 2015;. https://doi.org/10.3389/fcimb.2014.00177. Luiza-Silva M, Campi-Azevedo AC, Batista MA, Martins MA, Avelar RS, da Silveira Lemos D, Bastos Camacho LA, de Menezes Martins R, de Lourdes de Sousa Maia M, Guedes Farias RH, et al. Cytokine signatures of innate and adaptive immunity in 17dd yellow fever vaccinated children and its association with the level of neutralizing antibody. J Infect Dis. 2011;204(6):873–83. Campi-Azevedo AC, de Araújo-Porto LP, Luiza-Silva M, Batista MA, Martins MA, Sathler-Avelar R, da Silveira-Lemos D, Camacho LAB, de Menezes Martins R, de Sousa Maia MdL, et al. 17dd and 17d-213/77 yellow fever substrains trigger a balanced cytokine profile in primary vaccinated children. PloS One. 2012;7(12):49828. Campi-Azevedo AC, Costa-Pereira C, Antonelli LR, Fonseca CT, Teixeira-Carvalho A, Villela-Rezende G, Santos RA, Batista MA, Campos FM, Pacheco-Porto L, et al. Booster dose after 10 years is recommended following 17dd-yf primary vaccination. Hum Vaccines Immunother. 2016;12(2):491–502. 
Martins RdM, Maia MdLS, Lima SMBd, Noronha TGd, Xavier JR, Camacho LAB, Albuquerque EMd, Farias RHG, Castro TdMd, Homma A, et al. Duration of post-vaccination immunity to yellow fever in volunteers eight years after a dose-response study. Vaccine. 2018;36(28):4112–7. Campi-Azevedo AC, de Almeida Estevam P, Coelho-dos-Reis JG, Peruhype-Magalhães V, Villela-Rezende G, Quaresma PF, Maia MdLS, Farias RHG, Camacho, L.A.B., da Silva Freire M, et al. Subdoses of 17dd yellow fever vaccine elicit equivalent virological/immunological kinetics timeline. BMC Infect Dis. 2014;14(1):391. Camacho LAB, Collaborative group for studies on yellow fever vaccines, et al. Duration of immunity in recipients of two doses of 17dd yellow fever vaccine. Vaccine. 2019;37:5129–35. Ferreira CdC, Campi-Azevedo AC, Peruhype-Magalhāes V, Coelho-dos-Reis JG, Antonelli LRdV, Torres K, Freire LC, Costa-Rocha IAd, Oliveira ACV, Maia MdLdS, et al. Impact of synthetic and biological immunomodulatory therapy on the duration of 17dd yellow fever vaccine-induced immunity in rheumatoid arthritis. Arthritis Res Ther. 2019;21(1):75. Simões M, Camacho LAB, Yamamura AM, Miranda EH, Cajaraville ACR, da Silva Freire M. Evaluation of accuracy and reliability of the plaque reduction neutralization test (micro-prnt) in detection of yellow fever virus antibodies. Biologicals. 2012;40(6):399–404. Edwards SPWOPOKM. Plotkin's Vac. 7th ed. New York: Elsevier; 2018. p. 1720. Paul WE. Fundamental immunology. 7th ed. Philadelphia: Wolters Kluwer Health; 2012. Goutelle S, Maurin M, Rougier F, Barbaut X, Bourguignon L, Ducher M, Maire P. The Hill equation: a review of its capabilities in pharmacological modelling. Fundam Clin Pharmacol. 2008;22(6):633–48. Odeint. Odeint's homepage; 2020. Accessed on Oct 2020. http://docs.scipy.org Camacho LAB, Freire MdS, Leal MdLF, Aguiar SGd, Nascimento JPd, Iguchi T, Lozana JdA, Farias RHG. Immunogenicity of who-17d and brazilian 17dd yellow fever vaccines: a randomized trial. Rev. saude publica. 2004;38:671–8. Fernandes-Monteiro AG, Trindade GF, Yamamura AM, Moreira OC, de Paula VS, Duarte ACM, Britto C, Lima SMB. New approaches for the standardization and validation of a real-time qpcr assay using taqman probes for quantification of yellow fever virus on clinical samples with high quality parameters. Hum Vaccines Immunother. 2015;11(7):1865–71. Slon Campos JL, Mongkolsapaya J, Screaton GR. The immune response against flaviviruses. Nat Immunol. 2018;19(11):1189–98. https://doi.org/10.1038/s41590-018-0210-3. About this supplement This article has been published as part of BMC Bioinformatics Volume 21 Supplement 17 2020: Selected papers from the 3rd International Workshop on Computational Methods for the Immune System Function (CMISF 2019). The full contents of the supplement are available at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-21-supplement-17 The authors would like to thank the infrastructure provided by Universidade Federal de Juiz de Fora (UFJF) to perform this study. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Brasil - Finance Code 001 (scholarship), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) - Brasil (scholarship) and Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) - Brasil (equipments). UFJF has paid the publication costs. 
Institute of Education, Science and Technology of Southeast of Minas Gerais - Cataguases Advanced Campus, Chácara Granjaria, s/n - Granjaria, 36773-563, Cataguases, Brazil Carla Rezende Barbosa Bonin Medical School, Presidente Antônio Carlos University, Juiz de Fora, Brazil Guilherme Côrtes Fernandes Bio-Manguinhos Oswaldo Cruz Foundation (FIOCRUZ), Rio de Janeiro, Brazil Reinaldo de Menezes Martins & Sheila Maria Barbosa de Lima Sergio Arouca National School of Public Health (ENSP), Oswaldo Cruz Foundation (FIOCRUZ), Rio de Janeiro, Brazil Luiz Antonio Bastos Camacho Rheumatology Department, University of Brasilia (UnB), Brasilia, Brazil Licia Maria Henrique da Mota René Rachou Research Center, Oswaldo Cruz Foundation (FIOCRUZ)/Minas, Belo Horizonte, Brazil Andréa Teixeira-Carvalho, Ana Carolina Campi-Azevedo & Olindo Assis Martins-Filho Graduate Program in Computational Modeling, Federal University of Juiz de Fora (UFJF), Juiz de Fora, Brazil Rodrigo Weber dos Santos & Marcelo Lobosco Conception and design of the mathematical model: CRBB, RWS and ML. Computational implementation of the mathematical model: CRBB. CRBB has also been involved in drafting the manuscript. RMM, LABC, ATC, LMHM, SMBL, ACCA and OAMF were responsible for the experimental data and their interpretation. GCF participated in writing, providing perspective from the immunological standpoint. CRBB, LABC, ACCA, OAMF, RWS and ML revised the final manuscript. All authors have read and approved the final manuscript. Correspondence to Carla Rezende Barbosa Bonin. The collection of data for this study was undertaken under a larger project that was approved by the local ethics committee (CEP IPEC/FIOCRUZ № 052/2008, CEP INI/FIOCRUZ № 1.976.767, and CEP ENSP/FIOCRUZ № 508.650). All patients filled in an institutional form consenting to participate in the studies from which data were taken. All patient data were anonymised before evaluation. Researchers and collaborators include employees of several units of the Oswaldo Cruz Foundation (FIOCRUZ, linked to the Brazilian Ministry of Health), including Bio-Manguinhos, which is responsible for the production of the YFV used in Brazil. Initial condition and parameters tables See Tables 11 and 12. Table 11 Model variables and their initial values Table 12 Model parameters Bonin, C.R.B., Fernandes, G.C., de Menezes Martins, R. et al. Validation of a yellow fever vaccine model using data from primary vaccination in children and adults, re-vaccination and dose-response in adults and studies with immunocompromised individuals. BMC Bioinformatics 21, 551 (2020). https://doi.org/10.1186/s12859-020-03845-3
3.5: The Metric (Part 2) Contributed by Benjamin Crowell, Professor (Physics) at Fullerton College Isometry, Inner Products, and the Erlangen Program Einstein's Carousel Non-Euclidean Geometry Observed in the Rotating Frame Ehrenfest's Paradox The Metric in the Rotating Frame The Spatial Metric and Synchronization of Clocks Impossibility of Rigid Rotation, even with External Forces In Euclidean geometry, the dot product of vectors a and b is given by \[g_{xx}a_xb_x + g_{yy}a_yb_y + g_{zz}a_zb_z = a_xb_x + a_yb_y + a_zb_z,\] and in the special case where a = b we have the squared magnitude. In the tensor notation, \[a^\mu b_\mu = a^1b_1 + a^2b_2 + a^3b_3.\] Like magnitudes, dot products are invariant under rotations. This is because knowing the dot product of vectors a and b entails knowing the value of \[\textbf{a} \cdot \textbf{b} = |\textbf{a}||\textbf{b}| \cos \theta_{\textbf{ab}},\] and Euclid's E4 (equality of right angles) implies that the angle \(\theta_{\textbf{ab}}\) is invariant. The same axioms also entail invariance of dot products under translation; Euclid waits only until the second proposition of the Elements to prove that line segments can be copied from one location to another. This seeming triviality is actually false as a description of physical space, because it amounts to a statement that space has the same properties everywhere. The set of all transformations that can be built out of successive translations, rotations, and reflections is called the group of isometries. It can also be defined as the group that preserves dot products, or the group that preserves congruence of triangles. In mathematics, a group is defined as a binary operation that has an identity, inverses, and associativity. For example, addition of integers is a group. In the present context, the members of the group are not numbers but the transformations applied to the Euclidean plane. The group operation on transformations \(T_1\) and \(T_2\) consists of finding the transformation that results from doing one and then the other, i.e., composition of functions. In Lorentzian geometry, we usually avoid the Euclidean term dot product and refer to the corresponding operation by the more general term inner product. In a specific coordinate system we have \[a^{\mu}b_{\mu} = a^0b_0 - a^1b_1 - a^2b_2 - a^3b_3.\] The inner product is invariant under Lorentz boosts, and also under the Euclidean isometries. The group found by making all possible combinations of continuous transformations from these two sets is called the Poincaré group. The Poincaré group is not the symmetry group of all of spacetime, since curved spacetime has different properties in different locations. The equivalence principle tells us, however, that space can be approximated locally as being flat, so the Poincaré group is locally valid, just as the Euclidean isometries are locally valid as a description of geometry on the Earth's curved surface. CPT Symmetry The discontinuous transformations of spatial reflection and time reversal are not included in the definition of the Poincaré group, although they do preserve inner products.
General relativity has symmetry under spatial reflection (called \(P\) for parity), time reversal (\(T\)), and charge inversion (\(C\)), but the standard model of particle physics is only invariant under the composition of all three, CPT, not under any of these symmetries individually. Example 16: The triangle inequality In Euclidean geometry, the triangle inequality |b + c| < |b| + |c| follows from $$(|\textbf{b}| + |\textbf{c}|)^{2} - (\textbf{b} + \textbf{c}) \cdot (\textbf{b} + \textbf{c}) = 2 (|\textbf{b}||\textbf{c}| - \textbf{b} \cdot \textbf{c}) \geq 0 \ldotp$$ The reason this quantity always comes out positive is that for two vectors of fixed magnitude, the greatest dot product is always achieved in the case where they lie along the same direction. In Lorentzian geometry, the situation is different. Let b and c be timelike vectors, so that they represent possible world-lines. Then the relation a = b + c suggests the existence of two observers who take two different paths from one event to another. A goes by a direct route while B takes a detour. The magnitude of each timelike vector represents the time elapsed on a clock carried by the observer moving along that vector. The triangle inequality is now reversed, becoming |b + c| > |b| + |c|. The difference from the Euclidean case arises because inner products are no longer necessarily maximized if vectors are in the same direction. E.g., for two lightlike vectors, \(b^{i}c_{i}\) vanishes entirely if b and c are parallel. For timelike vectors, parallelism actually minimizes the inner product rather than maximizing it. Let b and c be parallel and timelike, and directed forward in time. Adopt a frame of reference in which every spatial component of each vector vanishes. This entails no loss of generality, since inner products are invariant under such a transformation. Since the time-ordering is also preserved under transformations in the Poincaré group, each is still directed forward in time, not backward. Now let b and c be pulled away from parallelism, like opening a pair of scissors in the x − t plane. For fixed magnitudes, this increases \(b^{t}c_{t}\), while causing \(b^{x}c_{x}\) to become negative. Both effects increase the inner product. \(\square\) In his 1872 inaugural address at the University of Erlangen, Felix Klein used the idea of groups of transformations to lay out a general classification scheme, known as the Erlangen program, for all the different types of geometry. Each geometry is described by the group of transformations, called the principal group, that preserves the truth of geometrical statements. Euclidean geometry's principal group consists of the isometries combined with arbitrary changes of scale, since there is nothing in Euclid's axioms that singles out a particular distance as a unit of measurement. In other words, the principal group consists of the transformations that preserve similarity, not just those that preserve congruence. Affine geometry's principal group is the transformations that preserve parallelism; it includes shear transformations, and there is therefore no invariant notion of angular measure or congruence. Unlike Euclidean and affine geometry, elliptic geometry does not have scale invariance. This is because there is a particular unit of distance that has special status; as we saw in example 4, a being living in an elliptic plane can determine, by entirely intrinsic methods, a distance scale R, which we can interpret in the hemispherical model as the radius of the sphere. General relativity breaks this symmetry even more severely.
Not only is there a scale associated with curvature, but the scale is different from one point in space to another. The following example was historically important, because Einstein used it to convince himself that general relativity should be described by non-Euclidean geometry.8 Its interpretation is also fairly subtle, and the early relativists had some trouble with it. The example is described in Einstein's paper "The Foundation of the General Theory of Relativity." An excerpt, which includes the example, is given in Appendix A. Suppose that observer A is on a spinning carousel while observer B stands on the ground. B says that A is accelerating, but by the equivalence principle A can say that she is at rest in a gravitational field, while B is free-falling out from under her. B measures the radius and circumference of the carousel, and finds that their ratio is 2\(\pi\). A carries out similar measurements, but when she puts her meter-stick in the azimuthal direction it becomes Lorentz-contracted by the factor \(\gamma = (1−\omega^{2} r^{2})^{−1/2}\), so she finds that the ratio is greater than 2\(\pi\). In A's coordinates, the spatial geometry is non-Euclidean, and the metric differs from the Euclidean one found in example 8. Figure \(\PageIndex{4}\): Observer A, rotating with the carousel, measures an azimuthal distance with a ruler. Observer A feels a force that B considers to be fictitious, but that, by the equivalence principle, A can say is a perfectly real gravitational force. According to A, an observer like B is free-falling away from the center of the disk under the influence of this gravitational field. A also observes that the spatial geometry of the carousel is non-Euclidean. Therefore it seems reasonable to conjecture that gravity can be described by non-Euclidean geometry, rather than as a physical force in the Newtonian sense. At this point, you know as much about this example as Einstein did in 1912, when he began using it as the seed from which general relativity sprouted, collaborating with his old schoolmate, mathematician Marcel Grossmann, who knew about differential geometry. The remainder of this subsection, which you may want to skip on a first reading, goes into more detail on the interpretation and mathematical description of the rotating frame of reference. Even more detailed treatments are given by Grøn9 and Dieks.10 Ehrenfest11 described the following paradox. Suppose that observer B, in the lab frame, measures the radius of the disk to be r when the disk is at rest, and r' when the disk is spinning. B can also measure the corresponding circumferences C and C'. Because B is in an inertial frame, the spatial geometry does not appear non-Euclidean according to measurements carried out with his meter sticks, and therefore the Euclidean relations C = 2\(\pi\)r and C' = 2\(\pi\)r' both hold. The radial lines are perpendicular to their own motion, and they therefore have no length contraction, r = r', implying C = C'. The outer edge of the disk, however, is everywhere tangent to its own direction of motion, so it is Lorentz contracted, and therefore C' < C. The resolution of the paradox is that it rests on the incorrect assumption that a rigid disk can be made to rotate. If a perfectly rigid disk was initially not rotating, one would have to distort it in order to set it into rotation, because once it was rotating its outer edge would no longer have a length equal to 2\(\pi\) times its radius. Therefore if the disk is perfectly rigid, it can never be rotated. 
As discussed earlier, relativity does not allow the existence of infinitely rigid or infinitely strong materials. If it did, then one could violate causality. If a perfectly rigid disk existed, vibrations in the disk would propagate at infinite velocity, so tapping the disk with a hammer in one place would result in the transmission of information at v > c to other parts of the disk, and then there would exist frames of reference in which the information was received before it was transmitted. The same applies if the hammer tap is used to impart rotational motion to the disk. Figure \(\PageIndex{5}\): Einstein and Ehrenfest. Exercise \(\PageIndex{1}\) Self-check: What if we build the disk by assembling the building materials so that they are already rotating properly before they are joined together? What if we try to get around these problems by applying torque uniformly all over the disk, so that the rotation starts smoothly and simultaneously everywhere? We then run into issues identical to the ones raised by Bell's spaceship paradox. In fact, Ehrenfest's paradox is nothing more than Bell's paradox wrapped around into a circle. The same question of time synchronization comes up. To spell this out mathematically, let's find the metric according to observer A by applying the change of coordinates \(\theta' = \theta − \omega t\). First we take the Euclidean metric of example 8 and rewrite it as a (globally) Lorentzian metric in spacetime for observer B, $$ds^{2} = dt^{2} - dr^{2} - r^{2} d \theta^{2} \ldotp \label{1} $$ Applying the transformation into A's coordinates, we find $$ds^{2} = (1 - \omega^{2} r^{2}) dt^{2} - dr^{2} - r^{2} d \theta'^{2} - 2 \omega r^{2} d \theta' dt \ldotp \label{2} $$ Recognizing ωr as the velocity of one frame relative to another, and \((1−\omega^{2} r^{2})^{−1/2}\) as \(\gamma\), we see that we do have a relativistic time dilation effect in the \(dt^2\) term. But the \(dr^2\) and d\(\theta'^{2}\) terms look Euclidean. Why don't we see any Lorentz contraction of the length scale in the azimuthal direction? The answer is that coordinates in general relativity are arbitrary, and just because we can write down a certain set of coordinates, that doesn't mean they have any special physical interpretation. The coordinates \((t, r, \theta')\) do not correspond physically to the quantities that A would measure with clocks and meter-sticks. The tip-off is the d\(\theta'\) dt cross-term. Suppose that A sends two cars driving around the circumference of the carousel, one clockwise and one counterclockwise, from the same point. If (t, r, \(\theta'\)) coordinates corresponded to clock and meter-stick measurements, then we would expect that when the cars met up again on the far side of the disk, their dashboards would show equal values of the arc length r\(\theta'\) on their odometers and equal proper times ds on their clocks. But this is not the case, because the sign of the d\(\theta'\) dt term is opposite for the two world-lines. The same effect occurs if we send beams of light in both directions around the disk, and this is the Sagnac effect. This is a symptom of the fact that the coordinate t is not properly synchronized between different places on the disk. We already know that we should not expect to be able to find a universal time coordinate that will match up with every clock, regardless of the clock's state of motion. Suppose we set ourselves a more modest goal. 
Can we find a universal time coordinate that will match up with every clock, provided that the clock is at rest relative to the rotating disk? A trick for improving the situation is to eliminate the \(d \theta'\, dt\) cross-term by completing the square in the metric (Equation \ref{2}). The result is $$ds^{2} = (1 - \omega^{2} r^{2}) \left[ dt + \frac{\omega r^{2}}{1 - \omega^{2} r^{2}} d \theta' \right]^{2} - dr^{2} - \frac{r^{2}}{1 - \omega^{2} r^{2}} d \theta'^{2} \ldotp$$ The interpretation of the quantity in square brackets is as follows. Suppose that two observers situate themselves on the edge of the disk, separated by an infinitesimal angle d\(\theta'\). They then synchronize their clocks by exchanging light pulses. The time of flight, measured in the lab frame, for each light pulse is the solution of the equation \(ds^{2} = 0\), and the only difference between the clockwise result \(dt_{1}\) and the counterclockwise one \(dt_{2}\) arises from the sign of d\(\theta'\). The quantity in square brackets is the same in both cases, so the amount by which the clocks must be adjusted is \[dt = \frac{(dt_{2} − dt_{1})}{2},\] $$dt = \frac{\omega r^{2}}{1 - \omega^{2} r^{2}} d \theta' \ldotp$$ Substituting this into the metric, we are left with the purely spatial metric $$ds^{2} = - dr^{2} - \frac{r^{2}}{1 - \omega^{2} r^{2}} d \theta'^{2} \ldotp \label{3}$$ The factor of \((1 − \omega^{2} r^{2})^{−1} = \gamma^{2}\) in the d\(\theta'^{2}\) term is simply the expected Lorentz-contraction factor. In other words, the circumference is, as expected, greater than 2\(\pi\)r by a factor of \(\gamma\). Does the metric (Equation \ref{3}) represent the same non-Euclidean spatial geometry that \(A\), rotating with the disk, would determine by meterstick measurements? Yes and no. It can be interpreted as the one that A would determine by radar measurements. That is, if A measures a round-trip travel time dt for a light signal between points separated by coordinate distances dr and d\(\theta'\), then A can say that the spatial separation is \(\frac{dt}{2}\), and such measurements will be described correctly by Equation \ref{3}. Physical meter-sticks, however, present some problems. Meter-sticks rotating with the disk are subject to Coriolis and centrifugal forces, and this problem can't be avoided simply by making the meter-sticks infinitely rigid, because infinitely rigid objects are forbidden by relativity. In fact, these forces will inevitably be strong enough to destroy any meter stick that is brought out to r = \(\frac{1}{\omega}\), where the speed of the disk becomes equal to the speed of light. It might appear that we could now define a global coordinate $$T = t + \frac{\omega r^{2}}{1 - \omega^{2} r^{2}} \theta',$$ interpreted as a time coordinate that was synchronized in a consistent way for all points on the disk. The trouble with this interpretation becomes evident when we imagine driving a car around the circumference of the disk, at a speed slow enough so that there is negligible time dilation of the car's dashboard clock relative to the clocks tied to the disk. Once the car gets back to its original position, \(\theta'\) has increased by \(2\pi\), so it is no longer possible for the car's clock to be synchronized with the clocks tied to the disk. We conclude that it is not possible to synchronize clocks in a rotating frame of reference; if we try to do it, we will inevitably have to have a discontinuity somewhere.
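The clock-adjustment formula quoted above can be checked symbolically by solving \(ds^{2} = 0\) (with dr = 0) in the metric of Equation \ref{2} for a pulse sent in each direction; half the difference of the two travel times reproduces \(dt = \frac{\omega r^{2}}{1 - \omega^{2} r^{2}} d \theta'\). The following sketch, added here as an illustration rather than part of the original text, carries out that check:

```python
import sympy as sp

r, omega, dthp = sp.symbols('r omega dthetap', positive=True)
dt = sp.Symbol('dt')

def travel_time(direction):
    """Positive root of ds^2 = 0 (Eq. 2, dr = 0) for a pulse moving in the +/- theta' direction."""
    ds2 = (1 - omega**2*r**2)*dt**2 - r**2*(direction*dthp)**2 \
          - 2*omega*r**2*(direction*dthp)*dt
    roots = sp.solve(sp.Eq(ds2, 0), dt)
    # keep the root that is positive for a sample sub-luminal rim speed, omega*r = 1/2
    return next(s for s in roots
                if s.subs({r: 1, omega: sp.Rational(1, 2), dthp: 1}) > 0)

dt_ccw = travel_time(+1)   # pulse sent in the +theta' direction
dt_cw = travel_time(-1)    # pulse sent in the -theta' direction

print(sp.simplify((dt_ccw - dt_cw) / 2))
# simplifies to omega*dthetap*r**2/(1 - omega**2*r**2), the adjustment quoted above
```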
This problem is present even locally, as demonstrated by the possibility of measuring the Sagnac effect with apparatus that is small compared to the disk. The only reason we were able to get away with time synchronization in order to establish the metric in Equation \ref{3} is that all the physical manifestations of the impossibility of synchronization, e.g., the Sagnac effect, are proportional to the area of the region in which synchronization is attempted. Since we were only synchronizing two nearby points, the area enclosed by the light rays was zero. Example 17: GPS As a practical example, the GPS system is designed mainly to allow people to find their positions relative to the rotating surface of the earth (although it can also be used by space vehicles). That is, they are interested in their (r, \(\theta', \phi\)) coordinates. The frame of reference defined by these coordinates is referred to as ECEF, for Earth-Centered, Earth-Fixed. The system requires synchronization of the atomic clocks carried aboard the satellites, and this synchronization also needs to be extended to the (less accurate) clocks built into the receiver units. It is impossible to carry out such a synchronization globally in the rotating frame in order to create coordinates (T, r, \(\theta', \phi\)). If we tried, it would result in discontinuities (see problem 8). Instead, the GPS system handles clock synchronization in coordinates (t, r, \(\theta', \phi\)), as in Equation \ref{2}. These are known as the Earth-Centered Inertial (ECI) coordinates. The \(t\) coordinate in this system is not the one that users at neighboring points on the earth's surface would establish if they carried out clock synchronization using electromagnetic signals. It is simply the time coordinate of the nonrotating frame of reference tied to the earth's center. Conceptually, we can imagine this time coordinate as one that is established by sending out an electromagnetic "tick-tock" signal from the earth's center, with each satellite correcting the phase of the signal based on the propagation time inferred from its own r. In reality, this is accomplished by communication with a master control station in Colorado Springs, which communicates with the satellites via relays at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral. Example 18: Einstein's goof, in the rotating frame Example 10 recounted Einstein's famous mistake in predicting that a clock at the pole would experience a time dilation relative to a clock at the equator, and the empirical test of this fact by Alley et al. using atomic clocks. The perfect cancellation of gravitational and kinematic time dilations might seem fortuitous, but in fact it isn't. When we transform into the frame rotating along with the earth, there is no longer any kinematic effect at all, because neither clock is moving. In this frame, the surface of the earth's oceans is an equipotential, so the gravitational time dilation vanishes as well, assuming both clocks are at sea level. In the transformation to the rotating frame, the metric picks up a d\(\theta'\) dt term, but since both clocks are fixed to the earth's surface, they have d\(\theta'\) = 0, and there is no Sagnac effect.
The determination of the spatial metric with rulers at rest relative to the disk is appealing because of its conceptual simplicity compared to complicated procedures involving radar, and this was presumably why Einstein presented the concept using ruler measurements in his 1916 paper laying out the general theory of relativity.12 In an effort to recover this simplicity, we could propose using external forces to compensate for the centrifugal and Coriolis forces to which the rulers would be subjected, causing them to stay straight and maintain their correct lengths. Something of this kind is carried out with the large mirrors of some telescopes, which have active systems that compensate for gravitational deflections and other effects. The first issue to worry about is that one would need some way to monitor a ruler's length and straightness. The monitoring system would presumably be based on measurements with beams of light, in which case the physical rulers themselves would become superfluous. In addition, we would need to be able to manipulate the rulers in order to place them where we wanted them, and these manipulations would include angular accelerations. If such a thing was possible, then it would also amount to a loophole in the resolution of the Ehrenfest paradox. Could Ehrenfest's rotating disk be accelerated and decelerated with help from external forces, which would keep it from contorting into a potato chip? The problem we run into with such a strategy is one of clock synchronization. When it was time to impart an angular acceleration to the disk, all of the control systems would have to be activated simultaneously. But we have already seen that global clock synchronization cannot be realized for an object with finite area, and therefore there is a logical contradiction in this proposal. This makes it impossible to apply rigid angular acceleration to the disk, but not necessarily the rulers, which could in theory be one-dimensional. 9 Relativistic description of a rotating disk, Am. J. Phys. 43 (1975) 869 10 Space, Time, and Coordinates in a Rotating World, http://www.phys.uu.nl/igg/dieks 11 P. Ehrenfest, Gleichförmige Rotation starrer Körper und Relativitätstheorie, Z. Phys. 10 (1909) 918, available in English translation at en.wikisource.org. 12 The paper is reproduced in the back of the book, and the relevant part is in Appendix A. Benjamin Crowell (Fullerton College). General Relativity is copyrighted with a CC-BY-SA license.
Multiple-choice: sum of primes below $1000$ I sat an exam 2 months ago, and the question paper contained the following problem: Given that there are $168$ primes below $1000$, the sum of all primes below $1000$ is (a) $11555$ (b) $76127$ (c) $57298$ (d) $81722$ My attempt to solve it: We know that below $1000$ there are $167$ odd primes and 1 even prime (2), so the sum has to be odd, leaving only the first two numbers. Then I tried to use the fact that every prime except $2$ and $3$ can be written in the form $6n-1$ or $6n+1$, but I got stuck at that. elementary-number-theory summation prime-numbers – Sufaid Saleel (edited by Chad Shin) Only (b) is really plausible on size. I'd expect the average size to be in the 400-500 area, but definitely less than 500. Then you have eliminated (c) and (d) on parity anyway. – Joffan Jan 30 '17 at 2:27 Just by "multiple-choice psychology" I expect both obviously wrong answers (c) and (d) to be somehow close to (and ideally on both sides of) the correct answer. This is the case only if (b) is the correct answer. – Curd Jan 30 '17 at 16:22 @SlimsGhost: I know what a mathematical proof looks like, but nobody was asking for a proof. I'm just making fun of multiple-choice tests. – Curd Jan 30 '17 at 20:24 I have to say that this test question is horrible. It's gimmicky and relies on a lot of mental jumps, the opposite of what a test question should do. – HopefullyHelpful Jan 31 '17 at 9:18 According to OEIS A034387 (both "Comments" and "Formula") one can approximate the answer by $1000^2/(2\log 1000)$, which gives $72382.4$. This suggests that (a) could be wrong. That same reference has a "Link" with a table of n, a(n) for n = 1..10000. – Jeppe Stig Nielsen Jan 31 '17 at 12:44 The sum of the first 168 positive integers is $\frac{168^2+168}{2}=14196$, which is greater than answer (a). The sum of the first 168 primes must be even greater than that. – Meni Rosenfeld Oscar Lanzi has a similar answer (which is earlier); yours is a bit more simple (but your bound is only half as good, though already good enough to exclude (a)). – Jeppe Stig Nielsen Jan 31 '17 at 14:59 I agree Oscar's answer is similar, and it helped inspire this one. But I think it's instructive to show just how obviously too low (a) is, and how trivially simple the bound that excludes it can be. That said, I didn't quite expect this answer to be so popular; I guess people like simplicity... – Meni Rosenfeld Jan 31 '17 at 21:19 Beauty lies in simplicity. – Brian Feb 1 '17 at 3:33 You just have to decide between $11555$ and $76127$. Notice that the first implies the average prime under $1000$ is $11555/168<69$, which is clearly false. – Jorge Fernández Hidalgo To clarify, note that there are only 19 primes under 69 and the remaining 149 primes are larger. Hence there is no way to arrive at this average. – user1952500 Jan 30 '17 at 9:27 Or the other way round, the average of the primes <1000 should be below 500, since the density decreases. So the sum should be below 168*500 = 84000. That leaves only 76127. – Florian F Jan 30 '17 at 11:39 @FlorianF Well… no, not really. 11,555 is obviously wrong, but it is definitely below 84,000.
– Janus Bahs Jacquet Jan 30 '17 at 17:51 I just independently used the same method. Florian's estimate for (B) is a ballpark overestimate. Accuracy doesn't matter, because (A) is infeasible: it implies an impossibly low mean. So we already know the answer can only be (B), whatever (B) is. We weren't asked to estimate (B) more accurately, in which case we'd integrate some prime-counting function, e.g. per the OEIS sequence @JeppeStigNielsen cites. – smci Feb 1 '17 at 1:00 One doesn't even need to count the number of primes under $69$. We're told that there are $168$ primes below $1000$, so the average of the primes below $1000$ is certainly greater than the average of the first $168$ positive integers, which is $169/2$. Of course, this is essentially the content of Meni Rosenfeld's answer... – Alex Wertheim Feb 2 '17 at 2:17 We have to decide among $\text{(A)}$ and $\text{(B)}$. Note that the $26$th prime is $101$. This implies that if $p_{n}$ denotes the $n$th prime, then $$\sum_{n=1}^{168}p_{n} = \sum_{n=1}^{25}p_{n}+\sum_{n=26}^{168} p_{n} > \sum_{n=26}^{168} 101 =101 \times 143=14443 >\text{(A)}=11555.$$ The answer is thus $\text{(B)}$, $76127$. The answer can be confirmed through direct calculation or can be verified here. – S.C.B. Why did you choose the 26th prime here? Or is it just arbitrary? – MarioDS Jan 30 '17 at 14:22 It is a somewhat commonly known piece of trivia - there are 25 primes below 100, and the primes below 25 sum to 100. – Jon Claus Jan 30 '17 at 14:31 @MarioDS It is as Jon Claus said; many people memorized it, so I was able to answer it quickly. – S.C.B. Jan 30 '17 at 15:00 There are $168$ primes, with the first one equal to $2$ and the rest $\ge 2k-1$ for $k=2,3,4,...,168$. So their sum is at least $168^2+1=28225$. – Oscar Lanzi (a) would mean that the average prime is $<70$, which is horrendously implausible, for me good enough to pick (b) instead. - But this answer is the formal reason why it's implausible. – Hagen von Eitzen Jan 30 '17 at 20:22 I just wanted to carry forward your observation that every prime except $2$ and $3$ can be written in the form $(6n-1)$ or $(6n+1)$. We can quickly get a minimum sum out of this. Assume that the $166$ primes not $2$ or $3$ are the smallest such numbers obeying the above; then $83$ are $6k{-}1$, $83$ are $6k{+}1$, and the minimum bound total is $83$ terms of $12k$, which is $12\cdot 84 \cdot 83 /2 = 504\cdot 83 = 41832$ - and we can decoratively add the $2$ and $3$ to get $41837$. This is more than big enough to eliminate option (a) as required. – Joffan (edited by Oscar Lanzi) The only problem here is that I feel cheated at the too-obvious wrongness of option (a) - it could've been, say, 41555 instead of 11555 :-) – Joffan Jan 30 '17 at 15:36 Interestingly, 41837 isn't that far away from 76127. – gnasher729 Jan 30 '17 at 23:22 Using that all primes > 7 are 30k ± 1, 7, 11, 13, the lower bound is 51,677. – gnasher729 Jan 30 '17 at 23:33 Your analysis that the answer must be (a) or (b) is convincing. Considering that (a) and (b) are much different, just about any simplistic method to approximate the sum of all primes should tell which is the right answer. For example, the sum of all numbers less than 1000 is about 500,000.
So, 168/1000*500,000 or 84,000 should be in the right ballpark. 76127 is the right answer, by this reasoning. David Z stretchstretch $\begingroup$ Yup you got me to upvote. A comparison test, see another answer, easily limits the size of the "ballpark" thus settling the question. $\endgroup$ – Oscar Lanzi Jan 30 '17 at 2:08 $\begingroup$ I edited out the meta-commentary (as we call it... well, at least as I call it) because we try to stay away from that stuff on SE. Anyway, I think you've got nothing to worry about. This method is legit, because the four answer choices are separated by enough to easily distinguish between them. $\endgroup$ – David Z Jan 30 '17 at 5:12 Primes except for $2$ are all odd, and you have $168$ distinct primes, so their sum must be at least $2 + \sum_{k=2}^{168} (2k-1) = 2 + (168^2-1) > 160^2 = 25600 > 11555$. So option (a) is out and only (b) remains. $\begingroup$ Of course, the question is so weak that taking the first $168$ positive integers suffices, but it's even easier to get a bound using odd positive integers! $\endgroup$ – user21820 Jan 30 '17 at 11:48 Not the answer you're looking for? Browse other questions tagged elementary-number-theory summation prime-numbers or ask your own question. Primes + Inetvel + conjecture on primes Sum of two primes What is the proof to the fact that all prime numbers are 1 above or below a 6 multiple? Prime number minus 1 is an even number? What is the least prime $p$, such that $[p-1000,p+1000]$ does not contain a prime $\ne p$? Number of primes below $x$, such that the digitsum is also prime Proof: 1007 can not be written as the sum of two primes. How far is the list of known primes known to be complete? How many number of primes below n such that they are sum of consecutive primes Is $p^2+q^2+r^2=3^k$ with primes $p,q,r$ solvable for every odd positive integer $k\ge 3\ $?
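As a quick aside (not from the original thread): since one answer notes that the result "can be confirmed through direct calculation", here is a minimal sieve, written in R, that checks both the count of 168 primes below 1000 and the sum of 76127.

# Direct check (not part of the original thread): sieve the primes below 1000
# and confirm that there are 168 of them and that they sum to 76127.
sieve <- rep(TRUE, 999)          # index i stands for the integer i
sieve[1] <- FALSE                # 1 is not prime
for (p in 2:31) {                # 31 = floor(sqrt(999))
  if (sieve[p]) sieve[seq(p * p, 999, by = p)] <- FALSE
}
primes <- which(sieve)
length(primes)                   # 168
sum(primes)                      # 76127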
Forest Ecosystems

Evaluation of sampling strategies to estimate crown biomass

Krishna P Poudel, Hailemariam Temesgen & Andrew N Gray

Forest Ecosystems volume 2, Article number: 1 (2015)

Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and the effect of sample size on those estimates. Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass.

Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was the best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced approximately similar results to simple random sampling, but it further decreased RMSE when information on branch diameter was used in the design and estimation phases.

Use of auxiliary information in the design or estimation phase reduces the RMSE produced by a sampling strategy. However, this is attained by having to sample a larger amount of biomass. Based on our findings, we recommend sampling nine branches per tree to be reasonably efficient and to limit the amount of fieldwork.

The global issue of climate change and an increasing interest in reducing fossil fuel carbon dioxide emissions by using forest biomass for energy production have increased the importance of forest biomass quantification in recent years. Different national and international reports have presented the amount of carbon sequestered by forest ecosystems. For example, the Intergovernmental Panel on Climate Change reports that forests contain about 80% of aboveground and 40% of belowground carbon stock (IPCC 2007). Additionally, it is reported that the amount of carbon stored in dry wood is approximately 50% by weight (Brown 1986; Paladinic et al. 2009; Sedjo and Sohngen 2012). Biomass, in general, includes both above- and belowground living and dead mass of trees, shrubs, vines, and roots. However, most research on biomass estimation has focused on aboveground biomass because of the difficulty in collecting belowground data (Lu 2006).
The amount of biomass in a forest is influenced by various site factors such as stand density and site productivity; soil characteristics such as texture and moisture content; and tree characteristics such as species and age. On the other hand, distribution of crown biomass affects the carbon cycle, soil nutrient allocation, fuel accumulation, and wildlife habitat environments in terrestrial ecosystems and it governs the potential of carbon emission due to deforestation (Lu 2005). The major components of aboveground tree biomass are merchantable stem biomass (bole including bark and wood), stump biomass, foliage biomass, and branches/top biomass (Zhou and Hemstrom 2009). The common biomass estimation approach selects some trees, which are representative of the populations of interest, for destructive sampling and weighs their components. Regression models are then fit to relate some easily measurable attributes, such as diameter at breast height and total tree height, with tree (or component) biomass. The amount of biomass distributed in different components is dependent on species and their geographic location (Pooreter et al. 2012), management practices (Tumwebaze et al. 2013) and tree size and stand density (Jenkins et al. 2003). Ritchie et al. (2013) found that for the given DBH and crown ratio, thinned stands had more foliage biomass but slightly less branch biomass than unthinned stands. Similarly, the contribution of component biomass to the total aboveground biomass varies by tree size (de-Miguel et al. 2014b). Henry et al. (2011) found differences in biomass due to floristic composition, tree species and growth strategies for the tree species within a given climatic zone. Thus, the component biomass estimations, for example branch or crown biomass, bole biomass, and bark biomass, are important to account for the variability within the tree. The common understanding among researchers and practitioners is that an accurate carbon stock estimate requires improved and consistent methods for tree and component biomass estimation (Hansen 2002; Zhou and Hemstrom 2009). Crown biomass is the oven dry weight of the entire crown, including the leading shoot above the last-formed whorl, excluding the main bole (Hepp and Brister 1982). The components of crown biomass are wood, bark, and foliage weights. Crown biomass accounts for a significant portion of total tree biomass but the amount and its distribution vary by tree and site characteristics. Using the data from two Alaskan Picea mariana ecosystems, Barney et al. (1978) reported that foliage comprised 17% to 37% of the total tree mass for the lowland stands and 17% to 50% of the total tree mass in the upland stands. Total bole mass ranged from 11% to 58% in lowland stands and 21% to 61% in the upland stands. In a study to determine the patterns of biomass allocation in dominant and suppressed loblolly pine (Pinus taeda), Naidu et al. (1998) found that the dominant trees allocated 24.5% of biomass to the crown (13.2% in branch and 11.3% in needle) and the suppressed trees allocated 12.3% (6.7% in branch and 5.6% in needle). Kuyaha et al. (2013) found that crown biomass formed up to 26% (22% in branch and 4% in needle) of aboveground biomass in farmed eucalyptus species. In assessing the importance of crown dimensions to improve tropical tree biomass estimate, Goodman et al. (2013) found the trees in their study to have nearly half of the total aboveground tree biomass in branches (44% ± 2%). 
Estimates of crown biomass for each stand condition is necessary to understand nutrient depletion and for evaluating the economic feasibility of crown utilization for energy production or forest products (Hepp and Brister 1982). Furthermore, estimates of crown biomass aid in fuel load assessments and fire management strategies (He et al. 2013) because it is one of the important input variables in most wildfire models (Saatchi et al. 2007). Much of the focus in estimating crown biomass has been in the form of regression models and in the selection of predictor variables rather than in the methods of sample selection. In addition, comparisons of sampling strategies have been carried out mainly for foliar biomass sampling rather than the total crown (branch wood, bark, and foliage) biomass. Thus, the evaluation of different sampling designs and sample size in estimating crown biomass is an important aspect of aboveground biomass estimation. Common sampling strategies used in aboveground biomass estimation include simple random sampling, systematic sampling, stratified random sampling, and randomized branch sampling. The suitability of a technique is determined by the availability of funds, required accuracy, structure and composition of vegetation, and desired specificity of estimation (Catchpole and Wheeler 1992). Additionally, the amount of time a particular technique takes to implement in the field is also important. The simple random sampling is generally used as the basis to evaluate the performance of other sampling designs (e.g. Snowdon 1986; Temesgen 2003). Gregoire et al. (1995) have proposed a number of sampling procedures (randomized branch sampling, importance sampling, control-variate sampling, two-stage and three-stage sampling) that can be used to estimate foliage and other characteristics of individual trees. The randomized branch sampling (RBS) was originally introduced by Jessen (1955) to determine the fruit count on orchard trees. Valentine and Hilton (1977) used this method to obtain estimates of leaf counts, foliar area, and foliar mass of mature Quercus species. Good et al. (2001) have employed RBS with importance sampling for estimating tree component biomass. Since the sample is accumulated sequentially along the path, RBS does not require locating and counting the total number of branches beforehand. However, Chiric et al. (2014) posed some doubts on the effectiveness of RBS in sampling big trees or trees with irregular forms. According to Valentine and Hilton (1977), the accuracy of RBS is largely dependent on the probability assignment and the time required to take RBS samples depends on the size of the trees and experience of those taking the samples. Swank and Schreuder (1974) compared stratified two-phase sampling, two-phase sampling with a regression estimator, and two-phase sampling with a ratio-of-means estimator. They found the stratified two-phase sampling as the most precise and appropriate method for estimating surface area and biomass for a young eastern white pine forest. Temesgen (2003) found that stratified random sampling produced the lowest mean squared error value in comparing five sampling designs to quantify tree leaf area. Stratification in branch biomass sampling can be done in many different ways. Snowdon (1986) showed improved accuracy of estimates by stratification based on crown position compared to those obtained by simple random sampling, especially at low sampling intensities. 
These findings suggest that stratification by whorl was slightly, but not significantly, inferior to stratification based on crown position or branch diameter. Another approach used in selecting branches for estimating crown biomass is to divide the bole into sections, pile up the branches from each section into different size classes, and randomly select a number of branches proportional to the total number of branches in each size class (e.g. Harrison et al. 2009, Devine et al. 2013). In an evaluation of ten different sampling strategies, Temesgen et al. (2011) found systematic sampling with ratio estimation to be the most efficient for estimating individual-tree foliage biomass. de-Miguel et al. (2014a) developed generalized, calibratable, mixed-effects meta-models for large-scale biomass prediction. One of their objectives was to investigate and demonstrate how the biomass prediction differed when calibration trees were selected using different sampling strategies. They found that stratified sampling was better than simple random sampling. Thus there is no strong rationale to support one method as being superior to another.

Crown biomass is difficult to predict because of the variability within and among species and sites. A good allometric equation for predicting aboveground biomass should be based on data collected with an appropriate (precise and unbiased) sampling method. In this context, the objective of this study was to evaluate different sampling strategies to estimate crown biomass. We also evaluated how the performance of the different methods was affected when different numbers of branches (3, 6, 9, and 12) per tree were sampled in estimating crown biomass.

This study was conducted in the McDonald-Dunn Forest, an approximately 4,550 ha property managed by Oregon State University on the western edge of the Willamette Valley in Oregon, on the eastern foothills of the Coast Range (123°15' W, 44°35' N, 120 m elevation). The forest consists predominantly of Douglas-fir (Pseudotsuga menziesii (Mirbel) Franco) with a small Grand fir (Abies grandis (Dougl. ex D. Don) Lindl.) component, and has a wide range of overstory age classes, with the majority of the stands less than 80 years old and some stands 80 to 120 years old. The forest receives approximately 110 cm of annual rainfall, and the average annual temperature ranges from 6°C to 17°C. Twenty sample trees (11 Douglas-fir and 9 Grand fir) were subjectively selected from stands of different ages for destructive sampling, avoiding trees with obvious defects and trees close to stand edges. The fieldwork was carried out between the first week of July and the third week of September 2012. Trees that were forked below breast height and trees with damaged tops were not included in the sampling. Tree-level attributes including total height, height to the base of the first live branch, crown width, and main stem diameter at 0.15, 0.76, 1.37, and 2.40 m above ground, and every 1.22 m afterwards, were recorded. The branches were divided into four diameter classes (1.3 cm class = 0–2.5 cm, 3.8 cm class = 2.6–5.1 cm, 6.4 cm class = 5.2–7.6 cm, 8.9 cm class = 7.7–10.2 cm). For all first-order branches, height to and diameter at the branch base were measured. For the first and every third branch in each diameter class, proceeding from the base, the length and weight of both live and dead branches were recorded. From those selected branches, four branches per diameter class were weighed with and without foliage.
The needles were removed in the field to obtain the green weight of foliage and branch wood with bark. Two of these four branches were taken to the lab, keeping branch and foliage in separate paper bags, for drying. The branches were chipped into small pieces to expedite the drying process and placed in a kiln for drying at 105°C. The oven-dry weight was recorded by tracking the weight lost by each sample until no further weight was lost. Table 1 presents the tree- and branch-level summary of the felled-tree data used in this study.

Table 1 Summary of felled-tree and branch-level attributes used in this study

Individual branch biomass

Kershaw and Maguire (1995) developed a tree-specific log-linear model (Equation 1) using branch diameter (BD) and depth into the crown (DINC: the distance from the tip to the base of the branch) as covariates to estimate branch foliage biomass. Temesgen et al. (2011) successfully used this model in comparing sampling strategies for tree foliage biomass estimation.

$$ \ln(y_{ij})=\beta_{0i}+\beta_{1i}\ln(BD_{ij})+\beta_{2i}\ln(DINC_{ij})+\varepsilon_{ij} \qquad (1) $$

This model was modified by replacing DINC with branch length (Equation 2). The modified model provided the best fit (Adj-$R^2$ = 0.93) and was therefore used to predict individual branch biomass within each tree.

$$ \ln(y_{ij})=\beta_{0i}+\beta_{1i}\ln(BD_{ij})+\beta_{2i}\ln(BL_{ij})+\varepsilon_{ij} \qquad (2) $$

where $y_{ij}$, $BD_{ij}$ and $BL_{ij}$ are the oven-dry weight (kg) of the branch (wood, bark, and foliage combined), the branch diameter (cm), and the branch length (m) of the $j$th branch on the $i$th tree, respectively; the $\beta$'s are regression parameters to be estimated; $\ln(\cdot)$ is the natural logarithm; and the $\varepsilon_{ij}$'s are random errors. The full model included other variables such as height to the base of the branch, crown width, and crown length, but these were dropped because they were not statistically significant (p-value > 0.05). Lengths for the remaining two-thirds of the branches (not measured in the field) were obtained by fitting the following log-linear model (Adj-$R^2$ = 0.74):

$$ \ln(BL_{ij})=\beta_{0i}+\beta_{1i}\ln(BD_{ij})+\beta_{2i}\ln(RBD_{ij})+\varepsilon_{ij} \qquad (3) $$

where $BD_{ij}$, $BL_{ij}$, and $\varepsilon_{ij}$ are the same as defined in Equation 2, and $RBD_{ij}$ is the relative branch depth (the relative position of the subject branch from the crown base) of the $j$th branch in the $i$th tree, computed as follows (Ishii and Wilson 2001):

$$ RBD=\frac{\text{total tree height}-\text{height to the base of the subject branch}}{\text{total tree height}-\text{height to the base of the lowest live branch}} $$

The RBD is 1.0 for the first live branch. Logarithmic regressions are reported to result in a negative bias when the data are back-transformed to the arithmetic scale. The commonly used remedy is to multiply the back-transformed results by a correction factor $\left[\exp\left(\frac{MSE}{2}\right)\right]$, where MSE is the mean squared error obtained by the least-squares regression. However, there are conflicting remarks about the correction factor itself.
For example, Beauchamp and Olson (1973) and Flewelling and Pienaar (1981) suggested that this correction factor is itself still biased, because the sample variance is consistent but biased for finite sample sizes. We did not use the correction factor in our study. The trend in the relationship between crown biomass and branch diameter and branch length was similar, but the variability in biomass increased with increasing branch length (Figure 1). All statistical procedures were performed using the statistical software R (R Core Team 2014).

Figure 1 Scatterplot of dry biomass (kg) against branch diameter (a) and branch length (b) by species (DF = Douglas-fir, GF = Grand fir).

Methods for crown biomass sampling

We evaluated 11 sampling methods to select branches for estimating crown biomass. The 11 sampling strategies belonged to three main categories: simple random sampling, systematic sampling, and stratified sampling.

Methods 1 and 2 are based on the simple random sampling (SRS) strategy. In each of these methods, each branch was chosen randomly such that each individual branch had an equal probability of selection at any stage of selection. The difference between these methods is in the estimation of total tree biomass: method 1 uses the SRS estimator, while method 2 (SRS-RAT) uses the ratio estimator with squared branch diameter as auxiliary information. Method 1 is also the basis for comparing the performance of the other methods. Method 3, probability proportional to size (PPS), uses branch size as auxiliary information in sample selection. Total crown biomass in this method was calculated using the Horvitz-Thompson estimator (Horvitz and Thompson 1952). Methods 4 (SYS) and 5 (SYS-RAT) are systematic sampling strategies with the same design phase but different estimation phases: method 4 uses the SRS estimator, while method 5 uses the ratio estimator. The fractional-interval systematic selection procedure was used to select the branches systematically because it ensures an equal probability of selection for all branches (Temesgen et al. 2011). The interval was determined based on the total number of branches in each tree. In fractional-interval systematic sample selection, a random starting point between 1 and the total number of branches is first chosen, and the interval is then added repeatedly to obtain exactly n (sample size) values; these values are then divided by the sample size and rounded to the nearest whole number to identify the selected branches.

Methods 6–11 belonged to different stratified sampling strategies. The stratified sampling method divides the population into subpopulations of size $n_h$, where $n_h$ is the number of elements in stratum h. The total crown length was divided into three sections containing equal numbers of branches, which served as the three strata. In methods 6 (STR) and 7 (STR-RAT), n/3 branches were randomly selected with equal probability, where n is the sample size. Again, the difference between these two methods lies in the estimation of total crown biomass: the STR method uses the SRS estimation technique, while the STR-RAT method uses the ratio estimation technique to obtain the total crown biomass. Method 8 (STR-PPS), stratified sampling with PPS, selected branches in each stratum with probability proportional to the square of branch diameter. Total crown biomass in this method was obtained by summing the stratum totals calculated using the Horvitz-Thompson unequal probability estimator (Horvitz and Thompson 1952).
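To make the estimation phase concrete before turning to methods 9–11, the following is a small illustrative sketch in R (the software the authors report using), not the authors' own code: the branch biomasses y, diameters d, and all toy numbers are hypothetical, and the PPS inclusion probabilities are only approximated by n times the normalized size measure. It shows, for a single tree, the SRS expansion estimator, the ratio estimator with squared branch diameter as the auxiliary variable, and a Horvitz-Thompson-type estimator under selection with probability proportional to squared branch diameter.

# Illustrative sketch only (not the authors' code); toy data, base R.
set.seed(1)
N <- 40                                    # hypothetical number of branches on one tree
d <- runif(N, 0.5, 9)                      # hypothetical branch diameters (cm)
y <- 0.05 * d^2.2 * exp(rnorm(N, 0, 0.2))  # hypothetical branch biomasses (kg)
x <- d^2                                   # auxiliary variable: squared branch diameter
n <- 9                                     # branches sampled per tree

# Method 1 style: simple random sampling with the SRS (expansion) estimator
s1 <- sample(N, n)
tau_srs <- N * mean(y[s1])

# Method 2 style: SRS design with the ratio estimator
s2 <- sample(N, n)
tau_rat <- sum(x) * sum(y[s2]) / sum(x[s2])

# Method 3 style: PPS selection with a Horvitz-Thompson-type estimator;
# inclusion probabilities are approximated by n * x / sum(x) for illustration
pi_i <- pmin(n * x / sum(x), 1)
s3 <- sample(N, n, prob = x)
tau_pps <- sum(y[s3] / pi_i[s3])

c(true = sum(y), srs = tau_srs, ratio = tau_rat, pps = tau_pps)

The stratified variants described next apply the same estimators within each stratum and sum the stratum totals; the gain from the ratio and PPS versions comes from the strong relationship between squared branch diameter and branch biomass, which is also why they tend to sample a larger share of the crown biomass.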
Methods 9–11 (stratified, unequal) are based on the idea that the distribution of crown biomass across strata depends on the relative position of the branches in the tree. Ishii and McDowell (2001) found that mean branch volume increased from the upper to the lower crown. For a given density, biomass (oven-dry weight) is a function of volume. Therefore, the stratified sampling method was modified to incorporate the variability of the biomass distribution within a tree. Trees were first divided into three sections containing equal numbers of branches. Then 4, 3, and 2 branches were selected from the lower, middle, and upper sections of the tree, respectively, so that the number of branches selected in each section is proportional to the observed biomass in that section of the tree. Because stratification based on crown length resulted in biased estimation of crown biomass, the balanced stratification method was applied. The total number of branches selected in each tree (nine) was determined based on the amount of biomass sampled. Total crown biomass in each stratum was computed using the SRS estimation technique in method 9 (Un-STR), PPS in method 10 (Un-PPS), and ratio estimation in method 11 (Un-STRRAT). Total crown biomass in each tree was computed by summing the crown biomass over the strata. The unequal branch selection strategy was evaluated using the same evaluation statistics as the other eight methods.

The performance of the first eight methods was evaluated by selecting four different sample sizes (3, 6, 9, and 12 branches) in each tree. These sample sizes were chosen for the ease of distributing samples into three strata in stratified sampling with equal numbers of branches per stratum. Methods 9–11 were based on selecting nine branches in each tree. Table 2 summarizes the inclusion probability, selection probability, and the estimator of the total crown biomass for each of the sampling strategies evaluated in this study.

Table 2 Summary of methods used for crown biomass estimation in this study

Evaluation of sampling strategies

We evaluated the performance of the 11 sampling strategies for estimating crown biomass using the following six statistics, estimated from 5,000 iterations. These measures were successfully used to evaluate the performance of sampling strategies for estimating foliage biomass in Temesgen et al. (2011).

Bias: For each tree, the bias (kg) was calculated as the mean difference between the observed and predicted total crown biomass for that tree:

$$ B_i=\frac{1}{5000}\sum_{s=1}^{5000}\left(\tau_{is}-\widehat{\tau}_{is}\right) $$

where $\tau_{is}$ and $\widehat{\tau}_{is}$ are the observed and predicted total crown biomasses for the $i$th tree in the $s$th iteration, respectively.

Relative bias: The relative bias percentage is the ratio of the bias to the total observed crown biomass for that tree:

$$ RB_i=\frac{1}{5000}\sum_{s=1}^{5000}\frac{\tau_{is}-\widehat{\tau}_{is}}{\tau_{is}} $$

where all the variables are the same as defined previously.

Root mean square error (RMSE):

$$ RMSE_i=\sqrt{\frac{1}{5000}\sum_{s=1}^{5000}\left(\tau_{is}-\widehat{\tau}_{is}\right)^2} $$

Relative RMSE:

$$ R\text{-}RMSE_i=\sqrt{\frac{1}{5000}\sum_{s=1}^{5000}\left(\frac{\tau_{is}-\widehat{\tau}_{is}}{\tau_{is}}\right)^2} $$

Biomass sampled (BS): The cost of crown biomass estimation is directly proportional to the amount of crown biomass sampled.
Therefore the amount of crown biomass sampled was also used as a criterion for the evaluation of sampling strategies. The amount of crown biomass sampled (sampling intensity) is calculated as follows: $$ B{S}_i=\frac{1}{5000}{\displaystyle \sum_{s=1}^{5000}}{\displaystyle \sum_{j\in S}}{y}_{ijs} $$ y ijs is the observed total crown biomasses for i th tree, j th sample branch in s th iteration. Relative biomass sampled (RBS%) indicates the proportion of crown biomass sampled with respect to the total crown biomass measured and is calculated as follows: $$ RB{S}_{ij}=\frac{1}{5000}{\displaystyle \sum_{s=1}^{5000}}{\displaystyle \sum_{j\in S}}\frac{y_{ijs}}{\tau_{ijs}} $$ Except for the ratio estimators, the estimators of population totals were unbiased, with biases close to zero for all sample sizes (Tables 3 and 4). The squared bias for these methods ranged from zero to 0.435 kg. Ratio estimators resulted in greater bias than the other methods. The absolute bias of the ratio estimators decreased with increasing sample size as expected. Table 3 Average bias (kg) produced by different sampling methods and sample sizes based on 5,000 simulations Table 4 Relative bias (percent) produced by different sampling methods and sample sizes based on 5,000 simulations As expected, the RMSE (and relative RMSE) decreased with increasing sample size (Tables 5 and 6) for all sampling strategies. Based on the RMSE values obtained from 5,000 simulations, the stratified sampling with PPS estimation was the superior method compared to all other methods when sample size is 3 or 6 branches per tree. However, while using PPS, stratification of the crown into sections did not reduce the RMSE and relative RMSE significantly. On the other hand, when at least nine branches per tree were sampled, the SYS-RAT was the best and the SRS-RAT was the second best method. Number of branches required to achieve desired precision is another important aspect of estimating crown biomass. On average, the RMSE decreased by 34.3% when the sample size increased from three branches per tree to six branches per tree. The RMSE further decreased by 22.1% and 15.4% when the sample size increased from 6 to 9 and 9 to 12 respectively. Table 5 Average RMSE produced by different sampling methods and sample sizes based on 5,000 simulations Table 6 Relative RMSE percent produced by different sampling methods and sample sizes based on 5,000 simulations The amount of biomass sampled determines the cost that would be incurred in estimating crown biomass. Biomass sampled and relative biomass sampled in different sampling strategies are presented in Tables 7 and 8. The Strategy-Cost-Accuracy graph (Figure 2) shows the efficiency trade-off across the strategies compared in the study. The SRS and SYS methods resulted in the lowest amount of biomass sampled. On average, the amount of biomass sampled using the PPS method was 1.6, 1.5, 1.4, and 1.4 times higher than the amount of biomass sampled in stratified random sampling when 3, 6, 9, and 12 branches per tree were selected respectively. Table 7 Amount of biomass sampled (kg) by different sampling strategies and sample sizes Table 8 Relative amount of biomass sampled (%) by different sampling strategies and sample sizes Relative RMSE (%) produced Vs. relative biomass sampled (percent of total crown mass) in different sampling strategies and sample sizes. On average, selecting 12 instead of 9 branches per tree increased the amount of biomass sampled by 29.2%. 
Therefore, nine branches in each tree were selected in evaluating the performance of the unequal stratified sampling strategy. Results from the unequal branch selection are presented in Table 9. This strategy reduced the relative RMSE by 0.6%, 4.5% and 3.5% compared to selecting 9 branches using stratified random sampling, stratified sampling with ratio estimation, and stratified sampling with PPS, respectively. This reduction in relative RMSE is obtained by sampling only slightly more biomass (1.03 times as much, on average).

Table 9 Evaluation statistics produced when selecting 4, 3, and 2 branches from the lower, middle, and upper stratum

Use of allometric equations is inevitable in aboveground biomass estimation because weighing trees and their components for direct biomass determination is destructive and prohibitively expensive. The choice of biomass sampling strategy determines the quality of the data available for fitting such equations. In this study, use of auxiliary information in the design and/or estimation phase (ratio estimation and PPS) produced better results in terms of RMSE compared to the methods that do not make use of such information. Previous research (e.g. Temesgen et al. 2011) has also shown the benefits of using auxiliary information in the design and/or estimation of tree biomass. The model used to estimate branch biomass, which was later used as the dependent variable in the test population, was a logarithmic model (Equation 2). There is an inherent negative bias in this method because the dependent variable is transformed prior to estimation (Snowdon 1991). The ratio estimation strategies, SRS-RAT, SYS-RAT, and STR-RAT in this study, were negatively biased. However, in terms of RMSE, these strategies were clearly superior to the SRS approach. As noted in Temesgen et al. (2011), however, the efficiency of sampling strategies with ratio estimation may be affected by the amount of work and the difficulty of implementing these techniques in the field.

The amount of biomass sampled determines the cost that would be incurred in estimating crown biomass. The choice of a sampling strategy determines the amount of biomass and relative biomass sampled, which ultimately determines the amount of time and cost required for a biomass estimation project. The SRS and SYS methods resulted in the lowest amount of biomass sampled. Our results, in terms of the RMSE values reported and the amount of biomass sampled by each strategy, are consistent with the findings of Temesgen et al. (2011) in estimating foliar biomass of Douglas-fir (Pseudotsuga menziesii var. menziesii) and ponderosa pine (Pinus ponderosa).

Crown biomass estimation is a complex process that requires intensive manual fieldwork involving destructive sampling. The amount of fieldwork required and the accuracy of biomass estimation are dependent on the sampling strategy used. Furthermore, the accuracy of the estimation can be improved by adopting appropriate techniques in both the design and estimation phases, beginning with the selection of sample plots and sample trees through model development. In this study, we evaluated 11 different sampling strategies that belonged to three main categories: simple random sampling, systematic sampling and stratified sampling. The SRS, PPS, and ratio estimation techniques were used to obtain the total crown biomass in each tree.
Based on the RMSE values obtained from 5,000 simulations, stratified sampling with PPS estimation produced better results than all other methods when 3 or 6 branches per tree were sampled. However, SYS-RAT was the best and SRS-RAT the second best method when at least nine branches per tree were sampled. It should also be noted that the lower RMSE values of the PPS estimation techniques are obtained with an increased amount of biomass sampled in each tree. On the other hand, if the auxiliary information on branch size is not used, systematic sampling provided better results than the SRS or STR method when at least 6 branches per tree were selected. Thus the selection of a specific sampling strategy depends on the time and cost available for a given biomass sampling project. Based on our findings, we recommend sampling 9 branches per tree as a reasonable balance between efficiency and the amount of fieldwork involved.

The logic for selecting unequal numbers of branches per stratum within a tree is justified by the fact that the biomass distribution within a tree is not uniform. Selecting equal numbers of branches per stratum produced approximately similar results to unequal sampling when the SRS estimation technique was used. However, making use of auxiliary information on branch size in the design and estimation phases further decreased the relative RMSE. Once again, the decrease in RMSE from the use of auxiliary information is attained by having to sample a slightly higher amount of biomass. Findings of this study should prove beneficial for stakeholders working in the field of aboveground biomass and carbon estimation. Additional work using data from different species and locations should be done to further validate the findings of this study.

Barney RJ, Vancleve K, Schlenter R (1978) Biomass distribution and crown characteristics in two Alaskan Picea mariana ecosystems. Can J For Res 8:36–41 Beauchamp JJ, Olson JS (1973) Corrections for bias in regression estimates after logarithmic transformation. Ecology 54(6):1403–1407 Brown S (1986) Estimating Biomass and Biomass Change of Tropical Forests: A Primer. FAO Forestry Paper 134. Food and Agriculture Organization of the United Nations, Rome Catchpole WR, Wheeler CJ (1992) Estimating plant biomass: a review of techniques. Aust J Ecol 17:121–131 Chiric G, Puletti N, Salvati R, Arbi F, Zolli C, Corona P (2014) Is randomized branch sampling suitable to assess wood volume of temperate broadleaved old-growth forests? For Ecol Manag 312:225–230 R Core Team (2014) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/ de-Miguel S, Mehtatlo L, Durkaya A (2014a) Developing generalized, calibratable, mixed-effects meta models for large-scale biomass prediction. Can J For Res 44:648–656 de-Miguel S, Pukkala T, Assaf N, Shater Z (2014b) Intra-specific difference in allometric equations for aboveground biomass of eastern Mediterranean Pinus brutia. Ann For Sci 71:101–112 Devine WD, Footen PW, Harrison RB, Terry TA, Harrington CA, Holub SM, Gould PJ (2013) Estimating Tree Biomass, Carbon, and Nitrogen in two Vegetation Control Treatments in an 11-Year-old Douglas-fir Plantation on a Highly Productive Site. Res. Pap. PNW-RP-591. U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station, Portland, OR, p 29 Flewelling JW, Pienaar LV (1981) Multiplicative regression with lognormal errors.
For Sci 27(2):281–289 Good NM, Paterson M, Brack C, Mengersen K (2001) Estimating tree component biomass using variable probability sampling methods. J Agric Biol Environ Stat 6(2):258–267 Goodman RC, Phillips OL, Baker TR (2013) The importance of crown dimensions to improve tropical tree biomass estimates. Ecol Appl. http://dx.doi.org/10.1890/13-0070.1 Gregoire TG, Valentine HT, Furnival GM (1995) Sampling methods to estimate foliage and other characteristics of individual trees. Ecology 76(4):1181–1194 Hansen M (2002) Volume and biomass estimation in FIA: national consistency vs. regional accuracy. In: McRoberts RE, Reams GA, Van Deusen PC, Moser JW (eds) Proceedings of the third annual Forest Inventory and Analysis symposium. General Technical Report NC-230. U.S. Department of Agriculture, Forest Service, North Central Research Station, St. Paul, MN, pp 109–120 Harrison RB, Terry TA, Licata CW, Flaming BL, Meade R, Guerrini IA, Strahm BD, Xue D, Lolley MR, Sidell AR, Wagoner GL, Briggs D, Turnblom EC (2009) Biomass and stand characteristics of a highly productive mixed Douglas-Fir and Western Hemlock plantation in Coastal Washington. West J Appl For 24(4):180–186 He Q, Chen E, An R, Li Y (2013) Above-ground biomass and biomass components estimation using LiDAR data in a coniferous forests. Forests 4:984–1002 Henry M, Picard N, Trotta C, Manlay RJ, Valentini R, Bernoux M, Saint-André L (2011) Estimating tree biomass of sub-Saharan African forests: a review of available allometric equations. Silva Fenn 45(3B):477–569 Hepp TE, Brister GH (1982) Estimating crown biomass in loblolly pine plantations in the Carolina Flatwoods. For Sci 28(1):115–127 Horvitz DG, Thompson DJ (1952) A generalization of sampling without replacement from a finite universe. J Am Stat Assoc 47:663–685 IPCC (2007) Climate change 2007: synthesis report. In: Core Writing Team, Pachauri RK, Reisinger A (eds) Contribution of working groups I, II and III to the fourth assessment report of the intergovernmental panel on climate change. IPCC, Geneva, Switzerland, p 104 Ishii H, McDowell N (2001) Age-related development of crown structure in coastal Douglas-fir trees. For Ecol Manag 169:257–270 Ishii H, Wilson ME (2001) Crown structure of old-growth Douglas-fir in the western Cascade Range, Washington. Can J For Res 31:1250–1261 Jenkins CJ, Chojnacky DC, Heath LS, Birdsey RA (2003) National-scale biomass estimators for United States tree species. For Sci 49(1):12–35 Jessen RJ (1955) Determining the fruit count on a tree by randomized branch sampling. Biometrics 11(1):99–109 Kershaw JA, Maguire DA (1995) Crown structure in Western hemlock, Douglas-fir, and grand fir in western Washington: trends in branch-level mass and leaf area. Can J For Res 25:1897–1912 Kuyaha S, Dietz J, Muthuri C, Noordwijk MV, Neufeldt H (2013) Allometry and partitioning of above- and below-ground biomass in farmed eucalyptus species dominant in Western Kenyan agricultural landscapes. Biomass Bioenergy 55:276–284 Lu D (2005) Aboveground biomass estimation using Landsat TM data in the Brazilian Amazon. Int J Remote Sens 26(12):509–2525 Lu D (2006) The potential and challenge of remote sensing-based biomass estimation. Int J Remote Sens 7:1297–1328 Naidu SL, DeLucia EH, Thomas RB (1998) Contrasting patterns of biomass allocation in dominant and suppressed loblolly pine. 
Can J For Res 28:1116–1124 Paladinic E, Vuletic D, Martinic I, Marjanovic H, Indir K, Benko M, Novotny V (2009) Forest biomass and sequestered carbon estimation according to main tree components on the forest stand scale. Period Biol 111(4):459–466 Pooreter H, NIklas KJ, Reich PB, Oleksyn J, Poot P, Mommer L (2012) Biomass allocation to leaves, stems and roots: meta-analyses of interspecific variation and environmental control. New Phytol 193:30–50 Ritchie MW, Zhang J, Hamilton TA (2013) Aboveground tree biomass for Pinus ponderosa in northeastern California. Forests 4:179–196 Saatchi S, Halligan K, Despain DG, Crabtree RL (2007) Estimation of forest fuel load from radar remote sensing. IEEE Trans Geosci Remote Sens 45:1726–1740 Sedjo R, Sohngen B (2012) Carbon sequestration in forests and soils. Annu Rev Resour Econ 4:127–144 Snowdon P (1986) Sampling strategies and methods of estimating the biomass of crown components in individual trees of Pinus radiata D Don. Aust For Res 16(1):63–72 Snowdon P (1991) A ratio estimator for bias correction in logarithmic regressions. Can J For Res 21:720–724 Swank WT, Schreuder HT (1974) Comparison of three methods of estimating surface area and biomass for a forest of young eastern white pine. For Sci 20:91–100 Temesgen H (2003) Evaluation of sampling alternatives to quantify tree leaf area. Can J For Res 33:82–95 Temesgen H, Monleon V, Weiskittel A, Wilson D (2011) Sampling strategies for efficient estimation of tree foliage biomass. For Sci 57(2):153–163 Tumwebaze SB, Bevilacqua E, Briggs R, Volk T (2013) Allometric biomass equations for tree species used in agroforestry systems in Uganda. Agroforest Syst 87:781–795 Valentine HT, Hilton SJ (1977) Sampling oak foliage by the randomized-branch method. Can J For Res 7:295–298 Zhou X, Hemstrom MA (2009) Estimating aboveground tree biomass on forest land in the Pacific Northwest: a comparison of approaches. Res. Pap. PNW-RP-584. U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station, Portland, OR, p 18 We thank Professors Lisa Madsen and Glen Murphy (both at Oregon State University) for their insights and comments on an earlier draft, and the Forest Inventory Analysis Unit for funding the data collection and analysis phases of this project. Department of Forest Engineering, Resources, and Management, College of Forestry, Oregon State University, 280 Peavy Hall, Corvallis, OR, 97331, USA Krishna P Poudel & Hailemariam Temesgen USDA Forest Service, PNW Research Station, 3200 SW Jefferson Way, Corvallis, OR, 97331, USA Andrew N Gray Krishna P Poudel Hailemariam Temesgen Correspondence to Hailemariam Temesgen. KP designed the sampling experiments, performed the sampling experiments, analyzed the data, and wrote the paper. TH conceived the sampling experiments, critically reviewed the manuscript, contributed to coding and data analysis, and edited the manuscript. AG critically reviewed the manuscript, edited the manuscript and contributed ideas at all phases of the project. All authors read and approved the final manuscript. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. Poudel, K.P., Temesgen, H. & Gray, A.N. Evaluation of sampling strategies to estimate crown biomass. For. Ecosyst. 2, 1 (2015). https://doi.org/10.1186/s40663-014-0025-0 Aboveground biomass Sampling strategies
UQSay: UQ, DACE & related topics @ Paris-Saclay

UQSay is a series of seminars on the broad area of Uncertainty Quantification (UQ) and related topics, organized by L2S, MSSMAT, LMT and EDF R&D. See https://www.uqsay.org/upcoming/.

All seminars

UQSay #54

The fifty-fourth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, February 2, 2023.

2–3 PM — Brian Staber (Safran Tech) — [slides]

Quantitative performance evaluation of Bayesian neural networks

Due to the growing adoption of deep neural networks in many fields of science and engineering, modeling and estimating their uncertainties has become of primary importance. Various approaches have been investigated, including Bayesian neural networks, ensembles, and deterministic approximations, amongst others. Despite the growing literature about uncertainty quantification in deep learning, the quality of the uncertainty estimates remains an open question. In this work, we attempt to assess the performance of several algorithms on sampling and regression tasks by evaluating the quality of the confidence regions and how well the generated samples are representative of the unknown target distribution. Towards this end, several sampling and regression tasks are considered, and the selected algorithms are compared in terms of coverage probabilities, kernelized Stein discrepancies, and maximum mean discrepancies. Joint work with Sébastien Da Veiga (ENSAI). Ref: arXiv:2206.06779

Organizing committee: Pierre Barbillon (MIA-Paris), Julien Bect (L2S), Nicolas Bousquet (EDF R&D), Amélie Fau (LMPS), Filippo Gatti (LMPS), Bertrand Iooss (EDF R&D), Alexandre Janon (LMO), Sidonie Lefebvre (ONERA), Didier Lucor (LISN), Emmanuel Vazquez (L2S). Coordinators: Julien Bect (L2S) & Sidonie Lefebvre (ONERA)

Practical details: the seminar will be held online using Microsoft Teams. If you want to attend this seminar (or any of the forthcoming online UQSay seminars), and if you do not already have access to the UQSay group on Teams, simply send an email and you will be invited. Please specify which email address the invitation must be sent to (this has to be the address associated with your Teams account). You will find the link to the seminar on the "General" UQSay channel on Teams, approximately 15 minutes before the beginning. The technical side of things: you can use Teams either directly from your web browser or using the "fat client", which is available for most platforms (Windows, Linux, Mac, Android & iOS). We strongly recommend the latter option whenever possible. Please give it a try before the seminar to anticipate potential problems.

The fifty-third UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, January 19, 2023.
2–3 PM — Felipe Tobar (Initiative for Data & AI, Universidad de Chile) — [slides] Computationally-efficient initialisation of Gaussian processes: The generalised variogram method We present a computationally-efficient strategy to find the hyperparameters of a Gaussian process (GP) avoiding the computation of the likelihood function. Motivated by the fact that training a GP via ML is equivalent (on average) to minimising the KL-divergence between the true and learnt model, we set to explore different metrics/divergences among GPs that are computationally inexpensive and provide estimates close to those of ML. In particular, we identify the GP hyperparameters by projecting the empirical covariance or (Fourier) power spectrum onto a parametric family, thus proposing and studying various measures of discrepancy operating on the temporal or frequency domains. Our contribution extends the Variogram method developed by the geostatistics literature and, accordingly, it is referred to as the Generalised Variogram method (GVM). In this talk, we will start with a brief introduction to Gaussian processes, then present the proposed GVM and finally provide experimental validation using synthetic and real-world data. Joint work with Elsa Cazelles & Taco de Wolff. Ref: arXiv:2210.05394. The fifty-second UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, January 5, 2023. 2–3 PM — Georgios Karagiannis (Durham University) Bayesian spanning treed co-kriging for high dimensional output emulation We propose a new Bayesian emulator, called Bayesian spanning treed co-kriging, suitable to analyze computer models with non-stationary massive outputs in the multifidelity setting. Our motivation comes from a real-life application with a storm surge simulator. Given certain assumptions on the Bayesian model, we introduce a suitable stochastic mechanism that facilitates predictions in a principal manner. The good performance of our method is demonstrated in benchmark examples, while our method is implemented for the analysis of a surge simulator given simulations at different fidelity levels. The fifty-first UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, November 17, 2022. 2–3 PM — Cécile Mercadier (Institut Camille Jordan) — [slides] Hoeffding–Sobol and Möbius decompositions for (tail-)dependence analysis Methods to analyse dependence and tail dependence are well established. Using for instance the copula function or the stable tail dependence function, and their empirical versions, one can construct non parametric statistics, parametric inference, as well as testing or resampling procedures. My talk will reflect upon the use of g sensitivity analysis for extreme value theory and copula modeling. Through my recent publications, I will explain what their links are and the benefit in mixing these domains. Joint work with Christian Genest, Paul Ressel & Olivier Roustant. Refs: C. Mercadier, O. Roustant & C. Genest (2022). Linking the Hoeffding–Sobol and Möbius formulas through a decomposition of Kuo, Sloan, Wasilkowski, and Wozniakowski. Statistics & Probability Letters, vol. 185 [hal-03220809], C. Mercadier & P. Ressel (2021). Hoeffding–Sobol decomposition of homogeneous co-survival functions: from Choquet representation to extreme value theory application. Dependence Modeling, 9(1):179–198 [hal-03200817], C. Mercadier & O. Roustant (2019). The tail dependograph. Extremes, 22:343–372 [hal-01649596]. 
The fiftieth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, October 13, 2022. 2–3 PM — Gilles Stoltz (LMO, Université Paris-Saclay – CNRS) — [slides] Multi-armed bandit problems: a statistical view, focused on lower bounds Multi-armed bandit problems correspond to facing K unknown probability distributions, having to sequentially pull one of them, and observing a realization thereof at each pull. Two goals will be considered. (1) The realizations are payoffs, and the sum of these payoffs is to be maximized. This goal is achieving by minimizing regret, which is defined as the expected performance of the best arm minus the expected sum of payoffs achieved by a strategy. Two types of bounds may be defined, depending on whether they may depend on the specific bandit problem or only on the model (the class of possible distributions). We will recall classical strategies like UCB and MOSS, as well as a new strategy combining both, called KL-UCB-Switch. We will review upper bounds on the regret and detail which lower bounds may be achieved, and how. We will deal with one interesting extension, the adaptation to the unknown range of the distributions, i.e., when the distribution are supported on a compact interval that is unknown as well. The case of regret minimization is very well understood in the literature, contrary to: (2) A second goal can be to identify the best arm, i.e., control the probability that after T observations (sampled adaptively) the strategy does not identify the arm with the highest expectation. This is called best arm identification with a fixed budget. Limited results are available. We will describe a typical strategy, called successive rejects, that drops one distribution after the other after horse racing them. We will also indicate how we are currently laying the foundations of a non-parametric approach to this problem, based on KL divergences, as opposed to typical approaches based on differences between expectations. Joint work with Antoine Barrier, Aurélien Garivier, Hédi Hadiji & Pierre Ménard. Refs: KL-UCB-Switch: JMLR, 23(179):1−66, 2022, Lower bounds for regret minimization: MOOR, 4(2):377-766, 2019 [HAL], Adaptation to the unknown range: HAL-02794382, Best-arm identification: HAL-03792668. The forty-ninth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, September 29, 2022. 2–3 PM — Jonas Latz (Heriot-Watt University, Edinburgh) — [slides] Stochastic gradient descent in continuous time: discrete and continuous data Optimisation problems with discrete and continuous data appear in statistical estimation, machine learning, functional data science, robust optimal control, and variational inference. The "full" target function in such an optimisation problem is given by the integral over a family of parameterised target functions with respect to a discrete or continuous probability measure. Such problems can often be solved by stochastic optimisation methods: performing optimisation steps with respect to the parameterised target function with randomly switched parameter values. In this talk, we discuss a continuous-time variant of the stochastic gradient descent algorithm. This so-called stochastic gradient process couples a gradient flow minimising a parameterised target function and a continuous-time 'index' process which determines the parameter. We first briefly introduce the stochastic gradient processes for finite, discrete data which uses pure jump index processes. 
Then, we move on to continuous data. Here, we allow for very general index processes: reflected diffusions, pure jump processes, as well as other Lévy processes on compact spaces. Thus, we study multiple sampling patterns for the continuous data space. We show that the stochastic gradient process can approximate the gradient flow minimising the full target function at any accuracy. Moreover, we give convexity assumptions under which the stochastic gradient process with constant learning rate is geometrically ergodic. In the same setting, we also obtain ergodicity and convergence to the minimiser of the full target function when the learning rate decreases over time sufficiently slowly. Joint work with Kexin Jin, Chenguang Liu & Carola-Bibiane Schönlieb. Refs: DOI:10.1007/s11222-021-10016-8, arXiv:2112.03754, arXiv:2203.11555.

The forty-eighth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, June 2, 2022.

2–3 PM — Valentin Resseguier (Scalian Innovation Lab, INRAE) — [slides]

Fast generation of prior for Bayesian estimation problems in fluid mechanics

We are interested in real-time estimation and short-term forecasting of 3D fluid flows, using limited computational resources. This is possible through the coupling between data, numerical simulations and sparse fluid flow measurements. Here, the term data refers to numerical simulation outputs. To achieve these ambitious goals, synthetic (i.e. simulated) data and intrusive surrogate models drastically reduce the problem dimensionality – typically from 10^7 to 10. Unfortunately, even with corrections, the accumulated errors of these surrogate models increase rapidly over time due to the chaotic and intermittent nature of fluid mechanics. Therefore, deterministic predictions are hardly possible outside the learning time interval. Data assimilation can alleviate these problems by (i) providing a set of simulations covering probable futures (without increasing the computational cost) and (ii) constraining these online simulations with measurements. We addressed this Uncertainty Quantification (UQ) problem (i) with a multi-scale physically-based stochastic parameterization called "Location uncertainty models" (LUM) [1-3] and new statistical estimators based on stochastic calculus, signal processing and physics [3]. The deterministic ROM coefficients are obtained by a Galerkin projection whereas the correlations of the noises are estimated from the residual velocity, the physical model structure, and the evolution of the resolved modes. We solved problem (ii) with a particle filter [4]. Whether we consider UQ [3] or DA [4] applications, our method greatly exceeds the state of the art, for ROM degrees of freedom smaller than 10 and moderately turbulent 3D flows (Reynolds number up to 300). Joint work with A. M. Picard & M. Ladvig (Scalian), and D. Heitz (INRAE). Refs: [1] hal-01391420, [2] hal-02558016, [3] hal-03169957 & [4] hal-03445455.

Organizing committee: Pierre Barbillon (MIA-Paris), Julien Bect (L2S), Nicolas Bousquet (EDF R&D), Amélie Fau (LMPS), Filippo Gatti (LMPS), Bertrand Iooss (EDF R&D), Alexandre Janon (LMO), Sidonie Lefebvre (ONERA), Didier Lucor (LISN), Emmanuel Vazquez (L2S). Coordinator: Julien Bect (L2S).

The forty-seventh UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, May 19, 2022.
2–3 PM — Mélanie Rochoux (CECI, Cerfacs, CNRS) — [slides] Assimilating fire front position and emulating boundary-layer flow simulations for wildland fire behavior ensemble prediction and reanalysis Monitoring wildfire behavior has recently emerged as a key public policy issue due to the occurrence of extreme events, in particular in the Euro-Mediterranean area, which is exposed to more frequent and more severe wildfires under climate change. Key to this modeling is the development of an event-scale numerical simulation capability as a means to understand and predict the interactions between the atmosphere and the wildfire that drive its behavior. In this framework, my research aims at designing and evaluating a wildland fire behavior reanalysis capability to reconstruct wildland fire progression as accurately as possible at landscape-to-atmospheric scales. This approach combines information coming from a coupled atmosphere/fire model (Costes et al. 2021) and from airborne thermal infrared images (Paugam et al. 2021) through an ensemble-based data assimilation algorithm that infers more realistic environmental factors and estimates the time-evolving fire front position. My talk will provide an overview of the different components required to build this reanalysis capability, with two main focuses: i) a front data assimilation methodology to address position errors in the fire front progression (Rochoux et al. 2018; Zhang et al. 2019), and ii) a non-intrusive reduced-order modeling approach combining principal component analysis and adaptive Gaussian processes to accurately and efficiently explore the physical parameter space and predict the atmospheric boundary-layer flow patterns (Nony et al. 2021). In the long term, these methods will be applied to the Meso-NH/Blaze coupled atmosphere/fire model to design a wildland fire behavior ensemble prediction and reanalysis capability. Joint work with Bastien Nony & Thomas Jaravel (Cerfacs), Didier Lucor (LISN), Annabelle Collin & Philippe Moireau (Inria), Cong Zhang & Arnaud Trouvé (University of Maryland). Refs: M.C. Rochoux, A. Collin, C. Zhang, A. Trouvé, D. Lucor and P. Moireau (2018). Front shape similarity measure for shape-oriented sensitivity analysis and data assimilation for eikonal equation. ESAIM: Proceedings and Surveys, EDP Sciences, 63:258–279, DOI:10.1051/proc/201863258. C. Zhang, A. Collin, P. Moireau, A. Trouvé and M.C. Rochoux (2019). State-parameter estimation approach for data-driven wildland fire spread modeling: application to the 2012 RxCADRE S5 field-scale experiment. Fire Safety Journal, 105:286–299, DOI:10.1016/j.firesaf.2019.03.009. B.X. Nony, M.C. Rochoux, D. Lucor and T. Jaravel (2021). Compound parametric metamodeling of large-eddy simulations for micro-scale atmospheric dispersion. 20th International Conference on Harmonisation within Atmospheric Dispersion Modelling for Regulatory Purposes, Tartu (Estonia), 14–18 June, 2021. A. Costes, M.C. Rochoux, C. Lac and V. Masson (2021). Subgrid-scale fire front reconstruction for ensemble coupled atmosphere-fire simulations of the FireFlux I experiment. Fire Safety Journal, 126:103475, DOI:10.1016/j.firesaf.2021.103475. R. Paugam, M.J. Wooster, W.E. Mell, M.C. Rochoux, J-B. Filippi, G. Rücker, O. Frauenberger, E. Lorenz, W. Schroeder and N. Govendor (2021). Orthorectification of helicopter-borne high resolution experimental burn observation from infra red handheld imagers. Remote Sensing, 13(23):4913, DOI:10.3390/rs13234913.
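As a generic illustration of the kind of non-intrusive reduced-order modelling mentioned in this abstract, combining principal component analysis of simulation outputs with Gaussian-process regression on the reduced coefficients, here is a minimal sketch on a toy one-dimensional "simulator"; the toy model, the dimensions and the kernel are assumptions made for illustration and are not the setup of the talk.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Toy "simulator": maps 2 scalar parameters to a discretised field of size 200.
grid = np.linspace(0.0, 1.0, 200)
def simulator(theta):
    return np.sin(2 * np.pi * theta[0] * grid) * np.exp(-theta[1] * grid)

X_train = rng.uniform(0.5, 2.0, size=(40, 2))         # training designs
Y_train = np.array([simulator(x) for x in X_train])   # snapshot matrix (40 x 200)

# Step 1: PCA (POD) compresses the output fields to a few coefficients.
pca = PCA(n_components=5).fit(Y_train)
Z_train = pca.transform(Y_train)

# Step 2: one GP per retained mode, mapping parameters to PCA coefficients.
gps = [GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
       .fit(X_train, Z_train[:, k]) for k in range(pca.n_components_)]

def predict_field(theta):
    z = np.array([gp.predict(theta.reshape(1, -1))[0] for gp in gps])
    return pca.inverse_transform(z.reshape(1, -1))[0]

theta_new = np.array([1.3, 0.8])
err = np.linalg.norm(predict_field(theta_new) - simulator(theta_new))
print("reconstruction error:", err)
```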
The forty-fifth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, March 31, 2022. 2–3 PM — Nathalie Bartoli (ONERA) — [slides] Bayesian optimization to solve mono- or multi-fidelity constrained black box problems This work aims at developing new methodologies to optimize computationally costly complex systems (e.g., aeronautical engineering systems). The proposed surrogate-based method (often called Bayesian Optimization) uses adaptive sampling to promote a trade-off between exploration and exploitation. Our in-house implementation, called SEGOMOE, handles a high number of design variables (continuous, discrete or categorical) and nonlinearities by combining mixtures of experts (local surrogate models) for the objective and/or the constraints. An extension to multi-fidelity is also included when information of varying fidelity is available. The performance of the proposed approach has been evaluated both on a benchmark of analytical constrained and unconstrained problems and on a set of realistic aeronautical applications. Refs: P. Saves, N. Bartoli, Y. Diouane, T. Lefebvre, J. Morlier, C. David, S. Defoort (2022). Multidisciplinary design optimization with mixed categorical variables for aircraft design. In AIAA SCITECH 2022 Forum (p. 0082). R. C. Arenzana, A. López-Lopera, S. Mouton, N. Bartoli, T. Lefebvre (2021, July). Multifidelity Gaussian Process model for CFD and Wind Tunnel data fusion. In Proceedings of the International Conference on Multidisciplinary Design Optimization of Aerospace Systems (AEROBEST 2021) (pp. 1-758). R. Priem, H. Gagnon, I. Chittick, S. Dufresne, Y. Diouane, and N. Bartoli (2020). An efficient application of Bayesian optimization to an industrial MDO framework for aircraft design. In AIAA AVIATION 2020 FORUM (p. 3152). R. Priem, N. Bartoli, Y. Diouane, A. Sgueglia (2020), Upper trust bound feasibility criterion for mixed constrained Bayesian optimization with application to aircraft design, Aerospace Science and Technology. M. Meliani, N. Bartoli, T. Lefebvre, M.-A. Bouhlel, J. R. R. A. Martins, J. Morlier, Multi-fidelity efficient global optimization: Methodology and application to airfoil shape design, 20th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, June 2019, Dallas, United States. N. Bartoli, T. Lefebvre, S. Dubreuil, R. Olivanti, R. Priem, N. Bons, J. R. R. A. Martins, J. Morlier (2019), Adaptive modeling strategy for constrained global optimization with application to aerodynamic wing design, Aerospace Science and Technology Journal, vol. 90, p. 85-102. M.-A. Bouhlel, J. T. Hwang, N. Bartoli, R. Lafage, J. Morlier, J. R. R. A. Martins (2019), A Python surrogate modeling framework with derivatives, Advances in Engineering Software. M.-A. Bouhlel, N. Bartoli, A. Otsmane and J. Morlier, Improving kriging surrogates of high-dimensional design models by Partial Least Squares dimension reduction, Structural and Multidisciplinary Optimization, vol 53, no5, pp 935-952, 2016. The forty-fourth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, March 17, 2022. 2–3 PM — Nicola Pedroni (Politecnico di Torino) — [slides] Quantification of Mixed Aleatory and Epistemic Uncertainties for Robust Design Optimization, in the Presence of Scarce and Functional Data The quantitative analyses of the phenomena occurring in complex, safety-critical (e.g., civil, nuclear, aerospace and chemical) dynamic engineering systems are based on mathematical models.
In practice, not all the characteristics of the system under analysis can be captured in the model: thus, uncertainty is present in the values of the input parameters and in the model hypotheses and structure. This is due to: (i) the intrinsically random nature of several of the phenomena occurring during system operation (aleatory uncertainty, here represented by multivariate probability distributions); (ii) the incomplete knowledge about some phenomena and operating conditions, often due to the scarcity of quantitative data available, which may be either very sparse or prohibitively expensive to collect (epistemic uncertainty, here described by intervals or sets). The characterization and quantification of this mixed uncertainty is of paramount importance for: (i) making robust decisions in safety-critical systems applications; (ii) optimally designing and operating such systems; and (iii) driving resource allocation for uncertainty reduction. This talk addresses the "NASA Langley Uncertainty Quantification Challenge on Optimization Under Uncertainty" with respect to two issues: (i) calibration of the mathematical model of an aerospace system and joint quantification of mixed (probabilistic) aleatory and (set-based) epistemic uncertainties; and (ii) system design optimization, in the presence of scarce and functional (time series) data (i.e., observations coming from the real system). With reference to issue (i), the parametric Sliced Normal (SN) class of distributions is employed, whose flexibility and versatility allow characterizing multivariate data and complex parameter dependencies with minimal effort. The modeling power of SNs is tested within a frequentist (optimization-based) framework and a Bayesian inverse approach. With reference to issue (ii), an iterative framework is developed to robustly optimize the design of the system (e.g., by minimizing the worst-case, epistemic upper bound of its failure probability). The issue is addressed by an efficient combination of: (i) Monte Carlo Simulation (MCS) to propagate the aleatory uncertainty described by probability distributions; (ii) Genetic Algorithms (GAs) to solve the optimization problems related to the propagation of epistemic uncertainty by interval analysis; and (iii) fast-running Artificial Neural Networks (ANNs) to reduce the computational time related to the repeated model evaluations. As a final remark, since the outputs of the system models of interest are functions of time, both issues are addressed in the space defined by the orthonormal bases resulting from a Singular Value Decomposition (SVD) of the real system observations. Ref: DOI:10.1016/j.ymssp.2021.108206. The forty-third UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, February 17, 2022. 2–3 PM — D. Austin Cole (GlaxoSmithKline, Inc.) — [slides] Locally induced Gaussian processes for modelling large-scale simulations Gaussian processes (GPs) serve as flexible surrogates for complex surfaces, but buckle under the cubic cost of matrix decompositions with big training data sizes. Geospatial and machine learning communities suggest pseudo-inputs, or inducing points, as one strategy to obtain an approximation easing that computational burden. However, we show how placement of inducing points and their multitude can be thwarted by pathologies, especially in large-scale dynamic response surface modeling tasks. 
As a remedy, we suggest porting the inducing point idea, which is usually applied globally, over to a more local context where selection is both easier and faster. In this way, our proposed methodology (LIGP) hybridizes global inducing point and data subset-based local GP approximation. A cascade of strategies for planning the selection of local inducing points is provided, and comparisons are drawn to related methodology with emphasis on computer surrogate modeling applications. We show that local inducing points extend their global and data-subset component parts on the accuracy—computational efficiency frontier. Next, we show how LIGP also provides benefits for stochastic simulation experiments by separating signal from noise with nugget estimation and replication. Woodbury identities allow local kernel structure to be expressed in terms of unique design locations only, increasing the amount of data (i.e., the neighborhood size) that may be leveraged without additional flops. Illustrative examples are provided on benchmark data and a variety of real-world simulation experiments, including satellite drag and epidemic management. Joint work with Ryan Christianson, Robert B. Gramacy and Mike Ludkovski. Ref: DOI:10.1007/s11222-021-10007-9 and arXiv:2109.05324. The forty-first UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, January 20, 2022. 2–3 PM — Nora Lüthen (ETH Zürich) — [slides] Poincaré chaos expansions for derivative-enhanced surrogate modelling and sensitivity analysis Variance-based global sensitivity analysis, and in particular Sobol' analysis, is widely adopted to determine the importance of input variables to a computational model. Sobol' indices can be computed cheaply based on spectral methods like polynomial chaos expansions (PCE). Another option is given by the recently developed Poincaré chaos expansions (PoinCE), whose orthonormal tensor-product basis is generated from the eigenfunctions of one-dimensional Poincaré differential operators. The Poincaré differential operator is a special case of Sturm-Liouville operator and has recently been revisited for sensitivity analysis (Roustant et al. 2017). Solving the associated eigenproblem yields the Poincaré constant for a large class of one-dimensional measures with bounded support. The associated eigenfunctions form an orthonormal basis with the special (and characterizing) property that derivatives of the basis form again an orthogonal basis with respect to the same measure (Lüthen et al. 2021). The expansion of a model in terms of this basis allows the analytical computation of Sobol' indices and derivative-based sensitivity indices (DGSM) directly from the expansion coefficients. Furthermore, the special property of the derivatives makes PoinCE particularly well suited to account for derivative information in the computation of sensitivity indices (Roustant et al. 2020). Indeed the expansions involving either model or derivative evaluations are connected, and computations can be reused. Assuming that partial derivative evaluations of the computational model are available, we compute spectral expansions in terms of Poincaré basis functions or basis partial derivatives, respectively, by sparse regression. We show on numerical examples that the derivative-based expansions provide accurate estimates for Sobol' indices, even outperforming PCE in terms of bias and variance, and explore the performance of PoinCE as a surrogate model. 
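The analytical link between expansion coefficients and Sobol' indices mentioned above amounts to a small post-processing step, sketched below for a generic orthonormal expansion; the multi-indices and coefficients are made-up inputs standing in for the output of a sparse regression, and this is not the PoinCE implementation itself.

```python
import numpy as np

# Assumed output of some spectral regression (PCE or PoinCE alike): one row of
# multi_indices per basis function, one column per input variable, plus the
# corresponding expansion coefficients; the constant term is the all-zero row.
multi_indices = np.array([[0, 0], [1, 0], [0, 1], [2, 0], [1, 1]])
coeffs = np.array([1.2, 0.8, -0.5, 0.3, 0.1])

def sobol_from_coefficients(multi_indices, coeffs):
    nonconst = multi_indices.any(axis=1)
    total_var = np.sum(coeffs[nonconst] ** 2)  # orthonormal basis: variance = sum of squared coefficients
    d = multi_indices.shape[1]
    first_order, total = np.zeros(d), np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        only_i = nonconst & (multi_indices[:, others] == 0).all(axis=1)
        involves_i = multi_indices[:, i] > 0
        first_order[i] = np.sum(coeffs[only_i] ** 2) / total_var
        total[i] = np.sum(coeffs[involves_i] ** 2) / total_var
    return first_order, total

S, ST = sobol_from_coefficients(multi_indices, coeffs)
print("first-order indices:", S, "total indices:", ST)
```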
Joint work with Olivier Roustant, Fabrice Gamboa, Bertrand Iooss, Stefano Marelli and Bruno Sudret. Ref: arXiv:2107.00394. The fortieth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, January 6, 2022. 2–3 PM — Didier Dubois (CNRS, IRIT, Univ. Paul Sabatier) New uncertainty theories (The limited expressiveness of single probability measures) — [slides] The variability of physical phenomena and partial ignorance about them motivated the development of probability theory over the last two centuries. However, the mathematical framework of probability theory, together with the Bayesian credo claiming the inevitability of unique probability measures for representing agents' beliefs, has blurred the distinction between variability and ignorance. Modern theories of uncertainty, by putting together probabilistic and set-valued representations of information, provide a better account of the various facets of uncertainty. Organizing committee: Pierre Barbillon (MIA-Paris), Julien Bect (L2S), Nicolas Bousquet (EDF R&D), Amélie Fau (LMPS), Filippo Gatti (LMPS), Bertrand Iooss (EDF R&D), Alexandre Janon (LMO), Sidonie Lefebvre (DOTA), Didier Lucor (LISN), Emmanuel Vazquez (L2S). Coordinator: Julien Bect (L2S). The thirty-ninth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, December 16, 2021. 2–3 PM — Gianni Franchi (U2IS, ENSTA Paris) — [slides] Encoding the latent posterior of Bayesian neural networks for Uncertainty Quantification Bayesian neural networks (BNNs) have long been considered an ideal, yet unscalable, solution for improving the robustness and the predictive uncertainty of deep neural networks. While they could capture more accurately the posterior distribution of the network parameters, most BNN approaches are either limited to small networks or rely on constraining assumptions such as parameter independence. These drawbacks have enabled the prominence of simple but computationally heavy approaches such as Deep Ensembles, whose training and testing costs increase linearly with the number of networks. In this work we aim for efficient deep BNNs amenable to complex computer vision architectures, e.g. ResNet50 DeepLabV3+, and tasks, e.g. semantic segmentation, with fewer assumptions on the parameters. We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer. Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient ensembles (in terms of both computation and memory, during training and testing). LP-BNNs attain competitive results across multiple metrics in several challenging benchmarks for image classification, semantic segmentation and out-of-distribution detection. Joint work with Andrei Bursuc, Emanuel Aldea, Séverine Dubuisson & Isabelle Bloch. Ref: arXiv:2012.02818. Organizing committee: Pierre Barbillon (MIA-Paris), Julien Bect (L2S), Nicolas Bousquet (EDF R&D), Didier Clouteau (MSSMAT), Amélie Fau (LMT), Filippo Gatti (MSSMAT), Bertrand Iooss (EDF R&D), Alexandre Janon (LMO), Sidonie Lefebvre (DOTA), Fernando Lopez-Caballero (MSSMAT), Didier Lucor (LISN), Emmanuel Vazquez (L2S). Coordinator: Julien Bect (L2S). The thirty-eighth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, December 2, 2021. 2–3 PM — Luc Pronzato (CNRS, Univ.
Côte d'Azur) — [slides] Maximum Mean Discrepancy, Bayesian integration and kernel herding for space-filling design A standard objective in computer experiments is to predict/interpolate the behaviour of an unknown function f on a compact domain from a few evaluations inside the domain. When little is known about the function, space-filling design is advisable: typically, points of evaluation spread out across the available space are obtained by minimizing a geometrical (for instance, minimax-distance) or a discrepancy criterion measuring distance to uniformity. We focus our attention on sequential constructions where design points are added one at a time. The presentation is based on the survey [4], built on several recent results [2, 5, 6] that show how energy functionals can be used to measure distance to uniformity. We investigate connections between design for integration of f with respect to a measure µ (quadrature design), construction of the (continuous) BLUE for the location model, and minimization of energy (kernel discrepancy) for signed measures. Integrally strictly positive definite kernels define strictly convex energy functionals, with an equivalence between the notions of potential and directional derivative showing the strong relation between discrepancy minimization and more traditional design of optimal experiments, as used for instance in [3]. Kernel herding algorithms, which are special instances of vertex-direction methods used in optimal design [1, 7], can be applied to the construction of point sequences with suitable space-filling properties. Several illustrative examples are presented. Refs: [1] F. Bach, S. Lacoste-Julien, and G. Obozinski. On the equivalence between herding and conditional gradient algorithms. In Proc. 29th Annual International Conference on Machine Learning, pages 1355–1362, 2012. [2] S.B. Damelin, F.J. Hickernell, D.L. Ragozin, and X. Zeng. On energy, discrepancy and group invariant measures on measurable subsets of Euclidean space. J. Fourier Anal. Appl., 16:813–839, 2010. [3] S. Mak and V.R. Joseph. Support points. Annals of Statistics, 46(6A):2562–2592, 2018. [4] L. Pronzato and A.A. Zhigljavsky. Bayesian quadrature, energy minimization and space-filling design. SIAM/ASA J. Uncertainty Quantification, 8(3):959–1011, 2020. [5] S. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. The Annals of Statistics, 41(5):2263–2291, 2013. [6] B.K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G.R.G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, 2010. [7] M. Welling. Herding dynamical weights to learn. In Proc. 26th Annual International Conference on Machine Learning, pages 1121–1128, 2009. The thirty-seventh UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, November 18, 2021. 2–3 PM — Toni Karvonen (University of Helsinki) — [slides] Parameter estimation in Gaussian process regression for deterministic functions In fields such as kriging, modelling of computer experiments, and probabilistic numerical computation, Gaussian process (GP) regression is used to interpolate deterministic functions which are observed without noise on compact sets.
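To fix notation for this setting, the sketch below interpolates a deterministic function with a noiseless Gaussian-process model and selects the kernel length-scale by maximising the log marginal likelihood on a grid; the squared-exponential kernel, the test function and the tiny jitter term are illustrative assumptions, not part of the talk.

```python
import numpy as np

def k(x, y, ell):
    # Squared-exponential kernel with unit variance and length-scale ell.
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

f = lambda x: np.sin(3 * x) + 0.5 * x          # deterministic function, observed without noise
X = np.linspace(0.0, 1.0, 8)
y = f(X)

def log_marginal_likelihood(ell):
    K = k(X, X, ell) + 1e-10 * np.eye(len(X))  # tiny jitter for numerical stability
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * len(X) * np.log(2 * np.pi)

# Maximum-likelihood estimate of the length-scale on a grid.
grid = np.linspace(0.05, 1.0, 100)
ell_hat = grid[np.argmax([log_marginal_likelihood(e) for e in grid])]

# The GP posterior mean interpolates the data exactly in the noiseless case.
Xs = np.linspace(0.0, 1.0, 200)
K = k(X, X, ell_hat) + 1e-10 * np.eye(len(X))
mean = k(Xs, X, ell_hat) @ np.linalg.solve(K, y)
print("estimated length-scale:", ell_hat)
```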
This talk reviews recent theoretical work on estimation of parameters (in particular via maximum likelihood) of the covariance kernel of the GP prior in such a setting, as well as the effect parameter estimation has on uncertainty quantification under model misspecification. We also discuss results on sample path properties of GPs that we use to characterise data-generating functions which resemble samples from a GP, and to highlight the difference between assuming that the data are generated by some deterministic function and assuming that they are generated by a stochastic process. The results are based on the theory of reproducing kernel Hilbert spaces and function approximation in Sobolev spaces, which are briefly reviewed. Ref: DOI:10.1137/20M1315968, arXiv:2103.03169, arXiv:2110.02810. The thirty-sixth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, November 4, 2021. 2–3 PM — Thomas Santner (Ohio State University) — [slides] Using Combined Physical and Computer Experiments to Solve Bioengineering Problems Bioengineering seeks to solve problems at the confluence of Engineering and Biology. Classical Bioengineering applications concerned the engineering design and analysis of the performance of prosthetic joints, such as hips and knees, in multiple operating environments. More recent Bioengineering applications are concerned with designing replacement tissues and analyzing different treatments for joint tissue injuries. Finite element methods can be used to numerically approximate the stresses and strains in the human bone when prosthetic joints are implanted or when cushioning tissues such as menisci are damaged. Prediction methodology from the computer experiments literature can be used to approximate the stresses and strains for a wide variety of potential prosthetic designs, to study their performance in multiple environments, and to determine the sensitivity of the prosthetic designs to specific engineering and environmental inputs. This talk will provide an overview of two such projects and describe how computer experiment methodology, including calibration to cadaver data, was used to provide insight into their solution. Ref: DOI:10.1007/978-1-4757-3799-8. The thirty-fifth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, October 21, 2021. 3–4 PM — Polina Kirichenko (New York University) — [slides] Scaling Bayesian Deep Learning: Subspace Inference Bayesian methods can provide full predictive distributions and well-calibrated uncertainties in modern deep learning. The Bayesian approach is especially relevant in scientific and healthcare applications, where we wish to have reliable predictive distributions for decision making and the facility to naturally incorporate domain expertise. With a Bayesian approach, we not only want to find a single point that optimizes a loss, but rather to integrate over a loss landscape to form a Bayesian model average. The geometric properties of the loss surface, rather than the specific locations of optima, therefore greatly influence the predictive distribution in a Bayesian procedure. By better understanding loss geometry, we can realize the significant benefits of Bayesian methods in modern deep learning, overcoming challenges of dimensionality. In this talk, I review work on Bayesian inference and loss geometry in modern deep learning, including challenges, new opportunities, and applications.
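Operationally, the Bayesian model average mentioned above is an average of predictive distributions over sampled weights rather than a prediction at averaged weights; the toy numpy sketch below makes the distinction explicit, with a two-parameter logistic model and synthetic weight samples standing in for a real posterior over network parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def predict_proba(w, X):
    # Toy "network": logistic regression with weight vector w.
    return 1.0 / (1.0 + np.exp(-X @ w))

X_test = rng.normal(size=(5, 2))

# Stand-in for posterior samples of the weights (e.g. from subspace inference
# or an ensemble); here just draws around a fixed mode.
w_samples = rng.normal(loc=[1.0, -2.0], scale=0.3, size=(100, 2))

# Bayesian model average: average the predictive distributions, not the weights.
p_bma = np.mean([predict_proba(w, X_test) for w in w_samples], axis=0)

# Point-estimate prediction at the mean weights, for comparison.
p_point = predict_proba(w_samples.mean(axis=0), X_test)
print(np.round(p_bma, 3), np.round(p_point, 3))
```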
Refs: arXiv:1505.05424, arXiv:1706.04599, arXiv:1609.04836, arXiv:1506.02142, stoclangevin_v6.pdf, arXiv:1612.01474, arXiv:1902.02476, arXiv:1907.07504. The thirty-fourth UQSay seminar on UQ, DACE and related topics will take place online on Thursday afternoon, October 7, 2021. 2–3 PM — Elmar Plischke (T.U. Clausthal) — [slides] Optimal-transport-based sensitivity measures and their computation The theory of optimal transport and the use of Wasserstein distances are attracting increasing attention in statistics and machine learning. At the same time, the definition of sensitivity measures for multivariate responses is a topical research subject. This work examines the construction of probabilistic sensitivity measures using the theory of optimal transport. We obtain a new family of indicators that can deal with multivariate outputs. We test estimators based on alternative algorithmic approaches for computing optimal transport problems, showing promising results and fast execution times for reasonable sample sizes. Joint work with E. Borgonovo & G. Savaré (Bocconi Univ.), A. Figalli (ETH Zürich). Ref: preprint + code snippets. The technical side of things: you can use Teams either directly from your web browser or using the "fat client", which is available for most platforms (Windows, Linux, Mac, Android & iOS). We strongly recommend the latter option whenever possible. Please give it a try before the seminar to anticipate potential problems. The thirty-third UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, July 1, 2021. 2–3 PM — Iason Papaioannou (T.U. Munich) — [slides] Reliability sensitivity analysis with FORM This talk discusses reliability sensitivity analysis with the first-order reliability method (FORM). Classical sensitivity indices, which are often used to assess the influence of the input random variables on the probability of failure, are the FORM $\alpha$-factors. These factors are the directional cosines of the most likely failure point in an underlying independent standard normal space and are obtained as by-products of the FORM analysis. The talk reviews a set of alternative reliability sensitivity indices and their estimation with FORM. Focus is put on variance-based reliability sensitivities that emerge from the variance decomposition of the indicator function of the failure event. The resulting first-order and total-effect reliability sensitivities can be estimated as a function of the FORM reliability indices and the $\alpha$-factors. The second part of the talk addresses decision-oriented sensitivities based on the concept of value of information. In particular, the indices associated with a decision related to the safety of an existing system are presented and their estimation with FORM is examined. The accuracy of the FORM approximations of the various sensitivities is demonstrated with numerical examples. Joint work with Daniel Straub. Ref: DOI:10.1016/j.ress.2021.107496 (preprint) and arXiv:2104.00986. Organizing committee: Julien Bect (L2S), Emmanuel Vazquez (L2S), Didier Clouteau (MSSMAT), Filippo Gatti (MSSMAT), Fernando Lopez Caballero (MSSMAT), Amélie Fau (LMT), Bertrand Iooss (EDF R&D). The thirty-second UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, June 17, 2021.
2–3 PM — Andreas Fichtner (ETH Zürich) — [slides] Probabilistic Full-Waveform Inversion In the course of the past decade, full-waveform inversion has matured from a largely idealistic dream into a commonly applied method to image the internal structure of inaccessible bodies. Despite undeniable success, a major problem remains: the quantification of uncertainties in this often strongly nonlinear inverse problem. In this lecture, I will present a series of computational approaches that bring probabilistic full-waveform inversion with complete uncertainty quantification within reach: 1) Hamiltonian Monte Carlo sampling of the posterior probability density treats model parameters as particles that orbit through model space, obeying Hamilton's equations from classical mechanics. The scaling properties of Hamiltonian Monte Carlo allow us to consider high-dimensional model spaces that often cannot be considered with more traditional, derivative-free sampling methods. 2) Autotuning based on limited-memory quasi-Newton methods provides nearly optimal mass matrices for Hamiltonian Monte Carlo, thereby largely removing laborious manual tuning. A factorised version of the L-BFGS algorithm, in particular, can increase the effective sample size by more than an order of magnitude. 3) Wavefield-adapted spectral-element meshes exploit prior knowledge on the geometry of wavefields. Such prior knowledge is frequently available for media that are smooth relative to the minimum wavelength. Wavefield-adapted meshes have the potential to drastically reduce the number of elements, leading to a computational forward modelling cost that makes Monte Carlo sampling possible. Joint work with Lars Gebraad & Christian Boehm. Ref: DOI:10.1029/2019JB018428. The thirty-first UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, June 3, 2021. 2–3 PM — Adrien Touboul (IRT SystemX & CERMICS) — [slides] Uncertainty Quantification in graphs of functions through sample reweighting The need for multidisciplinary simulations in the design of complex industrial systems motivates the development of Uncertainty Quantification and Sensitivity Analysis methods that are compatible with disciplinary autonomy. This presentation focuses on decomposition methods based on sample reweighting. The design process is modeled by a graph, whose nodes are simulation codes and edges are exchanges of variables. The first part of this presentation is dedicated to the study of one particular reweighting method, based on the minimization of a Wasserstein distance. An explicit expression of the weights is exhibited in terms of Nearest Neighbors, and some consistency results and rates of convergence are derived. The second part is dedicated to the general propagation of the weights in directed acyclic graphs, inspired by an existing algorithm of Amaral, Allaire & Willcox (2014). A general framework is developed to characterize the consistency of the global algorithm in terms of a local weighting condition at each node. We observe that some weighting schemes can be obtained naturally from nonparametric linear regressions and linear smoothers. An interesting equivalence with existing tools in the literature simplifies the numerical computations. The final algorithm does not require the simulation codes to be run at the same time or in a specific order. Hence, it allows for disciplinary autonomy. Joint work with Julien Reygner.
Ref: hal-02968059. The thirtieth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, May 20, 2021. 2–3 PM — Clément Gauchy (CEA & École polytechnique) — [slides] An information geometry approach for robustness analysis in uncertainty quantification of computer codes Robustness analysis is an emerging field in the uncertainty quantification domain. It involves analyzing the response of a computer model—which has inputs whose exact values are unknown—to the perturbation of one or several of its input distributions. Practical robustness analysis methods therefore require a coherent methodology for perturbing distributions; we present here one such rigorous method, based on the Fisher distance on manifolds of probability distributions. Further, we provide a numerical method to calculate perturbed densities in practice, which comes from Lagrangian mechanics and involves solving a system of ordinary differential equations. The method introduced for perturbations is then used to compute quantile-related robustness indices. We illustrate these "perturbed-law based" indices on several numerical models. We also apply our methods to an industrial setting: the simulation of a loss of coolant accident in a nuclear reactor, where several dozen of the model's physical parameters are not known exactly, and where only limited knowledge of their distributions is available. Joint work with Jérôme Stenger, Roman Sueur and Bertrand Iooss. Refs: DOI:10.1080/00401706.2021.1905072. The twenty-ninth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, May 6, 2021. 2–3 PM — Stefano Mariani (DICA @ Politecnico di Milano) — [slides] Online damage detection and model updating via proper orthogonal decomposition and recursive Bayesian filters An approach based on the synergistic use of proper orthogonal decomposition (POD) and Kalman filtering is proposed for the online health monitoring of damaged structures. The reduced-order model of the structure is obtained during the initial training stage of monitoring; afterward, effective estimations of structural damage are provided online by tracking the evolution in time of stiffness parameters and projection bases handled in the model order reduction procedure. Such tracking is accomplished via two Kalman filters: a first one to deal with the time evolution of a joint state vector, gathering the reduced-order state and the stiffness terms degraded by damage; a second one to deal with the update of the reduced-order model in case of damage evolution. Both filters exploit the information conveyed by measurements of the structural response to the external excitations. Focusing on multi-story shear buildings, the capability and performance of the proposed approach are assessed in terms of tracked variation of the stiffness terms, identified damage location and speed-up of the whole health monitoring procedure. Joint work with Saeed Eftekhar Azam, Giovanni Capellari, Francesco Caimmi. Refs: 10.1016/j.engstruct.2017.12.031, 10.1007/s11071-017-3530-1, 10.3390/s16010002, 10.1504/IJSMSS.2015.078355, 10.1016/j.engstruct.2013.04.004. The twenty-eighth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, April 22, 2021. 2–3 PM — Chris Oates (Newcastle University and Alan Turing Inst.)
— [slides] Optimal Thinning of MCMC Output There is a recent trend in computational statistics to move away from sampling methods and towards optimisation methods for posterior approximation. These include discrepancy minimisation, gradient flows and control functionals—all of which have the potential to deliver faster convergence than a Monte Carlo method. In this talk we will see how ideas from discrepancy minimisation can be applied to the problem of optimal thinning of MCMC output. Joint work with Marina Riabiz, Wilson Chen, Jon Cockayne, Pawel Swietach, Steve Niederer, Lester Mackey. Ref: arXiv:2005.03952 and http://stein-thinning.org. The twenty-seventh UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, April 1, 2021. 2–3 PM — Julien Pelamatti (EDF R&D) — [slides] Bayesian optimization of variable-size design space problems Within the framework of complex system design, it is often necessary to solve mixed variable optimization problems, in which the objective and constraint functions can depend simultaneously on continuous and discrete variables. Additionally, complex system design problems occasionally present a variable-size design space. This results in an optimization problem for which the search space varies dynamically (with respect to both number and type of variables) during the optimization process, as a function of the values of specific discrete decision variables. Similarly, the number and type of constraints can vary as well. In this work, two alternative Bayesian optimization-based approaches are proposed in order to solve this type of optimization problem. The first one consists of a budget allocation strategy that allows the computational budget to be focused on the most promising design sub-spaces. The second approach, instead, is based on the definition of a kernel function that allows computing the covariance between samples characterized by partially different sets of variables. The results obtained on analytical and engineering-related test cases show a faster and more consistent convergence of both proposed methods with respect to the standard approaches. Joint work with Loic Brevault (ONERA), Mathieu Balesdent (ONERA), El-Ghazali Talbi (Inria Lille), Yannick Guerin (CNES). Ref: arXiv:2003.03300. The twenty-sixth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, March 18, 2021. 2–3 PM — Amaya Nogales Gómez (I3S, Sophia Antipolis) — [slides] Incremental space-filling design based on coverings and spacings: improving upon low discrepancy sequences The paper addresses the problem of defining families of ordered sequences {x_i}_{i∈N} of elements of a compact subset X of R^d whose prefixes X_n = {x_i}_{i=1,…,n}, for all orders n, have good space-filling properties as measured by the dispersion (covering radius) criterion. Our ultimate aim is the definition of incremental algorithms that generate sequences X_n with small optimality gap, i.e., with a small increase in the maximum distance between points of X and the elements of X_n with respect to the optimal solution X_n*. The paper is a first step in this direction, presenting incremental design algorithms with a proven optimality bound with respect to one-parameter families of criteria based on coverings and spacings that both converge to dispersion for large values of their parameter. Joint work with Luc Pronzato and Maria-Joao Rendas. Ref: hal-02987983.
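For readers unfamiliar with the dispersion criterion, the sketch below evaluates the covering radius over a finite candidate set and grows an incremental design with the classic greedy farthest-point rule; this is only a generic baseline shown for illustration, not the coverings-and-spacings algorithms of the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(3)

# Finite candidate set standing in for the compact domain X (here [0,1]^2).
candidates = rng.uniform(size=(2000, 2))

def covering_radius(design, candidates):
    # Dispersion: maximum over the domain of the distance to the nearest design point.
    return cdist(candidates, design).min(axis=1).max()

# Greedy farthest-point incremental construction: each new point is the
# candidate currently farthest from the existing design.
design = candidates[[0]]
for _ in range(19):
    dists = cdist(candidates, design).min(axis=1)
    design = np.vstack([design, candidates[np.argmax(dists)]])

print("covering radius of X_20:", covering_radius(design, candidates))
```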
The twenty-fifth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, March 4, 2021. 2–3 PM — Victor Picheny (Secondmind) Bayesian optimisation: ablation study, global performance assessment and improvements based on trust regions Bayesian Optimisation algorithms (BO) are global optimisation methods that iterate by constructing and using conditional Gaussian processes (GP). It is a common claim that BO is state-of-the-art for costly functions. However, this claim is weakly supported by experimental evidence, as BO is most often compared to itself, rather than to algorithms of a different nature. In this work, we study the performance of BO within the well-known COmparing Continuous Optimizers benchmark (COCO). We first analyse the sensitivity of BO to its own parameters, enabling us to answer general questions regarding the choice of the GP kernel or its trend, the initial GP budget, and the suboptimisation of the acquisition function. Then, we study on which function class and dimension BO is relevant when compared to state-of-the-art optimisers for expensive functions. The second part of this talk describes a new BO algorithm to improve scalability with dimension, called TREGO (trust-region-like efficient global optimisation). TREGO alternates between regular BO steps and local steps within a trust region. By following a classical scheme for the trust region (based on a sufficient decrease condition), we demonstrate that our algorithm enjoys strong global convergence properties, while departing from EGO only for a subset of optimization steps. The COCO benchmark experiments reveal that TREGO consistently outperforms EGO and closes the performance gap with other state-of-the-art algorithms in conditions (high budget and dimension) for which BO was struggling to compete previously. Joint work with Youssef Diouane, Rodolphe Le Riche, Alexandre Scotto Di Perrotolo. Ref: arXiv:2101.06808 & DiceOptim. The twenty-fourth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, February 18, 2021. 2–3 PM — Amandine Marrel (CEA & IMT) ICSCREAM methodology for the Identification of penalizing Configurations using SCREening And Metamodel — Application to high-dimensional thermal-hydraulic numerical experiments In the framework of risk assessment in nuclear accident analysis, best-estimate computer codes are used to estimate safety margins. Several inputs of the code can be uncertain, due to a lack of knowledge but also to the particular choice of accidental scenario being considered. The objective of this work is to identify the most penalizing (or critical) configurations of several input parameters (called "scenario inputs"), independently of the uncertainty of the other inputs. Critical configurations of the scenario inputs correspond to high values of the code output Y, defined here by exceeding the 90%-quantile. However, thermal-hydraulic codes are too CPU-time expensive to be directly used to propagate the input uncertainties and solve the inversion problem. The adopted solution consists in fitting the code output with a metamodel, built from a reduced number of code simulations. When the number of input parameters is very large (e.g., around a hundred here), the metamodel building remains a challenge. To overcome this, we have developed a methodology, called ICSCREAM, for the Identification of penalizing Configurations using SCREening And Metamodel.
Applied to a Monte Carlo sample of code simulations, the ICSCREAM methodology judiciously combines a sensitivity analysis (SA) step, to identify and rank the main influential inputs and to reduce the dimension, before building a Gaussian process (GP) metamodel. The SA relies on new statistical independence tests that aggregate information from global and target Hilbert-Schmidt independence criteria. The GP is then efficiently built with a sequential process, where the inputs are taken into account in a more or less fine way, according to their supposed influence. Finally, the GP metamodel is intensively used to estimate the conditional probabilities of Y exceeding the critical value, according to each input to be penalized. Accurate uncertainty propagation, not feasible with the computationally costly model, therefore becomes accessible with the ICSCREAM methodology. Joint work with Bertrand Iooss (EDF R&D & IMT) and Vincent Chabridon (EDF R&D). Ref: hal-02535146. The twenty-third UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, February 4, 2021. 2–3 PM — Clémentine Prieur (LJK, Univ. Grenoble Alpes) Global sensitivity analysis for models described by stochastic differential equations Many mathematical models involve input parameters, which are not precisely known. Global sensitivity analysis aims to identify the parameters whose uncertainty has the largest impact on the variability of a quantity of interest. One of the statistical tools used to quantify the influence of each input variable on the quantity of interest is given by the Sobol' sensitivity indices. In this work, we consider stochastic models described by stochastic differential equations (SDE). We focus the study on mean quantities, defined as the expectation with respect to the Wiener measure of a quantity of interest related to the solution of the SDE itself. Our approach is based on a Feynman-Kac representation of the quantity of interest, from which we get a parametrized partial differential equation (PDE) representation of our initial problem. We then handle the uncertainty on the parametrized PDE using polynomial chaos expansion and a stochastic Galerkin projection. Joint work with Pierre Étoré, Dang Khoi Pham & Long Li. Ref: hal-01926919. The twenty-second UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, January 21, 2021. 14h–15h — Cédric Travelletti (University of Bern) Implicit Update for Large-Scale Inversion under GP prior We present an almost matrix-free update method for posterior Gaussian process distributions under sequential observations of linear functionals. By introducing a novel implicit representation of the posterior covariance matrix, we are able to extract posterior covariance information on large grids and to provide a framework for sequential data assimilation when covariance matrices cannot fit in memory. This is useful in Bayesian linear inverse problems with Gaussian priors, where the matrices involved grow quadratically in the number of elements in the discretization grid, creating memory bottlenecks when inverting on fine-grained discretizations. We illustrate our method by applying it to an excursion set recovery task arising from a gravimetric inverse problem on Stromboli volcano.
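In finite dimensions, the sequential update that this work scales up is ordinary Gaussian conditioning on a linear functional of the field; the small sketch below uses an explicit covariance matrix (precisely what becomes infeasible on large grids, and what the implicit representation avoids), with a toy grid, kernel and observations chosen purely for illustration.

```python
import numpy as np

# Discretised GP prior on a small grid: zero mean, squared-exponential covariance.
x = np.linspace(0.0, 1.0, 50)
mean = np.zeros(50)
cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)

def update(mean, cov, g, y, noise_var):
    # Condition the Gaussian (mean, cov) on one observation y = g @ field + noise.
    s = g @ cov @ g + noise_var          # predictive variance of the observation
    k = cov @ g / s                      # Kalman-type gain (one column)
    return mean + k * (y - g @ mean), cov - np.outer(k, g @ cov)

# Sequential assimilation of two linear functionals: a point value and a grid average.
g1 = np.zeros(50)
g1[10] = 1.0                             # point observation at x[10]
g2 = np.full(50, 1.0 / 50)               # average of the field over the grid
for g, y in [(g1, 0.7), (g2, 0.1)]:
    mean, cov = update(mean, cov, g, y, noise_var=1e-4)

print("posterior mean at x[10]:", mean[10])
```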
In this setting, we demonstrate the computation and sequential updating of the exact posterior mean and covariance at resolutions finer than what state-of-the-art techniques can handle, and showcase how the proposed framework enables large-scale probabilistic excursion set estimation as well as the derivation of efficient experimental design strategies tailored to this goal. Joint work with David Ginsbourger (Univ. Bern) and Niklas Linde (Univ. Lausanne). Ref: Volcapy (github). The twentieth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, December 17, 2020. 14h–15h — Bojana Rosic (University of Twente, Netherlands) Inverse methods for damage estimation in concrete given small data sets One of the main issues in materials science is the estimation of constitutive laws given experimental data, which may come in different forms, ranging from microscopic images to macroscopic data collected, for example, by strain gauges. As data are often heterogeneous, of multi-scale/temporal nature, possibly ambiguous and of low quality due to missing values, the learning process often requires the careful application of existing data fusion algorithms, or the design of new ones, suited to small data sets. This talk will present computationally efficient Bayesian algorithms for damage estimation. In particular, special attention will be paid to damage model estimation using both classical uncertainty quantification and machine/deep learning approaches. Joint work with (alphabetical order) X. Chapeleau, P.-E. Charbonnel, L.-M. Cottineau, L. De Lorenzis, A. Ibrahimbegovic, V. Le Corvec, H.G. Matthies, E. Merliot, M.S. Sarfaraz, D. Siegert, R. Vidal, J. Waeytens and T. Wu. Refs: hal-01379214, arXiv:1909.07209, DOI:10.1007/s00466-020-01942-x, arXiv:1912.03108. The nineteenth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, December 3, 2020. 14h–15h — Álvaro Rollón de Pinedo (EDF R&D and Université Grenoble Alpes) Functional outlier detection applied to nuclear transient simulation analysis The ever-increasing recording and storing capabilities of industrial systems provide a large amount of physical data that can be exploited by engineers. These data may take the form of functions, usually a one-dimensional function of time, but possibly a multidimensional function of space and time. Finding the subsets of objects that behave abnormally in these data can prove useful in order to avoid spurious results, simulations that do not reproduce certain physical phenomena as expected, or extreme physical events and domains. In the context of nuclear transient simulations, safety reports mostly focus on the study of some scalar parameters (safety criteria), supposed to guarantee the safety of an installation during an accidental transient as long as they do not surpass a previously established threshold. Nevertheless, state-of-the-art simulation codes (called Best Estimate) provide much richer and more complex information, which can be better exploited through the identification of outlying simulations among the generated outputs.
The goal of this talk is to introduce the functional outlier detection domain, highlighting its interest in industrial settings, as well as to present our detection technique and the conclusions on the physical analysis of nuclear transients that can be obtained from its use. Joint work with Mathieu Couplet, Bertrand Iooss, Nathalie Marie, Amandine Marrel, Elsa Merle and Roman Sueur. Reference: hal-02965504. The eighteenth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, November 19, 2020. 14h–15h — Eyke Hüllermeier (Paderborn University, Germany) — [slides] Aleatoric and Epistemic Uncertainty in Machine Learning: An Ensemble-based Approach Due to the steadily increasing relevance of machine learning for practical applications, many of which come with safety requirements, the notion of uncertainty has received increasing attention in machine learning research in the last couple of years. This talk will address the question of how to distinguish between two important types of uncertainty, often referred to as aleatoric and epistemic, in the setting of supervised learning, and how to quantify these uncertainties in terms of suitable numerical measures. Roughly speaking, while aleatoric uncertainty is due to inherent randomness, epistemic uncertainty is caused by a lack of knowledge. As a concrete approach for uncertainty quantification in machine learning, the use of ensemble learning methods will be discussed. Joint work with S. Destercke, V.-L. Nguyen, M. H. Shaker & W. Waegeman. References: arXiv:1910.09457, arXiv:1909.00218, arXiv:2001.00893. The seventeenth UQSay seminar on UQ, DACE and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, November 5, 2020. 14h–15h — Luc Bonnet (ONERA & MSSMAT) — [slides] The expected performance of a system can generally differ from its operational performance due to the variability of some parameters. Optimal Uncertainty Quantification is a powerful mathematical tool that can be used to rigorously bound the probability of exceeding a given performance threshold for uncertain operational conditions or system characteristics. Metamodeling is at the heart of this research framework. From this perspective, Kernel Flow, a recent method developed by Owhadi & Yoo to obtain a kriging metamodel, will be presented. The results obtained will be illustrated by examples in numerical and experimental aerodynamics. Joint work with Eric Savin and Houman Owhadi. References: 10.1016/j.jcp.2019.03.040, 10.1137/10080782X & 10.3390/a13080196. The sixteenth UQSay seminar on Uncertainty Quantification and related topics, organized by L2S, MSSMAT, LMT and EDF R&D, will take place online on Thursday afternoon, October 22, 2020. 14h–15h — Nicolas Bousquet (EDF R&D) Well-posed stochastic inversion in uncertainty quantification, with links to sensitivity analysis Stochastic inversion problems are typically encountered when one wants to quantify the uncertainty affecting the inputs of computer models. They consist in estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by a mixture of aleatory and epistemic uncertainties. While they are characterized by identifiability conditions, well-posedness constraints of "signal to noise" have to be taken into account within the definition of the model, prior to inference.
In addition to numerical conditioning notions and regularization techniques used in inverse problems, we propose and investigate an interpretation of well-posedness, in the context of parametric uncertainty quantification and global sensitivity analysis, based on the degradation of Fisher information. It makes such prior constraints explicit for linear or linearizable operators, the linearization being either local (based on differentiability) or variational. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and give hope for future extensions to a fully non-linear framework, for example through the use of linear Gaussian mixtures. The fifteenth UQSay seminar on Uncertainty Quantification and related topics, organized by L2S, MSSMAT, and EDF R&D, will take place online on Thursday afternoon, October 8, 2020. 14h–15h — Sebastian Schöps (TU Darmstadt) Uncertainty Quantification for Maxwell's eigenproblem based on isogeometric analysis and mode tracking Superconducting cavities are used in particle accelerators, e.g. at DESY in Hamburg, Germany. Their resonating electromagnetic field is commonly characterised by eigenmodes and eigenvalues which are very sensitive to small geometry deformations. This presentation proposes an uncertainty quantification workflow based on a Karhunen–Loève expansion of the manufacturing imperfections and eigenvalue tracking based on algebraic and geometric homotopies. Joint work with Niklas Georg, Wolfgang Ackermann, Jacopo Corno. Reference: DOI:10.1016/j.cma.2019.03.002 (arXiv:1802.02978). Organizing committee: Julien Bect (L2S), Emmanuel Vazquez (L2S), Didier Clouteau (MSSMAT), Filippo Gatti (MSSMAT), Fernando Lopez Caballero (MSSMAT), Bertrand Iooss (EDF R&D). The fourteenth UQSay seminar on Uncertainty Quantification and related topics, organized by L2S, MSSMAT, and EDF R&D, will take place online on Thursday afternoon, September 24, 2020. 14h–15h — Amélie Fau (LMT, ENS Paris-Saclay) Alternative strategies for adaptive sampling for kriging metamodels A large variety of strategies have been proposed in the literature to offer optimal datasets for kriging metamodels. Even though adaptive schemes guarantee convergence and improvement of estimation accuracy, for instance for Galerkin approaches at least in a goal-oriented sense, using usual adaptive sampling schemes for kriging metamodels might be detrimental, worsening prediction results compared to one-shot sampling techniques. The goal of this seminar is to share our experience on cases leading to this disadvantageous behavior. In addition, problems leading to beneficial behavior will be discussed, to highlight criteria for deciding about cases of interest for which adaptive sampling strategies are highly promising. Joint work with Jan Fuhg & Udo Nackenhorst (Leibniz Universität, Hannover). Reference: DOI:10.1007/s11831-020-09474-6. Organizers: Julien Bect (L2S), Emmanuel Vazquez (L2S), Didier Clouteau (MSSMAT), Filippo Gatti (MSSMAT), Fernando Lopez Caballero (MSSMAT), Bertrand Iooss (EDF R&D). Practical details: the seminar will be held online using Microsoft Teams. If you want to attend this seminar (or any of the forthcoming online UQSay seminars), and if you do not already have access to the UQSay group on Teams, simply send an email and you will be invited.
Please specify which email address the invitation must be sent to (this has to be the address associated with your Teams account). You will find the link to the seminar on the "General" UQSay channel on Teams, approximately 15 minutes before the beginning. The thirteenth UQSay seminar on Uncertainty Quantification and related topics, organized by L2S, MSSMAT, and EDF R&D, will take place online on Thursday afternoon, September 10, 2020. 14h–15h — Balázs Kégl (Noah's Ark Lab, Huawei Paris) — [slides] DARMDN: Deep autoregressive mixture density nets for dynamical system modelling Unlike computers, physical engineering systems (such as data center cooling or wireless network control) do not get faster with time. This is arguably one of the main reasons why recent beautiful advances in deep reinforcement learning (RL) stay mostly in the realm of simulated worlds and do not immediately translate to practical success in the real world. In order to make the best use of the small data sets these systems generate, we develop data-driven neural simulators to model the system and apply model-based control to optimize them. In this talk I will present the first step of this research agenda, a new versatile system modelling tool called deep autoregressive mixture density net (DARMDN – pronounced darm-dee-en). We argue that the performance of model-based reinforcement learning is partly limited by the approximation capacity of the currently used conditional density models and show how DARMDN alleviates these limitations. The model, combined with a random shooting controller, establishes a new state of the art on the popular Acrobot benchmark. Our most interesting and counter-intuitive finding is that the "sincos" Acrobot system, which requires no multimodal posterior predictives, can be solved with a deterministic model, but only if it is trained as a probabilistic model. A deterministic model that is trained to minimize MSE leads to prediction error accumulation. Joint work with Gabriel Hurtado and Albert Thomas. The seventh UQSay seminar on Uncertainty Quantification and related topics, organized by L2S and MSSMAT, will take place on Thursday afternoon, January 16, 2020, at CentraleSupelec Paris-Saclay (Eiffel building, amphi III). We will have two talks: 14h — Bertrand Iooss (EDF R&D / PRISME dept.) — [slides] Iterative estimation in uncertainty and sensitivity analysis While building and using numerical simulation models, uncertainty and sensitivity analysis are invaluable tools. In engineering studies, numerical model users and modellers have shown high interest in these techniques, which require running the simulation model many times with different values of the model inputs in order to compute statistical quantities of interest (QoI, i.e. mean, variance, quantiles, sensitivity indices…). In this talk we will focus on new issues related to large-scale numerical systems that simulate complex spatial and temporal evolutions. Indeed, the current practice consists in storing all the simulation results. Such storage quickly becomes overwhelming, and the associated long read times make the estimation of the QoI CPU-time consuming. One solution consists in avoiding this storage and computing the QoI on the fly (also called in situ). This turns the problem into one of iterative statistical estimation.
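To make the in-situ idea concrete: each quantity of interest is updated one simulation output at a time, so nothing needs to be stored. The sketch below uses Welford's recursion for the running mean and variance and the textbook Robbins-Monro recursion for a quantile; it is a generic illustration and may differ from the adaptations discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = 0.95                      # target quantile level

# Running statistics, updated one simulation output at a time (outputs never stored).
n, mean, m2, q = 0, 0.0, 0.0, 0.0
for _ in range(100000):
    y = rng.normal()              # stand-in for one simulation output
    n += 1
    delta = y - mean              # Welford's recursion for mean and variance
    mean += delta / n
    m2 += delta * (y - mean)
    q += (alpha - (y <= q)) / n   # Robbins-Monro recursion for the alpha-quantile

print("mean", mean, "variance", m2 / (n - 1), "0.95-quantile", q)
```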
The general mathematical and computational issues will be posed, and particular attention will be paid to the estimation of quantiles (via an adaptation of the Robbins-Monro algorithm) and variance-based sensitivity indices (the so-called Sobol' indices). Joint work with Yvan Fournier (EDF), Bruno Raffin (INRIA), Alejandro Ribés (EDF), Théophile Terraz (INRIA).

The third UQSay seminar, organized by L2S and EDF R&D, will take place on Thursday afternoon, June 13, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi V). We will have two talks:

14h — Alexandre Janon (Laboratoire de Mathématique d'Orsay) — [slides] Part 1: Consistency of Sobol indices with respect to stochastic ordering of input parameters

In the past decade, Sobol's variance decomposition has been used as a tool – among others – in risk management. We show some links between global sensitivity analysis and stochastic ordering theories. This gives an argument in favor of using Sobol's indices in uncertainty quantification, as one indicator among others. Reference: https://doi.org/10.1051/ps/2018001 (hal-01026373)

Part 2: Global optimization using Sobol indices

We propose and assess a new global (derivative-free) optimization algorithm, inspired by the LIPO algorithm, which uses variance-based sensitivity analysis (Sobol indices) to reduce the number of calls to the objective function. This method should be efficient for optimizing costly functions satisfying the sparsity-of-effects principle. Reference: hal-02154121

15h — Pierre Barbillon (MIA Paris) — [slides] Sensitivity analysis of spatio-temporal models describing nitrogen transfers, transformations and losses at the landscape scale

Modelling complex systems such as agroecosystems often requires the quantification of a large number of input factors. Sensitivity analyses are useful to determine the appropriate spatial and temporal resolution of models and to reduce the number of factors to be measured or estimated accurately. Comprehensive spatial and temporal sensitivity analyses were applied to the NitroScape model, a deterministic spatially distributed model describing nitrogen transfers and transformations in rural landscapes. Simulations were run on a theoretical landscape that represented five years of intensive farm management and covered an area of 3 km². Cluster analyses were applied to summarize the results of the sensitivity analysis on the ensemble of model outputs. The methodology we applied is useful to synthesize sensitivity analyses of models with multiple space-time input and output variables and could be ported to models other than NitroScape. Reference: https://doi.org/10.1016/j.envsoft.2018.09.010 (arXiv:1709.08608)

Organizers: Julien Bect (L2S) and Bertrand Iooss (EDF R&D). No registration is needed, but an email would be appreciated if you intend to come.

The first UQSay seminar, organized by L2S, will take place in the afternoon of March 21, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi IV). We will have two talks:

14h – Mickaël Binois (INRIA Sophia-Antipolis) [slides] Heteroskedastic Gaussian processes for simulation experiments

An increasing number of time-consuming simulators exhibit a complex noise structure that depends on the inputs. To conduct studies with limited budgets of evaluations, new surrogate methods are required to model simultaneously the mean and variance fields. To this end, we present recent advances in Gaussian process modeling with input-dependent noise.
First, we describe a simple, yet efficient, joint modeling framework that relies on replication for both speed and accuracy. Then we tackle the issue of leveraging replication and exploration in a sequential manner for various goals, such as obtaining a globally accurate model, optimization, contour finding, and active subspace estimation. We illustrate these on applications coming from epidemiology and inventory management. Reference: https://arxiv.org/abs/1710.03206.

15h – François Bachoc (IMT, Toulouse) [slides] Gaussian process regression model for distribution inputs

Monge-Kantorovich distances, otherwise known as Wasserstein distances, have received growing attention in statistics and machine learning as a powerful discrepancy measure for probability distributions. In this paper, we focus on forecasting a Gaussian process indexed by probability distributions. For this, we provide a family of positive definite kernels built using transportation-based distances. We provide asymptotic results for covariance function estimation and prediction. We also provide numerical comparisons with other forecast methods based on distribution inputs.

Organizers: Julien Bect (L2S) and Emmanuel Vazquez (L2S). No registration is needed, but an email would be appreciated if you intend to come.
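As a purely illustrative aside on the transportation-based kernels mentioned in the last abstract above, the following minimal Python sketch (not code from the talk or the paper; all names and parameter values are made up) evaluates a squared-exponential kernel on the 1-D Wasserstein-2 distance between two empirical distributions, computed by matching sorted samples.

```python
import numpy as np

def wasserstein2_1d(x, y):
    # W2 distance between two 1-D empirical distributions with the same number of
    # samples: root-mean-square difference of the sorted samples (quantile matching).
    xs, ys = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((xs - ys) ** 2))

def w2_kernel(x, y, lengthscale=1.0):
    # Squared-exponential kernel evaluated on the W2 distance; for distributions on
    # the real line, such transport-based constructions can yield positive definite kernels.
    return np.exp(-wasserstein2_1d(x, y) ** 2 / (2.0 * lengthscale ** 2))

rng = np.random.default_rng(0)
mu_samples = rng.normal(0.0, 1.0, size=500)   # samples representing one input distribution
nu_samples = rng.normal(0.5, 1.2, size=500)   # samples representing another
print(w2_kernel(mu_samples, nu_samples))
```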
Deformation rigidity for subgroups of $SL\left( {n,{\mathbf{Z}}} \right)$ acting on the $n$-torus
Author: Steven Hurder
Journal: Bull. Amer. Math. Soc. 23 (1990), 107-113
MSC (1985): Primary 57S25, 58H15, 22E40
DOI: https://doi.org/10.1090/S0273-0979-1990-15914-2
References
D. V. Anosov, Geodesic flows on closed Riemannian manifolds of negative curvature, Trudy Mat. Inst. Steklov. 90 (1967), 209 (Russian). MR 0224110
Armand Borel, Stable real cohomology of arithmetic groups, Ann. Sci. École Norm. Sup. (4) 7 (1974), 235–272 (1975). MR 387496
Armand Borel, Stable real cohomology of arithmetic groups. II, Manifolds and Lie groups (Notre Dame, Ind., 1980), Progr. Math., vol. 14, Birkhäuser, Boston, Mass., 1981, pp. 21–55. MR 642850
L. Flaminio and A. Katok, Rigidity of symplectic Anosov diffeomorphisms on low dimensional tori, Cal. Tech., preprint, 1989.
John Franks, Anosov diffeomorphisms on tori, Trans. Amer. Math. Soc. 145 (1969), 117–124. MR 253352, https://doi.org/10.1090/S0002-9947-1969-0253352-7
M. W. Hirsch, C. C. Pugh, and M. Shub, Invariant manifolds, Lecture Notes in Mathematics, Vol. 583, Springer-Verlag, Berlin-New York, 1977. MR 0501173
S. Hurder, Deformation rigidity and structural stability for Anosov actions of higher-rank lattices, preprint.
S. Hurder, Problems on rigidity of group actions and cocycles, Ergodic Theory Dynam. Systems 5 (1985), no. 3, 473–484. MR 805843, https://doi.org/10.1017/S0143385700003084
S. Hurder and A. Katok, Differentiability, rigidity and Godbillon-Vey classes for Anosov flows, Inst. Hautes Études Sci. Publ. Math. 72 (1990), 5–61 (1991). MR 1087392
J. Lewis, Infinitesimal rigidity for the action of SL(n, Z) on $T^n$, Thesis, University of Chicago, May, 1989.
A. N. Livšic, Cohomology of dynamical systems, Izv. Akad. Nauk SSSR Ser. Mat. 36 (1972), 1296–1320 (Russian). MR 0334287
R. de la Llave, Invariants for smooth conjugacy of hyperbolic dynamical systems. II, Comm. Math. Phys. 109 (1987), no. 3, 369–378. MR 882805
R. de la Llave, J. M. Marco, and R. Moriyón, Canonical perturbation theory of Anosov systems and regularity results for the Livšic cohomology equation, Ann. of Math. (2) 123 (1986), no. 3, 537–611. MR 840722, https://doi.org/10.2307/1971334
J. M. Marco and R. Moriyón, Invariants for smooth conjugacy of hyperbolic dynamical systems. I, Comm. Math. Phys. 109 (1987), no. 4, 681–689. MR 885566
G. A. Margulis, Discrete subgroups of semisimple Lie groups, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], vol. 17, Springer-Verlag, Berlin, 1991. MR 1090825
Gopal Prasad and M. S. Raghunathan, Cartan subgroups and lattices in semi-simple groups, Ann. of Math. (2) 96 (1972), 296–317. MR 302822, https://doi.org/10.2307/1970790
Michael Shub, Global stability of dynamical systems, Springer-Verlag, New York, 1987. With the collaboration of Albert Fathi and Rémi Langevin; Translated from the French by Joseph Christy. MR 869255
Dennis Stowe, The stationary set of a group action, Proc. Amer. Math. Soc. 79 (1980), no. 1, 139–146. MR 560600, https://doi.org/10.1090/S0002-9939-1980-0560600-2
Robert J. Zimmer, Lattices in semisimple groups and invariant geometric structures on compact manifolds, Discrete groups in geometry and analysis (New Haven, Conn., 1984), Progr. Math., vol. 67, Birkhäuser Boston, Boston, MA, 1987, pp. 152–210. MR 900826, https://doi.org/10.1007/978-1-4899-6664-3_6
Gaia Data Release 2: The catalogue of radial velocity standard stars (1804.09370)
C. Soubiran, G. Jasniewicz, L. Chemin, C. Zurbach, N. Brouillet, P. Panuzzo, P. Sartoretti, D. Katz, J.-F. Le Campion, O. Marchal, D. Hestroffer, F. Thévenin, F. Crifo, S. Udry, M. Cropper, G. Seabroke, Y. Viala, K. Benson, R. Blomme, A. Jean-Antoine, H. Huckle, M. Smith, S. G. Baker, Y. Damerdji, C. Dolding, Y. Frémat, E. Gosset, A. Guerrier, L.P. Guy, R. Haigron, K. Janßen, G. Plum, C. Fabre, Y. Lasne, F. Pailler, C. Panem, F. Riclet, F. Royer, G. Tauran, T. Zwitter, A. Gueguen, C. Turon
April 25, 2018 astro-ph.GA, astro-ph.SR
Aims. The Radial Velocity Spectrometer (RVS) on board the ESA satellite mission Gaia has no calibration device. Therefore, the radial velocity zero point needs to be calibrated with stars that are proved to be stable at a level of 300 m/s during the Gaia observations. Methods. We compiled a dataset of ~71000 radial velocity measurements from five high-resolution spectrographs. A catalogue of 4813 stars was built by combining these individual measurements. The zero point was established using asteroids. Results. The resulting catalogue has seven observations per star on average on a typical time baseline of six years, with a median standard deviation of 15 m/s. A subset of the most stable stars fulfilling the RVS requirements was used to establish the radial velocity zero point provided in Gaia Data Release 2. The stars that were not used for calibration are used to validate the RVS data.
Gaia Data Release 2: Processing the spectroscopic data (1804.09371)
P. Sartoretti, D. Katz, M. Cropper, P. Panuzzo, G.M. Seabroke, Y. Viala, K. Benson, R. Blomme, G. Jasniewicz, A. Jean-Antoine, H. Huckle, M. Smith, S. Baker, F. Crifo, Y. Damerdji, M. David, C. Dolding, Y. Fremat, E. Gosset, A. Guerrier, L.P. Guy, R. Haigron, K. Janssen, O. Marchal, G. Plum, C. Soubiran, F. Thevenin, M. Ajaj, C. Allende Prieto, C. Babusiaux, S. Boudreault, L. Chemin, C. Delle Luche, C. Fabre, A. Gueguen, N.C. Hambly, Y. Lasne, F. Meynadier, F. Pailler, C. Panem, F. Riclet, F. Royer, G. Tauran, C. Zurbach, T. Zwitter, F. Arenou, A. Gomez, V. Lemaitre, N. Leclerc, T. Morel, U. Munari, C. Turon, M. Zerjal
April 25, 2018 astro-ph.GA, astro-ph.SR, astro-ph.IM
The Gaia Data Release 2 contains the 1st release of radial velocities complementing the kinematic data of a sample of about 7 million relatively bright, late-type stars. Aims: This paper provides a detailed description of the Gaia spectroscopic data processing pipeline, and of the approach adopted to derive the radial velocities presented in DR2. Methods: The pipeline must perform four main tasks: (i) clean and reduce the spectra observed with the Radial Velocity Spectrometer (RVS); (ii) calibrate the RVS instrument, including wavelength, straylight, line-spread function, bias non-uniformity, and photometric zeropoint; (iii) extract the radial velocities; and (iv) verify the accuracy and precision of the results. The radial velocity of a star is obtained through a fit of the RVS spectrum relative to an appropriate synthetic template spectrum. An additional task of the spectroscopic pipeline was to provide 1st-order estimates of the stellar atmospheric parameters required to select such template spectra.
We describe the pipeline features and present the detailed calibration algorithms and software solutions we used to produce the radial velocities published in DR2. Results: The spectroscopic processing pipeline produced median radial velocities for Gaia stars with narrow-band near-IR magnitude Grvs < 12 (i.e. brighter than V~13). Stars identified as double-lined spectroscopic binaries were removed from the pipeline, while variable stars, single-lined, and non-detected double-lined spectroscopic binaries were treated as single stars. The scatter in radial velocity among different observations of a same star, also published in DR2, provides information about radial velocity variability. For the hottest (Teff > 7000 K) and coolest (Teff < 3500 K) stars, the accuracy and precision of the stellar parameter estimates are not sufficient to allow selection of appropriate templates. [Abridged] Gaia Data Release 2: Observational Hertzsprung-Russell diagrams (1804.09378) Gaia Collaboration, C. Babusiaux, F. van Leeuwen, M.A. Barstow, C. Jordi, A. Vallenari, D. Bossini, A. Bressan, T. Cantat-Gaudin, M. van Leeuwen, A.G.A. Brown, T. Prusti, J.H.J. de Bruijne, C.A.L. Bailer-Jones, M. Biermann, D.W. Evans, L. Eyer, F. Jansen, S.A. Klioner, U. Lammers, L. Lindegren, X. Luri, F. Mignard, C. Panem, D. Pourbaix, S. Randich, P. Sartoretti, H.I. Siddiqui, C. Soubiran, N.A. Walton, F. Arenou, U. Bastian, M. Cropper, R. Drimmel, D. Katz, M.G. Lattanzi, J. Bakker, C. Cacciari, J. Castañeda, L. Chaoul, N. Cheek, F. DeAngeli, C. Fabricius, R. Guerra, B. Holl, E. Masana, R. Messineo, N. Mowlavi, K. Nienartowicz, P. Panuzzo, J. Portell, M. Riello, G.M. Seabroke, P. Tanga, F. Thévenin, G. Gracia-Abril, G. Comoretto, M. Garcia-Reinaldos, D. Teyssier, M. Altmann, R. Andrae, M. Audard, I. Bellas-Velidis, K. Benson, J. Berthier, R. Blomme, P. Burgess, G. Busso, B. Carry, A. Cellino, G. Clementini, M. Clotet, O. Creevey, M. Davidson, J. DeRidder, L. Delchambre, A. Dell'Oro, C. Ducourant, J. Fernández-Hernández, M. Fouesneau, Y. Frémat, L. Galluccio, M. García-Torres, J. González-Núñez, J.J. González-Vidal, E. Gosset, L.P. Guy, J.-L. Halbwachs, N.C. Hambly, D.L. Harrison, J. Hernández, D. Hestroffer, S.T. Hodgkin, A. Hutton, G. Jasniewicz, A. Jean-Antoine-Piccolo, S. Jordan, A.J. Korn, A. Krone-Martins, A.C. Lanzafame, T. Lebzelter, W. Löffler, M. Manteiga, P.M. Marrese, J.M. Martín-Fleitas, A. Moitinho, A. Mora, K. Muinonen, J. Osinde, E. Pancino, T. Pauwels, J.-M. Petit, A. Recio-Blanco, P.J. Richards, L. Rimoldini, A.C. Robin, L.M. Sarro, C. Siopis, M. Smith, A. Sozzetti, M. Süveges, J. Torra, W. vanReeven, U. Abbas, A. Abreu Aramburu, S. Accart, C. Aerts, G. Altavilla, M.A. Álvarez, R. Alvarez, J. Alves, R.I. Anderson, A.H. Andrei, E. Anglada Varela, E. Antiche, T. Antoja, B. Arcay, T.L. Astraatmadja, N. Bach, S.G. Baker, L. Balaguer-Núñez, P. Balm, C. Barache, C. Barata, D. Barbato, F. Barblan, P.S. Barklem, D. Barrado, M. Barros, S. Bartholomé Muñoz, J.-L. Bassilana, U. Becciani, M. Bellazzini, A. Berihuete, S. Bertone, L. Bianchi, O. Bienaymé, S. Blanco-Cuaresma, T. Boch, C. Boeche, A. Bombrun, R. Borrachero, S. Bouquillon, G. Bourda, A. Bragaglia, L. Bramante, M.A. Breddels, N. Brouillet, T. Brüsemeister, E. Brugaletta, B. Bucciarelli, A. Burlacu, D. Busonero, A.G. Butkevich, R. Buzzi, E. Caffau, R. Cancelliere, G. Cannizzaro, R. Carballo, T. Carlucci, J.M. Carrasco, L. Casamiquela, M. Castellani, A. Castro-Ginard, P. Charlot, L. Chemin, A. Chiavassa, G. Cocozza, G. Costigan, S. Cowell, F. Crifo, M. Crosta, C. 
Crowley, J. Cuypers, C. Dafonte, Y. Damerdji, A. Dapergolas, P. David, M. David, P. deLaverny, F. DeLuise, R. DeMarch, D. deMartino, R. deSouza, A. deTorres, J. Debosscher, E. delPozo, M. Delbo, A. Delgado, H.E. Delgado, S. Diakite, C. Diener, E. Distefano, C. Dolding, P. Drazinos, J. Durán, B. Edvardsson, H. Enke, K. Eriksson, P. Esquej, G. Eynard Bontemps, C. Fabre, M. Fabrizio, S. Faigler, A.J. Falcão, M. Farràs Casas, L. Federici, G. Fedorets, P. Fernique, F. Figueras, F. Filippi, K. Findeisen, A. Fonti, E. Fraile, M. Fraser, B. Frézouls, M. Gai, S. Galleti, D. Garabato, F. García-Sedano, A. Garofalo, N. Garralda, A. Gavel, P. Gavras, J. Gerssen, R. Geyer, P. Giacobbe, G. Gilmore, S. Girona, G. Giuffrida, F. Glass, M. Gomes, M. Granvik, A. Gueguen, A. Guerrier, J. Guiraud, R. Gutiérrez-Sánchez, R. Haigron, D. Hatzidimitriou, M. Hauser, M. Haywood, U. Heiter, A. Helmi, J. Heu, T. Hilger, D. Hobbs, W. Hofmann, G. Holland, H.E. Huckle, A. Hypki, V. Icardi, K. Janßen, G. JevardatdeFombelle, P.G. Jonker, Á.L. Juhász, F. Julbe, A. Karampelas, A. Kewley, J. Klar, A. Kochoska, R. Kohley, K. Kolenberg, M. Kontizas, E. Kontizas, S.E. Koposov, G. Kordopatis, Z. Kostrzewa-Rutkowska, P. Koubsky, S. Lambert, A.F. Lanza, Y. Lasne, J.-B. Lavigne, Y. LeFustec, C. LePoncin-Lafitte, Y. Lebreton, S. Leccia, N. Leclerc, I. Lecoeur-Taibi, H. Lenhardt, F. Leroux, S. Liao, E. Licata, H.E.P. Lindstrøm, T.A. Lister, E. Livanou, A. Lobel, M. López, S. Managau, R.G. Mann, G. Mantelet, O. Marchal, J.M. Marchant, M. Marconi, S. Marinoni, G. Marschalkó, D.J. Marshall, M. Martino, G. Marton, N. Mary, D. Massari, G. Matijevič, T. Mazeh, P.J. McMillan, S. Messina, D. Michalik, N.R. Millar, D. Molina, R. Molinaro, L. Molnár, P. Montegriffo, R. Mor, R. Morbidelli, T. Morel, D. Morris, A.F. Mulone, T. Muraveva, I. Musella, G. Nelemans, L. Nicastro, L. Noval, W. O'Mullane, C. Ordénovic, D. Ordóñez-Blanco, P. Osborne, C. Pagani, I. Pagano, F. Pailler, H. Palacin, L. Palaversa, A. Panahi, M. Pawlak, A.M. Piersimoni, F.-X. Pineau, E. Plachy, G. Plum, E. Poggio, E. Poujoulet, A. Prša, L. Pulone, E. Racero, S. Ragaini, N. Rambaux, M. Ramos-Lerate, S. Regibo, C. Reylé, F. Riclet, V. Ripepi, A. Riva, A. Rivard, G. Rixon, T. Roegiers, M. Roelens, M. Romero-Gómez, N. Rowell, F. Royer, L. Ruiz-Dern, G. Sadowski, T. Sagristà Sellés, J. Sahlmann, J. Salgado, E. Salguero, N. Sanna, T. Santana-Ros, M. Sarasso, H. Savietto, M. Schultheis, E. Sciacca, M. Segol, J.C. Segovia, D. Ségransan, I-C. Shih, L. Siltala, A.F. Silva, R.L. Smart, K.W. Smith, E. Solano, F. Solitro, R. Sordo, S. SoriaNieto, J. Souchay, A. Spagna, F. Spoto, U. Stampa, I.A. Steele, H. Steidelmüller, C.A. Stephenson, H. Stoev, F.F. Suess, J. Surdej, L. Szabados, E. Szegedi-Elek, D. Tapiador, F. Taris, G. Tauran, M.B. Taylor, R. Teixeira, D. Terrett, P. Teyssandier, W. Thuillot, A. Titarenko, F. TorraClotet, C. Turon, A. Ulla, E. Utrilla, S. Uzzi, M. Vaillant, G. Valentini, V. Valette, A. vanElteren, E. Van Hemelryck, M. Vaschetto, A. Vecchiato, J. Veljanoski, Y. Viala, D. Vicente, S. Vogt, C. vonEssen, H. Voss, V. Votruba, S. Voutsinas, G. Walmsley, M. Weiler, O. Wertz, T. Wevers, Ł. Wyrzykowski, A. Yoldas, M. Žerjal, H. Ziaeepour, J. Zorec, S. Zschocke, S. Zucker, C. Zurbach, T. Zwitter We highlight the power of the Gaia DR2 in studying many fine structures of the Hertzsprung-Russell diagram (HRD). Gaia allows us to present many different HRDs, depending in particular on stellar population selections. 
We do not aim here for completeness in terms of types of stars or stellar evolutionary aspects. Instead, we have chosen several illustrative examples. We describe some of the selections that can be made in Gaia DR2 to highlight the main structures of the Gaia HRDs. We select both field and cluster (open and globular) stars, compare the observations with previous classifications and with stellar evolutionary tracks, and we present variations of the Gaia HRD with age, metallicity, and kinematics. Late stages of stellar evolution such as hot subdwarfs, post-AGB stars, planetary nebulae, and white dwarfs are also analysed, as well as low-mass brown dwarf objects. The Gaia HRDs are unprecedented in both precision and coverage of the various Milky Way stellar populations and stellar evolutionary phases. Many fine structures of the HRDs are presented. The clear split of the white dwarf sequence into hydrogen and helium white dwarfs is presented for the first time in an HRD. The relation between kinematics and the HRD is nicely illustrated. Two different populations in a classical kinematic selection of the halo are unambiguously identified in the HRD. Membership and mean parameters for a selected list of open clusters are provided. They allow drawing very detailed cluster sequences, highlighting fine structures, and providing extremely precise empirical isochrones that will lead to more insight in stellar physics. Gaia DR2 demonstrates the potential of combining precise astrometry and photometry for large samples for studies in stellar evolution and stellar population and opens an entire new area for HRD-based studies.
Gaia Data Release 2: Properties and validation of the radial velocities (1804.09372)
D. Katz, P. Sartoretti, M. Cropper, P. Panuzzo, G.M. Seabroke, Y. Viala, K. Benson, R. Blomme, G. Jasniewicz, A. Jean-Antoine, H. Huckle, M. Smith, S. Baker, F. Crifo, Y. Damerdji, M. David, C. Dolding, Y. Frémat, E. Gosset, A. Guerrier, L.P. Guy, R. Haigron, K. Janßen, O. Marchal, G. Plum, C. Soubiran, F. Thévenin, M. Ajaj, C. Allende Prieto, C. Babusiaux, S. Boudreault, L. Chemin, C. Delle Luche, C. Fabre, A. Gueguen, N.C. Hambly, Y. Lasne, F. Meynadier, F. Pailler, C. Panem, F. Royer, G. Tauran, C. Zurbach, T. Zwitter, F. Arenou, D. Bossini, A. Gomez, V. Lemaitre, N. Leclerc, T. Morel, U. Munari, C. Turon, A. Vallenari, M. Žerjal
April 25, 2018 astro-ph.IM
For Gaia DR2 (GDR2), 280 million spectra, collected by the RVS instrument on-board Gaia, were processed and median radial velocities were derived for 9.8 million sources brighter than Grvs = 12 mag. This paper describes the validation and properties of the median radial velocities published in GDR2. Quality tests and filters are applied to select, from the 9.8 million radial velocities, those with the quality to be published in GDR2. The accuracy of the selected sample is assessed with respect to ground-based catalogues. Its precision is estimated using both ground-based catalogues and the distribution of the Gaia radial velocity uncertainties. GDR2 contains median radial velocities for 7 224 631 stars, with Teff in the range [3550, 6900] K, which passed successfully the quality tests. The published median radial velocities provide a full sky-coverage and have a completeness with respect to the astrometric data of 77.2% (for $G \leq 12.5$ mag). The median radial velocity residuals with respect to the ground-based surveys vary from one catalogue to another, but do not exceed a few 100 m/s.
In addition, the Gaia radial velocities show a positive trend as a function of magnitude, which starts around Grvs $\sim 9$ mag and reaches about $+500$ m/s at Grvs $= 11.75$ mag. The overall precision, estimated from the median of the Gaia radial velocity uncertainties, is 1.05 km/s. The radial velocity precision is function of many parameters, in particular the magnitude and effective temperature. For bright stars, Grvs in [4, 8] mag, the precision is in the range 200-350 m/s, which is about 3 to 5 times more precise than the pre-launch specification of 1 km/s. At the faint end, Grvs = 11.75 mag, the precisions for Teff = 5000 K and 6500 K are respectively 1.4 km/s and 3.7 km/s. Gaia Data Release 1. Testing the parallaxes with local Cepheids and RR Lyrae stars (1705.00688) Gaia Collaboration: G. Clementini, L. Eyer, V. Ripepi, M. Marconi, T. Muraveva, A. Garofalo, L.M. Sarro, M. Palmer, X. Luri, R. Molinaro, L. Rimoldini, L. Szabados, I. Musella, R.I. Anderson, T. Prusti, J.H.J. de Bruijne, A.G.A. Brown, A. Vallenari, C. Babusiaux, C.A.L. Bailer-Jones, U. Bastian, M. Biermann, D.W. Evans, F. Jansen, C. Jordi, S.A. Klioner, U. Lammers, L. Lindegren, F. Mignard, C. Panem, D. Pourbaix, S. Randich, P. Sartoretti, H.I. Siddiqui, C. Soubiran, V. Valette, F. van Leeuwen, N.A. Walton, C. Aerts, F. Arenou, M. Cropper, R. Drimmel, E. Høg, D. Katz, M.G. Lattanzi, W. O'Mullane, E.K. Grebel, A.D. Holland, C. Huc, X. Passot, M. Perryman, L. Bramante, C. Cacciari, J. Castañeda, L. Chaoul, N. Cheek, F. De Angeli, C. Fabricius, R. Guerra, J. Hernández, A. Jean-Azntoine-Piccolo, E. Masana, R. Messineo, N. Mowlavi, K. Nienartowicz, D. Ordóñez-Blanco, P. Panuzzo, J. Portell, P.J. Richards, M. Riello, G.M. Seabroke, P. Tanga, F. Thévenin, J. Torra, S.G. Els, G. Gracia-Abril, G. Comoretto, M. Garcia-Reinaldos, T. Lock, E. Mercier, M. Altmann, R. Andrae, T.L. Astraatmadja, I. Bellas-Velidis, K. Benson, J. Berthier, R. Blomme, G. Busso, B. Carry, A. Cellino, S. Cowell, O. Creevey, J. Cuypers, M. Davidson, J. De Ridder, A. de Torres, L. Delchambre, A. Dell'Oro, C. Ducourant, Y. Frémat, M. García-Torres, E. Gosset, J.-L. Halbwachs, N.C. Hambly, D.L. Harrison, M. Hauser, D. Hestroffer, S.T. Hodgkin, H.E. Huckle, A. Hutton, G. Jasniewicz, S. Jordan, M. Kontizas, A.J. Korn, A.C. Lanzafame, M. Manteiga, A. Moitinho, K. Muinonen, J. Osinde, E. Pancino, T. Pauwels, J.-M. Petit, A. Recio-Blanco, A.C. Robin, C. Siopis, M. Smith, K.W. Smith, A. Sozzetti, W. Thuillot, W. van Reeven, Y. Viala, U. Abbas, A. Abreu Aramburu, S. Accart, J.J. Aguado, P.M. Allan, W. Allasia, G. Altavilla, M.A. Álvarez, J. Alves, A.H. Andrei, E. Anglada Varela, E. Antiche, T. Antoja, S. Antón, B. Arcay, N. Bach, S.G. Baker, L. Balaguer-Núñez, C. Barache, C. Barata, A. Barbier, F. Barblan, D. Barrado y Navascués, M. Barros, M.A. Barstow, U. Becciani, M. Bellazzini, A. Bello García, V. Belokurov, P. Bendjoya, A. Berihuete, L. Bianchi, O. Bienaymé, F. Billebaud, N. Blagorodnova, S. Blanco-Cuaresma, T. Boch, A. Bombrun, R. Borrachero, S. Bouquillon, G. Bourda, H. Bouy, A. Bragaglia, M.A. Breddels, N. Brouillet, T. Brüsemeister, B. Bucciarelli, P. Burgess, R. Burgon, A. Burlacu, D. Busonero, R. Buzzi, E. Caffau, J. Cambras, H. Campbell, R. Cancelliere, T. Cantat-Gaudin, T. Carlucci, J.M. Carrasco, M. Castellani, P. Charlot, J. Charnas, A. Chiavassa, M. Clotet, G. Cocozza, R.S. Collins, G. Costigan, F. Crifo, N.J.G.Cross, M. Crosta, C. Crowley, C. Dafonte, Y. Damerdji, A. Dapergolas, P. David, M. David, P. De Cat, F. de Felice, P. de Laverny, F. 
De Luise, R. De March, R. de Souza, J. Debosscher, E. del Pozo, M. Delbo, A. Delgado, H.E. Delgado, P. Di Matteo, S. Diakite, E. Distefano, C. Dolding, S. Dos Anjos, P. Drazinos, J. Durán, Y. Dzigan, B. Edvardsson, H. Enke, N.W. Evans, G. Eynard Bontemps, C. Fabre, M. Fabrizio, S. Faigler, A.J. Falcão, M. Farràs Casas, L. Federici, G. Fedorets, J. Fernández-Hernánde, P. Fernique, A. Fienga, F. Figueras, F. Filippi, K. Findeisen, A. Fonti, M. Fouesneau, E. Fraile, M. Fraser, J. Fuchs, M. Gai, S. Galleti, L. Galluccio, D. Garabato, F. García-Sedano, N. Garralda, P. Gavras, J. Gerssen, R. Geyer, G. Gilmore, S. Girona, G. Giuffrida, M. Gomes, A. González-Marcos, J. González-Núñez, J.J. González-Vidal, M. Granvik, A. Guerrier, P. Guillout, J. Guiraud, A. Gúrpide, R. Gutiérrez-Sánchez, L.P. Guy, R. Haigron, D. Hatzidimitriou, M. Haywood, U. Heiter, A. Helmi, D. Hobbs, W. Hofmann, B. Holl, G. Holland, J.A.S.Hunt, A. Hypki, V. Icardi, M. Irwin, G. Jevardat de Fombelle, P. Jofré, P.G. Jonker, A. Jorissen, F. Julbe, A. Karampelas, A. Kochoska, R. Kohley, K. Kolenberg, E. Kontizas, S.E. Koposov, G. Kordopatis, P. Koubsky, A. Krone-Martins, M. Kudryashova, I. Kull, R.K. Bachchan, F. Lacoste-Seris, A.F. Lanza, J.-B. Lavigne, C. Le Poncin-Lafitte, Y. Lebreton, T. Lebzelter, S. Leccia, N. Leclerc, I. Lecoeur-Taibi, V. Lemaitre, H. Lenhardt, F. Leroux, S. Liao, E. Licata, H.E.P.Lindstrøm, T.A. Lister, E. Livanou, A. Lobel, W. Löffler, M. López, D. Lorenz, I. MacDonald, T. Magalhães Fernandes, S. Managau, R.G. Mann, G. Mantelet, O. Marchal, J.M. Marchant, S. Marinoni, P.M. Marrese, G. Marschalkó, D.J. Marshall, J.M. Martín-Fleitas, M. Martino, N. Mary, G. Matijevič, T. Mazeh, P.J. McMillan, S. Messina, D. Michalik, N.R. Millar, B.M.H.Miranda, D. Molina, M. Molinaro, L. Molnár, M. Moniez, P. Montegriffo, R. Mor, A. Mora, R. Morbidelli, T. Morel, S. Morgenthaler, D. Morris, A.F. Mulone, J. Narbonne, G. Nelemans, L. Nicastro, L. Noval, C. Ordénovic, J. Ordieres-Meré, P. Osborne, C. Pagani, I. Pagano, F. Pailler, H. Palacin, L. Palaversa, P. Parsons, M. Pecoraro, R. Pedrosa, H. Pentikäinen, B. Pichon, A.M. Piersimoni, F.-X. Pineau, E. Plachy, G. Plum, E. Poujoulet, A. Prša, L. Pulone, S. Ragaini, S. Rago, N. Rambaux, M. Ramos-Lerate, P. Ranalli, G. Rauw, A. Read, S. Regibo, C. Reylé, R.A. Ribeiro, A. Riva, G. Rixon, M. Roelens, M. Romero-Gómez, N. Rowell, F. Royer, L. Ruiz-Dern, G. Sadowski, T. Sagristà Sellés, J. Sahlmann, J. Salgado, E. Salguero, M. Sarasso, H. Savietto, M. Schultheis, E. Sciacca, M. Segol, J.C. Segovia, D. Segransan, I-C. Shih, R. Smareglia, R.L. Smart, E. Solano, F. Solitro, R. Sordo, S. Soria Nieto, J. Souchay, A. Spagna, F. Spoto, U. Stampa, I.A. Steele, H. Steidelmüller, C.A. Stephenson, H. Stoev, F.F. Suess, M. Süveges, J. Surdej, E. Szegedi-Elek, D. Tapiador, F. Taris, G. Tauran, M.B. Taylor, R. Teixeira, D. Terrett, B. Tingley, S.C. Trager, C. Turon, A. Ulla, E. Utrilla, G. Valentini, A. van Elteren, E. Van Hemelryck, M. van Leeuwen, M. Varadi, A. Vecchiato, J. Veljanoski, T. Via, D. Vicente, S. Vogt, H. Voss, V. Votruba, S. Voutsinas, G. Walmsley, M. Weiler, K. Weingrill, T. Wevers, Ł. Wyrzykowski, A. Yoldas, M. Žerjal, S. Zucker, C. Zurbach, T. Zwitter, A. Alecu, M. Allen, C. Allende Prieto, A. Amorim, G. Anglada-Escudé, V. Arsenijevic, S. Azaz, P. Balm, M. Beck, H.-H. Bernstein, L. Bigot, A. Bijaoui, C. Blasco, M. Bonfigli, G. Bono, S. Boudreault, A. Bressan, S. Brown, P.-M. Brunet, P. Bunclark, R. Buonanno, A.G. Butkevich, C. Carret, C. Carrion, L. Chemin, F. Chéreau, L. 
Corcione, E. Darmigny, K.S. de Boer, P. de Teodoro, P.T. de Zeeuw, C. Delle Luche, C.D. Domingues, P. Dubath, F. Fodor, B. Frézouls, A. Fries, D. Fustes, D. Fyfe, E. Gallardo, J. Gallegos, D. Gardiol, M. Gebran, A. Gomboc, A. Gómez, E. Grux, A. Gueguen, A. Heyrovsky, J. Hoar, G. Iannicola, Y. Isasi Parache, A.-M. Janotto, E. Joliet, A. Jonckheere, R. Keil, D.-W. Kim, P. Klagyivik, J. Klar, J. Knude, O. Kochukhov, I. Kolka, J. Kos, A. Kutka, V. Lainey, D. LeBouquin, C. Liu, D. Loreggia, V.V. Makarov, M.G. Marseille, C. Martayan, O. Martinez-Rubi, B. Massart, F. Meynadier, S. Mignot, U. Munari, A.-T. Nguyen, T. Nordlander, K.S. O'Flaherty, P. Ocvirk, A. Olias Sanz, P. Ortiz, J. Osorio, D. Oszkiewicz, A. Ouzounis, P. Park, E. Pasquato, C. Peltzer, J. Peralta, F. Péturaud, T. Pieniluoma, E. Pigozzi, J. Poels, G. Prat, T. Prod'homme, F. Raison, J.M. Rebordao, D. Risquez, B. Rocca-Volmerange, S. Rosen, M.I. Ruiz-Fuertes, F. Russo, S. Sembay, I. Serraller Vizcaino, A. Short, A. Siebert, H. Silva, D. Sinachopoulos, E. Slezak, M. Soffel, D. Sosnowska, V. Straižys, M. ter Linden, D. Terrell, S. Theil, C. Tiede, L. Troisi, P. Tsalmantza, D. Tur, M. Vaccari, F. Vachier, P. Valles, W. Van Hamme, L. Veltz, J. Virtanen, J.-M. Wallut, R. Wichmann, M.I. Wilkinson, H. Ziaeepour, S. Zschocke May 1, 2017 astro-ph.GA, astro-ph.SR Parallaxes for 331 classical Cepheids, 31 Type II Cepheids and 364 RR Lyrae stars in common between Gaia and the Hipparcos and Tycho-2 catalogues are published in Gaia Data Release 1 (DR1) as part of the Tycho-Gaia Astrometric Solution (TGAS). In order to test these first parallax measurements of the primary standard candles of the cosmological distance ladder, that involve astrometry collected by Gaia during the initial 14 months of science operation, we compared them with literature estimates and derived new period-luminosity ($PL$), period-Wesenheit ($PW$) relations for classical and Type II Cepheids and infrared $PL$, $PL$-metallicity ($PLZ$) and optical luminosity-metallicity ($M_V$-[Fe/H]) relations for the RR Lyrae stars, with zero points based on TGAS. The new relations were computed using multi-band ($V,I,J,K_{\mathrm{s}},W_{1}$) photometry and spectroscopic metal abundances available in the literature, and applying three alternative approaches: (i) by linear least squares fitting the absolute magnitudes inferred from direct transformation of the TGAS parallaxes, (ii) by adopting astrometric-based luminosities, and (iii) using a Bayesian fitting approach. TGAS parallaxes bring a significant added value to the previous Hipparcos estimates. The relations presented in this paper represent first Gaia-calibrated relations and form a "work-in-progress" milestone report in the wait for Gaia-only parallaxes of which a first solution will become available with Gaia's Data Release 2 (DR2) in 2018. Gaia Data Release 1. Open cluster astrometry: performance, limitations, and future prospects (1703.01131) Gaia Collaboration, F. van Leeuwen, A. Vallenari, C. Jordi, L. Lindegren, U. Bastian, T. Prusti, J.H.J. de Bruijne, A.G.A. Brown, C. Babusiaux, C.A.L. Bailer-Jones, M. Biermann, D.W. Evans, L. Eyer, F. Jansen, S.A. Klioner, U. Lammers, X. Luri, F. Mignard, C. Panem, D. Pourbaix, S. Randich, P. Sartoretti, H.I. Siddiqui, C. Soubiran, V. Valette, N.A. Walton, C. Aerts, F. Arenou, M. Cropper, R. Drimmel, E. Høg, D. Katz, M.G. Lattanzi, W. O'Mullane, E.K. Grebel, A.D. Holland, C. Huc, X. Passot, M. Perryman, L. Bramante, C. Cacciari, J. Castañeda, L. Chaoul, N. Cheek, F. De Angeli, C. 
Fabricius, R. Guerra, J. Hernández, A. Jean-Antoine-Piccolo, E. Masana, R. Messineo, N. Mowlavi, K. Nienartowicz, D. Ordóñez-Blanco, P. Panuzzo, J. Portell, P.J. Richards, M. Riello, G.M. Seabroke, P. Tanga, F. Thévenin, J. Torra, S.G. Els, G. Gracia-Abril, G. Comoretto, M. Garcia-Reinaldos, T. Lock, E. Mercier, M. Altmann, R. Andrae, T.L. Astraatmadja, I. Bellas-Velidis, K. Benson, J. Berthier, R. Blomme, G. Busso, B. Carry, A. Cellino, G. Clementini, S. Cowell, O. Creevey, J. Cuypers, M. Davidson, J. De Ridder, A. de Torres, L. Delchambre, A. Dell'Oro, C. Ducourant, Y. Frémat, M. García-Torres, E. Gosset, J.-L. Halbwachs, N.C. Hambly, D.L. Harrison, M. Hauser, D. Hestroffer, S.T. Hodgkin, H.E. Huckle, A. Hutton, G. Jasniewicz, S. Jordan, M. Kontizas, A.J. Korn, A.C. Lanzafame, M. Manteiga, A. Moitinho, K. Muinonen, J. Osinde, E. Pancino, T. Pauwels, J.-M. Petit, A. Recio-Blanco, A.C. Robin, L.M. Sarro, C. Siopis, M. Smith, K.W. Smith, A. Sozzetti, W. Thuillot, W. van Reeven, Y. Viala, U. Abbas, A. Abreu Aramburu, S. Accart, J.J. Aguado, P.M. Allan, W. Allasia, G. Altavilla, M.A. Álvarez, J. Alves, R.I. Anderson, A.H. Andrei, E. Anglada Varela, E. Antiche, T. Antoja, S. Antón, B. Arcay, N. Bach, S.G. Baker, L. Balaguer-Núñez, C. Barache, C. Barata, A. Barbier, F. Barblan, D. Barrado y Navascués, M. Barros, M.A. Barstow, U. Becciani, M. Bellazzini, A. Bello García, V. Belokurov, P. Bendjoya, A. Berihuete, L. Bianchi, O. Bienaymé, F. Billebaud, N. Blagorodnova, S. Blanco-Cuaresma, T. Boch, A. Bombrun, R. Borrachero, S. Bouquillon, G. Bourda, H. Bouy, A. Bragaglia, M.A. Breddels, N. Brouillet, T. Brüsemeister, B. Bucciarelli, P. Burgess, R. Burgon, A. Burlacu, D. Busonero, R. Buzzi, E. Caffau, J. Cambras, H. Campbell, R. Cancelliere, T. Cantat-Gaudin, T. Carlucci, J.M. Carrasco, M. Castellani, P. Charlot, J. Charnas, A. Chiavassa, M. Clotet, G. Cocozza, R.S. Collins, G. Costigan, F. Crifo, N.J.G. Cross, M. Crosta, C. Crowley, C. Dafonte, Y. Damerdji, A. Dapergolas, P. David, M. David, P. De Cat, F. de Felice, P. de Laverny, F. De Luise, R. De March, D. de Martino, R. de Souza, J. Debosscher, E. del Pozo, M. Delbo, A. Delgado, H.E. Delgado, P. Di Matteo, S. Diakite, E. Distefano, C. Dolding, S. Dos Anjos, P. Drazinos, J. Durán, Y. Dzigan, B. Edvardsson, H. Enke, N.W. Evans, G. Eynard Bontemps, C. Fabre, M. Fabrizio, S. Faigler, A.J. Falcão, M. Farràs Casas, L. Federici, G. Fedorets, J. Fernández-Hernández, P. Fernique, A. Fienga, F. Figueras, F. Filippi, K. Findeisen, A. Fonti, M. Fouesneau, E. Fraile, M. Fraser, J. Fuchs, M. Gai, S. Galleti, L. Galluccio, D. Garabato, F. García-Sedano, A. Garofalo, N. Garralda, P. Gavras, J. Gerssen, R. Geyer, G. Gilmore, S. Girona, G. Giuffrida, M. Gomes, A. González-Marcos, J. González-Núñez, J.J. González-Vidal, M. Granvik, A. Guerrier, P. Guillout, J. Guiraud, A. Gúrpide, R. Gutiérrez-Sánchez, L.P. Guy, R. Haigron, D. Hatzidimitriou, M. Haywood, U. Heiter, A. Helmi, D. Hobbs, W. Hofmann, B. Holl, G. Holland, J.A.S. Hunt, A. Hypki, V. Icardi, M. Irwin, G. Jevardat de Fombelle, P. Jofré, P.G. Jonker, A. Jorissen, F. Julbe, A. Karampelas, A. Kochoska, R. Kohley, K. Kolenberg, E. Kontizas, S.E. Koposov, G. Kordopatis, P. Koubsky, A. Krone-Martins, M. Kudryashova, I. Kull, R.K. Bachchan, F. Lacoste-Seris, A.F. Lanza, J.-B. Lavigne, C. Le Poncin-Lafitte, Y. Lebreton, T. Lebzelter, S. Leccia, N. Leclerc, I. Lecoeur-Taibi, V. Lemaitre, H. Lenhardt, F. Leroux, S. Liao, E. Licata, H.E.P. Lindstrøm, T.A. Lister, E. Livanou, A. Lobel, W. Löffer, M. López, D. 
Lorenz, I. MacDonald, T. Magalhães Fernandes, S. Managau, R.G. Mann, G. Mantelet, O. Marchal, J.M. Marchant, M. Marconi, S. Marinoni, P.M. Marrese, G. Marschalkó, D.J. Marshall, J.M. Martín-Fleitas, M. Martino, N. Mary, G. Matijevič, T. Mazeh, P.J. McMillan, S. Messina, D. Michalik, N.R. Millar, B.M.H. Miranda, D. Molina, R. Molinaro, M. Molinaro, L. Molnár, M. Moniez, P. Montegrio, R. Mor, A. Mora, R. Morbidelli, T. Morel, S. Morgenthaler, D. Morris, A.F. Mulone, T. Muraveva, I. Musella, J. Narbonne, G. Nelemans, L. Nicastro, L. Noval, C. Ordénovic, J. Ordieres-Meré, P. Osborne, C. Pagani, I. Pagano, F. Pailler, H. Palacin, L. Palaversa, P. Parsons, M. Pecoraro, R. Pedrosa, H. Pentikäinen, B. Pichon, A.M. Piersimoni, F.-X. Pineau, E. Plachy, G. Plum, E. Poujoulet, A. Prša, L. Pulone, S. Ragaini, S. Rago, N. Rambaux, M. Ramos-Lerate, P. Ranalli, G. Rauw, A. Read, S. Regibo, C. Reylé, R.A. Ribeiro, L. Rimoldini, V. Ripepi, A. Riva, G. Rixon, M. Roelens, M. Romero-Gómez, N. Rowell, F. Royer, L. Ruiz-Dern, G. Sadowski, T. Sagristà Sellés, J. Sahlmann, J. Salgado, E. Salguero, M. Sarasso, H. Savietto, M. Schultheis, E. Sciacca, M. Segol, J.C. Segovia, D. Segransan, I-C. Shih, R. Smareglia, R.L. Smart, E. Solano, F. Solitro, R. Sordo, S. Soria Nieto, J. Souchay, A. Spagna, F. Spoto, U. Stampa, I.A. Steele, H. Steidelmüller, C.A. Stephenson, H. Stoev, F.F. Suess, M. Süveges, J. Surdej, L. Szabados, E. Szegedi-Elek, D. Tapiador, F. Taris, G. Tauran, M.B. Taylor, R. Teixeira, D. Terrett, B. Tingley, S.C. Trager, C. Turon, A. Ulla, E. Utrilla, G. Valentini, A. van Elteren, E. Van Hemelryck, M. van Leeuwen, M. Varadi, A. Vecchiato, J. Veljanoski, T. Via, D. Vicente, S. Vogt, H. Voss, V. Votruba, S. Voutsinas, G. Walmsley, M. Weiler, K. Weingril, T. Wevers, Ł. Wyrzykowski, A. Yoldas, M. Žerjal, S. Zucker, C. Zurbach, T. Zwitter, A. Alecu, M. Allen, C. Allende Prieto, A. Amorim, G. Anglada-Escudé, V. Arsenijevic, S. Azaz, P. Balm, M. Beck, H.-H. Bernsteiny, L. Bigot, A. Bijaoui, C. Blasco, M. Bonfigli, G. Bono, S. Boudreault, A. Bressan, S. Brown, P.-M. Brunet, P. Bunclarky, R. Buonanno, A.G. Butkevich, C. Carret, C. Carrion, L. Chemin, F. Chéreau, L. Corcione, E. Darmigny, K.S. de Boer, P. de Teodoro, P.T. de Zeeuw, C. Delle Luche, C.D. Domingues, P. Dubath, F. Fodor, B. Frézouls, A. Fries, D. Fustes, D. Fyfe, E. Gallardo, J. Gallegos, D. Gardio, M. Gebran, A. Gomboc, A. Gómez, E. Grux, A. Gueguen, A. Heyrovsky, J. Hoar, G. Iannicola, Y. Isasi Parache, A.-M. Janotto, E. Joliet, A. Jonckheere, R. Keil, D.-W. Kim, P. Klagyivik, J. Klar, J. Knude, O. Kochukhov, I. Kolka, J. Kos, A. Kutka, V. Lainey, D. LeBouquin, C. Liu, D. Loreggia, V.V. Makarov, M.G. Marseille, C. Martayan, O. Martinez-Rubi, B. Massart, F. Meynadier, S. Mignot, U. Munari, A.-T. Nguyen, T. Nordlander, K.S. O'Flaherty, P. Ocvirk, A. Olias Sanz, P. Ortiz, J. Osorio, D. Oszkiewicz, A. Ouzounis, M. Palmer, P. Park, E. Pasquato, C. Peltzer, J. Peralta, F. Péturaud, T. Pieniluoma, E. Pigozzi, J. Poelsy, G. Prat, T. Prod'homme, F. Raison, J.M. Rebordao, D. Risquez, B. Rocca-Volmerange, S. Rosen, M.I. Ruiz-Fuertes, F. Russo, S. Sembay, I. Serraller Vizcaino, A. Short, A. Siebert, H. Silva, D. Sinachopoulos, E. Slezak, M. Soffel, D. Sosnowska, V. Straižys, M. ter Linden, D. Terrell, S. Theil, C. Tiede, L. Troisi, P. Tsalmantza, D. Tur, M. Vaccari, F. Vachier, P. Valles, W. Van Hamme, L. Veltz, J. Virtanen, J.-M. Wallut, R. Wichmann, M.I. Wilkinson, H. Ziaeepour, S. Zschocke March 3, 2017 astro-ph.SR Context. 
The first Gaia Data Release contains the Tycho-Gaia Astrometric Solution (TGAS). This is a subset of about 2 million stars for which, besides the position and photometry, the proper motion and parallax are calculated using Hipparcos and Tycho-2 positions in 1991.25 as prior information. Aims. We investigate the scientific potential and limitations of the TGAS component by means of the astrometric data for open clusters. Methods. Mean cluster parallax and proper motion values are derived taking into account the error correlations within the astrometric solutions for individual stars, an estimate of the internal velocity dispersion in the cluster, and, where relevant, the effects of the depth of the cluster along the line of sight. Internal consistency of the TGAS data is assessed. Results. Values given for standard uncertainties are still inaccurate and may lead to unrealistic unit-weight standard deviations of least squares solutions for cluster parameters. Reconstructed mean cluster parallax and proper motion values are generally in very good agreement with earlier Hipparcos-based determination, although the Gaia mean parallax for the Pleiades is a significant exception. We have no current explanation for that discrepancy. Most clusters are observed to extend to nearly 15 pc from the cluster centre, and it will be up to future Gaia releases to establish whether those potential cluster-member stars are still dynamically bound to the clusters. Conclusions. The Gaia DR1 provides the means to examine open clusters far beyond their more easily visible cores, and can provide membership assessments based on proper motions and parallaxes. A combined HR diagram shows the same features as observed before using the Hipparcos data, with clearly increased luminosities for older A and F dwarfs.
Vlasov versus N-body: the Hénon sphere (1504.07337)
S. Colombi, T. Sousbie, S. Peirani, G. Plum, Y. Suto
April 28, 2015 astro-ph.CO, astro-ph.GA
We perform a detailed comparison of the phase-space density traced by the particle distribution in Gadget simulations to the result obtained with a spherical Vlasov solver using the splitting algorithm. The systems considered are apodized Hénon spheres with two values of the virial ratio, R ~ 0.1 and 0.5. After checking that spherical symmetry is well preserved by the N-body simulations, visual and quantitative comparisons are performed. In particular we introduce new statistics, correlators and entropic estimators, based on the likelihood of whether N-body simulations actually trace randomly the Vlasov phase-space density. When taking into account the limits of both the N-body and the Vlasov codes, namely collective effects due to the particle shot noise in the first case and diffusion and possible nonlinear instabilities due to finite resolution of the phase-space grid in the second case, we find a spectacular agreement between both methods, even in regions of phase-space where nontrivial physical instabilities develop. However, in the colder case, R=0.1, it was not possible to prove actual numerical convergence of the N-body results after a number of dynamical times, even with $N=10^8$ particles.
Full snapshot reconstruction in hybrid architecture antenna arrays
Maria Trigka (ORCID: orcid.org/0000-0001-7793-0447), Christos Mavrokefalidis & Kostas Berberidis

In the context of this research work, we study the so-called problem of full snapshot reconstruction in hybrid antenna array structures that are utilized in mmWave communication systems. It enables the recovery of the snapshots that would have been obtained if a conventional (non-hybrid) uniform linear antenna array were employed. The problem is considered at the receiver side, where the hybrid architecture exploits in a novel way the antenna elements of a uniform linear array. To this end, the recommended scheme is properly designed so as to be applicable to overlapping and non-overlapping architectures. Moreover, the full snapshot recoverability is addressed for two cases, namely for time-varying and constant signal sources. Simulation results are also presented to illustrate the consistency between the theoretically predicted behaviors and the simulated results, and the performance of the proposed scheme in terms of angle-of-arrival estimation, when compared to the conventional MUSIC algorithm and a recently proposed hybrid version of MUSIC (H-MUSIC).

Communications in the millimeter wave (mmWave) spectrum are bringing a new era for the next generations of wireless and cellular telecommunication systems [1,2,3]. They will enable gigabit-per-second data rates thanks to the large bandwidth of available frequencies (namely, from 30 GHz to 300 GHz, and also THz), addressing the emerging demand of cellular networks for higher data rates. Due to the short wavelength of mmWave, more antenna elements can be packed into the same physical area, enabling the use of massive multiple-input and multiple-output (MIMO) antenna arrays at both the transmitter and the receiver. Combining mmWave with the promising massive MIMO physical layer technology will increase the spectral and power efficiency, transmission throughput and network coverage of fifth generation (5G) networks, ensuring the successful support of a wide variety of emerging applications, such as augmented reality (AR), virtual reality (VR), cloud-based services, smart city, vehicular-to-vehicular (V2V) or vehicular-to-everything (V2X) communication systems for autonomous vehicles, Machine-to-Machine (M2M)/Internet-of-Things (IoT), multimedia, etc. [4].

However, the implementation of a conventional MIMO system with a large-scale antenna array and a fully digital precoding (combining) approach is still prohibitive. This is because each antenna element in the transmitter (receiver) is connected to a digital-to-analog (analog-to-digital) converter and a radio frequency (RF) chain of analog elements. Such an option brings high implementation cost and large power consumption in the mmWave band when the number of antennas is large. An alternative approach to reduce the number of RF chains is hybrid analog/digital architectures, where the processing is split between an analog and a digital part. The analog part of a hybrid precoder (combiner) can be implemented by employing elements such as phase shifters, switches or lens antennas [5, 6], to name some of the most common proposed in the literature. Depending on the involved elements, different hardware constraints arise [3]. In the literature, two main MIMO architectures have been proposed [7, 8] for the RF precoding (combining) matrix.
In both cases, the aim is to find the optimal number of RF chains, thus optimizing cost, energy consumption and complexity. The first one (see Fig. 3) is called the fully connected architecture, where each of the available RF chains is connected to all antenna elements, while in the second one, the so-called partially connected architecture (see Fig. 2), the involved RF chains are associated with unique non-overlapping subgroups of antenna elements [9]. Assuming a network of phase-shifters, the main characteristics and differences between these architectures are the following [10]: (1) A fully connected network provides full precoding (combining) gain and achieves highly directive transmissions by adjusting the phases of the transmitted signals in all antenna elements with constant-modulus phase-shifters, but it has high complexity. (2) In a partially connected network, for each RF chain, only the transmitted signals on the corresponding subset of antennas can be adjusted. For practical reasons, the partially connected structure is often preferable, although it achieves reduced array gain and directivity, proportional to the number of subarrays. Also, it forces the analog precoder (combiner) to be a block diagonal matrix of unit-magnitude nonzero elements. Additionally, it has lower hardware complexity at the cost of beamforming gain, as compared to the fully connected structure. The authors in [11] claim that there is no distinct solution for the hybrid structure that can offer the best trade-off between complexity and performance. Hence, a dynamic structure is needed, depending on the application and channel conditions.

Since the wavelength at mmWave is shorter than the one in microwaves, material penetration will incur greater attenuation, thus increasing the significance of line-of-sight (LOS) propagation and reflection. Due to severe path loss, only a few reflecting paths could arrive at the receiver [2]. In this case, the channel is strongly sparse in the angle domain. The high propagation attenuation, the increased sensitivity to blockage and the possible mobility of users require mature signal processing techniques and novel insights in architectures and protocols to combat these challenges. Hence, severe propagation losses occurring at the mmWave band could be balanced by hybrid precoding (combining) techniques using massive MIMO technology, in order to provide high antenna array gain, through narrow directional beamforming, and sufficient spatial coverage [10].

The mmWave propagation environment is typically modeled via a geometric channel model which involves the angles of departure (AoD) and the angles of arrival (AoA) at the transmitter and receiver sides, respectively. It is noted here that the problem of estimating the AoD/AoA in the context of array signal processing is closely related to the problem of channel estimation and beamforming in the context of mmWave wireless communications. The core idea of parametric massive MIMO channel modeling is to estimate the involved angles and the respective complex factors, instead of estimating the channel impulse response (communication viewpoint). AoA estimation has been of particular interest to researchers for several decades and still remains an active area in wireless and mobile communication (e.g., radar, smart antennas) [12]. AoA estimation algorithms [13, 14] and their variants, such as [15], have been designed to estimate the unknown angles from the received vector for a classical ULA.
Beam scanning is applied by a fully analog array to search for the AoA, while spectrum-based techniques can be employed by a fully digital array. Many high-resolution subspace-based methods have been developed, with the most widely used among them, due to their simplicity, being the MUSIC (MUltiple SIgnal Classification) [16,17,18] and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) [19, 20] algorithms. Such algorithms exploit the structure of the signal/noise subspaces and estimate the desired angles as the points at which the received signal spectrum is maximized. However, the main drawback of these algorithms is that they are not compatible with the hybrid architectures that are proposed for mmWave transmitters (receivers). Moreover, these algorithms fail to resolve coherent/highly correlated signals, which are very common in communication systems due to the multi-path phenomenon, and thus their performance is degraded. Many methods have been suggested to treat the coherent case, which are applicable to ULAs as well. The most representative are Spatial Smoothing (SS) [21] and forward/backward SS (FB-SS) [22]. The role of spatial smoothing techniques is to decorrelate coherent signals and reconstruct a full-rank source signal covariance matrix. A solution in [23] focuses on the design of high-resolution AoA algorithms (e.g., of the MUSIC and ESPRIT type) for hybrid architectures assuming coherent source signals, a feature incorporated into the system model and the proposed algorithms. Actually, the source signals consist of multi-path copies of a transmitted signal. Nevertheless, the algorithms proposed in [23] cannot effectively handle the coherent case unless the involved channels change at a rate that is close to the snapshot rate, which rarely happens in practice. Several experiments have been conducted to verify the behavior of H-MUSIC in terms of AoA estimation in a coherent environment for different rates of change of the involved channel. Additionally, some preliminary results are presented in our previous works [24, 25], where the AoA estimation problem is also investigated based on the reconstructed ULA snapshot. In both works, the MUSIC algorithm is applied to the reconstructed snapshots to estimate the unknown AoAs. It is pointed out here that, differently from [23] and more recent works [6, 26,27,28], this paper summarizes and extends previous findings, providing an alternative approach to the problem. Specifically, unlike previous works which focus on new AoA estimation techniques, in this work, the key idea is to develop techniques which, based on the sampled output of a hybrid array, reconstruct the snapshots that would have been captured using a conventional (non-hybrid) ULA. Obviously, having reconstructed the full ULA snapshots, any existing AoA estimation algorithm can subsequently be applied, and hence the extensive relevant literature can still be exploited. As a final note, although the work in this paper has focused on the recoverability of the full snapshot in hybrid antenna arrays using ULAs [29], there are other architectures that could be studied as future extensions of this work. Among different array geometries, fractal-wavelet modeling theory [30,31,32,33,34,35,36,37,38,39], which is widely used in antenna theory, and its application to small antenna arrays and to the mmWave bands would be interesting. Additionally, other antenna array configurations could be studied, like co-prime configurations [40].
Finally, lens antennas, which have already been applied in mmWave communication systems, would be an interesting direction [41].

The main contributions of this paper are briefly the following: An efficient preprocessing scheme is developed that leads to the acquisition of the baseband snapshot irrespective of the hybrid antenna array architecture, as if a non-hybrid antenna array were employed. The new scheme is studied for two typical hybrid receiver architectures implemented through a network of phase shifters, which are employed in order to create suitable combinations of the received signals from which the baseband snapshot is retrieved. By maximizing the signal-to-total-noise ratio (STNR) of the restored snapshot, i.e., a signal-dependent noisy term, the proposed scheme determines the optimal values of the phase shifters. The full snapshot recoverability problem is treated for both cases of time-varying and constant source signals during the sub-snapshot collection time. A different combiner matrix is applied for each sub-snapshot acquisition. The theoretical findings regarding the proposed optimal reconstruction schemes are verified by extensive simulation experiments. As already mentioned, once the full snapshots are reconstructed via the proposed schemes, any suitable AoA estimation algorithm can be applied. The coherent case, commonly encountered in multi-path channel environments, can be treated by employing appropriate techniques (e.g., spatial smoothing-based algorithms).

The paper is organized as follows: In Sect. 2, the system model is defined for both the conventional ULA and the hybrid structure receiver cases. In Sect. 3, the problem formulation for the full snapshot recovery is developed, along with the design issues of the analog combiner in the hybrid structure, investigating the issue for two different phase-shifter architectures. In Sect. 4, we show experimental results for the evaluation of the performance of the proposed scheme. Finally, Sect. 5 concludes the paper.

The following notations are used in this paper. Uppercase bold letters are matrices, lowercase bold letters are vectors, letters with a hat denote estimates, \((\cdot )^{{\mathrm{T}}}\) denotes transposition, \((\cdot )^{{\mathrm{H}}}\) denotes complex conjugate transposition, \(E[\cdot ]\) denotes statistical expectation, \({\hbox {Tr}}\{\cdot \}\) denotes the trace of a matrix and \({\mathbf {I}}\) is the identity matrix.

The system model

Let us consider a ULA on which L far-field, narrowband, bandpass signals impinge. The L signals arrive at the array from L different directions. Moreover, the antenna array size N is larger than the number of source signals, i.e., \(N>L\) [20]. The spatial sampling depends on the wavelength of the carrier frequency. In this case, to avoid spatial aliasing, the inter-element spacing should satisfy the condition \(d = \frac{\lambda _{{\mathrm{c}}}}{2} = \frac{c}{2f_{{\mathrm{c}}}}\), where c is the propagation speed and \(\lambda _{{\mathrm{c}}}\), \(f_{{\mathrm{c}}}\) are the carrier wavelength and frequency. Furthermore, each source signal i is associated with an AoA denoted as \(\theta _i\), where \(i=1,2,\ldots ,L\) and \(\theta _i\in [-\pi ,\pi ]\). We assume that the AoAs do not change during the snapshot collection time.
Thus, the array signal at time instant t can be written as $$\begin{aligned} {\mathbf {x}}(t)={\mathbf {A}}({\theta }){\mathbf {s}}(t)+{\mathbf {w}}(t), \end{aligned}$$ where \({\mathbf {x}}(t)= [x_{1}(t), x_{2}(t),\ldots ,x_{N}(t)]\) is an \(N\times 1\) vector with the signals received by the N array elements, \({\theta } = [\theta _{1}, \theta _{2},\ldots ,\theta _{L}]\) is an \(L\times 1\) vector with the AoAs, \({\mathbf {s}}(t) = [s_1(t), s_2(t),\ldots ,s_L(t)]^{{\mathrm{T}}}\) is an \(L\times 1\) vector containing the source signals and \({\mathbf {w}}(t) = [w_{1}(t), w_{2}(t),\ldots ,w_{N}(t)]\) is the \(N\times 1\) noise vector. Finally, \({\mathbf {A}}({\theta })\) is the \(N\times L\) array response matrix $$\begin{aligned} {\mathbf {A}}({\theta }) = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {a}}(\theta _1)&{\mathbf {a}}(\theta _2)&\cdots&{\mathbf {a}}(\theta _L)\end{array} \right] , \end{aligned}$$ where $$\begin{aligned} {\mathbf {a}}(\theta _{i})=\frac{1}{\sqrt{N}}\left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 1&{\hbox {e}}^{j\frac{2d\pi }{\lambda _{{\mathrm{c}}}}\sin {\theta _{i}}}&\cdots&{\hbox {e}}^{j\frac{2(N-1)d\pi }{\lambda _{{\mathrm{c}}}}\sin {\theta _{i}}}\end{array}\right] ^{{\mathrm{T}}}. \end{aligned}$$ The sampled version of \({\mathbf {x}}(t)\) is utilized either by a conventional receiver, where each antenna element is connected to an RF chain (see Fig. 1), or by a hybrid structure receiver, where the signals of groups of antenna elements are first combined and each resulting signal is passed through an RF chain (see Figs. 2, 3). These two cases are analyzed in detail in the following subsections.

Fig. 1 The receiver in the conventional case

The classical ULA case

Each received signal is processed by an individual RF chain, where, among other operations, it is downconverted and then sampled by the corresponding analog-to-digital converter (ADC). At discrete time n, the baseband signal, called the full snapshot, is $$\begin{aligned} {\mathbf {x}}(n)= {\mathbf {A}}({\theta }){\mathbf {s}}(n) + {\mathbf {w}}(n), \end{aligned}$$ where \({\mathbf {x}}(n), {\mathbf {w}}(n) \in \mathtt {C}^{N}\), \({\mathbf {s}}(n) \in \mathtt {C}^{L}\) and \({\mathbf {w}}(n)\sim \mathtt {CN}({\mathbf {0}},\sigma ^2{\mathbf {I}})\), while \(E[{\mathbf {w}}(n){\mathbf {w}}^{{\mathrm{H}}}(k)]={\mathbf {0}}\) for \(n\ne k\), and \({\mathbf {R}}_s = E\bigl [{\mathbf {s}}(n){\mathbf {s}}^{{\mathrm{H}}}(n)\bigr ]\) is the source signals' autocorrelation matrix.

Fig. 2 The receiver in the partially connected hybrid case Fig. 3 The receiver in the fully connected hybrid case

The hybrid case

We assume an antenna array with N elements which can be organized into \(L_r\) groups, considering two widely used analog beamforming architectures, as shown in Figs. 2 and 3. In the partially connected architecture, each group, called a subarray, consists of M antenna elements and connects to one RF chain, where \(N = ML_r\). Furthermore, in the lth group (\(l=1,2,\ldots ,L_r\)), the ith received signal (\(i=1,2,\ldots ,M\)) is first processed by a phase shifter \({\hbox {e}}^{j\phi _{li}(t)}\), \(\phi _{li}(t) \in [-\,\pi ,\pi ]\), and then all of the shifted signals are added up. In the fully connected architecture, all antenna elements form \(L_r\) different antenna groups, each of which connects to an RF chain; similarly, the received signal at each antenna element is processed by a phase shifter and then all of them are added up.
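Before formalizing the analog combiner of the hybrid structures, the following minimal NumPy sketch (not part of the original derivation; all numerical values are illustrative) builds the steering vectors \({\mathbf {a}}(\theta _i)\), the array response matrix \({\mathbf {A}}({\theta })\) and one conventional full snapshot as in (4).

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 32, 4                                    # illustrative array size and number of sources
theta = np.deg2rad([12.3, 28.1, 54.6, 62.8])    # example AoAs (radians)
d_over_lambda = 0.5                             # half-wavelength spacing, d = lambda_c / 2

def steering_vector(th, N, d_over_lambda=0.5):
    """Unit-norm ULA steering vector a(theta), as in (3)."""
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(th)) / np.sqrt(N)

# Array response matrix A(theta), as in (2): one steering vector per source
A = np.column_stack([steering_vector(th, N, d_over_lambda) for th in theta])

# One full snapshot x(n) = A(theta) s(n) + w(n), as in (4)
sigma2 = 0.01                                                               # additive noise power
s = (rng.choice([1, -1], L) + 1j * rng.choice([1, -1], L)) / np.sqrt(2)     # QPSK-like symbols
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = A @ s + w                                                               # the conventional full snapshot
```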
In both architectures, the \(L_r\times N\) matrix \({\mathbf {B}}(t)\), known as the analog combiner in the relevant literature [1, 42], together with the involved operations (depicted in Figs. 2, 3), constitutes the analog part of the hybrid structure. The receiver first processes the received signals using the combiner \({\mathbf {B}}(t)\), implemented with phase shifters such that \(|{\mathbf {B}}(i;j)| = 1\). Therefore, the matrix \({\mathbf {B}}(t)\) maps the received signal \({\mathbf {x}}(t)\) to a reduced-size, \(L_r\times 1\), signal vector \({\mathbf {r}}(t)\), namely $$\begin{aligned} {\mathbf {r}}(t)={\mathbf {B}}(t){\mathbf {x}}(t). \end{aligned}$$ Finally, as in the classical case, \({\mathbf {r}}(t)\) passes through the available RF chains and the corresponding ADCs, and the resulting baseband signal can be written as $$\begin{aligned} {\mathbf {r}}(n) = {\mathbf {B}}(n){\mathbf {x}}(n), \end{aligned}$$ or, according to (4), in the following compact form, $$\begin{aligned} {\mathbf {r}}(n) = {\mathbf {B}}(n){\mathbf {A}}({\theta }){\mathbf {s}}(n)+{\mathbf {B}}(n){\mathbf {w}}(n). \end{aligned}$$ Inspecting (6), the reconstruction of the full snapshot in (4) depends on the inversion of the matrix \({\mathbf {B}}(n)\). However, \({\mathbf {B}}(n)\) is non-square with \(L_r < N\), i.e., it has more columns than rows, and thus it cannot be inverted. In this case, the initial ULA snapshot \({\mathbf {x}}(n)\) cannot be restored directly. Hence, in the following section, the adopted methodology for the full snapshot recovery is presented in detail.

Recovering the full snapshot

In this section, the problem of full snapshot reconstruction of the baseband signal in (4) is described. The adopted methodology consists of two parts: (1) the problem formulation for the full snapshot estimation and (2) the appropriate RF combiner design, assuming both time-varying and constant source signals, for the hybrid architectures described previously.

Description of the procedure for recovering the full snapshot

Because of the hybrid architecture, as explained in the previous section, at discrete time n we obtain a sub-snapshot \({\mathbf {r}}(n)\) of size \(L_r\) instead of the desired full snapshot in (4). To reconstruct the full snapshot of (4), a number of sub-snapshots \({\mathbf {r}}(l)\) are collected and utilized. In the following, the description of this collection, which will permit the reconstruction of the full snapshot, is presented. As will be shown, the reconstruction accuracy depends, among other factors, on the variability of the source signals (Footnote 1) that impinge on the antenna array; these signals will be denoted as \({\mathbf {s}}_v(l)\) from now on. Both the time-varying and the constant source signal cases will be considered in the analysis of Sect. 3.2, where the optimal design of the respective combiners is discussed. In more detail, let us assume that T signals $$\begin{aligned} {\mathbf {r}}(l)={\mathbf {B}}(l){\mathbf {A}}({\theta }){\mathbf {s}}_v(l)+{\mathbf {B}}(l){\mathbf {w}}(l) \end{aligned}$$ are collected at time instants \(l = n, n+1, \ldots ,n+T-1\); these will be referred to as sub-snapshots in the following (a small numerical sketch of this acquisition step is given below).
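The following sketch (illustrative only; the combiner phases are random, not the optimized values derived later) mimics the acquisition of one sub-snapshot per (6): a unit-modulus combiner maps a length-N received vector into a length-\(L_r\) vector. In the partially connected case, the entries outside each subarray's block would simply be zero.

```python
import numpy as np

rng = np.random.default_rng(1)

N, Lr = 32, 8                                   # assumed array size and number of RF chains

def random_phase_combiner(Lr, N, rng):
    """Fully connected B(l): every entry is a unit-modulus phase shift, |B[i, j]| = 1."""
    return np.exp(1j * 2 * np.pi * rng.random((Lr, N)))

# Stand-in for the received vector x(l) at one time instant
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

B = random_phase_combiner(Lr, N, rng)
r = B @ x                                       # sub-snapshot r(l) = B(l) x(l), of size Lr < N
print(r.shape)                                  # (8,): the dimension is reduced from N to Lr
```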
By concatenating the T sub-snapshots into the \(T L_r\times 1\) vector $$\begin{aligned} \bar{\mathbf {r}}(n) = \left[ \begin{array}{c} {\mathbf {r}}(n)\\ {\mathbf {r}}(n+1)\\ \vdots \\ {\mathbf {r}}(n+T-1) \end{array} \right] {,} \end{aligned}$$ the input–output relation can be formulated as $$\begin{aligned} \bar{\mathbf {r}}(n)= & {} \left[ \begin{array}{c} {\mathbf {B}}(n){\mathbf {A}}({\theta }){\mathbf {s}}_v(n) \\ {\mathbf {B}}(n+1){\mathbf {A}}({\theta }){\mathbf {s}}_v(n+1) \\ \vdots \\ {\mathbf {B}}(n+T-1){\mathbf {A}}({\theta }){\mathbf {s}}_v(n+T-1) \end{array} \right] \nonumber \\&+ \left[ \ \begin{array}{c} {\mathbf {B}}(n){\mathbf {w}}(n)\\ {\mathbf {B}}(n+1){\mathbf {w}}(n+1)\\ \vdots \\ {\mathbf {B}}(n+T-1){\mathbf {w}}(n+T-1) \end{array}\right] \end{aligned}$$ or, in a more compact form, $$\begin{aligned} \bar{\mathbf {r}}(n)= \bar{\mathbf {B}}(n)\bar{\mathbf {A}}({\theta })\bar{{\mathbf {s}}}_v(n)+\bar{{\mathbf {w}}}(n), \end{aligned}$$ where $$\begin{aligned} \bar{\mathbf {B}}(n)= & {} \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {B}}(n) &{} {\mathbf {0}} &{} \ldots &{} {\mathbf {0}} \\ {\mathbf {0}} &{} {\mathbf {B}}(n+1) &{} \ldots &{} {\mathbf {0}} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {\mathbf {0}} &{} {\mathbf {0}} &{} \ldots &{} {\mathbf {B}}(n+T-1) \end{array}\right] , \end{aligned}$$ $$\begin{aligned} \bar{\mathbf {A}}(\theta )= & {} \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {A}}(\theta ) &{} {\mathbf {0}} &{} \ldots &{} {\mathbf {0}} \\ {\mathbf {0}} &{} {\mathbf {A}}(\theta ) &{} \ldots &{} {\mathbf {0}} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {\mathbf {0}} &{} {\mathbf {0}} &{} \ldots &{} {\mathbf {A}}(\theta ) \end{array}\right] \end{aligned}$$ and $$\begin{aligned} \bar{{\mathbf {s}}}_v(n) = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {s}}_v^{{\mathrm{T}}}(n)&{\mathbf {s}}_v^{{\mathrm{T}}}(n+1)&\dots&{\mathbf {s}}_v^{{\mathrm{T}}}(n+T-1) \end{array}\right] ^{{\mathrm{T}}}. \end{aligned}$$ In the following, the vectors \({\mathbf {s}}_v(l)\) involved in (14) are modeled as the sum of a constant signal \({\mathbf {s}}(n)\) and a non-constant signal \({\mathbf {s}}_{nc}(l)\), i.e., \({\mathbf {s}}_v(l) = {\mathbf {s}}(n) +{\mathbf {s}}_{nc}(l)\), for \(l = n, n+1, \ldots , n+T-1\). In practice, it is infeasible to acquire constant source signals when operating at very high data rates (i.e., multiple Gbps, depending on the mmWave carrier frequency) without oversampling. Therefore, the general case of non-constant signals is considered here, since the assumption of constant signals over a full snapshot period (usually encountered in the relevant literature [23]) can be treated as the special case where \({\mathbf {s}}_v(l)={\mathbf {s}}(n)\) for \(l=n,n+1,\ldots ,n+T-1\).
Specifically, the source signals vector in (14) can be rewritten as $$\begin{aligned} \bar{{\mathbf {s}}}_{v}(n) \equiv { \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {s}}(n) + {\mathbf {s}}_{nc}(n) \\ {\mathbf {s}}(n) + {\mathbf {s}}_{nc}(n+1) \\ \vdots \\ {\mathbf {s}}(n) + {\mathbf {s}}_{nc}(n+T-1) \end{array}\right] }. \end{aligned}$$ Hence, by substituting (15) in (11), the collection of sub-snapshots of the hybrid array can be written as $$\begin{aligned} \bar{\mathbf {r}}(n)= & {} \left[ \begin{array}{c} {\mathbf {B}}(n){\mathbf {A}}({\theta }){\mathbf {s}}(n) \\ {\mathbf {B}}(n+1){\mathbf {A}}({\theta }){\mathbf {s}}(n) \\ \vdots \\ {\mathbf {B}}(n+T-1){\mathbf {A}}({\theta }){\mathbf {s}}(n) \end{array} \right] \nonumber \\&+ \left[ \begin{array}{c} {\mathbf {B}}(n){\mathbf {A}}({\theta }){\mathbf {s}}_{nc}(n) \\ {\mathbf {B}}(n+1){\mathbf {A}}({\theta }){\mathbf {s}}_{nc}(n+1) \\ \vdots \\ {\mathbf {B}}(n+T-1){\mathbf {A}}({\theta }){\mathbf {s}}_{nc}(n+T-1) \end{array}\right] \nonumber \\&+ \left[ \ \begin{array}{c} {\mathbf {B}}(n){\mathbf {w}}(n)\\ {\mathbf {B}}(n+1){\mathbf {w}}(n+1)\\ \vdots \\ {\mathbf {B}}(n+T-1){\mathbf {w}}(n+T-1) \end{array}\right] \end{aligned}$$ or, more compactly, $$\begin{aligned} \bar{\mathbf {r}}(n)= \mathbf {\mathcal {B}}(n){\mathbf {A}}({\theta }){\mathbf {s}}(n)+ \bar{\mathbf {B}}(n)\bar{\mathbf {A}}({\theta })\bar{\mathbf {s}}_{nc}(n) + \bar{\mathbf {w}}(n), \end{aligned}$$ where $$\begin{aligned} \mathbf {\mathcal {B}}(n) = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {B}}^{{\mathrm{T}}}(n)&{\mathbf {B}}^{{\mathrm{T}}}(n+1)&\dots&{\mathbf {B}}^{{\mathrm{T}}}(n+T-1) \end{array}\right] ^{{\mathrm{T}}} \end{aligned}$$ and $$\begin{aligned} \bar{\mathbf {s}}_{nc}(n) = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {s}}^{{\mathrm{T}}}_{nc}(n)&{\mathbf {s}}^{{\mathrm{T}}}_{nc}(n+1)&\dots&{\mathbf {s}}^{{\mathrm{T}}}_{nc}(n+T-1) \end{array}\right] ^{{\mathrm{T}}}. \end{aligned}$$ The value of the parameter T (i.e., the number of sub-snapshots) is selected such that the \(TL_r \times N\) matrix \(\mathbf {\mathcal {B}}(n)\) is invertible; thus, \(TL_r \ge N\) should hold. To keep the sub-snapshot collection overhead low, the minimum value \(T =\frac{N}{L_r}\) is chosen, where the number of antenna elements N and the number of RF chains \(L_r\) are considered to be powers of two, i.e., \(N = 2^i\) and \(L_r = 2^j\), with i, j integers. By applying a different matrix \({\mathbf {B}}(l)\) for each l, linearly independent \({\mathbf {r}}(l)\)'s are collected. Since \(\mathbf {\mathcal {B}}(n)\) is then square and designed to be full rank, its pseudo-inverse \(\mathbf {\mathcal {B}}^{\dag }(n)\) reduces to \(\mathbf {\mathcal {B}}^{-1}(n)\). In the following, from (17), an estimate \(\hat{{\mathbf {x}}}(n)\) of the full snapshot \({\mathbf {x}}(n)\) can be acquired as $$\begin{aligned} \hat{{\mathbf {x}}}(n)=\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {r}}(n). \end{aligned}$$ Hence, from (20), and as outlined in Algorithm 1, the recovered full snapshot can be expanded as $$\begin{aligned} \begin{aligned} \hat{{\mathbf {x}}}(n)&= {\mathbf {A}}({\theta }){\mathbf {s}}(n) +\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n)\\&\quad + \mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {B}}(n)\bar{\mathbf {A}}(\theta )\bar{\mathbf {s}}_{nc}(n).
\end{aligned} \end{aligned}$$ According to (21), \(\hat{{\mathbf {x}}}(n)\) is corrupted by a term associated with the additive noise and by an additional term which depends on the time variability of the received source signals. As will become clear in the following, the variability of the source signals is treated as a noise term as well. Hence, in this case, the total noise term is $$\begin{aligned} {\mathbf {w}}^{\prime \prime }(n)=\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n) +\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {B}}(n)\bar{\mathbf {A}}(\theta )\bar{\mathbf {s}}_{nc}(n). \end{aligned}$$ Considering now the special case of constant signals, namely \({\mathbf {s}}_v(l)= {\mathbf {s}}(n)\) for \(l=n, n+1, \ldots , n+T-1\), \(\bar{{\mathbf {s}}}_{v}(n)\) simplifies to $$\begin{aligned} \bar{{\mathbf {s}}}_{v}(n) = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {s}}(n)\\ {\mathbf {s}}(n)\\ \vdots \\ {\mathbf {s}}(n) \end{array}\right] . \end{aligned}$$ Hence, \(\bar{{\mathbf {r}}}(n)\) can be written as $$\begin{aligned} \bar{\mathbf {r}}(n)= \mathbf {\mathcal {B}}(n){\mathbf {A}}({\theta }){\mathbf {s}}(n)+\bar{\mathbf {w}}(n) \end{aligned}$$ and the restored full snapshot in this case is given by $$\begin{aligned} \hat{{\mathbf {x}}}(n)={\mathbf {A}}({\theta }){\mathbf {s}}(n)+\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n), \end{aligned}$$ where the total noise term \({\mathbf {w}}^{\prime \prime }(n)\) in (22) reduces to $$\begin{aligned} {\mathbf {w}}^{\prime \prime }(n) = \mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n). \end{aligned}$$ From (21) and (25), it is evident that the matrix \(\mathbf {\mathcal {B}}(n)\) has an impact on the power of the noise terms, thus affecting the subsequent utilization of the recovered signals, e.g., in AoA estimation. Hence, the matrix \(\mathbf {\mathcal {B}}(n)\) should be determined appropriately so that it is invertible and well conditioned and, at the same time, does not increase the power of the noise term. Such a discussion and the relevant analysis are presented in the following section.

Problem formulation for the RF combiner design

In this section, the analysis focuses on the appropriate design of the RF combiner matrix \(\mathbf {\mathcal {B}}(n)\), i.e., the analog part of the hybrid structure, for the minimum value \(T = \frac{N}{L_r}\). The problem of interest in this paper is to design \(\mathbf {\mathcal {B}}(n)\) such that the STNR, after the application of the preprocessing scheme, is maximized. The STNR is defined as $$\begin{aligned} {\hbox {STNR}} = \frac{{\hbox {Tr}}\{{\mathbf {C}}\}}{{\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}}{,} \end{aligned}$$ where the power of the signal component of the ULA snapshot \({\mathbf {x}}(n)\) in (4), defined as $$\begin{aligned} \begin{aligned} {\hbox {Tr}}\{{\mathbf {C}}\}&= {\hbox {Tr}}\{{\mathbf {A}}({\theta }) E\bigl [{\mathbf {s}}(n){\mathbf {s}}^{{\mathrm{H}}}(n)\bigr ] {\mathbf {A}}({\theta })^{{\mathrm{H}}}\} \\&= {\hbox {Tr}}\{{\mathbf {A}}({\theta }) {\mathbf {R}}_s {\mathbf {A}}({\theta })^{{\mathrm{H}}}\}, \end{aligned} \end{aligned}$$ is kept constant and identical to that of the reconstructed full snapshot (a numerical sketch of the reconstruction step in (20) is given below).
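A minimal sketch of the reconstruction step in (20), under assumed parameter values and for the constant-signal case: T different combiners B(l) are stacked into the square matrix of (18)/(30) and inverted. With noiseless data the recovered vector coincides with the true one, as implied by (24)–(25).

```python
import numpy as np

rng = np.random.default_rng(2)

N, Lr = 32, 8
T = N // Lr                                     # number of sub-snapshots, so that T * Lr = N

# Stand-in for the noiseless part of the full snapshot, A(theta) s(n)
x_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# T different fully connected, unit-modulus combiners (random phases here, for illustration only)
B_list = [np.exp(1j * 2 * np.pi * rng.random((Lr, N))) for _ in range(T)]

# Collect and stack the sub-snapshots r(l) = B(l) x, as in (8)
r_bar = np.concatenate([B @ x_true for B in B_list])        # length T * Lr = N

# Stack the combiners into the N x N matrix of (18)/(30) and apply its inverse, as in (20)
B_cal = np.vstack(B_list)
x_hat = np.linalg.solve(B_cal, r_bar)

print(np.allclose(x_hat, x_true))                            # True (up to numerical precision)
```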
Hence, maximizing the STNR in (27) is equivalent to minimizing the total noise power \({\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}\), the aim being that the total noise term \({\mathbf {w}}^{\prime \prime }(n)=\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n) +\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {B}}(n)\bar{\mathbf {A}}(\theta )\bar{\mathbf {s}}_{nc}(n)\) in (22) has statistical properties equivalent to those of the noise term in (4) (e.g., with respect to the covariance matrix). It is noted here that, in the special case of constant signals, \({\mathbf {w}}^{\prime \prime }(n)\) is equal to \(\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n)\). The desired minimization problem, namely the minimization of the power of the total noise term \({\mathbf {w}}^{\prime \prime }(n)\), is written as $$\begin{aligned} \min _{\mathbf {\mathcal {B}}(n)}\,{\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\},\, s.t.\, \mathbf {\mathcal {B}}(n) \in \mathtt {S}, \end{aligned}$$ where \({\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }} = E\{{\mathbf {w}}^{\prime \prime }(n)({\mathbf {w}}^{\prime \prime }(n))^{{\mathrm{H}}}\}\) is the covariance matrix of the noise term in (21) and \(\mathtt {S}\) denotes the feasibility set of \(\mathbf {\mathcal {B}}(n)\), namely all matrices whose elements are exponentials representing the phase-shifting operations of the hybrid architecture (either fully or partially connected). In particular, the set \(\mathtt {S}\) consists of all \(N\times N\) matrices of the form $$\begin{aligned} \mathbf {\mathcal {B}}(n)=\left[ \begin{array}{c} {\mathbf {B}}(n)\\ {\mathbf {B}}(n+1)\\ \vdots \\ {\mathbf {B}}(n+T-1)\\ \end{array} \right] . \end{aligned}$$ The form of the sub-matrices \({\mathbf {B}}(n)\) in (30) captures both the structure and the operations described in Sect. 3.1 and is elaborated in the following. If the hybrid architecture is organized into non-overlapping subarrays, as depicted in Fig. 2, the sub-matrices in (30) are of the form $$\begin{aligned} {\mathbf {B}}(l) = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {b}}^{{\mathrm{T}}}_{l1} &{} {\mathbf {0}} &{} \ldots &{} {\mathbf {0}} \\ {\mathbf {0}} &{} {\mathbf {b}}^{{\mathrm{T}}}_{l2} &{} \ldots &{} {\mathbf {0}} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {\mathbf {0}} &{} {\mathbf {0}} &{} \ldots &{} {\mathbf {b}}^{{\mathrm{T}}}_{lL_r} \end{array}\right] \end{aligned}$$ where \({\mathbf {b}}_{lj}\) is an \(M\times 1\) vector containing the phase-shifter values used by the jth subarray at time \(l = n, n+1, \ldots , n+T-1\). Assuming a fully connected architecture, as in Fig. 3, the sub-matrices in (30) are of the form $$\begin{aligned} {\mathbf {B}}(l) = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {b}}^{{\mathrm{T}}}_{l1} \\ {\mathbf {b}}^{{\mathrm{T}}}_{l2} \\ \vdots \\ {\mathbf {b}}^{{\mathrm{T}}}_{lL_r} \end{array}\right] \end{aligned}$$ where \({\mathbf {b}}_{lj}\) is an \(N\times 1\) vector with the exponentials used by the jth RF chain at time \(l = n, n+1,\ldots , n+T-1\). It should be noted that the same notation \({\mathbf {b}}_{lj}\) is used for two differently sized vectors so as to keep the symbols simple; the dimension is implied by the context. A small sketch of how these two combiner structures can be assembled, and of how they affect the total noise power, is given below. In the following, the minimization problem in (29) will be solved for the two architectures under study (Figs. 2, 3) and the two signal cases (i.e., constant and non-constant signals, described in (23) and (15), respectively).
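The sketch below (illustrative; parameter values and random phases are assumptions, not the optimal design) assembles sub-matrices of the forms (31) and (32) and evaluates the total noise power of (29) for the constant-signal case, in which only the thermal noise is propagated through the inverse combiner. As will be derived in the next subsection, the value \(\sigma ^2 N\) is a lower bound; random phases typically exceed it.

```python
import numpy as np

rng = np.random.default_rng(3)

N, Lr = 32, 8
M = N // Lr
T = N // Lr
sigma2 = 1.0

def partially_connected_B(rng):
    """One sub-matrix B(l) of the form (31): block diagonal, one 1 x M phase row per subarray."""
    B = np.zeros((Lr, N), dtype=complex)
    for j in range(Lr):
        B[j, j * M:(j + 1) * M] = np.exp(1j * 2 * np.pi * rng.random(M))
    return B

def fully_connected_B(rng):
    """One sub-matrix B(l) of the form (32): every entry is a unit-modulus phase."""
    return np.exp(1j * 2 * np.pi * rng.random((Lr, N)))

def total_noise_power(B_list, sigma2):
    """Trace of the covariance of B_cal^{-1} w_bar, i.e., the cost in (29) for constant signals."""
    B_cal = np.vstack(B_list)                                # stacked matrix of (30)
    B_inv = np.linalg.inv(B_cal)
    D = np.zeros((N, N), dtype=complex)                      # E[w_bar w_bar^H] / sigma^2
    for l, B in enumerate(B_list):
        D[l * Lr:(l + 1) * Lr, l * Lr:(l + 1) * Lr] = B @ B.conj().T
    return np.trace(sigma2 * B_inv @ D @ B_inv.conj().T).real

print(total_noise_power([partially_connected_B(rng) for _ in range(T)], sigma2))
print(total_noise_power([fully_connected_B(rng) for _ in range(T)], sigma2))
print(sigma2 * N)      # reference value; random phases typically yield a larger total noise power
```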
As will be shown later, when constant signals are considered, the problem can be solved optimally, whereas for non-constant signals the selection of an appropriate design becomes a more involved task.

Constant source signals

First, the special case of constant source signals is considered. In this case, the noise term \({\mathbf {w}}^{\prime \prime }(n)\) is equal to \(\mathbf {\mathcal {B}}^{-1}(n) \bar{\mathbf {w}}(n)\), as shown in (25), and the covariance matrix can be written as \({\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}= E\{\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n)(\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n))^{{\mathrm{H}}}\}\). In order to solve the problem in (29), \({\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\) is first written in the following equivalent form, $$\begin{aligned} \begin{aligned} {\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}&= \mathbf {\mathcal {B}}^{-1}(n) E\{\bar{\mathbf {w}}(n)(\bar{\mathbf {w}}(n))^{{\mathrm{H}}}\}(\mathbf {\mathcal {B}}^{-1}(n))^{{\mathrm{H}}}. \end{aligned} \end{aligned}$$ In the following, using (33), problem (29) is solved for the two hybrid architectures; the partially connected case is treated first. In the partially connected architecture, it can be proved, by employing (30), (31), that \(E\{\bar{\mathbf {w}}(n)(\bar{\mathbf {w}}(n))^{{\mathrm{H}}}\} = \sigma ^2 M{\mathbf {I}}_{N \times N}\), and, thus, \({\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\) is written as $$\begin{aligned} {\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }} = \sigma ^2M\left( \mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {B}}(n)\right) ^{-1}. \end{aligned}$$ Moreover, the following inequality will be used for determining a lower bound for the cost function in (29), $$\begin{aligned} {\hbox {Tr}}\{{\mathbf {A}}^{-1}\}\ge \sum _i\frac{1}{[{\mathbf {A}}]_{ii}}, \end{aligned}$$ where \([{\mathbf {A}}]_{ii}\) is the ith diagonal element of \({\mathbf {A}}\) and the equality holds when \({\mathbf {A}}\) is diagonal [44]. By substituting (34) in (29) and applying (35), the following lower bound can be derived, $$\begin{aligned} {\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}\ge \sigma ^2M\sum _{i=1}^N\frac{1}{[\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {B}}(n)]_{ii}} = \sigma ^2 N. \end{aligned}$$ Inspecting (36), the lower bound does not depend on the particular \(\mathbf {\mathcal {B}}(n) \in \mathtt {S}\) we are looking for; it is attained whenever \(\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {B}}(n)\) is diagonal. In our case, an optimal choice of \({\mathbf {B}}(n)\) is one for which the following condition is fulfilled, $$\begin{aligned} \mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {B}}(n) = M{\mathbf {I}}, \end{aligned}$$ which additionally makes the covariance matrix \({\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\) diagonal and equal to that of the noise term in (4), i.e., \({\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }} = \sigma ^2{\mathbf {I}}\) (as observed from (34)). In the following, such an optimal matrix, in this setting, is described.
In order to determine a suitable \(\mathbf {\mathcal {B}}(n) \in \mathtt {S}\), \(\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {B}}(n)\) is first written in the following more convenient form $$\begin{aligned} \mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {B}}(n)= \mathbf {\mathcal {B}}^{{\mathrm{H}}}(n){\mathbf {P}}^{{\mathrm{T}}}{\mathbf {P}}\mathbf {\mathcal {B}}(n), \end{aligned}$$ where \({\mathbf {P}}\) is an \(N\times N\) permutation matrix that reorders the rows of \(\mathbf {\mathcal {B}}(n)\) in the following manner: the first row of each \({\mathbf {B}}(l)\) goes to the top (see (30), (31)), then the second row of each \({\mathbf {B}}(l)\) follows, and so on. This matrix can be created from the identity matrix by reordering its rows in the same way. The matrix \({\mathbf {P}}\mathbf {\mathcal {B}}(n)\) can be written as $$\begin{aligned} {\mathbf {P}}\mathbf {\mathcal {B}}(n)= \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {F}}_{1} &{} \mathbf{0} &{}\ldots &{} \mathbf{0} \\ \mathbf{0} &{} {\mathbf {F}}_{2} &{} \ldots &{} \mathbf{0} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \mathbf{0} &{} \mathbf{0} &{} \ldots &{} {\mathbf {F}}_{L_r} \end{array} \right] , \end{aligned}$$ where \({\mathbf {F}}_i\), \(i=1,2,\ldots ,L_r\), is an \(M\times M\) matrix with its lth row equal to \({\mathbf {b}}^{{\mathrm{T}}}_{li}\), \(l=n,n+1,\ldots ,n+M-1\). After substituting (39) in (38), \(\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {B}}(n)\) is a block diagonal matrix, as shown in (40), with the ith diagonal block being \({\mathbf {F}}^{{\mathrm{H}}}_i{\mathbf {F}}_i\), \(i=1,2,\ldots ,L_r\). $$\begin{aligned} \mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {B}}(n)= \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\mathbf {F}}_{1}^{{\mathrm{H}}}{\mathbf {F}}_{1} &{} \mathbf{0} &{} \ldots &{} \mathbf{0} \\ \mathbf{0} &{} {\mathbf {F}}_{2}^{{\mathrm{H}}}{\mathbf {F}}_{2} &{} \ldots &{} \mathbf{0} \\ \vdots &\vdots &{} \ddots &{} \vdots \\ \mathbf{0} &{} \mathbf{0} &{} \ldots &{} {\mathbf {F}}_{L_r}^{{\mathrm{H}}}{\mathbf {F}}_{L_r} \end{array} \right] \end{aligned}$$ Thus, the desired condition in (37) is fulfilled if, equivalently, \({\mathbf {F}}^{{\mathrm{H}}}_i{\mathbf {F}}_i = M{\mathbf {I}}\). An appropriate choice for the \({\mathbf {F}}_i\)'s, \(i=1,2,\ldots ,L_r\), can be based on the Fourier matrix \({\mathbf {F}}_{N \times N}\), whose (p, q)th element is given as \([{\mathbf {F}}]_{p,q} = {\hbox {e}}^{-j2\pi pq/N}\). First, only the first M columns are kept and the resulting matrix is denoted as \({\mathbf {F}}_{N \times M}\). Then, the rows of \({\mathbf {F}}_i\), \(i=1,2,\ldots ,L_r\), are set equal to the rows \(i, L_r+i, 2L_r+i, \ldots\) of \({\mathbf {F}}_{N \times M}\). These matrices fulfill the desired condition, namely \({\mathbf {F}}_{i}^{{\mathrm{H}}}{\mathbf {F}}_{i} = M{\mathbf {I}}\). Hence, by utilizing these matrices, the phase shifters of the receiver in the partially connected hybrid case can be determined (a small numerical check of this construction is given below). It is noted here that the \({\mathbf {F}}_i\)'s are actually the beamforming matrices used by H-MUSIC in [23] in order to obtain the overall received vector, assuming \(T = M\), i.e., square beamformers. It is worth mentioning that H-MUSIC can also work with non-square beamformers \({\mathbf {F}}_i\), i.e., with \(T < M\), and thus with smaller-size sub-snapshots and snapshots whose size is less than that of the so-called full snapshot.
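The following sketch (parameter values are illustrative) builds the DFT-based blocks F_i described above and verifies numerically the conditions F_i^H F_i = M I and, after assembling the partially connected combiners of (31), the condition (37).

```python
import numpy as np

N, Lr = 32, 8
M = N // Lr
T = M

p, q = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F_full = np.exp(-1j * 2 * np.pi * p * q / N)     # N x N Fourier matrix, [F]_{p,q} = e^{-j 2 pi p q / N}
F_NM = F_full[:, :M]                             # keep only the first M columns

# F_i: rows i, Lr+i, 2Lr+i, ... of F_{N x M} (0-based indexing here)
F_blocks = [F_NM[i::Lr, :] for i in range(Lr)]
for Fi in F_blocks:
    assert np.allclose(Fi.conj().T @ Fi, M * np.eye(M))      # F_i^H F_i = M I

# Assemble the stacked combiner of (30)-(31): subarray j at sub-snapshot l uses row l of F_j
B_cal = np.zeros((T * Lr, N), dtype=complex)
for l in range(T):
    for j in range(Lr):
        B_cal[l * Lr + j, j * M:(j + 1) * M] = F_blocks[j][l, :]

assert np.allclose(B_cal.conj().T @ B_cal, M * np.eye(N))     # condition (37) is satisfied
```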
Here, the case of the fully connected architecture is investigated. In order to solve the problem in (29), it can be shown, by employing (32), that \(E\{\bar{\mathbf {w}}(n)(\bar{\mathbf {w}}(n))^{{\mathrm{H}}}\} = \sigma ^2 \mathbf {\mathcal {D}}(n)\), where the matrix \(\mathbf {\mathcal {D}}(n)\) is block diagonal with diagonal blocks of the form \({\mathbf {B}}(l){\mathbf {B}}^{H}(l)\) for \(l= n, n+1, \ldots , n+T-1\). Using the above, the trace of \({\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\) in (33) is written in the following equivalent form, $$\begin{aligned} \begin{aligned} {\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}&= {\hbox {Tr}}\{\sigma ^2\mathbf {\mathcal {B}}^{-1}(n)\mathbf {\mathcal {D}}(n)\left( \mathbf {\mathcal {B}}^{-1}(n)\right) ^{H}\} \\&= {\hbox {Tr}}\{\sigma ^2\mathbf {\mathcal {D}}(n)\left( \mathbf {\mathcal {B}}^{-1}(n)\right) ^{H}\mathbf {\mathcal {B}}^{-1}(n)\} \\&= {\hbox {Tr}}\{\sigma ^2\mathbf {\mathcal {D}}(n)(\mathbf {\mathcal {B}}(n)\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n))^{-1}\} \\&= {\hbox {Tr}}\{\sigma ^2(\mathbf {\mathcal {B}}(n)\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {D}}^{-1}(n))^{-1}\}. \end{aligned} \end{aligned}$$ By substituting (41) in (29) and applying (35), the following lower bound can be derived, $$\begin{aligned} {\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}\ge \sigma ^2\sum _{i=1}^N\frac{1}{[\mathbf {\mathcal {B}}(n)\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {D}}^{-1}(n)]_{ii}}=\sigma ^2N, \end{aligned}$$ where, for deriving the last equality, \([\mathbf {\mathcal {B}}(n)\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\mathbf {\mathcal {D}}^{-1}(n)]_{ii} = 1\), \(\forall i\), is used, which can be proved by employing (30), (32) and inspecting the diagonal elements of the result. Observing (42), the value of the lower bound is again independent of the particular matrix employed. In this case, the matrix \(\mathbf {\mathcal {B}}(n)\) is designed based on the Fourier matrix \({\mathbf {F}}_{N\times N}\), as the latter possesses the properties of \(\mathtt {S}\). Specifically, submatrices of \({\mathbf {F}}\) of size \(L_r\times N\) are used for the design of the \({\mathbf {B}}(l)\)'s, for \(l= n, n+1, \ldots , n+T-1\), that comprise the matrix \(\mathbf {\mathcal {B}}(n)\) in (30). Hence, by utilizing these matrices, the phase shifters of the receiver in the fully connected hybrid case can be determined, since the minimum value \(\sigma ^2N\) in (42) is then attained (a numerical check is given below).
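The sketch below (assumed parameter values) constructs the fully connected combiners B(l) as consecutive row blocks of the N x N Fourier matrix and verifies numerically that the resulting total noise power equals the lower bound sigma^2 N of (42).

```python
import numpy as np

N, Lr = 32, 8
T = N // Lr
sigma2 = 1.0

p, q = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-1j * 2 * np.pi * p * q / N)                  # N x N Fourier matrix

B_list = [F[l * Lr:(l + 1) * Lr, :] for l in range(T)]   # B(l): consecutive Lr x N row blocks of F
B_cal = np.vstack(B_list)                                # stacked matrix of (30); here simply F itself

# E[w_bar w_bar^H] = sigma^2 * blockdiag(B(l) B(l)^H), as used in the derivation of (41)
D = np.zeros((N, N), dtype=complex)
for l, B in enumerate(B_list):
    D[l * Lr:(l + 1) * Lr, l * Lr:(l + 1) * Lr] = B @ B.conj().T

B_inv = np.linalg.inv(B_cal)
C_noise = sigma2 * B_inv @ D @ B_inv.conj().T
print(np.trace(C_noise).real, sigma2 * N)                # both equal sigma^2 * N: the bound in (42) is met
```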
Non-constant source signals

In this section, the case of non-constant source signals is elaborated. In order to solve the problem in (29), the covariance matrix \({\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\) of the noise term \({\mathbf {w}}^{\prime \prime }(n)=\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {w}}(n) +\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {B}}(n)\bar{\mathbf {A}}(\theta )\bar{\mathbf {s}}_{nc}(n)\) can be written as $$\begin{aligned} \begin{aligned} {\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}&= E\{{\mathbf {w}}^{\prime \prime }(n)({\mathbf {w}}^{\prime \prime }(n))^{{\mathrm{H}}}\}\\&= \sigma ^2\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {B}}(n)\bar{\mathbf {B}}^{{\mathrm{H}}}(n)\left( \mathbf {\mathcal {B}}^{-1}(n)\right) ^{H} \\&\quad +\mathbf {\mathcal {B}}^{-1}(n)\bar{\mathbf {B}}(n)\bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta })\bar{\mathbf {B}}^{{\mathrm{H}}}(n)\left( \mathbf {\mathcal {B}}^{-1}(n)\right) ^{H} \\&= \mathbf {\mathcal {B}}^{-1}(n)(\sigma ^2\bar{\mathbf {B}}(n)\bar{\mathbf {B}}^{{\mathrm{H}}}(n) \\&\quad + \bar{\mathbf {B}}(n)\bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta })\bar{\mathbf {B}}^{{\mathrm{H}}}(n)) \left( \mathbf {\mathcal {B}}^{-1}(n)\right) ^{H}. \end{aligned} \end{aligned}$$ By substituting (43) into the cost function of (29), the latter can be written as $$\begin{aligned} {\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\} = {\hbox {Tr}}\{\mathbf {\mathcal {B}}^{-1}(n){\mathbf {U}}(\mathbf {\mathcal {B}}^{-1}(n))^{{\mathrm{H}}}\}, \end{aligned}$$ where \({\mathbf {U}}=\sigma ^2\bar{\mathbf {B}}(n)\bar{\mathbf {B}}^{{\mathrm{H}}}(n)+ \bar{\mathbf {B}}(n)\bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta })\bar{\mathbf {B}}^{{\mathrm{H}}}(n)\) and \({\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\) is the correlation matrix of the non-constant signals, of size \(TL\times TL\). For the full snapshot reconstruction, the preprocessing matrix \(\mathbf {\mathcal {B}}(n)\) in (30) should not alter the noise power; thus, the set of solutions \(\mathtt {S}\) is further restricted to a new set that consists only of unitary matrices, i.e., \(\mathbf {\mathcal {B}}^{-1}(n) = \mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\). Hence, the trace in (44) can be further simplified as $$\begin{aligned} \begin{aligned} {\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}&= {\hbox {Tr}}\{\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n){\mathbf {U}}\mathbf {\mathcal {B}}(n)\} \\&= {\hbox {Tr}}\{{\mathbf {U}}\mathbf {\mathcal {B}}(n)\mathbf {\mathcal {B}}^{{\mathrm{H}}}(n)\} \\&= {\hbox {Tr}}\{{\mathbf {U}}\}\\&= {\hbox {Tr}}\{\sigma ^2\bar{\mathbf {B}}(n)\bar{\mathbf {B}}^{{\mathrm{H}}}(n) \\&\quad + \bar{\mathbf {B}}(n)\bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta })\bar{\mathbf {B}}^{{\mathrm{H}}}(n)\}\\&= {\hbox {Tr}}\{\bar{\mathbf {B}}(n)(\sigma ^2{\mathbf {I}}_{TN} + \bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta }))\bar{\mathbf {B}}^{{\mathrm{H}}}(n)\}\\&= {\hbox {Tr}}\{\bar{\mathbf {B}}(n){\mathbf {V}}\bar{\mathbf {B}}^{{\mathrm{H}}}(n)\}{,} \end{aligned} \end{aligned}$$ where \({\mathbf {V}} = \sigma ^2{\mathbf {I}}_{TN} + \bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta })\).
Next, according to [44], the lower bound of (29), when the cost function has the form of (45), can be written as $$\begin{aligned} {\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\} \ge \sum _{i=1}^{N} \lambda _i({\mathbf {V}}), \end{aligned}$$ where \(\lambda _1({\mathbf {V}}) \le \lambda _2({\mathbf {V}}) \le \dots \le \lambda _{TN}({\mathbf {V}})\) are the eigenvalues of the matrix \({\mathbf {V}}\) sorted in ascending order, so that the sum involves the N smallest of them. Irrespective of the hybrid architecture, the minimum value in (46) is attained when the matrix \(\bar{\mathbf {B}}(n)\) equals the matrix of eigenvectors of \({\mathbf {V}}\) that correspond to its N smallest eigenvalues. It is noted here that \(\bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta })\), of size \(TN \times TN\), is a matrix of rank equal to TL. If the number of antenna elements N and the number of sources L satisfy the condition \(TN-TL > N\), the N smallest eigenvalues are all equal to \(\sigma ^2\), and the minimum value in (46) becomes \(\min {\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\} = \sigma ^2N\). The solution attaining (46), however, does not have the desired phase-shifter matrix structure, because of the term \(\bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta })\) in the matrix \({\mathbf {V}}\), which depends on the unknown AoAs. Under the assumption of a low-SNR regime, the dominant term in the cost function of (45) is the one related to the additive noise, especially when the time variations of the source signals are not strong. Hence, in this case, (45) can be approximated by \({\hbox {Tr}}\{\sigma ^2\bar{\mathbf {B}}(n)\bar{\mathbf {B}}^{{\mathrm{H}}}(n)\}\). To design \(\mathbf {\mathcal {B}}(n)\), for either the partially connected or the fully connected architecture, a process similar to the one followed in the case of constant signals can then be employed. As already shown in the previous section, the Fourier matrix is an appropriate choice for the design of \(\mathbf {\mathcal {B}}(n)\) for both hybrid architectures, as it possesses the properties of \(\mathtt {S}\) (i.e., its elements are exponentials representing the phase-shifting operations of the hybrid architecture) and satisfies the minimum value \(\sigma ^2N\) in (46). Finally, in the medium-to-high SNR regime, the dominant term in the matrix \({\mathbf {V}}\) is \(\bar{\mathbf {A}}({\theta }){\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\bar{\mathbf {A}}^{{\mathrm{H}}}({\theta })\). Hence, the solution to the minimization problem in (29) depends on the unknown AoAs and on the statistical properties of the time-varying signals \({\mathbf {s}}_{nc}(l)\), which renders this case intractable. Since the solution is related to the unknown AoAs, any predefined matrix is expected to lead to a floor in the cost function of (45). This floor depends on how strong the time-varying nature of the source signals is (namely, on the values of the covariance matrix \({\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\), under the model adopted here). Since the modeling of \({\mathbf {R}}_{\bar{\mathbf {s}}_{nc}}\) is out of the scope of this paper, a simple model for the time-varying signals \({\mathbf {s}}_{nc}(l)\) is adopted. Specifically, they are modeled as zero-mean complex Gaussian random variables that are statistically independent and of equal power, with known first- and second-order statistics, i.e., \({\mathbf {s}}_{nc}(l) \sim \mathcal {CN}\left( {\mathbf {0}},\sigma _{nc}^2{\mathbf {I}}\right)\). Under this model, the noise power behavior in (45) is verified in the simulations presented in the next section, while an illustrative numerical check of the bound in (46) is given below.
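The following sketch (illustrative parameter values; the assumed model for s_nc is the Gaussian one stated above) builds the matrix V of (45) and checks that, when T(N - L) > N, the sum of its N smallest eigenvalues equals sigma^2 N, i.e., the lower bound in (46).

```python
import numpy as np

N, L, Lr = 32, 4, 8
T = N // Lr
sigma2, sigma2_nc = 0.1, 0.01
theta = np.deg2rad([12.3, 28.1, 54.6, 62.8])

n_idx = np.arange(N)
# ULA steering matrix with half-wavelength spacing, columns normalized as in (3)
A = np.column_stack([np.exp(1j * np.pi * n_idx * np.sin(th)) / np.sqrt(N) for th in theta])

A_bar = np.kron(np.eye(T), A)                 # block-diagonal A_bar(theta), size TN x TL
R_nc = sigma2_nc * np.eye(T * L)              # assumed model: s_nc ~ CN(0, sigma_nc^2 I)
V = sigma2 * np.eye(T * N) + A_bar @ R_nc @ A_bar.conj().T

eigvals = np.sort(np.linalg.eigvalsh(V))      # ascending order
# Here TN - TL = 112 > N = 32, so the N smallest eigenvalues all equal sigma^2
print(eigvals[:N].sum(), sigma2 * N)          # both approximately sigma^2 * N
```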
Experimental results

In this section, simulations are carried out to demonstrate the performance of the proposed scheme and to verify its behavior under the various cases studied previously. More specifically, the recoverability of the full snapshot is considered for the problem of AoA estimation in 1D mmWave massive MIMO systems by employing MUSIC. The combination of the proposed full snapshot reconstruction scheme with the classical MUSIC algorithm is called PRE+MUSIC hereafter. In the following, we assume that there are \(L=4\) signal sources. The constant part \({\mathbf {s}}(n)\) of the source signals is assumed to be an i.i.d. sequence of Quadrature Phase Shift Keying (QPSK) symbols. These signals impinge on the array with AoAs equal to \(12.3^{\circ }\), \(28.1^{\circ }\), \(54.6^{\circ }\) and \(62.8^{\circ }\), respectively. The carrier frequency \(f_{{\mathrm{c}}}\) is selected in the mmWave band and is set to 30 GHz, which corresponds to a wavelength \(\lambda\) equal to 1 cm and an antenna spacing of 0.5 cm. An antenna array with \(N=32\) elements is deployed at the receiver side which, in the hybrid case, is grouped into \(L_r=8\) RF chains. In the following, three experiments are presented. In the first experiment, we demonstrate how well the theoretical analysis presented in Sect. 3.2 predicts the performance of the proposed scheme for the full snapshot reconstruction, and we validate the agreement between the measured total noise power \({\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}\) and its computed theoretical minimum under the optimal DFT-based phase-shift combiner matrix in the case of constant signals. Moreover, the performance improvement of the proposed solution over a random phase-shift combiner matrix is also demonstrated. In the subsequent experiments, we investigate the performance of the proposed full snapshot reconstruction scheme in the context of AoA estimation.

Fig. 4 Total noise power evaluation for a combiner matrix based on (i) the DFT matrix and (ii) random phase shifters, for \(\sigma _{nc}^2 = \{0, 0.01, 0.1\}\)

Noise power in reconstructed snapshots

Let us recall that, since the signal component power remains constant, the STNR values in (27) depend only on the corresponding total noise power \({\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}\), which, in the case of non-constant sources, depends on the additive noise power \(\sigma ^2\) plus a term that depends on the power \(\sigma _{nc}^2\) of the non-constant signals, treated as a noise term as well. In the special case of constant sources, the total noise term consists only of the additive noise. Hence, the power of the noise component \({\mathbf {w}}^{\prime \prime }(n)\) in the reconstructed snapshot [see (22) for non-constant sources or (26) for constant ones] is depicted in Fig. 4, versus the additive noise power \(\sigma ^2\), for \(\sigma _{nc}^2 = \{0, 0.01, 0.1\}\) (the power of \({\mathbf {s}}_{nc}\)), when the Fourier matrix is utilized for the phase shifters in the fully connected architecture.
It is worth mentioning that the behavior of the cost function remains the same if the partially connected architecture is adopted instead of the fully connected one. Furthermore, the theoretical minimum value in (36), (42) is also depicted in the same figure for the case where the Fourier matrix is considered. Indeed, when \(\sigma _{nc}^2 = 0\), the Fourier matrix achieves the minimum value and constitutes an optimal choice. As discussed in Sect. 3.2, the proposed phase shifters for both architectures, in the case where \(\sigma _{nc}^2 \ne 0\), follow the minimum value in the low SNR regime. Additionally, as \(\sigma _{nc}^2\) increases, the floor of the cost function starts appearing at smaller values of SNR. Also, inspecting Fig. 4, when random phase-shifter combiner matrices \(\mathbf {\mathcal {B}}(n)\) are considered in the cost function of (44), it is observed that the theoretical minimum value of \(\sigma ^2N\) cannot be achieved, because the inequality is strict in this case. Moreover, the use of a random phase-shifter combiner matrix shifts the total noise power values, although the shape of the curves remains the same. Note that the curves for the random-phase combiner illustrate the average values of \({\hbox {Tr}}\{{\mathbf {C}}_{{\mathbf {w}}^{\prime \prime }}\}\) over 500 different random combiners.

Application of full snapshot reconstruction

In this section, the AoA estimation performance achieved after applying the proposed scheme is evaluated versus the additive noise SNR (see Figs. 5, 6) and versus the number of snapshots used for the covariance matrix estimation (see Figs. 7, 8). In the first two experiments, the proposed scheme, PRE+MUSIC, is compared with H-MUSIC [23] on the same partially and fully connected hybrid antenna arrays, when the Fourier matrix is utilized, and with MUSIC when the conventional antenna array is used. In the last experiment, demonstrated in Fig. 9, the PRE+MUSIC performance is evaluated for both the DFT-based combiner matrix and a combiner matrix of exponentials with random phases. It is noted that, in the case of PRE+MUSIC (as well as of H-MUSIC), T sub-snapshots are needed in order to reconstruct one full conventional snapshot as in (4). Thus, to make a fair comparison with the conventional MUSIC, we have assumed in the experiments that the number \(S_2\) of snapshots collected by MUSIC is T times the number of snapshots \(S_1\) used by the proposed scheme and by H-MUSIC. The performance measure that is used, i.e., the RMSE (root mean square error) [45] of the estimated AoAs, is defined as $$\begin{aligned} {\hbox {RMSE}} = \sqrt{\frac{\sum _{m_{{\mathrm{c}}}=1}^{M_{{\mathrm{c}}}}\sum _{l=1}^{L} ({\hat{\theta }}_{m_{{\mathrm{c}}}}(l) - \theta (l))^2}{LM_{{\mathrm{c}}}}}, \end{aligned}$$ where \({\hat{\theta }}_{m_{{\mathrm{c}}}}(l)\) is the estimate of the true AoA \(\theta (l)\) in the \({m_{{\mathrm{c}}}}{\mathrm{th}}\) Monte Carlo trial and \(M_{{\mathrm{c}}}\) is the total number of trials, which is set to 500. Finally, in order to estimate the desired AoAs, all schemes search the angular range from \({-}\,90^{\circ }\) to \(90^{\circ }\), which is discretized with a \(0.5^{\circ }\) step.
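For concreteness, the sketch below shows how reconstructed full snapshots could feed a standard MUSIC grid search over the stated angular grid, together with a single-trial version of the RMSE in (47). It is a generic, textbook-style MUSIC implementation, not the authors' code; for brevity, the snapshots are generated here directly from the conventional model (standing in for the output of the reconstruction scheme), and the number of snapshots and noise power are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

N, L, S = 32, 4, 20
true_deg = np.array([12.3, 28.1, 54.6, 62.8])
sigma2 = 0.01

def steering(deg, N):
    """Columns of the ULA response matrix for the given angles (degrees), half-wavelength spacing."""
    n = np.arange(N)[:, None]
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(deg))) / np.sqrt(N)

A = steering(true_deg, N)
S_mat = (rng.choice([1, -1], (L, S)) + 1j * rng.choice([1, -1], (L, S))) / np.sqrt(2)   # QPSK-like
W = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, S)) + 1j * rng.standard_normal((N, S)))
X_hat = A @ S_mat + W                            # stand-in for S reconstructed full snapshots

# Classical MUSIC on the (reconstructed) snapshots
R = X_hat @ X_hat.conj().T / S                   # sample covariance matrix
eigval, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
En = eigvec[:, :N - L]                           # noise subspace

grid = np.arange(-90.0, 90.0, 0.5)               # search grid with 0.5 degree step
G = steering(grid, N)
spectrum = 1.0 / np.linalg.norm(En.conj().T @ G, axis=0) ** 2

# The L largest local maxima of the pseudo-spectrum give the AoA estimates
peaks = [i for i in range(1, len(grid) - 1)
         if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
est_deg = np.sort(grid[sorted(peaks, key=lambda i: spectrum[i])[-L:]])

rmse = np.sqrt(np.mean((est_deg - np.sort(true_deg)) ** 2))   # single-trial analogue of (47)
print(est_deg, rmse)
```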
Fig. 5 RMSE of AoA estimation for the partially connected architecture, \(M_{{\mathrm{c}}} = 500\), \(S_1 = 20\), \(S_2 = 80\), \(\lambda = 1\,{\text {cm}}\) Fig. 6 RMSE of AoA estimation for the fully connected architecture, \(M_{{\mathrm{c}}} = 500\), \(S_1 = 20\), \(S_2 = 80\), \(\lambda = 1\,{\text {cm}}\)

In the context of the first two experiments (Figs. 5, 6, 7 and 8), it is first assumed that the full antenna array is organized into non-overlapping subarrays of the same size \(M = 4\), as in Fig. 2, with \(T = M = 4\), i.e., the number of sub-snapshots equal to the number of antennas in each subarray. The fully connected architecture, as in Fig. 3, where all antenna elements contribute to each RF chain, is considered in the same experiments as well, with \(L_r = 8\) and \(T = 4\) being the number of sub-snapshots required to reconstruct a full snapshot of size \(N = 32\). In particular, in Figs. 5 and 6 we demonstrate the RMSE of the three schemes (i.e., PRE+MUSIC, H-MUSIC and MUSIC) versus the SNR for \(\sigma _{nc}^2 = \{0, 0.01, 0.1\}\) and \(S_1=20\), \(S_2=80\), respectively. It can be observed that both schemes designed for hybrid arrays (i.e., PRE+MUSIC and H-MUSIC) have almost identical performance, which attains that of MUSIC for high SNR values. MUSIC has the best performance at very low SNRs; it is noted, however, that MUSIC is applicable only to conventional ULAs and, for the same observation time, it utilizes T times more full snapshots and is therefore able to obtain a better estimate of the covariance matrix.

Fig. 7 RMSE of AoA estimation for the partially connected architecture, \(M_{{\mathrm{c}}} = 500\), \(SNR = 20\,{\text {dB}}\), \(S_1 = 1:20\), \(S_2 = 4:80\) snapshots (\(S_2 = TS_1\) where \(T=4\)), \(\lambda = 1\,{\text {cm}}\) Fig. 8 RMSE of AoA estimation for the fully connected architecture, \(M_{{\mathrm{c}}} = 500\), \(SNR = 20\,{\text {dB}}\), \(S_1 = 1:20\), \(S_2 = 4:80\) snapshots (\(S_2 = TS_1\) where \(T=4\)), \(\lambda = 1\,{\text {cm}}\)

In the context of the second experiment (Figs. 7, 8), the RMSE performance versus the number of snapshots is evaluated for \(\sigma _{nc}^2 = \{0, 0.01, 0.1\}\) and \(SNR = 20\,{\text {dB}}\). As illustrated in Figs. 7 and 8, for both architectures and irrespective of the non-constant signal power \(\sigma _{nc}^2\), the classical MUSIC algorithm achieves better AoA estimation performance than the other schemes, as the number of snapshots it actually uses is larger. Moreover, the RMSE performance is similar for all schemes for \(S_1 \ge 8\) (equivalently, \(S_2 \ge 32\)), even though MUSIC captures a greater number of snapshots than the hybrid schemes. Also, concerning the source signal time variations (in both architectures), it is observed that the AoA estimation of all schemes is not affected and their behavior is approximately the same as in the case of constant sources. Additionally, in the last experiment in Fig. 9, the focus is solely on PRE+MUSIC, and the impact of a random phase-shift combiner matrix on the AoA estimation is depicted. Since the performance of the proposed scheme is only slightly affected by possible time variations of the source signals and by the hybrid architecture type, this experiment is conducted assuming only the fully connected architecture with constant-value sources.
As expected, the DFT-based design of the combiner matrix optimizes the performance of PRE+MUSIC. Indeed, the use of a random phase-shift matrix affects the cost function values with respect to the lower bound (as demonstrated in Fig. 4), which in turn degrades the AoA estimation performance as well, as shown in Fig. 9. This relates to the fact that the noise term in the reconstructed full snapshot is not white if \(\mathbf {\mathcal {B}}(n)\) is not unitary.

Fig. 9 RMSE of AoA estimation of PRE+MUSIC for a combiner matrix based on (i) the DFT and (ii) random phase shifters, \(M_{{\mathrm{c}}} = 500\), \(S_1 = 20\), \(S_2 = 80\), \(\lambda = 1\,{\text {cm}}\)

It should be stressed that the use of orthogonal beamforming matrices, e.g., the discrete Fourier matrix, achieves the optimal behavior of the performance metric (RMSE) in terms of AoA estimation. Non-orthogonal beamforming matrices, e.g., random beamforming matrices, can be utilized as well; nevertheless, the AoA estimation accuracy may then be considerably degraded in comparison with the optimal behavior, as shown in Fig. 9. Ultimately, it should be noted that, irrespective of the adopted hybrid architecture and the power of the non-constant signal component, the performance of PRE+MUSIC in terms of AoA estimation is not affected by the floors that appear in the involved cost function in Fig. 4, revealing that the discrete Fourier matrix is an appropriate choice for the design of the RF combiner.

In summary, we investigated the recoverability of a ULA snapshot for both partially and fully connected hybrid antenna arrays. The proposed scheme is able to recover the full snapshot, as if a conventional antenna array had been employed, for either time-varying or constant source signals. The phase shifters of the hybrid antenna array were designed with the aim of maximizing the STNR of the restored snapshot. Finally, typical simulation results were presented for the AoA estimation problem, confirming the efficacy of the proposed approach under the optimal solution.

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Footnote 1: In many radio communication standards, the involved signals exhibit fluctuations [43].

A. Alkhateeb, J. Mo, N. Gonzalez-Prelcic, R.W. Heath, MIMO precoding and combining solutions for millimeter-wave systems. IEEE Commun. Mag. 52(12), 122–131 (2014) J.A. Zhang, X. Huang, V. Dyadyuk, Y.J. Guo, Massive hybrid antenna array for millimeter-wave cellular communications. IEEE Wirel. Commun. 22(1), 79–87 (2015) R.W. Heath, N. Gonzalez-Prelcic, S. Rangan, W. Roh, A.M. Sayeed, An overview of signal processing techniques for millimeter wave MIMO systems. IEEE J. Sel. Top. Signal Process. 10(3), 436–453 (2016) K. Sakaguchi, T. Haustein, S. Barbarossa, E.C. Strinati, A. Clemente, G. Destino et al., Where, when, and how mmWave is used in 5G and beyond. IEICE Trans. Electron. 100(10), 790–808 (2017) S. Ghosh, D. Sen, An inclusive survey on array antenna design for millimeter-wave communications. IEEE Access 7, 83137–83161 (2019) F. Shu, Y. Qin, T. Liu, L. Gui, Y. Zhang, J. Li et al., Low-complexity and high-resolution DOA estimation for hybrid analog and digital massive MIMO receive array. IEEE Trans. Commun. 66(6), 2487–2501 (2018) M. Majidzadeh, A. Moilanen, N. Tervo, H. Pennanen, A. Tolli, M.
Latva-Aho, Partially connected hybrid beamforming for large antenna arrays in multi-user MISO systems, in 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC) (2017), pp. 1–6 X. Yu, J. Zhang, K.B. Letaief, Partially-connected hybrid precoding in mm-Wave systems with dynamic phase shifter networks. arXiv Preprint (2017). arXiv:170500859 R. Méndez-Rial, C. Rusu, N. González-Prelcic, A. Alkhateeb, R.W. Heath, Hybrid MIMO architectures for millimeter wave communications: phase shifters or switches? IEEE Access 4, 247–267 (2016) M. Xiao, S. Mumtaz, Y. Huang, L. Dai, Y. Li, M. Matthaiou et al., Millimeter wave communications for future mobile networks. IEEE J. Sel. Areas Commun. 35(9), 1909–1935 (2017) I. Ahmed, H. Khammari, A. Shahid, A. Musa, K.S. Kim, E. De Poorter et al., A survey on hybrid beamforming techniques in 5G: architecture and system model perspectives. IEEE Commun. Surv. Tutor. 20(4), 3060–3097 (2018) L. Li, F. Chen, J. Dai, Separate DOD and DOA estimation for bistatic MIMO radar. Int. J. Antennas Propagat. 2016, 1 (2016) S. Ejaz, M.A. Shafiq, Comparison of spectral and subspace algorithms for FM source estimation. Prog. Electromagn. Res. 14, 11–21 (2010) I. El Ouargui, S. Safi, M. Frikel, Minimum array elements for resolution of several direction of arrival estimation methods in various noise-level environments. J. Telecommun. Inf. Technol. 2, 87–94 (2018) R. Cao, B. Liu, F. Gao, X. Zhang, A low-complex one-snapshot DOA estimation algorithm with massive ULA. IEEE Commun. Lett. 21(5), 1071–1074 (2017) Y. Han, B. Cao, W. Dong, Q. Fang, W. Zhang, An improved mode-music algorithm for DOA estimation of coherent sources based on hybrid array, in 12th International Conference on Signal Processing (ICSP) (2014), pp. 358–362 Y. Khmou, S. Safi, M. Frikel, Comparative study between several direction of arrival estimation methods. J. Telecommun. Inf. Technol. 1, 41–48 (2014) T.B. Lavate, V. Kokate, A. Sapkal, Performance analysis of MUSIC and ESPRIT DOA estimation algorithms for adaptive array smart antenna in mobile communication, in 2nd International Conference on Computer and Network Technology (ICCNT) (2010), pp. 308–311 F.M. Han, X.D. Zhang, An ESPRIT-like algorithm for coherent DOA estimation. IEEE Antennas Wirel. Propag. Lett. 4(1), 443–446 (2005) R. Roy, T. Kailath, ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 37(7), 984–995 (1989) T.J. Shan, M. Wax, T. Kailath, On spatial smoothing for direction-of-arrival estimation of coherent signals. IEEE Trans. Acoust. Speech Signal Process. 33(4), 806–811 (1985) S.U. Pillai, B.H. Kwon, Forward/backward spatial smoothing techniques for coherent signal identification. IEEE Trans. Acoust. Speech Signal Process. 37(1), 8–15 (1989) S.F. Chuang, W.R. Wu, Y.T. Liu, High-resolution AoA estimation for hybrid antenna arrays. IEEE Trans. Antennas Propag. 63(7), 2955–2968 (2015) M. Trigka, C. Mavrokefalidis, K. Berberidis, An effective preprocessing scheme for DoA estimation in hybrid antenna arrays, in 25th International Conference on Telecommunications (ICT) (2018), pp. 127–131 M. Trigka, C. Mavrokefalidis, K. Berberidis, AoA estimation scheme for fully-connected hybrid architecture antenna arrays, in 13th European Conference on Antennas and Propagation (EuCAP) (2019), pp. 1–5 Z. Ni, J.A. Zhang, K. Yang, F. Gao, J.
An, Estimation of multiple angle-of-arrivals with localized hybrid subarrays for millimeter wave systems. IEEE Trans. Commun. 68, 3 (2019) R. Zhang, J. Zhou, J. Lan, B. Yang, Z. Yu, A high-precision hybrid analog and digital beamforming transceiver system for 5G millimeter-wave communication. IEEE Access. 7, 83012–83023 (2019) C. Qin, J.A. Zhang, X. Huang, K. Wu, Y.J. Guo, Fast angle-of-arrival estimation via virtual subarrays in analog antenna array. IEEE Trans. Wirel. Commun. 19, 6425–6439 (2020) K. Wu, W. Ni, T. Su, R.P. Liu, Y.J. Guo, Recent breakthroughs on angle-of-arrival estimation for millimeter-wave high-speed railway communication. IEEE Commun. Mag. 57(9), 57–63 (2019) K.C. Hwang, A modified Sierpinski fractal antenna for multiband application. IEEE Antennas Wirel. Propag. Lett. 6, 357–360 (2007) E. Guariglia, Entropy and fractal antennas. Entropy 18(3), 84 (2016) E. Guariglia, Spectral analysis of the weierstrass-mandelbrot function, in 2017 2nd International Multidisciplinary Conference on Computer and Energy Science (SpliTech) (IEEE, New York, 2017), pp. 1–6 X. Zheng, Y.Y. Tang, J. Zhou, A framework of adaptive multiscale wavelet decomposition for signals on undirected graphs. IEEE Trans. Signal Process. 67(7), 1696–1711 (2019) C. Puente-Baliarda, J. Romeu, R. Pous, A. Cardama, On the behavior of the Sierpinski multiband fractal antenna. IEEE Trans. Antennas Propag. 46(4), 517–524 (1998) E. Guariglia, Primality, fractality, and image analysis. Entropy 21(3), 304 (2019) J. Anguera, C. Borja, C. Puente, Microstrip fractal-shaped antennas: a review, in The 2nd European Conference on Antennas and Propagation, EuCAP 2007. IET (2007), pp. 1–7 E. Guariglia, Harmonic Sierpinski gasket and applications. Entropy 20(9), 714 (2018) S.R. Best, A discussion on the significance of geometry in determining the resonant behavior of fractal and other non-Euclidean wire antennas. IEEE Antennas Propag. Mag. 45(3), 9–28 (2003) E. Guariglia, S. Silvestrov, Fractional-wavelet analysis of positive definite distributions and wavelets on \(\mathscr {D}^{\prime }(\mathbb{C} )\), in Engineering Mathematics II (Springer, Berlin, 2016), pp. 337–353 C. Zhou, Y. Gu, S. He, Z. Shi, A robust and efficient algorithm for coprime array adaptive beamforming. IEEE Trans. Veh. Technol. 67(2), 1099–1112 (2017) E. Björnson, L. Sanguinetti, H. Wymeersch, J. Hoydis, T.L. Marzetta, Massive MIMO is a reality-What is next? Five promising research directions for antenna arrays. Digit. Signal Proc. 94, 3–20 (2019) O. El Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, R.W. Heath, Spatially sparse precoding in millimeter wave MIMO systems. IEEE Trans. Wirel. Commun. 13(3), 1499–1513 (2014) R. Dinis, A. Palhau, A class of signal-processing schemes for reducing the envelope fluctuations of CDMA signals. IEEE Trans. Commun. 53(5), 882–889 (2005) A.W. Marshall, Inequalities: Theory of Majorization and Its Applications (Academic Press, London, 1979) S. Ren, X. Ma, S. Yan, C. Hao, 2-D unitary ESPRIT-like direction-of-arrival (DoA) estimation for coherent signals with a uniform rectangular array. Sensors 13(4), 4272–4288 (2013) Not applicable in this section. This research is co-financed by Greece and the European Union (European Social Fund- ESF) through the Operational Programme 'Human Resources Development, Education and Lifelong Learning' in the context of the project 'Strengthening Human Resources Research Potential via Doctorate Research' (MIS-5000432), implemented by the State Scholarships Foundation (IKY). 
This work is also supported by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation under Project INFRASTRUCTURES/1216/0017 IRIDA. Department of Computer Engineering and Informatics, University of Patras, Campus Rio, 26504, Patra, Greece: Maria Trigka, Christos Mavrokefalidis & Kostas Berberidis. All authors participated in the development of the new methods as well as in the design and execution of the experiments and the interpretation of the obtained results. The authors also jointly contributed to the writing of the manuscript. All authors read and approved the final manuscript. Correspondence to Maria Trigka. Trigka, M., Mavrokefalidis, C. & Berberidis, K. Full snapshot reconstruction in hybrid architecture antenna arrays. J Wireless Com Network 2020, 243 (2020). https://doi.org/10.1186/s13638-020-01863-6
PrepAnywhere: 8.7 Mid Chapter Review, Calculus and Vectors (Nelson)
Find the x- and y-intercepts for each of the following lines (a worked example for the first of these appears after this problem set):
$\vec{r} = (3, 1) + t(-3, 5), t\in \mathbb{R}$
$x = -6 + 2s, y = 3 - 2s, s\in \mathbb{R}$
Two lines $L_1: \vec{r} = (5, 3) + p(-4, 7), p \in \mathbb{R}$ and $L_2: \vec{r} = (5,3) + q(2, 1), q \in \mathbb{R}$, intersect at the point with coordinates $(5, 3)$. What is the angle between $L_1$ and $L_2$?
Determine the angle that the line with equation $\vec{r} = t(4, -5), t\in \mathbb{R}$, makes with the x-axis and the y-axis.
Determine a Cartesian equation for the line that passes through the point $(4, -3)$ and is perpendicular to the line $\vec{r} = (2, -3) + t(5, -7), t\in \mathbb{R}$.
Determine an equation in symmetric form of a line parallel to $\displaystyle \frac{x -3}{3} = \frac{y -5}{-4} = \frac{z+7}{4}$ and passing through $(0, 0, 2)$.
Determine parametric equations of the line passing through $(1, 2, 5)$ and parallel to the line passing through $K(2, 4, 5)$ and $L(3, -5, 6)$.
Determine the direction angles (the angles the direction vector makes with the x-axis, y-axis, and z-axis) for the line with parametric equations $x = 5 + 2t, y = 12-8t, z= 5+ 7t, t\in \mathbb{R}$.
Determine an equation in symmetric form for the line passing through $P(3, -4, 6)$ and having direction angles $60^{\circ}$, $90^{\circ}$, and $30^{\circ}$.
Write an equation in parametric form for each of the three coordinate axes in $\mathbb{R}^3$.
The two lines with equations $\vec{r} = (1, 2, -4) + t(k + 1, 3k + 1, k -3), t \in \mathbb{R}$, and $x = 2 -3s, y = 1 -10s, z = 3 - 5s, s\in \mathbb{R}$, are given. a. Determine a value for $k$ if these lines are parallel. b. Determine a value for $k$ if these lines are perpendicular.
Determine the perimeter and area of the triangle whose vertices are the origin and the x- and y-intercepts of the line $\displaystyle \frac{x -6}{3} = \frac{y + 8}{-2}$.
The Cartesian equation of a line is given by $3x + 4y - 24 = 0$. a. Determine a vector equation for this line. b. Determine the parametric equations of this line. c. Determine the acute angle that this line makes with the x-axis. d. Determine a vector equation of the line that is perpendicular to the given line and passes through the origin.
Determine the scalar, vector, and parametric equations of the line that passes through the points $A(-4, 6)$ and $B(8, 4)$.
Determine a unit vector normal to the line defined by the parametric equations $x = 1 + 2t$ and $y = -5 - 4t$.
Determine the parametric equations of each line: the line that passes through $(-5, 10)$ and has a slope of $-\frac{2}{3}$; the line that passes through $(1, -1)$ and is perpendicular to the line $(x, y) = (4, -6) + t(2, -2)$; the line that passes through $(0, 7)$ and $(0, 10)$.
Given the line $(x, y, z) = (12, -8, -4)+ t(-3, 4, 2)$, a. determine the intersections with the coordinate planes, if any, b. determine the intercepts with the coordinate axes, if any, c. graph the line in an x-, y-, z-coordinate system.
For each of the following, determine vector, parametric, and, if possible, symmetric equations of the line that passes through $P_0$ and has direction vector $\vec{d}$: $P_0 = (1, -2, 8), \vec{d} = (-5, -2, 1)$; $P_0 = (3, 6, 9), \vec{d} = (2, 4, 6)$; $P_0 = (0, 0, 6), \vec{d} = (-1, 5, 1)$; $P_0 = (2, 0, 0), \vec{d} = (0, 0, -2)$.
Determine a vector equation of the line that passes through the origin and is parallel to the line through the points $(-4, 5, 6)$ and $(6, -5, 4)$.
Determine the parametric equations of the line that passes through $(0, -8, 1)$ and through the midpoint of the segment joining $(2, 6, 10)$ and $(-4, 4, -8)$.
The symmetric equations of two lines are given. Show that these lines are parallel: $\displaystyle L_1: \frac{x - 2}{1} = \frac{y + 3}{3} = \frac{z - 4}{-5}$ and $\displaystyle L_2: \frac{x +1}{-3} = \frac{y -2}{-9} = \frac{z + 1}{15}$.
Does the point $D(7, -1, 8)$ lie on the line with symmetric equations $\displaystyle \frac{x - 4}{3} = \frac{y + 2}{1} = \frac{z - 6}{2}$? Explain.
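As a check of method for the first intercept question above, one possible working (a solution sketch, not part of the exercise list itself): the line $\vec{r} = (3, 1) + t(-3, 5)$ has parametric form $x = 3 - 3t$, $y = 1 + 5t$. Setting $x = 0$ gives $3 - 3t = 0$, so $t = 1$ and $y = 1 + 5(1) = 6$; the y-intercept is $(0, 6)$. Setting $y = 0$ gives $1 + 5t = 0$, so $t = -\frac{1}{5}$ and $x = 3 - 3\left(-\frac{1}{5}\right) = \frac{18}{5}$; the x-intercept is $\left(\frac{18}{5}, 0\right)$.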
Screening and identification of BP100 peptide conjugates active against Xylella fastidiosa using a viability-qPCR method. Aina Baró, Esther Badosa, Laura Montesinos, Lidia Feliu, Marta Planas, Emilio Montesinos & Anna Bonaterra (ORCID: orcid.org/0000-0002-6755-5313). Xylella fastidiosa is one of the most harmful bacterial plant pathogens worldwide, causing a variety of diseases with a huge economic impact on agriculture and the environment. Although it has been extensively studied, there are no therapeutic solutions to suppress disease development in infected plants. In this context, antimicrobial peptides represent promising alternatives to traditional compounds due to their activity against a wide range of plant pathogens, their low cytotoxicity, a mode of action that makes the development of resistance more difficult, and their suitability for being expressed in plants. Peptide conjugates derived from the lead peptide BP100 and fragments of cecropin, magainin or melittin were selected and tested against the plant pathogenic bacterium X. fastidiosa. In order to screen the activity of these antimicrobials, and because of the fastidious nature of the pathogen, a methodology consisting of a contact test coupled with a viability-quantitative PCR (v-qPCR) method was developed. The nucleic acid-binding dye PEMAX was used to selectively quantify viable cells by v-qPCR. In addition, the primer set XF16S-3, amplifying a 279 bp fragment, was selected as the most suitable for v-qPCR. The performance of the method was assessed by comparing v-qPCR viable cell estimates with conventional qPCR and plate counting. When cells were treated with peptide conjugates derived from BP100, the differences observed between methods suggested that, in addition to cell death due to the lytic effect of the peptides, there was an induction of the viable but non-culturable state in cells. Notably, a contact test coupled to v-qPCR allowed fast and accurate screening of antimicrobial peptides, and led to the identification of new peptide conjugates active against X. fastidiosa. Antimicrobial peptides active against X. fastidiosa have been identified using an optimized methodology that quantifies viable cells without a cultivation stage, avoiding the underestimation or false negative detection of the pathogen due to the viable but non-culturable state, as well as the overestimation of the viable population observed with qPCR. These findings provide new alternative compounds to be tested in planta for the control of X. fastidiosa, and a methodology that enables the fast screening of a large number of antimicrobials against this plant pathogenic bacterium. Xylella fastidiosa (Xf) is a xylem-limited Gram-negative bacterium transmitted by insect vectors that causes economically important plant diseases. Pierce's disease of grapevine and Citrus Variegated Chlorosis were the most important diseases caused by Xf worldwide for many years [1, 2]. However, Xf recently emerged as a potential threat to European agriculture [3]. The outbreak of Xf in 2013 in Apulia (Italy) in oleander, almond and olive trees [4], and the detections in Corsica and Provence-Alpes-Côte d'Azur (France), Alicante and the Balearic Islands (Spain), Tuscany (Italy), and Vila Nova de Gaia (Portugal) [5, 6] constitute an important change in its geographical distribution and add new host plants.
The measures adopted in Europe are eradication of infected plants to reduce inoculum sources and prevent the spread of the bacterium, the use of insecticides to control the vector population, and the use of pathogen-free plant material. However, these methods have been only partially successful, and different strategies are being explored to find alternatives for the management of diseases caused by Xf [7]. Direct strategies to control disease in affected hosts, based on chemical compounds like antibiotics, copper compounds or biofilm inhibitors, applied either by sprays, drench or endotherapy, have failed to cure infected trees [8]. Therefore, there is a need for new and safe compounds for Xf disease management. Among the new compounds, antimicrobial peptides (AMPs) can be considered good candidates because they display activity against a wide range of plant pathogens, exhibit low cytotoxicity, and have a mode of action that makes the development of resistance more difficult [9,10,11,12]. In particular, a few AMPs with bactericidal activity against Xf have been reported, including cecropin A and B, magainin I and II, Shiva-1, indolicidin, PGQ, dermaseptin and gomesin [13,14,15]. Most of these peptides cause disruption of the cytoplasmic membrane, but some of them have also been described to interact with intracellular targets, inhibiting key processes [16]. Within our search for new AMPs to control plant diseases, we reported peptide conjugates incorporating units of the lead peptide BP100 and fragments of cecropin A, magainin II or melittin, which were specifically designed to be expressed in plants [17, 18]. In fact, the peptide conjugate BP178 was successfully expressed in rice endosperm, conferring resistance against some plant pathogens [19]. This demonstrates that these peptides can be produced by the plant itself, which could overcome the difficulties that other treatment strategies have shown in reaching the vascular location of Xf. This family of peptides exhibited high antibacterial activity in vitro against plant pathogenic bacteria such as Xanthomonas axonopodis pv. vesicatoria, Pseudomonas syringae pv. syringae and Erwinia amylovora, showed low haemolytic activity, and was able to control infections in plant hosts caused by these bacteria or even by phytoplasmas [10, 17, 20]. In the case of Xf, only BP178 has been tested in vitro, showing high antibacterial activity against a collection of Xf strains. Its lytic activity upon Xf cells was identified as the main mode of action, with pore formation and disorganization of the cell membrane [21]. Therefore, we envisaged that this biological activity profile makes peptide conjugates derived from BP100 good candidates to be tested against Xf. Since any sequence modification may influence their antimicrobial activity against Xf, as well as their stability and toxicity, a wide range of peptides must be screened to obtain the best candidates to be tested in plants. Currently, there is a need for rapid, reliable and efficient methods for the screening of antimicrobial compounds against Xf, given the difficulty of culturing most strains and their slow growth [22]. Conventional methods, such as the disk-diffusion test, broth or agar dilution assays, as well as antimicrobial gradient and automated instrument systems, rely on measuring growth inhibition using culture-based approaches that are time consuming and unreliable for Xf [23].
Moreover, these methods may overestimate the antimicrobial activity of the tested compounds against Xf, considering that its cells can enter a viable but non-culturable (VBNC) state in response to harsh environments [24, 25]. Several methods have already been proposed to analyze only viable cells, such as ATP bioluminescence [23], direct microscopy or flow cytometry with LIVE/DEAD® BacLight™ [26] or DAPI combined with SYTOX Green [27], or 5-cyano-2,3-ditolyl tetrazolium chloride (CTC), which evaluates respiratory activity [28]. However, these methods are not able to specifically quantify viable target cells in mixed cultures. Alternative non-culture-based methods would be more suitable to evaluate the efficacy of new compounds to inhibit Xf. Nucleic acid-based techniques such as quantitative PCR (qPCR) are commonly used to quantify total specific bacteria, as they can specifically detect target cells. All the methods mentioned require specific sample preparation, training and equipment. Nevertheless, qPCR is particularly popular because it has been used for a wide range of applications and has become standard equipment in research laboratories, so qPCR-based methods can be performed almost anywhere. A limitation of qPCR is the overestimation of viable cells: because DNA can persist for an extended period after cell death [29], the DNA of both viable and dead cells is amplified. In contrast, viability quantitative PCR (v-qPCR) allows the quantification of only viable cells. Generally, v-qPCR uses the nucleic acid-binding dyes propidium monoazide (PMA or PMAxx) or ethidium monoazide (EMA) in combination with qPCR for selectively detecting and enumerating viable cells. Both PMA and EMA bind to free DNA and to the DNA of dead cells with damaged membranes. In addition, EMA binds to the DNA of non-metabolically active cells with an intact membrane, preventing its subsequent amplification by qPCR. The PEMAX reagent is an optimized mixture of PMA (≥20 μM) and EMA (< 10 μM) [30]. This low level of EMA accumulates inside non-metabolically active cells that still have an intact cell membrane, while it is eliminated from viable cells through active transport. Therefore, after treatment with PEMAX, only the DNA of viable cells remains unlabelled and is detected by qPCR [31, 32]. This methodology has already been used for foodborne pathogenic bacteria in different matrices [33], to monitor biological control agents in field studies [34] and, in the case of Xf, to differentiate viable cells under stressing conditions [35, 36]. Nevertheless, v-qPCR using the PEMAX reagent has never been optimized as a screening methodology for the identification of antimicrobials active against Xf. For the development of a v-qPCR assay for the detection and quantification of Xf, it is necessary to find a species-specific molecular marker suitable for use with PEMAX. Different primer pairs and probes specific for Xf detection have been described and validated [37,38,39,40]. Primer pairs normally show different amplification efficiencies and levels of sensitivity depending on the target site, the nature of the primers and the length of the amplicon. Moreover, suppression of dead-cell amplification after PEMAX treatment also depends on the length of the DNA fragment amplified by qPCR, as the probability of dye binding increases in longer target regions [34]. The aim of the present work was to find peptide conjugates derived from BP100 that are highly active against Xf in vitro.
To accomplish this purpose, a screening methodology based on a contact test combined with a v-qPCR method was first optimized for representative strains of Xf, allowing an accurate and reliable evaluation of the antimicrobial activity of peptides. Afterwards, a set of peptide conjugates derived from BP100, designed for expression in plant systems and active against other plant pathogens, was selected and screened using the optimized methodology to evaluate its antimicrobial activity against Xf.
Amplification efficiency and sensitivity of qPCR assays
Eight TaqMan-based qPCR assays amplifying three different gene sequence targets of Xf and producing different amplicon lengths were checked in order to study their suitability for v-qPCR (Table 1). Standard curves of the eight qPCR assays showed good linearity over a 7-log range, from 1 × 10² to 1 × 10⁸ CFU/ml, with R² values over 0.99 (Additional file 1). Table 1 shows the amplification efficiency and the sensitivity of each qPCR assay. All amplification efficiencies were higher than 94%, and did not vary between qPCR assays targeting the same gene, except for the elongation factor Tu (EFTu) gene, for which they ranged from 95 to 98%. The three qPCR assays amplifying part of the 16S rRNA gene (XF16S) displayed the best amplification efficiencies (97%).
Table 1: Primers and TaqMan probes used for qPCR analysis, amplification efficiency and sensitivity analysis.
Regarding sensitivity, the cycle threshold (CT) values of the eight qPCR assays at 5 × 10³ CFU/ml differed considerably, ranging from 27.2 to 33.9. Again, the three assays amplifying part of the XF16S gene displayed the highest sensitivity. In all cases, qPCR assays amplifying larger DNA fragments (311, 279 and 307 bp) were less sensitive than the ones generating shorter amplicons. The XF16S-3 design (279 bp) showed sensitivity values comparable to the ones obtained with the qPCR assays amplifying fragments of less than 100 bp. Because longer amplicons are more suitable when using PEMAX, the qPCR assay with XF16S-3 was selected for further experiments.
V-qPCR
The effect of different PEMAX concentrations on the amplification of DNA targets of viable and dead Xf subsp. fastidiosa strain Temecula (Xff) cells was studied by determining the signal reduction value (SR), defined as the difference in CT value between PEMAX- and non-PEMAX-treated samples (ΔCT) (Additional file 2). On viable cells, no significant differences in SR values were observed when using a PEMAX concentration of 2.5, 5, 7.5 or 50 μM. However, at 10 μM, the SR value was significantly higher compared to 5 and 7.5 μM. Regarding dead cells, significant differences were observed between PEMAX concentrations, with 7.5 and 10 μM being the concentrations with the highest SR values. Based on these results, a PEMAX concentration of 7.5 μM was chosen for further experiments, as it was the lowest concentration that allowed a clear discrimination between viable and dead cells. Standard curves were performed using cell suspensions of Xff, Xf subsp. pauca (Xfp) and Xf subsp. multiplex (Xfm) to evaluate the suitability of the v-qPCR method to quantify viable cells. PEMAX- and non-PEMAX-treated standard curves showed good linearity between 1 × 10² and 1 × 10⁷ CFU/ml, with R² values above 0.985. In all cases, a shift of around 2 cycles was observed when comparing PEMAX-treated and non-treated samples from the same subspecies (Fig. 1). This variation was already observed in the optimization of the PEMAX concentration (Additional file 2).
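To illustrate how the standard-curve parameters reported in this section (slope, R² and amplification efficiency) and the conversion of sample CT values into log CFU/ml can be computed, a minimal Python sketch is given below. The CT values and the detection-limit handling are hypothetical and only show the calculation; the efficiency formula is the one given later in the Methods.

import numpy as np

# Hypothetical standard-curve data: log10(CFU/ml) of the dilution series and measured CT values
log_cfu = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)
ct      = np.array([36.1, 32.8, 29.4, 26.1, 22.7, 19.4, 16.0])

# Linear fit: CT = slope * log10(CFU/ml) + intercept
slope, intercept = np.polyfit(log_cfu, ct, 1)

# Coefficient of determination (R^2) of the fit
pred = slope * log_cfu + intercept
r2 = 1 - np.sum((ct - pred) ** 2) / np.sum((ct - ct.mean()) ** 2)

# Amplification efficiency, E(%) = (10^(-1/slope) - 1) * 100
efficiency = (10 ** (-1.0 / slope) - 1) * 100

# Back-calculate log10(CFU/ml) for an unknown sample from its CT,
# applying the detection limit used in this work (CT above 37.5 means below detection)
def log_cfu_from_ct(sample_ct, limit_ct=37.5):
    if sample_ct > limit_ct:
        return None  # below the detection limit of the assay
    return (sample_ct - intercept) / slope

print(f"slope={slope:.2f}, R2={r2:.3f}, efficiency={efficiency:.1f}%")
print("viable cells (log CFU/ml):", log_cfu_from_ct(24.5))

With a slope close to -3.3 this yields an efficiency near 100%, while steeper slopes (larger absolute value) give lower efficiencies such as the ~80% values reported for the v-qPCR curves.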
Amplification efficiencies of all standard curves were around 80% and values were comparable among subspecies (88.1% without PEMAX and 80% with PEMAX for Xff, 83.2% without PEMAX and 77.1% with PEMAX for Xfp, and 79.5% without PEMAX and 80.8% with PEMAX for Xfm). In dead cells, samples ranging from 1 × 10³ to 1 × 10⁷ CFU/ml treated with PEMAX displayed CT values higher than 37.5, indicating, as expected, an inhibition of their amplification (Fig. 1). In mixtures of viable cells (from 1 × 10³ to 1 × 10⁷ CFU/ml) and dead cells (fixed quantity of 1 × 10⁶ CFU/ml), standard curves showed a high correlation coefficient (R² values above 0.99) when samples were treated with PEMAX (Fig. 1). The calculated amplification efficiencies were similar to the ones obtained in the standard curves of only viable cells (92.9% for Xff, 80.9% for Xfp and 80.8% for Xfm), indicating that the presence of DNA from dead cells does not interfere with the amplification of DNA from viable cells.
Fig. 1: Relationship between CT values and cell concentration in three strains of Xf using conventional qPCR (white symbols) and v-qPCR (black symbols), for viable cells, dead cells, and a mixture of viable cells with a fixed concentration of dead cells (1 × 10⁶ CFU/ml). TaqMan-based qPCR assay done with XF16S-3 primers. The thin line represents the detection limit at CT = 37.5.
Antimicrobial activity of peptide conjugates derived from BP100: optimization of the contact test
To develop a method for screening the antimicrobial activity of AMPs against Xf, different contact test conditions were studied, namely Xff cell concentration, contact test time and AMP concentration. Loss of viability after the contact test was assessed by v-qPCR and compared with plate counting (culturable cells) and qPCR (total cells). The antimicrobial activity of BP178 at 1.6, 12.5 and 50 μM was studied against two different Xff cell concentrations in a 3 h contact test (Fig. 2). At all peptide concentrations, Xff cells showed a higher loss of viability (expressed as log reduction of cell viability) at 1 × 10⁷ CFU/ml than at 1 × 10⁸ CFU/ml, indicating a significant effect of the initial cell concentration (P < 0.001). Specifically, treatment of Xff cells at 1 × 10⁷ CFU/ml with BP178 at 1.6 μM caused a significant reduction of viable cells (1.5 log), while no significant reduction was observed at 1 × 10⁸ CFU/ml. At 12.5 μM, a significant reduction of viable cells was observed in both cases, with a 3 log reduction at 1 × 10⁷ CFU/ml and a 2 log reduction at 1 × 10⁸ CFU/ml. At a peptide concentration of 50 μM, both Xff cell concentrations exhibited a similar reduction of viability of around 3 log.
Fig. 2: Effect of peptide BP178 on viability of Xff strain Temecula estimated by v-qPCR at different peptide concentrations (1.6, 12.5 and 50 μM). Two assays were performed at different initial Xff cell concentrations, 1 × 10⁷ CFU/ml (circles) and 1 × 10⁸ CFU/ml (squares). The exposure time to the peptide was 3 h. Xff concentration in non-treated cells was estimated after 3 h by v-qPCR. The detection limit of the v-qPCR is 3 log CFU/ml. Values are the means of three replicates, and error bars represent the standard deviation of the mean. Lowercase letters correspond to the means comparison of viable cells at 1 × 10⁷ CFU/ml. Capital letters correspond to the means comparison of viable cells at 1 × 10⁸ CFU/ml.
Means sharing the same letters are not significantly different (P < 0.05), according to the Tukey's test The effect of peptide BP178 on viability and culturability at different contact test times (from 1.5 to 48 h) was studied (Fig. 3). BP178 at 50 μM reduced (P < 0.001) viable and culturable cells of Xff in all exposure times. There were significant differences (P < 0.001) between v-qPCR (viable cells) and plate counting (culturable cells) in Xff suspensions mixed with BP178. In the case of v-qPCR, a progressive viability reduction occurred up to a contact test time of 6 h (between 2 and 3.5 log reduction), practically reaching the detection limit of the method (3 log CFU/ml). In contrast, the culturability of the cells mixed with the peptide dropped abruptly to levels near the detection limit (1.5 log CFU/ml) after 1 h of incubation. Xff cells maintained similar levels of both, cell viability (v-qPCR) and cell culturability, in the non-treated control (without BP178) over 48 h. Effect of peptide BP178 on viability and culturability of Xff strain Temecula at different exposure times. Cell viability was estimated by v-qPCR (black symbols) and cell culturability by plate counting (grey symbols). Initial cell concentration was 1 × 107 CFU/ml and the BP178 concentration used was 50 μM. Non-treated controls (NTC) were also performed by adding the corresponding volume of sterile distilled water. The dash line represents the detection limit of v-qPCR, whereas the normal line indicates the detection limit of the plate counting technic. Values are the means of three replicates, and error bars represent the standard deviation of the mean. Lowercase letters correspond to the means comparison of viable cells treated with BP178 (black triangles). Capital letters correspond to the means comparison of culturable cells treated with BP178 (grey triangles). Means sharing the same letters are not significantly different (P < 0.05), according to the Tukey's test The AMP concentration was evaluated by assessing the loss of viability of Xff suspensions mixed with BP178 at 3.1, 6.2, 12.5, 25 and 50 μM at a contact time of 3 and 24 h (Fig. 4). In this experiment, there were also significant differences (P < 0.001) between v-qPCR (viable cells), plate counting (culturable cells) and qPCR (total cells) in Xff suspensions in the presence of different concentrations of BP178. In the case of v-qPCR, a similar reduction of cell viability was observed (around 3 log reduction) for all BP178 concentrations in both contact test times (3 and 24 h), and only differences between the incubation periods were observed at 3.1 μM and 50 μM (P < 0.001). Comparing peptide concentrations, a progressive viability reduction occurred in the contact test of 3 h and significant differences were observed between 0, 3.1 and 12.5 μM. The culturability of Xff cells was also reduced without significant differences in almost all peptide concentrations at both incubation periods (around 5 log cell culturability reduction). The minimal bactericidal concentration (MBC) of BP178 was determined to be 3.1–6.25 μM, which corresponds to 10–20 μg/ml. Effect of peptide BP178 on viability and culturability of Xff strain Temecula at different peptide concentrations. Total cell concentration was estimated by conventional qPCR (white symbols), cell viability was estimated by v-qPCR (black symbols), and cell culturability by plate counting (grey symbols). Exposure times of 3 h (triangles) and 24 h (circles) were used. 
Cell concentration was 1 × 107 CFU/ml in both cases. The dash line represents the detection limit of v-qPCR, whereas the normal line indicates the detection limit of the plate counting technic. Values are the means of three replicates, and error bars represent the standard deviation of the mean. Letters correspond to the means comparison of viable cells treated with BP178 at exposure time of 3 h. Means sharing the same letters are not significantly different (P < 0.05), according to the Tukey's test Screening of peptide conjugates derived from BP100 against Xff Eleven selected conjugates derived from BP100 were tested against Xff at 3.1 and 12.5 μM (Table 2). Peptide tag54, an epitope tag designed for being used in peptide detection and purification, was included as a control to check the effect upon Xff cells of a peptide that previously showed no antimicrobial activity against other plant pathogenic bacteria [17]. BP100 and BP178 were also assayed for comparison purposes. Peptide tag54 did not show antimicrobial activity against Xff cells. BP100 led to a log reduction (N0/N) of cell viability of 1.39 at 3.1 μM and of 3.27 at 12.5 μM. At 3.1 μM, all peptide conjugates showed antimicrobial activity with a log reduction of cell viability between 0.91 and 2.95. At 12.5 μM, a higher effect was observed for all peptides, leading to an Xff viability reduction between 1.33 and 3.79 log. BP171, BP175 and BP178 were highly active, with a 3.5–4 log reduction of cell viability at 12.5 μM of peptide concentration and 2.5–3 log reduction at 3.1 μM. BP170, BP176 and BP180 were moderately active, with a 3–3.5 log reduction at 12.5 μM and 2–3 log reduction at 3.1 μM. BP181, BP188, BP192, BP198 and BP213 were low active, with less than 3 log reduction at 12.5 μM and less than 2 log reduction at 3.1 μM. Highly and moderately active peptides have a significantly different log reduction compared to low active peptides (according to the mean separation test). The highly active peptides were conjugates incorporating a BP100 unit and a melittin or a magainin fragment. In particular, BP171, containing BP100 and a melittin fragment, led to a log reduction of 3.79 at 12.5 μM and of 2.91 at 3.1 μM, while BP175 and BP178, which incorporate BP100 and a magainin fragment, led to a log reduction of 3.52 and 3.54 at 12.5 μM, and of 2.65 and 2.95 at 3.1 μM, respectively. Table 2 Screening of conjugate peptides derived from BP100 against Xff, compared with BP100 and tag54, by means of a contact test combined with v-qPCR method The antibacterial activity of BP171 and BP198 was also evaluated at different peptide concentrations by v-qPCR and plate counting (Additional file 3). Results showed that, as expected, both methods classified BP171 as highly active against Xff and BP198 as a low active peptide against this pathogen. After BP171 treatment, the Xff viable and culturable cells reached the detection limit in both technics, whereas BP198 was not able to completely inactivate Xff, neither using plate counting nor v-qPCR. The difficulties in managing diseases caused by Xf have stimulated the search for novel bactericides. Several antimicrobial compounds, such as toxins, antibiotics, phenolic acids and AMPs, have been reported to be active against several Xf strains with MBC or minimal inhibitory concentrations (MIC) ranging from 8 to 800 μM [14, 15, 41,42,43,44]. Interestingly, the AMPs magainin I and II, and dermaseptin have been reported to display low MIC or MBC values against Xf [14]. 
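As a compact illustration of how the log-reduction values and activity classes used in this screening can be derived from v-qPCR estimates, the following Python sketch applies one reading of the thresholds stated in the Results above; the peptide names are taken from the study, but the CFU/ml values are hypothetical examples.

import math

def log_reduction(n0_cfu_per_ml, n_cfu_per_ml):
    """Log reduction of viable cells, log10(N0/N), from v-qPCR population estimates."""
    return math.log10(n0_cfu_per_ml / n_cfu_per_ml)

def activity_class(red_12_5, red_3_1):
    """Assign an activity class from the log reductions at 12.5 and 3.1 uM
    (one reading of the thresholds stated in the Results)."""
    if red_12_5 >= 3.5 and red_3_1 >= 2.5:
        return "highly active"
    if red_12_5 >= 3.0 and red_3_1 >= 2.0:
        return "moderately active"
    return "low active"

# Hypothetical viable-cell estimates (CFU/ml): non-treated control and two treatments
n0 = 1e7
examples = {"BP171": (1.6e3, 1.2e4),   # (viable cells at 12.5 uM, viable cells at 3.1 uM)
            "BP198": (2.5e5, 2.0e6)}

for peptide, (n_12_5, n_3_1) in examples.items():
    r_high, r_low = log_reduction(n0, n_12_5), log_reduction(n0, n_3_1)
    print(f"{peptide}: {r_high:.2f} log at 12.5 uM, {r_low:.2f} log at 3.1 uM -> "
          f"{activity_class(r_high, r_low)}")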
In addition, AMPs such as the lytic peptides LIMA-A and cecropin B have been expressed in grapevines resulting in a successful control of Xf in greenhouse conditions [45, 46]. So, their antibacterial activity and their availability for being expressed in plants make AMPs good candidates for the control of this plant pathogen, either using transgenic expression or other delivery strategies such as endotherapy. In the present study, a set of 11 peptide conjugates derived from the lead peptide BP100 and a fragment of cecropin, magainin or melittin, previously reported by our group as active against several plant pathogenic bacteria and with low toxicity to eukaryotic cells (moderate to low hemolysis) (Table 3) [10, 17, 20], were screened for their activity against Xf. One of these peptide conjugates (BP178) has been produced in transgenic rice [18, 19], and has also been tested in vitro against Xf and other plant pathogens showing high antibacterial activity [17, 21]. Table 3 MIC against different plant pathogens and hemolysis percentage displayed by the peptide conjugates derived from BP100 tested in this study A methodology consisting of a contact test combined with a v-qPCR method was developed in the present work in order to screen the activity of AMPs against the fastidious bacterium Xf. The v-qPCR method has the advantage to allow the quantification of viable cells, including VBNC and culturable cells, without a cultivation stage. Other studies used a variety of culture-dependent methods to evaluate the antimicrobial activity against Xf. Nevertheless, for the screening of large amounts of antimicrobial peptides, these methodologies are time consuming, as they require incubation periods of several days for Xf to grow. While the v-qPCR can be performed in less than 1 day, about 4–7 days are required for the agar plate dilution assay, for the contact test followed by plate counting or for the agar disc diffusion method [14, 15, 42, 44]. The v-qPCR has been efficiently used for the monitoring of microorganisms with biotechnological potential [31], and for the detection and quantification of human pathogens in food [47] or in the environment [48]. In particular, in the case of Xf, different PCR assays are commonly used for the detection and quantification, and v-qPCR methods in combination with EMA or PMAxx reagents were also reported to discriminate between viable and membrane-damaged cells [35, 36]. In our work the PEMAX reagent, an optimized mixture of EMA and PMA that has been previously proven to be efficient in discriminating viable from dead cells in a biological control agent was used [34]. The effect of PEMAX concentration was optimized in order to detect only viable Xf cells. PEMAX at 7.5 μM was the lowest concentration showing good results as inhibited the DNA amplification of dead Xff cells at 1 × 107 CFU/ml while viable cells were not affected. Lower concentrations, 2.5 and 5 μM, were less effective in preventing DNA amplification of high cell concentrations, probably due to the lack of available reagent. In contrast, higher concentrations of PEMAX, 10 and 50 μM, caused a slight toxicity effect on Xff cells. In other studies, a PEMAX concentration of 50 μM has been reported to be the optimal to detect Lactobacillus and Salmonella using a v-qPCR assay [31, 34]. However, it has also been described that excessive concentrations of these two dyes causes toxicity in some microorganisms [36, 49]. 
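The trade-off just described, where too little dye leaves dead-cell DNA amplifiable and too much dye becomes toxic to viable cells, can be expressed as a simple selection rule. The Python sketch below is only illustrative: the signal-reduction values and the two thresholds are assumptions, not data from this study.

# Hypothetical signal-reduction (SR = delta CT) values per PEMAX concentration (uM)
sr_dead   = {2.5: 6.0, 5: 8.5, 7.5: 13.0, 10: 13.5, 50: 12.0}   # SR measured on heat-killed cells
sr_viable = {2.5: 0.3, 5: 0.4, 7.5: 0.6, 10: 1.8, 50: 1.5}      # SR measured on viable cells

# Illustrative decision rule: the lowest concentration that strongly suppresses the
# dead-cell signal (large SR) without noticeably affecting viable cells (small SR)
DEAD_SR_MIN, VIABLE_SR_MAX = 10.0, 1.0   # assumed thresholds, for illustration only

candidates = [c for c in sorted(sr_dead)
              if sr_dead[c] >= DEAD_SR_MIN and sr_viable[c] <= VIABLE_SR_MAX]
optimal = candidates[0] if candidates else None
print(f"Selected PEMAX concentration: {optimal} uM")   # 7.5 uM with these example numbers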
Therefore, the concentration of PEMAX has to be optimized for each species in order to allow DNA amplification of only viable cells, without being toxic to the bacteria. In order to choose the best conditions for v-qPCR, eight Xf-specific qPCR assays with different amplification sites and lengths were compared. All assays showed acceptable and similar efficiency percentages that were in agreement with those observed in other qPCR designs used for the quantification of Xf [50, 51]. In contrast, the assays differed in the sensitivity values. The primer pair (XF16S-3), chosen for further assays with a length of 279 bp, exhibited sensitivity values similar to those previously reported [50, 51]. As it has been described [34, 52], the amplicon length is an important parameter to consider when optimizing a v-qPCR because there is a higher probability of dye intercalation in cell-free DNA when using long length amplicons compared to short length amplicons. The reliability of the v-qPCR when using XF16S-3 as primer pair and a PEMAX concentration of 7.5 μM was evaluated and validated on viable and dead cells, and on a mixture of viable and dead cells of Xff, Xfp and Xfm. v-qPCR method developed showed acceptable amplification efficiencies and correlation coefficient values. Although both, the use of longer amplicons and the presence of PEMAX, decreased the sensitivity of the qPCR, a CT value corresponding to 1 × 103 CFU/ml viable cells was determined as the detection limit of the developed v-qPCR. To set up the conditions of the contact test, the initial cell concentration of Xff, the contact test time and the peptide concentration were optimized. Considering other studies, which employed Xf cell concentrations ranging from 1 × 105 to 1 × 108 CFU/ml to test antimicrobial compounds [14, 41, 42], a Xff concentration of 1 × 107 CFU/ml was chosen as it brought out the effect of the AMP at low concentration and enabled a viability reduction of 4 log before reaching the detection limit of the v-qPCR method (1 × 103 CFU/ml). At 1 × 108 CFU/ml, a different viability reduction pattern was observed. As described, antimicrobial peptides (and other antimicrobials) are quenched during interaction with target cells due to their binding to the cell through time. Because there is a threshold number of peptide molecules necessary to kill a target cell, the viability reduction is not only dependent on the antimicrobial concentration but also on the target bacteria concentration [53, 54]. Regarding the contact test time, a lethality percentage around 99.8% was observed after 3 h of contact test with the peptide. Against Xf, a contact test time of 18 h has been employed [15] but as reported, the bactericidal effect of an antimicrobial compound is time-dependent and a lethality percentage of 90% after 6 h is equivalent to a 99.9% of dead cells after 24 h [23]. Therefore, in our study a contact test of 3 h allows fast screening of AMPs against Xf with similar results than longer contact test times. Finally, peptide concentrations of 3.1 and 12.5 μM were the ones selected for the screening of AMPs because it was envisaged that they would allow the classification of the peptides according to their activity against Xf. The suitability of the v-qPCR method to estimate the viability of Xf cells after the contact test was studied by comparing it with qPCR and plate counting onto PD2 agar plates. No significant differences were observed between the three methods in untreated Xff cells. 
However, in cells treated with the peptide conjugates derived from BP100, qPCR overestimated viable cells (around 4 log units) compared to v-qPCR, indicating the presence of DNA from dead cells, and plate counting underestimated the viability of Xff (around 2 log units). While previous research has focused on determining the activity of antimicrobial compounds using methodologies that report information about the culturable cells [14, 15, 41, 42], v-qPCR offers the possibility of determining the amount of viable cells, irrespective of their culturability. In the present study, it was observed that viability of Xf cells was progressively reduced after the treatment with AMPs, while culturability dropped abruptly to levels near the detection limit. This fact is probably due to the formation of metabolically active persistent cells (VBNC state). It has been described that Xf cells enter in the VBNC state when they are exposed to inhibitory concentrations of antimicrobial compounds [25, 35, 55, 56]. In other plant pathogens, VBNC cells have been reported to have the capacity to revert its physiological state and acquire again its virulence, being widely responsible for recalcitrant infections [57, 58]. Taking this into account, the quantification of the whole viable fraction (including VBNC and culturable cells) is necessary to determine the antimicrobial activity of compounds because the presence of these cells can play a significant role in terms of defining their pathogenicity and epidemiology. Remarkably, the use of the above described contact test coupled with the v-qPCR for the screening of peptides allowed a rapid and reliable identification of sequences among the peptide conjugates derived from BP100 active against Xf, and their classification as: (i) highly active (BP171, BP175, BP178), (ii) mid active (BP170, BP176, BP180), and (iii) low active (BP181, BP188, BP192, BP198, BP213). The best peptides BP171, which incorporates BP100 and a melittin fragment, and BP175 and BP178, which result from the conjugation of BP100 with a magainin II fragment, showed higher activity than BP100. Other AMPs, such as gomesin, dermaseptin or magainin II, have been reported to be active against Xf (MIC or MBC of 4.5–9, 8–32 and 8–64 μg/ml, respectively) [14, 15]. Unfortunately, it is not possible to compare these activity values obtained using v-qPCR because the methods used in other works to assay the antibacterial activity were different. However, the activity values of BP171 and BP178 determined in our work using plate counting, which attained a MBC between 1.5 and 3.1 μM (~ 5–10 μg/ml) and 3.1 and 6.1 μM (~ 10–20 μg/ml) respectively, can be compared and are similar to the gomesin values, since the method used in both cases was a contact test followed by plate counting. This work has allowed the fast screening and identification of five new bactericidal peptide conjugates (BP171, BP175, BP170, BP176, BP180) active against Xf, in addition to the previously described BP178. All of them can be considered as candidates for the development of new agents to treat the plant diseases caused by this bacterium. The contact test combined with v-qPCR method has the advantage of quantifying only viable Xf cells, therefore the evaluation of the antimicrobial effect of AMPs is more precise. 
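A minimal sketch of this three-way comparison, and of how the VBNC fraction discussed above can be estimated from it, is given below; the population values are hypothetical, and the calculation simply subtracts the culturable population from the viable one (and the viable from the total) on a linear scale.

import math

# Hypothetical log10 population estimates (CFU/ml) after a 3 h contact test with a peptide
log_total      = 7.0   # conventional qPCR: DNA of viable plus dead cells
log_viable     = 4.0   # v-qPCR: culturable plus VBNC cells
log_culturable = 1.7   # plate counting: culturable cells only

# Cells whose DNA is still amplified by qPCR but that are no longer viable
log_dead_signal = math.log10(10 ** log_total - 10 ** log_viable)

# Viable but non-culturable (VBNC) cells: viable by v-qPCR but not recovered on plates
log_vbnc = math.log10(10 ** log_viable - 10 ** log_culturable)

print(f"dead-cell DNA signal ~{log_dead_signal:.1f} log CFU/ml; "
      f"VBNC fraction ~{log_vbnc:.1f} log CFU/ml")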
Moreover, considering the European Union rules for quarantine organisms, and particularly for Xf, the method minimizes the risk of dissemination of the pathogen, as it allows working in more safe conditions (shorter periods of time with manipulating living cells), compared to the culture-based methods. Apart from testing AMPs and other antimicrobials against Xf in vitro, the method could also be used in plants, as the Xf population quantified in naturally infected olive trees and in artificially inoculated grapevines is around 107–108 CFU/ml [36, 59]. This would be of interest to confirm the antimicrobial activity of the AMPs against the pathogen in their hosts. In addition, the fact that these conjugates were designed to be expressed in plants extends their possible technological use by means of transgenic plant hosts producing peptides to kill the pathogen [45, 46, 60, 61]. Xf strains, growth conditions and DNA extraction Xff strain Temecula 1 ATCC 700964 [62], Xfp strain DD1 [63] and Xfm strain CFBP 8173 [64] were used. All strains were grown in BCYE agar [65] at 28 °C for 1 week and were stored in PD2 broth [66] with 30% glycerol at − 80 °C. Cell suspensions were prepared in sterile succinate-citrate-phosphate (SCP) buffer [40] at 1 × 108 CFU/ml (optical density at 600 nm being 0.3, confirmed by colony counts) and diluted to appropriate concentrations. DNA was extracted using GeneJET Genomic DNA Purification Kit (Thermo Fisher Scientific, Waltham, USA) following the specific protocol for Gram-negative bacterial suspensions. Briefly, 200 μl were centrifuged at 15,900 x g during 10 min, the pellet was resuspended in 180 μl of digestion solution and 20 μl of proteinase K. Samples were incubated at 56 °C for 30 min, then 20 μl of RNase solution was added and another incubation step of 10 min at room temperature was carried out. Next, 200 μl of lysis solution were added, followed by 400 μl of 50% ethanol, and all the volume was transferred to a GeneJET Genomic DNA Purification Column. Two washes were performed using two different wash buffers, and finally DNA was re-suspended with 30 μl of PCR-grade water. DNA was stored at − 20 °C for further analysis. qPCR design: evaluation of the amplification efficiency and sensitivity qPCR assays were conducted using the primer pairs and TaqMan probe sets described in Table 1. Primer3Plus software was used to obtain amplicons with different length that shared the described forward primers and probes but with new different reverse primers. All qPCR were performed using 96-well plates containing 12.5 μl 2X TaqMan Universal PCR Master Mix (Thermo Fisher Scientific, USA), final concentrations of 400 nM for each forward and reverse primer and of 150 nM for TaqMan probe with dye, 8.46 μl of PCR-grade water and 2 μl of template DNA in each well. Serial 10-fold dilutions of Xff covering a 7-log range (from 1 × 102 to 1 × 108 CFU/ml) were prepared in sterile SCP buffer and each concentration was performed in triplicate. DNA extraction from each suspension was performed as described above. All reactions were performed in duplicate and carried out in a QuantStudio 5 real-time PCR system (Applied Biosystems, Foster City, CA, USA). qPCR conditions were 95 °C for 10 min for enzyme activation followed by denaturation at 95 °C for 1 min, and extension and annealing at 59 °C for 1 min. The qPCR was run for 45 cycles. Standard curves were developed to check the sensitivity and efficiency of the qPCR assays. 
CT values were plotted against the logarithm of the initial number of CFU/ml to determine the amplification efficiency of each design using the following equation. $$ \mathbf{E}\left(\%\right)=\left({10}^{-1/ slope}-1\right)\times 100 $$ V-qPCR: optimization of the PEMAX concentration A stock solution of 2000 μM of PEMAX reagent (GenIUL, Terrassa, Spain) was prepared and stored as described [32]. To optimize the concentration of PEMAX, 20 μl of PEMAX stock solutions at 25, 50, 75, 100 or 500 μM were added into 180 μl of viable or dead Xff cell suspension, both adjusted to 1 × 107 CFU/ml in SCP. Dead cells were obtained by heating the cell suspension at 95 °C for 10 min (ThermoMixer F1.5; Eppendorf, Hamburg, Germany), and the suspension was plated on PD2 agar and incubated for 1 week at 28 °C to check the absence of growth. PEMAX treated samples were thoroughly mixed and incubated for 30 min in the dark at room temperature with manual shaking every 10 min. Next, samples were photoactivated with the PhAST Blue photoactivation system (GenIUL, Barcelona, Spain) for 15 min with intensity of 100%. Each PEMAX treated sample was transferred into DNA low-binding 1.5 ml tube (Sarstedt, Nümbrecht, Germany) and collected by centrifugation at 15,900 x g for 10 min. A washing step to eliminate the excess of PEMAX was required, so supernatant was eliminated and 500 μl of sterile SCP buffer was added. Samples were collected under the same centrifugation conditions. Non-PEMAX treated samples, prepared with 20 μl of SCP buffer plus 180 μl of viable and dead cells, were also analysed. DNA extraction of all samples was carried out as described above and qPCR was performed according to the conditions described initially and using the primer pair XF16S forward and probe and its reverse 3 (XF16S-3). Signal reduction (SR), defined as the difference between cycle threshold values (ΔCT) of non-PEMAX treated and PEMAX treated samples, was calculated to determine the effect of PEMAX concentration on DNA amplification suppression by qPCR assay. Three biological replicates were performed. Evaluation of v-qPCR with Xff, Xfp and Xfm strains The v-qPCR sensitivity and amplification efficiency was evaluated with standard curves. Suspensions of viable and dead Xff, Xfp and Xfm cells were prepared in SCP as described above. Samples were prepared to cover a 7-log range (from 1 × 102 CFU/ml to 1 × 108 CFU/ml) in Xff and a 6-log range (from 1 × 102 CFU/ml and up to 1 × 107 CFU/ml) in Xfp and Xfm. Mixture suspensions were also prepared, with the same concentration range of viable Xf cells in addition to a constant number of dead cells (1 × 106 CFU/ml). From each suspension, 180 μl were treated with PEMAX at 7.5 μM according to the procedure described previously, and 180 μl were used as non-PEMAX treated sample. DNA extraction was performed as described in both PEMAX treated and non-PEMAX treated samples. qPCR was performed as described previously, each reaction per duplicate and using XF16S-3 as the primer pair. Standard curves were generated plotting CT values obtained against the logarithm of the initial number of CFU/ml, and the amplification efficiency was calculated as described above. Evaluation of v-qPCR for antimicrobial activity assessment The contact test conditions were optimized for the antimicrobial activity assessment of AMPs against Xf. Xff cell concentration, contact test time and peptide concentration were evaluated. The peptide BP178 (Table 2) was used [19]. 
Lyophilized BP178 was solubilized in sterile Milli-Q water to a final concentration of 1 mM, filter sterilized through a 0.22 μm pore filter and 10X stock solutions of the desired concentrations were prepared in sterile distilled water. Suspensions of Xff cells prepared in sterile SCP buffer were used, and 20 μl of each BP178 stock concentration were mixed in 1.5 ml tubes with 160 μl of the corresponding Xff cell suspension and incubated for 1.5, 3, 6, 24, or 48 h depending on the experiment. After the incubation period, 20 μl of PEMAX or SCP buffer were added to the samples for v-qPCR or qPCR, respectively, before DNA extraction. In a first experiment, suspensions of Xff cells at 1 × 107 and 1 × 108 CFU/ml were tested to determine the differences in log reduction when using BP178 final concentrations of 1.6, 12.5 and 50 μM. A second experiment was used to evaluate different contact test times in order to select the most suitable one for the assays. Additionally, in a third experiment, BP178 concentrations of 3.1, 6.2, 12.5, 25 and 50 μM were incubated with Xff at 1 × 107 CFU/ml for 3 and 24 h to determine the most informative peptide concentrations to screen the peptides. A non-treated control (Xff cells without peptide) using SCP buffer instead of peptide was also included in all the experiments, and three replicates for each Xff cell concentration, contact test time and peptide concentration were used. Xff log10 CFU/ml of the initial cell suspensions and of the contact tests, with or without the peptide, was determined using qPCR (total cells), v-qPCR (viable cells) and plate counting (culturable cells). For assessment of total, viable and culturable cells, aliquots were taken from the contact test wells at given times. For qPCR and v-qPCR, DNA was isolated from two individual samples of 200 μl of each contact test, in the case of v-qPCR, previously the sample was treated with PEMAX at final concentration of 7.5 μM as described above. DNA extraction, qPCR analysis using the TaqMan-based qPCR assay XF16S-3 and quantification were performed as described above. The amount of total and viable cells was obtained by interpolating the CT values from each sample against the respective standard curve and expressed as log10 CFU/ml. For plate counting, each sample was serially diluted, and appropriate dilutions were seeded onto PD2 agar plates. Plates were incubated at least for 1 week at 28 °C, colonies were counted and CFU/ml value was determined for each sample. The MBC of BP178 was determined in order to compare with other described peptides. The MBC corresponds to the lowest concentration where no growth was detected in plate counting after exposure to the peptide in the contact test. A set of peptide conjugates derived from peptide BP100 reported by our group as AMPs (Table 2) [17] were selected to be screened against Xff. The epitope tag peptide tag54 was used as a negative control and BP100 and BP178 was included for comparison purposes. All AMPs were evaluated as described above, at final concentrations of 3.1 and 12.5 μM and against a suspension of Xff at 1 × 107 CFU/ml using a 3 h contact test. After the incubation period, Xff population level was assessed using v-qPCR as described above. Loss of viability after the contact test was calculated and expressed as logarithmic reduction of Xff population. Three replicates for each AMP and concentration were used. 
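As a small illustration of the plate-counting and MBC logic described in this section, the following Python sketch converts colony counts into CFU/ml and reports the MBC as the lowest tested concentration with no detectable growth; the colony counts, dilution factors and plated volume are hypothetical.

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    """CFU/ml from a colony count on a plate seeded with a diluted sample."""
    return colonies * dilution_factor / plated_volume_ml

# Hypothetical plate counts after the contact test at each peptide concentration (uM):
# (colonies counted, dilution factor of the plated sample)
plate_data = {0: (152, 1e4), 3.1: (87, 1e1), 6.2: (0, 1e0), 12.5: (0, 1e0),
              25: (0, 1e0), 50: (0, 1e0)}

counts = {conc: cfu_per_ml(col, dil) for conc, (col, dil) in plate_data.items()}

# MBC: lowest peptide concentration with no growth detected on the plates
mbc = min((c for c in counts if c > 0 and counts[c] == 0), default=None)
print("CFU/ml per concentration:", counts)
print("MBC (uM):", mbc)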
After the screening, BP171 and BP198, showing different antibacterial activity against Xff, were selected to assess the performance of the v-qPCR methodology for the quantification of the viable Xff population. Peptide concentrations of 1.5, 3.1, 6.2, 12.5 and 25 μM were incubated with Xff at 1 × 107 CFU/ml for 3 h to determine viable and culturable cells by v-qPCR and plate counting respectively. Three replicates for each AMP and concentration were used. To test the significance of the effect of PEMAX concentration in the suppression of DNA amplification (signal reduction) on dead and viable cells of Xff, a one-way analysis of variance (ANOVA) was performed. To test the significance of the parameters studied to set up the conditions of the contact test (initial Xff cell concentration, contact test time and peptide concentration) and of the cell quantification method, a two or three-way ANOVA were performed. To test the effect of AMPs on Xff viability reduction a one-way ANOVA was performed. In all cases, means were separated according to the Tukey's test at a P value of ≤0.05. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Xf: VBNC: Viable-but-non-culturable CTC: 5-cyano-2,3-ditolyl tetrazolium chloride qPCR: v-qPCR: Viable quantitative PCR PMA or PMAxx: Propidium monoazide Ethidium monoazide CFU: Colony forming units C T : Cycle threshold Xff: Xylella fastidiosa subsp. fastidiosa Xfp: Xylella fastidiosa subsp. pauca Xfm: Xylella fastidiosa subsp. multiplex SR: Signal reduction ΔCT : Difference between cycle threshold values MBC: Minimal bactericidal concentration MIC: Minimal inhibitory concentration PD2: Pierce Disease 2 BCYE: Buffered charcoal yeast extract SCP: Succinate-citrate-phosphate Garcia AL, Torres SCZ, Heredia M, Lopes SA. Citrus responses to Xylella fastidiosa infection. Plant Dis. 2012;96:1245–9. Purcell A. Paradigms: examples from the bacterium Xylella fastidiosa. Annu Rev Phytopathol. 2013;51:339–56. Sicard A, Zeilinger AR, Vanhove M, Schartel TE, Beal DJ, Daugherty MP, et al. Xylella fastidiosa: insights into an emerging plant pathogen. Annu Rev Phytopathol. 2018;56:181–202. Saponari M, Boscia D, Nigro F, Martelli GP. Identification of DNA sequences related to Xylella fastidiosa in oleander, almond and olive trees exhibiting leaf scorch symptoms in Apulia (southern Italy). J Plant Pathol. 2013;95:668. Denancé N, Legendre B, Briand M, Olivier V, de Boisseson C, Poliakoff F, et al. Several subspecies and sequence types are associated with the emergence of Xylella fastidiosa in natural settings in France. Plant Pathol. 2017;66:1054–64. Landa BB, Marco-Noales E, López MM. Enfermedades causadas por la bacteria Xylella fastidiosa. 1st ed. Almeria: Cajamar Caja Rural; 2017. EFSA PLH Panel (EFSA Panel on Plant Health). Treatment solutions to cure Xylella fastidiosa diseased plants. EFSA J. 2016;14:4456. EFSA PLH Panel (EFSA Panel on Plant Health), Bragard C, Dehnen-Schmutz K, DiSerio F, Gonthier P, Jacques MA, Jaques Miret JA, et al. Scientific Opinion on the effectiveness of in planta control measures for Xylella fastidiosa. EFSA J. 2019;17:5666. Monroc S, Badosa E, Besalú E, Planas M, Bardají E, Montesinos E, et al. Improvement of cyclic decapeptides against plant pathogenic bacteria using a combinatorial chemistry approach. Peptides. 2006;27:2575–84. Badosa E, Ferre R, Planas M, Feliu L, Besalú E, Cabrefiga J, et al. 
We are thankful to Dr. Ester Marco from IVIA (Spain) and Dr. Maria Saponari from CNR-IPSP (Italy) for providing X. fastidiosa strains.

This work was supported by grants from the Spanish Ministerio de Ciencia, Innovación y Universidades (RTI2018-099410-B-C21 and E-RTA INIA 2017-00004-C06-03), from the European Union H2020 programme XF-ACTORS (code 727987) and from the Organización Interprofesional del Aceite de Oliva Español (041/18). A. Baró was the recipient of research grant 2018 FI B00334 (Secretaria d'Universitats i Recerca, Departament d'Economia i Coneixement, Generalitat de Catalunya, Spain, European Union). The funding bodies did not play a role in the design of the study, in the collection, analysis and interpretation of data, or in writing the manuscript.

Laboratory of Plant Pathology, Institute of Food and Agricultural Technology-CIDSAV-XaRTA, University of Girona, Girona, Spain: Aina Baró, Esther Badosa, Laura Montesinos, Emilio Montesinos & Anna Bonaterra. LIPPSO, Department of Chemistry, University of Girona, Girona, Spain: Lidia Feliu & Marta Planas.

ABa performed the main experiments and data analyses, and wrote the paper. EB, EM and ABo designed the research, analysed the data, and contributed to the writing process. MP and LF provided the AMPs. LM assisted in laboratory experiments and data analyses. EM, ABo, EB, MP and LF obtained the financial support. All authors read, reviewed and approved the final manuscript. Correspondence to Anna Bonaterra.

Standard curves of the eight qPCR assays studied. Each set of primer pairs amplifying the same target gene with different amplicon lengths is shown in the same box: (A) 16S rRNA gene (XF16S), (B) EFTu gene (EFTu), and (C) conserved hypothetical protein (HL). The equations of the curves are shown for each primer pair.

Signal reduction (SR) in the qPCR of viable (white) and dead (grey) cells after treatment with different PEMAX concentrations. SR is the difference between the CT values of non-PEMAX-treated and PEMAX-treated cells. Cell concentration was 1 × 10⁷ CFU/ml. The TaqMan-based qPCR assay XF16S-3 (amplicon length of 279 bp) was used for this experiment. Results are shown as the mean of three independent replicates, and error bars represent the standard deviation of the means. Lowercase letters correspond to the means comparison of SR in viable cells; capital letters correspond to the means comparison of SR in dead cells. Means sharing the same letter are not significantly different (P < 0.05) according to Tukey's test.

Effect of peptides BP171 (circles) and BP198 (triangles) on viability and culturability of Xff strain Temecula at different peptide concentrations. Cell viability was estimated by v-qPCR (black symbols), and cell culturability by plate counting (grey symbols). An exposure time of 3 h and a cell concentration of 1 × 10⁷ CFU/ml were used in both cases. The dashed line represents the detection limit of v-qPCR, whereas the solid line indicates the detection limit of the plate counting technique. Values are the means of three replicates, and error bars represent the standard deviation of the mean.
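The signal reduction (SR) measure described in the legend above is simply a difference between two quantification cycle (CT) values, averaged over replicates. As a minimal illustration of that arithmetic, the sketch below uses invented CT values and an assumed sign convention (CT of PEMAX-treated cells minus CT of untreated cells); none of the numbers are data from the study.

```python
from statistics import mean, stdev

# Hypothetical CT values from three independent qPCR replicates (not study data).
ct_untreated = [24.1, 24.3, 24.0]  # cells amplified without PEMAX treatment
ct_pemax = [30.2, 29.8, 30.5]      # the same sample treated with PEMAX before qPCR

# Signal reduction per replicate. The legend defines SR only as the difference
# between the two CT values; the order of subtraction used here (treated minus
# untreated, so signal suppression gives a positive SR) is an assumption of this sketch.
sr_values = [t - u for t, u in zip(ct_pemax, ct_untreated)]

print("SR per replicate:", [round(v, 2) for v in sr_values])
print(f"mean SR = {mean(sr_values):.2f}, SD = {stdev(sr_values):.2f}")
```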
Baró, A., Badosa, E., Montesinos, L. et al. Screening and identification of BP100 peptide conjugates active against Xylella fastidiosa using a viability-qPCR method. BMC Microbiol 20, 229 (2020). https://doi.org/10.1186/s12866-020-01915-3

Keywords: Viability-qPCR, PEMAX, Plant pathogens
CommonCrawl
Bouncing Ball Sound In Ceiling Why is it important to establish relationship between sounds and letters? What is a grapheme? 2. /ŋ/ is a nasal sound made in the soft palate. This is because every time sound strikes a surface some of its energy is absorbed. eSpecial Needs is the premier provider of adaptive equipment and therapy products including special needs strollers, toys, clothing, furniture and learning tools. Whether you're looking for a simple two-channel system to compliment your turntable, or upgrading your home theater setup, Klipsch has the solution you need to take your music and movies to the next level. Physics4Kids. Explore all wood ceiling systems. Count the [ ʃ ] sounds in sentence: "English Shoppers Short of Cash". 70 m, it bounces o a concrete oor and rebounds to a Similarly, Bouncing up to 1. I suppose he could be bouncing the waves off of something, but a careful examination of your property would clue you in as to what would give a clean bounce (say, a metal sheet f. shape ( "ball" ). 1, Windows Phone 8. Browse our wide selection of beautiful and practical pendant lighting for under $100. Why would hdmi to tv make loud noise? How much can you be fined for making loud noise?. Eat all the reb ball on the screen to finnish level. 1 ID 5 Achievements 6 History 7 Issues 8 Trivia 9 Gallery 10 Notes 11 References Slime blocks can be broken instantly, regardless of. They dictate a certain rhythm. It is friction with the ground each time the ball hits the ground that dampens the energy the ball had. SUPERBOUNCES (Oct 2007, Dec 2009) A popular physics demonstration is to drop two balls together, say a tennis ball on top of a basketball. Reflected sound strikes a surface before reaching the receiver. The Hot line is performance value at. If the collision between ball and wall is perfectly elastic, then all the incident energy and momentum is reflected, and the ball bounces back with the same speed. The tennis ball then bounces with about 16 times more energy, by bouncing off the basketball, than it does by bouncing directly off the floor. Bouncing Balls. Best Buy provides online shopping in a number of countries and languages. Kick the green ball straight ahead and down to the lower level. This phenomena occurs when sound waves bounce back and forth between the walls, ceiling, and floor, making conversations difficult. Visit us today for the widest range of Ceiling Fans products. It drips during and after we shower, but there is no water damage on the ceiling. You get organic sounding delays that are all synced together to match a groove. Shop both residential and commercial speakers and amps online from any of our brands: SoundTube, Soundsphere, SolidDrive, PhaseTech, and Induction Dynamics. There are two major classes of sounds traditionally distinguished by phoneticians in any language. Bell Sound Effects - different bell Sounds like Bycicle Bell Sounds. If the parabola is closed off by another curved surface, it is called an ellipse. Mid-range: Surround sound systems that cost between $400 and $1,000 are great for larger rooms or users who care a lot about high-end audio formats. Ball sounds in mp3 download for free and without registration. Screen Serve. 11 - Double-layer Starry Sky LED Pattern Magic Ball light KTV Flash Room Stage Lighting Sound Control Stroboscopic Laser Laser Light 2020. If ball hits a candy located on the ceiling, you got a point. The ability to know where you are in space is called proprioception; children with. 
The Flextracks offers flexible and straight ceiling mounted curtain track. Acceleration Ball Bounce Simple Draw Acceleration Color. This is a big hint that the collisions that the ball is experiencing are elastic collisions. Hold the hair dryer very steady and watch as the ping-pong ball floats in the stream of air. Bouncing a tennis ball or basketball against the wall or even on the driveway reverberates into the building the same way. When refrigerator turns on it sounds like a ping pong bouncing about 5-6 times. And to control the ball, jumping over the one stone to the other becomes harder. Outdoor Playhouses. Thread starter Sirsh. You can, however, fix a ticking or humming ceiling fan. Menards® offers a variety of ceiling fans in numerous finishes and colors for any room. Select Lighting & Ceiling Fans*. Long "i" can also come from the letter combination of i_e, igh, le, and plain old -y. Instances of ball lightning—glowing, electric orbs in the sky—have captivated and mystified us for centuries. Mason Industries has been a leader in the field of noise, vibration and seismic control for over 60 years. Each pair can then send their own sound ball back and forth in rapid-fire succession. Opening and closing of the mosquito trap mechanism. Grab a cue and take your best shot! Time your bounces to get through the obstacles!. Cool features: You can change the balls to bubbles, emojis, or even eyeballs! This is also the. How to Deaden Sounds in Big Rooms. ) If your ceiling's shape — a cathedral-type, for instance — poses a problem for installing in-ceiling or Atmos Enabled speakers, I'd check out. Features of puzzle and platform game genres are combined in Red Ball. Download Pong Video Game sounds 946 stock sound clips starting at $2. 2M people have watched this. Some of the designs can get a little tricky. Klipsch audio means being able to hear every lick, every drumbeat, every breathe, and every nuance of your music, movies, or TV with unmatched precision. The four sides, rather than two, help reduce sound like flutter echoes with extra diffusion but offer slightly less absorption, both desired characteristics depending on usage style. Download : https://goo. The original "Super Balls" got their amazing bounce ability from compressed rubber under thousands of pounds of pressure. Back! New cancerous sounds are here cunt bag nigga penis! (Suggest sum stuff you would want me to u. blare : make a loud unpleasant noise. The other sound I only heard about four times. Ernie Ball String Theory is a web series that explores the sonic origins of some of music's most innovative guitar players. Portable Audio & Video Accessories. When living in an historic building in IL. If we look back at its history, we will see that there were several foreign communities living in Moscow on a permanent basis. The sound could indicate many things — a busted pipe in a wall, under the floor, or even in the irrigation system. This will look like a bouncing ball. Frequency of Sound: Sound is the quickly varying pressure wave travelling through a medium. Sets And Reps – 3 sets of 15 reps. Some tiles are far better than others at absorbing sound so you need to be looking for ceiling tiles with a NRC (Noise Reduction Coefficient) rating of 0. At least one study has theorized that about half of all ball lightning sightings. It should be able to withstand repetitive hits by different kinds of balls. Bouncing Ball Outside Service Zone. Hit the golden circles to gain extra balls to launch. 1 Obtaining 1. 
This may be described as the sound of a ping-pong ball bouncing on the wall from the other side. Sennheiser MKH60 + MKH30 into SD Mixpre-3. Understand what the glass ceiling is, who it affects, why it exists and how you can help break down such barriers in your workplace. July 01, 2020 at 4:31 pm, Tired of the noise said: Bouncing basket balls,jumping. To soundproof a room effectively, you can use a combination of noise blocking and sound absorbing materials and techniques. Double click. You might try bouncing the ball outside on a very cold day, and then, using the same psi settings, record the height of bounces in a nice, warm house. They can be attached to the wall studs and a hat Since sound needs particles to travel, the isolated chamber slows it down. Due to manufacturer restrictions, all new (not including closeouts) items (including logo overruns) from Adams, Adidas, Ben Hogan, Callaway, Cleveland, Club Glove, Cobra, FootJoy, Mizuno, Never Compromise, Nike, Odyssey, PING, Sun Mountain, Taylormade. ! Check them out at brunospowersports. Add the edge value for the Ball. How sound travels through a ceiling. She reported a sound like pins or jacks coming from the ceiling. Shop FBL's Collection of Light Up Toys Including LED Wands, Flashing Toy Guns, Light Up Swords & More! Over 1000 Top Quality Light Up Novelties at Low Wholesale Prices. This is a big hint that the collisions that the ball is experiencing are elastic collisions. Bouncing Balls Full Screen Go Back 82 % 18 % PLAY GAME. ATS Acoustic Panels. Get today's news headlines from Business, Technology, Bollywood, Cricket, videos, photos, live news coverage and exclusive breaking news from India. For in-depth coverage, CNN provides special reports, video, audio, photo galleries, and interactive guides. The PC-1860 Ceiling Mount Speaker is of all metallic construction and ideal for use in a voice alarm system or BGM system. This is not usually a major problem, and the pipes will not necessarily suffer any damage from this rubbing, but it may be worth it to widen holes or otherwise insulate the pipes to prevent the noise since it is not technically "normal. For example, if the ball has eyes, start by creating that pattern first before moving on. Or something being bounced off the floor above. Bouncing Ball Sound In Ceiling. Large ceiling fan blade sizes are the most common incorporating 52" diameters up to 60". bounce: Players must bounce the ball on the floor as they run. Bouncing Ball Sound In Ceiling. I learn a lot, our next assignment is the flour sack. Why bouncing droplets are a pretty good model of quantum mechanics Robert Brady and Ross Anderson University of Cambridge Computer Laboratory JJ Thomson Avenue, Cambridge CB3 0FD, United Kingdom frobert. 1975, the attacking rat jumps or lies across the back of his opponent and attempts to bite the opposite flank. Top of the line soundbanks ready to help you evolve your sounds to the next level. Roll, jump, bounce and try not to get blown out! As the player, you try to complete 12 different levels, each filled with various deadly traps and mechanisms. Sound absorption, also known as sound treatment, is designed to reduce reverberation of sound within a room. Match the color of the balls and make them disappear in this fun match-3 game. so you have to figure out how to do it yourself. Balls ball balls balls ball balls! when ever you're ready you can spam your balls all over the screen like so As for the Ball class, don't use it as I wrote it, I left out tons of stuff you need. 
ANDRE HVAC International Inc. Will try to bounce runs too often. Catching the Ball. Hold the yardstick still and. Learn Bouncing-ball skills by watching tutorial videos about DAW Studio Setup and Design, Audacity: The Video Manual, Analog Tape Recording, Mixing Adam Pollard is back with an Advanced look at dance music vocal sound design. In general, every syllable has a vowel sound (although, as we saw in the last. Such as in the word "fry". same as your nut sack. Another way to prevent low-frequency sound bouncing is to interrupt it with a heavy but non-rigid surface. In English, there are two different sounds for the consonants "c" and "g. To bounce we can use CSS property top and position:absolute for the ball inside the field with position:relative. Our basement has never been the same. Ways of making sounds. Filled Protein Energy Balls on Bounce… By signing up to our newsletter, you agree for your data to be stored following the Mailchimp privacy policy guidelines, and to be contacted by ourselves regarding Bounce Brands and its partners. Bold letters for given sound. Disco Ball, Sound Activated Party Lights, Disco Lights Party Lights with Remote Control, USB 9 Colors DJ Lights, Wireless Phone Connection LED, Stage Light for Kid Bedroom Bar Club Par - -. If you need ceiling speakers for a surround sound installation then we have a few recommendations. 2 set-up: (4) Polk Monitor 10B's, (2) Polk Monitor 4's, Polk CS300 center. com! Listen at 7:40a each weekday to guess the secret sound and win the cash. Collect floating power ups to change the game's colors. Takes 400watts rms easily and plays clean and gets loud in a sealed box. Eventually the ball comes to a halt. IF you can't specific ways to figure them Bouncing a ball. Macy's - FREE Shipping at Macys. Shop FBL's Collection of Light Up Toys Including LED Wands, Flashing Toy Guns, Light Up Swords & More! Over 1000 Top Quality Light Up Novelties at Low Wholesale Prices. Original hi-def sounds and music. With locations in Gauteng and Kwa-Zulu Natal, BOUNCE trampoline parks across South Africa are the best value for money offerings anywhere in the country. Why In-Ceiling Speaker Placement is Important In small rooms, dealers must remember that sound waves don't just pass by the listener, but they also bounce. Menards® offers a variety of ceiling fans in numerous finishes and colors for any room. When you run the game now, you'll be able to tap on the screen to drop bouncy balls. This is because every time sound strikes a surface some of its energy is absorbed. The influence of the neighbouring sounds in English can act in a progressive, regressive or reciprocal (double) direction. Consider using a room divider to setup thin and temporary walls. so you have to figure out how to do it yourself. There are two major classes of sounds traditionally distinguished by phoneticians in any language. What Do The Sound Ratings Mean?. Leave 'em hanging. Third was up against the ceiling playing accross the room. Remove all of the balls from the field to pass each level. Sound Effects Volume - Transportation Soundz Various forms of transit, including bikes, boats, cars, planes, helicopters, subways, trolleys and trains and the people who operate them have been a vital part. A room with a tile or wood floor and high ceiling bounces sound all around the space. In general, every syllable has a vowel sound (although, as we saw in the last. 
Live entertainment venue located in the heart of Baltimore's Inner Harbor showcasing the best in national, regional and local talent!. By the exterior angle theorem we see that $$\alpha+\gamma-\beta=y$$ $$\beta+\gamma-\alpha=x$$ Then $\varepsilon$ is equal to the third angle of the triangle formed by the ball's path and $$\varepsilon=\pi-x-y=\pi-\alpha-\gamma-\beta-\gamma+\alpha+\beta=\pi-2\gamma$$. Sound it out. In the classic game of Pong, there is a little ball that bounces around off various walls and objects. Its 6 clutch pockets provide numerous opportunities for your dog to grab, carry, and toss the toy and the more he does the more giggles and fun sounds are emitted to amuse your pet. If a room is empty and the surfaces are hard, sound bounces many more times before it dies. A local background value is determined for every pixel by averaging over a very large ball around the pixel. Basketball - percussion - hitting the board only - old arena - Denver East High School, USA - Zoom. The chart below lists the vowel. Review of a-b-c-d-e - practice tracing and listening to the different sounds. A bouncing ball model is a classic example of a hybrid dynamic system. It can result in many anxious behaviors or even cause dogs to bolt out. Without a doubt, teaching the sounds of English is one skill you can bring to your classroom. Turn your home into a theatre with a stunning surround sound system. When the black blob hits the floor, transform your blob back. If you dropped the balls at the same time, the tennis ball should bounce off the basketball and fly high into the air. When the rubber ball hits the ground it gets compressed, or. If you have surround sound speakers, place the centre channel exactly in front of you and place the side speakers at a 110-degree angle to the sides, following the same rules as. To send back the sound of: echo to bounce back. 08 MB 128 Kbps. Hit the golden circles to gain extra balls to launch. - Listen the sound indicating position of ball. You have to know that the unit ceiling shakes when he walks around, so it's not exactly what I call solid :. If you have a 5. I feel like it's a really common building sound. Ball Bouncing off Walls by Simon Yang. They are nocturnal animals in the attic, so the noise will be at night. A b-ball is meant to bounce around, and that's what it's trying to do in this game. I love making art. If any part of your foot or body touches the center line, you will be declared out and thrown ball will be dead. Lay-in and tegular ceiling panels are easy to install in exposed grid systems and come in a variety of Watch as Julian Treasure relates how sound effects human behavior and describes how. A free classroom noise level meter, monitor and management tool. These are also "temporary" bouncing balls and will lose their elasticity within a few days as they dry. Speakers sat next to your television will also often suffer from distortion as. It teaches children to be mindful of the sights and sounds of their experience. Play claim that these are the best rubber bouncing balls in the world! With a 90% bounce ratio, these balls are the perfect The 65mm ball is a good choice for bouncers who like tricks with a lot of balls - the smaller diameter means there is less chance of collision and you. 08 MB 128 Kbps. Fun for games of fetch and water play. 
Learn Bouncing-ball skills by watching tutorial videos about DAW Studio Setup and Design, Audacity: The Video Manual, Analog Tape Recording, Mixing Adam Pollard is back with an Advanced look at dance music vocal sound design. For the running gag of speaking sounds out loud, see Spoken Sound Effects. If lots of reflected sounds arrive at the listener they may be unable to distinguish between them. Bold letters for given sound. Placing pillows, jackets, blankets or any other sound muffling material will make a large difference to the overall sound. Could get on the field immediately at high-major level and become a Power 5 star, in addition to first-round NFL Draft ceiling. To do so, the Wolverines will be depending on several players to. Great mix to the bag of balls. As a continuation of the theme of potential and kinetic energy, this lesson introduces the concepts of momentum, elastic and inelastic collisions. The ʃ sound from the 'Consonants Pairs' group and is called the 'Voiceless palato-alveolar sibilant'. You have to know that the unit ceiling shakes when he walks around, so it's not exactly what I call solid :. Choose from our elaborate range of Sony home theatres, JBL soundbars and Philips tower speakers for an experience to remember. That is the essence of phonetic reading. My sounds don't repeat or carry any silence feel free to download via converter Welcome to the Sound Laboratory MASSIVE THANKS 4 reading and I hope you enjoy. placeholder. Reflection of Sound. All it does is roll. With a baseball at full force, it'll bounce maybe a couple feet off the mat. When sound is reflected, that means it is bouncing off a surface; when it is diffused, that means it scatters. The Bouncing Souls "Highway Kings" Official Music Video. Mid-sized bounce house with huge bounce for the buckSmall enough for indoor play, big enough for a partyInflates in less than 2 minutes-Hours of jumping fun for up to 5 kids at a timeLooking for something a little bigger than our standard Magic Castle Bounce House but not quite as big as the Magic Ultra 12 Bounce Castle? This bouncer is the answer. 3 Pandas In Japan 1,613 plays. Because the air will still be able to get through, this method would result in the room having sufficient air flow. 1 fixture: $69. Catching the Ball. "These waves of sound bounce off things in the environment, and when they return they actually carry with them an imprint of. If you dropped the balls at the same time, the tennis ball should bounce off the basketball and fly high into the air. By the exterior angle theorem we see that $$\alpha+\gamma-\beta=y$$ $$\beta+\gamma-\alpha=x$$ Then $\varepsilon$ is equal to the third angle of the triangle formed by the ball's path and $$\varepsilon=\pi-x-y=\pi-\alpha-\gamma-\beta-\gamma+\alpha+\beta=\pi-2\gamma$$. Reflected sound strikes a surface before reaching the receiver. The paddle is on a single axis of movement, parallel to the wall, and must intercept the ball on its return path to keep the game from ending. On the other hand, if you dribble the ball on a soft surface, the force is absorbed by the soft ground and the ball doesn't bounce back, or bounces back with diminished force. 10 yr mechanical; 5 yr electrical*. When you let go of the ball, it swings downward like a pendulum. Bowling balls are heavy and you have to do work to get it up in the air. Start bouncing on the ball and focus on the pelvic floor slightly descending as you bounce down on the ball and ascending (lifting up) as you bounce up. Soften the Space. 
The impact initiates the airborne vibration through a barrier into another space within your home or office. If the room doesn't have an overhead box, hire an electrician to install […]. I often would hear what sounded like a marble or small ball dropping then almost a bit of a quick tap-tap like bounce. gl/qBnqjV Credit : 1 - intro music : Big Horns Intro performs Audionautix with a Creative Commons Attribution license (https://creativ. , 38 minutes. Flutter echo is mostly caused by reflective parallel surfaces that allow the echo to sustain itself. Bouncing Ball using C Program. Eggcrate/Convoluted Toppers : Eggcrate acoustic soundproofing foam is great for people who want the sound-deadening qualities of 1 and 2 inch wedge foam, but at a. Commercial wood ceilings from Armstrong Ceiling Solutions include wood ceiling panels, planks, canopies, acoustical & custom solutions. It was yellow and hollow. © 2020 All Rights Reserved. They feature disco lights and a sound system and yet again Mega Bouncing Castles were the first company in the country to get one ! We lead the way , others follow. You can also improve your sound insulation by building a drop ceiling beneath the existing ceiling. Noise in house ceiling sounds like ball bouncing. The 44" model has a 32" bouncing zone and is best for people measuring up to 6'2" in size. The reduced vowel sound called schwa is the most common vowel sound in spoken English. But every time I see a ball bounce, I think about bouncing back myself. Share this story: If you're a bird, however, you probably don't know what a golf ball OR a carth path. Chordify is your #1 platform for chords. A reverberation, or reverb, is created when a sound is produced in an enclosed space causing a large number of echoes to build up and then slowly decay as the sound is absorbed by the walls and air. For in-depth coverage, CNN provides special reports, video, audio, photo galleries, and interactive guides. For example, certain types of balls (such as SuperBalls) can be given a backspin and (after the bounce) the velocity and rotation of the ball will reverse direction. When you later place the electronics enclosure on the ceiling tile, you can rest the clip above the hole in the ceiling tile and adjust. Macy's - FREE Shipping at Macys. Balance, Spin & Bounce Sort by Sort by Title A-Z Title Z-A Price Low-High Price High-Low Reference A-Z Reference Z-A Most Recent Oldest Top Sellers Featured Add All Products to Wishlist. While hanging light fixtures from your ceiling is a fantastically creative way to centralize your light source without taking up much space, choosing to hang other crafty things such as origami or planters can serve to make any space seem more modern and unique. By controling the x-axis and y-axis movements in separate keyframes, we're able to create a nice bouncing movement. 1 surround sound system for a small- or medium-sized space, you'll find your best options in this range. Create a Fork. In regard to Frank's post above mine, there's very little bounce back. Ball Bouncing off Walls by Simon Yang. Bounce, is an arcade-style action game in which players have to activate all orange balls in play by knocking into them. You Can Follow Us on Twitter or Like Our Facebook to Keep Yourself Updated on All the Latest From Hip Hop Beats, Ringtones, Type Beats and Karaoke. When these air balls are compressed, they create a region of higher pressure. Adds a chance to fire tears that bounce off of enemies and obstacles. 
361 bouncing ball with sound products are offered for sale by suppliers on Alibaba. The packaging is advertised for dog of all sizes, but for my little 5 pounder, that is not the case. Listen to the pronunciation of English sounds online using our interactive phonetic chart, and learn how English sounds are produced. Hit the golden circles to gain extra balls to launch. Bouncing Ball. Mindfulness is often natural for children and this audio helps to enhance their ability to focus in a mindful way. (Below, see link "54: Preemptive Reading. Curtis Stribbling. Thicker carpet works here as well because it will absorb more sound. Instances of ball lightning—glowing, electric orbs in the sky—have captivated and mystified us for centuries. This is because they provide a better directional response than a speaker sat on the side. If you hear a banging sound similar to what happens with the water pipes, this might be your problem. Big rubber ball bouncing (2). Ceiling and wall fans At Mercator, we are dedicated to providing any homeowner the luxury of selecting the right applicances for them, to suit their interior design and unique lifestyle. Having one surface area with a sound absorber is sufficient to disrupt the sound field. Medium fan blades size from 44" up to 60". If you have a 5. Noisy ceiling fans are often wobbly ceiling fans, and wobbles can have a few different causes. This will look like a bouncing ball. The nucleus lies between the sounds [ю] and [ɔ]. By Sam Weinman. Soundrangers the first and finest royalty free sound effects and production music libraries on the web creating original content for modern media. Two Bounce Battle games (for the best beer pong experience or an extra for a friend). As soon as the zorb goes over the top of the hill, you'll be all Glossary over the place. Breathe in, flex your elbows, and slowly drop your forearms until the dumbbells are in line with your ears. ' 'A pitch of consistent bounce and enough pace to hurry the ball on to the bat aided confident strokeplay. Hit the golden circles to gain extra balls to launch. How to Reduce Echo and Improve the Acoustics (without being technical and scientific) Hello hello hello! In the good old days (you know, the 90's) echoing didn't seem to be such a big issue for homeowners – wall to wall carpeting, upholstered furniture and 8′ ceilings made a big difference in the sound absorption of a space. Learn more. You have to know that the unit ceiling shakes when he walks around, so it's not exactly what I call solid :. New HARMAN Products • Join Our Mailing List • Software • Where To Buy • Support • Forum • HARMAN. clientHeight. This is a tough assignment and I am doing it in steps. Find another word for bounce. It's at this point that the Bouncing Ball videos might be especially helpful. This is achieved by absorbing sound energy and turning it in to heat rather than reflecting sound, allowing the sound to bounce off walls or ceilings and echo around the room. Card to Bottle. Hear and feel more with Sony audio sound systems. They can bounce off any surface that they collide with and continue flying at an angle equal to the one at which they hit the surface. If the cue ball is placed at the speaker's mouth and fired toward the ceiling, it will always go to the listener. For more than 30 years, Craftmade has earned lighting showrooms' loyalty with great designs, top product quality and service that puts customers first. ) (Move mouth as if eating. 
SUPERBOUNCES (Oct 2007, Dec 2009) A popular physics demonstration is to drop two balls together, say a tennis ball on top of a basketball. Serum Soundset Samples, and Project Files. Bouncing Balls. Here the principle is the same as when bouncing a ball off concrete and then trying to bounce it off dry sand. In fact, most will go quiet as soon as you get close to them or make a noise by tapping on the ceiling. City Sounds - Siren Sounds, Police Sounds, Metro Sounds - Free Online Sound Effects Library MP3 download. Since some of the ball's energy went into the carpet, the ball doesn't have as much energy afterwards, and it can't bounce as high. It was yellow and hollow. Pendant Lighting on a Budget. Mindfulness is often natural for children and this audio helps to enhance their ability to focus in a mindful way. but I've got to be honest, I need to say this out loud. JumpStart Art Studio; JumpStart Punk Punk Blitz; JumpStart Roller Squash; Math Blaster B-Force Blaster; JumpStart Jetpack; MathBlaster Space Zapper. • Think beyond windows. Surfaces in the room: Hard surfaces bounce sound waves around, creating an echo, while soft surfaces absorb the sound, making it dull. Bouncing ball. Explanation: When all three balls are dropped from the same height, the rubber ball will bounce the highest because it has the greatest elasticity. Are they crossing like so 'X';, or do they bounce off each. [email protected] A soft "c" sounds like an "s" as in city, receive, and cell. The sound is like ball bearings dropping, and then rolling around. Bowling balls are heavy and you have to do work to get it up in the air. 1 Sounds of English. shape ( "ball" ). Students explore these concepts by bouncing assorted balls on different surfaces and calculating the momentum for. The 44 Sounds (Phonemes) of English. Bounce House & Inflatable Rentals in Connecticut. You might try bouncing the ball outside on a very cold day, and then, using the same psi settings, record the height of bounces in a nice, warm house. Download : https://goo. As the balls move towards the screen's base, it turns out to be more difficult to point specifically towards the spot where the ball must land. I got this for a cattle dog as its supposed to be good for their herding skills which seems to come naturally for him, but to him, this ball is incredibly boring. came to find a little device, a ball to be precise. 10 yr mechanical; 5 yr electrical*. There are currently no gaming sessions for the Bouncing Ball achievements that you can join - why not register and make a new session?. Bounce block The Ball. All it does is roll. Search for: Search. Owing to these benefits, a DC ceiling fan is more durable and energy efficient than an AC ceiling fan. This storybook uses Macromedia Flash for sound & graphics. Home Minecraft Mods Bouncing Balls Minecraft Mod. It's also a creative way to decorate a child's room and add some unique lights. When an object, like a ball, is thrown against a rigid wall it bounces back. Kick the green ball straight ahead and down to the lower level. 70 m, it bounces o a concrete oor and rebounds to a Similarly, Bouncing up to 1. Importantly, the ball changes direction when it bounces but it never really slows down in speed. When a ball hits a surface, some energy is trans-formed into sound energy, some is transformed into thermal energy. That is the essence of phonetic reading. Manual quarter turn ball valves are available from 1/4 NPT through 1-1/4 NPT. 
Use an impact-resistant sound-absorbing ceiling with very good absorption qualities; Add impact-resistant wall absorbers on two adjacent walls. WAV files & sound bites at The Sound Archive, offering sound files from some of our favorite movies and TV shows. Bounce House & Inflatable Rentals in Connecticut. About Bouncing Balls. Similar Games. End of dialog window. Finished Product (System). 1 Unheard Sounds Electromagnetic Hazards, Golf Carts, and More PWC - Jet Ski - Sea Doo Volume 2 Marbles Sonic Seaside This Library Sucks Trains Ball bounces Beer & Whiskey Commercials Eating Handwriting. this is when someone rubs you up the wrong way and you have no choice but to retaliate with a childish but seemingly funny piece of jargan. Balls are great for outdoor barbecues, birthday parties, special events, and gift giving. I got this for a cattle dog as its supposed to be good for their herding skills which seems to come naturally for him, but to him, this ball is incredibly boring. What would be a good bouncy ball in terms of coefficient of elasticity? Think of all the types of balls Describe those measurements to me in a variety of ways? (For example: Halfway to the ceiling or 2 Some balls may not bounce very true and therefore bounce far from the meter stick, resulting in a. And Sacha went, 'Yeah, "Wrecking Ball," that sounds good. These balls last for along time. Of course, smaller people can use the larger models if they want more bouncing room. Affects lip position →. The Creative Motion 6" Rotating Disco Ball Light with Multi Colors is a fun addition to a party or a special occasion. MobilityWOD. So simple that almost anyone can pick up a ball and a paddle and make a reasonable job of it. They're very stable on their heavy round bases, and if they happen to get in the frame, hey, they look like they belong there. A hybrid dynamic system is a system that involves both continuous dynamics, as well as, discrete transitions where the system dynamics can change and the state values can jump. - Move your finger on bottom of screen to change racket's position to keep the ball bouncing. Eggcrate/Convoluted Toppers : Eggcrate acoustic soundproofing foam is great for people who want the sound-deadening qualities of 1 and 2 inch wedge foam, but at a. Each time the ball hits the ground there is noise, vibration and. Flutter echo is mostly caused by reflective parallel surfaces that allow the echo to sustain itself. Definition of be bouncing off the walls in the Idioms Dictionary. Frequency of Sound: Sound is the quickly varying pressure wave travelling through a medium. Unfortunately, that ceiling fan you installed last year has decided to up and call it quits. City Sounds - Siren Sounds, Police Sounds, Metro Sounds - Free Online Sound Effects Library MP3 download. Energy Balls are generated by Combine Dark Energy reactor systems and transmitted through the plasma conduits of Citadel power transfer systems. We design ceiling fans rather than style them. Cool fact the word "set" has the most definitions of any other word in the English language. My daughter likes to make laps through the bounce house and flop down the slide. Download royalty-free cartoon sound effects for your next project from Envato Elements. shape ( "ball" ). As the balls move towards the screen's base, it turns out to be more difficult to point specifically towards the spot where the ball must land. com, of which toy balls accounts for 4%, other toys & hobbies accounts for 1%, and exercise balls & accessories accounts for 1%. 
Movies Preview. A color ball is placed inside the launcher at the bottom of the play area, while the next ball will also be displayed. Today, they offer the latest ceiling fan collection that continues their heritage of excellence - revitalizing your home and everyone. The tennis ball then bounces with about 16 times more energy, by bouncing off the basketball, than it does by bouncing directly off the floor. Mid-sized bounce house with huge bounce for the buckSmall enough for indoor play, big enough for a partyInflates in less than 2 minutes-Hours of jumping fun for up to 5 kids at a timeLooking for something a little bigger than our standard Magic Castle Bounce House but not quite as big as the Magic Ultra 12 Bounce Castle? This bouncer is the answer. The two small balls vanished with no sound but the larger ball floated about 4 ft off of the hallway floor then slowly bounced up and down twice in the same spot on the floor before making a loud crack and then it to went away. Sign up to Amazon Prime for unlimited free delivery. Now what?. SOUND SYSTEM. Directed by Malcolm D. Besides sitting on the ball to test the air level, you can also give it the bounce test. One ball is mounted on a glass rod. What You Learn From Playing Factory Balls. Thread starter Sirsh. For onomatopoeic words, see Onomatopoeia. Screen Serve. Bouncing of ball. Great mix to the bag of balls. Let's start to play bouncing balls game. The ball itself is a cute concept, although my husband was annoyed by the noise. To spring or bounce back after hitting or colliding with something. Get free coins now. A plumb bob in the background shows where the hanging ping-pong ball is when there is no horizontal force. First, it's important to understand what causes a ceiling fan to make noise. Sprite collision. IF you can't specific ways to figure them Bouncing a ball. Click here for more information. If we look back at its history, we will see that there were several foreign communities living in Moscow on a permanent basis. You can easily and cheaply reduce the noise in your room by adding sound-absorbing materials and products. Bouncy definition, tending characteristically to bounce or bounce well: An old tennis ball is not as bouncy as a new one. She has some sort of ball, maybe a tennis ball or basketball, and instead of hitting the ceiling she's bouncing that off of it. Why is it important to establish relationship between sounds and letters? What is a grapheme? 2. From what I can tell he owns a bowling alley and randomly decides to crack a whip every now and then. To send back the sound of: echo to bounce back. Because the ball doesn't bounce symmetricallydunno how to explain it. 2 Crafting 2 Usage 2. When a sound wave strikes a surface, its direction changes in the same fashion as a ball bouncing off of a wall. Directed by Malcolm D. Bounce the ball and avoid touching the spikes. Bouncing, rolling and throwing exercise balls requires muscle strength, limb coordination, judgment and visual perception skills 3. This will look like a bouncing ball. To diagnose your wobbly fan, first, make sure you've tightened everything up and have confirmed the blades are straight. The figure below illustrates this. rubber balls glow in the dark and last for hours. The bizarre phenomenon, also known as That and other early accounts suggest that ball lightning can be deadly. Play indoors or outdoors in the sun, rain, or under the stars. How it works: The louder the classroom, the higher the balls fly! 
The goal is to keep the balls hanging out at the bottom. Mp3 previews are low resolution, the purchased WAV files are professional quality, the same sound effects used in hundreds of Hollywood feature films. The Stratos fan was born and forged the path for a much-needed evolution in modern ceiling fan design. Cool features: You can change the balls to bubbles, emojis, or even eyeballs! This is also the. On average, noise coming through a ceiling is the worst soundproofing problem you're likely to encounter. 2M people have watched this. All you need is one of our special beeping or jingling-bell balls. ) (Hop on one foot. Bouncing Balls is an amazing match-three game that combines aspects of match-three gameplay together with bubble shooting. The GlowStreak LED Ball is the most technologically advanced nighttime fetch ball ever made, and holds up to the best arm or ball launchers around. No matter which way you decide to teach, these English pronunciation exercises provide an excellent starting point. ceiling speakers for home theater and distributed audio applications--based on seating positions, room acoustics and ceiling height—is a quick reference for technicians and designers. Bouncing Ball using C Program. It should be pushing cool air around the room, creating a comfortable breeze that keeps you cool. ) (Cut with index and middle finger. The second part of Red Ball is available on redball. If you miss the ball and ball drops to floor, the game is over. You're not going to get rich out of red rectangles, so let's use balls instead. We Specialize in corporate event rentals and company picnic planning. You get organic sounding delays that are all synced together to match a groove. BOUNCE is part of the BOUNCE global network that includes 32 venues across 16 countries. ; keeping track of each ball's acceleration, so add properties in the class for. Tags: candy crush soda. All information and links on these pages are checked with the greatest care. You can, however, fix a ticking or humming ceiling fan. Serum Soundset Samples, and Project Files. I'm going to show you a video clip, in which there are two groups of people, a white team and a black team, and you are to count the number of times that the white team passes or bounces the ball to each other. This experiment uses the sound of each bounce to retrieve its timing and calculates the values by assuming that the ball loses the same portion of energy on each bounce. They can be attached to the wall studs and a hat Since sound needs particles to travel, the isolated chamber slows it down. The ball does not need to be proceeding in the original direction. a pass that bounces off the floor before it reaches the receiver: chest pass: Michael threw a quick chest pass, ran forward, and received a bounce pass back. Satellite speakers are fairly small and light, so they don't require heavy. A free classroom noise level meter, monitor and management tool. Get Ball Sounds from Soundsnap, the Leading Sound Library for Unlimited SFX Downloads. She reported a sound like pins or jacks coming from the ceiling. It will reflect, or bounce off any hard surface but will be absorbed by a soft surface. PLUS four extra ping pong balls for back up. example, parallel walls will bounce sound back and forth like a ping pong ball, creating an echo cacophony. Grab a cue and take your best shot! Time your bounces to get through the obstacles!. 3 ball lift bounce patterns are quite slow and thus can be learnt quickly. , dating from the early 1900s, agreed. 
Go figure, we all just looked at each other with wide eyes and amazement!. ) (Pretend to dig. Feed the electrical cable coming from the ceiling through the knockout hole in the pancake box. Seems durable. Remove all of the balls from the field to pass each level. Opening and closing of the mosquito trap mechanism. It teaches children to be mindful of the sights and sounds of their experience. Bouncing Balls is a very fun and very addictive action puzzle game. In general, every syllable has a vowel sound (although, as we saw in the last. In the summer of 1978, a teenager and his group of friends face new challenges when their neighborhood roller-skating rink closes, forcing them to visit a different rink. Yeelight is the world-leading smart lighting brand, with in-depth exploration in smart interaction, industrial design and lighting experience. All it does is roll. Usually, the sound comes from a part of the fan vibrating when the. When sound travels through air, the atmospheric pressure varies periodically. #ceilingwalkchallenge | 16. An easy way to summarize it is that the role of your ceiling is acoustic absorption and the role of your wall is sound insulation. Regular Price: $257. In this lesson, we assume that the balls are. Use these simple ideas to extend your art activity to include science. Curtis Stribbling. Driver distance from floor (h1) cm: Ear height (h2) cm: Ear distance to driver (d) cm: Reflection arrives: ms after direct sound:. It has been shown that these time intervals are related to the coefficient of restitution [6]. Bouncing Ball using C Program. Basketball Data; Bouncing Squash Balls; High-bouncing basketballs; Heavy Objects on Trampolines; Height of ball bounce; Car crash test; Bouncing Balls - It's All About Forces; G. Bouncing balls, whistles, organs, cheering, chants, booing and ambience are just some of the many sounds you will find in this collection. Bouncing Balls Walkthrough. Bounce block. It should be pushing cool air around the room, creating a comfortable breeze that keeps you cool. Without a doubt, teaching the sounds of English is one skill you can bring to your classroom. We Specialize in corporate event rentals and company picnic planning. A slime block is a transparent block that entities can bounce on. The second part of Red Ball is available on redball. You Can Follow Us on Twitter or Like Our Facebook to Keep Yourself Updated on All the Latest From Hip Hop Beats, Ringtones, Type Beats and Karaoke. 4 Atmos setup, four "virtual" ceiling speakers fire sound upward to bounce it off the ceiling. Watch this bird amusing itself by bouncing a golf ball on a cart path. Disco Ball, Sound Activated Party Lights, Disco Lights Party Lights with Remote Control, USB 9 Colors DJ Lights, Wireless Phone Connection LED, Stage Light for Kid Bedroom Bar Club Par - -. For small confined areas we can supply a blower fan cover, this will dampen the sound. Sennheiser MKH60 + MKH30 into SD Mixpre-3. The sound could indicate many things — a busted pipe in a wall, under the floor, or even in the irrigation system. Bouncing Ball using C Program. Most people do not have a clue how this shot actually occurs. 08 MB 128 Kbps. Plus, even more video games can involve a ball, Slope is one example and the many bubble shooter games also feature balls. 70 m, it bounces o a concrete oor and rebounds to a Similarly, Bouncing up to 1. One thing to remember is that each Bellicone rebounder provides the same. 
While most fans are content with volume stats like completions, yards, and touchdowns scored, the archaic metrics do little to tell the story of how a. Add a sense of style and exuberance to your rooms and offices with our innovative and trendy designs. 'The bounce is completely different for a start - the ball bounces lower - the points are much faster and it's more tiring on the legs, as you have to bend them more because of the low bounce. Make sure the hanger ball is completely seated so the ball fits snugly in its joint and doesn't move around. BOUNCE is part of the BOUNCE global network that includes 32 venues across 16 countries. See this group of games that all. The most common is a tapping/cracking sort of sound in the wall. This noise meter for classroom computers even comes with different ball themes, like plastic, emoji, bubbles, or eyeballs — a sure crowd-pleaser for younger students. The aim of the game is to destroy balls that are slowly falling down on a grid. but I've got to be honest, I need to say this out loud. We again draw this ball at center (x, y + 5), or (x, y - 5) depending upon whether ball is moving down or up. Sort Featured Price: Low to High Price: High to Low A-Z Z-A Oldest to Newest Newest to Oldest Best Selling. Now drop a red bean underneath the bouncing black blob to. Meta Balls Voronoi Future Splash Smoothing Spiral Raster Division Raster Q-bertify Path Intersections Path Simplification Hit Testing Bouncing Balls. Sounds Right iPad app If you have an iPad, you can download and install a free copy of the British Council phonemic chart on it. Or something being bounced off the floor above. Large Swiss excesize ball View the End User License Agreement (EULA) for this Royalty Free Sound Effect. If ball hits a candy located on the ceiling, you got a point. Download free sports sounds like hockey, baseball, basketball, football, tennis and lots more. Breathe in, flex your elbows, and slowly drop your forearms until the dumbbells are in line with your ears. Even a hard rubber ball won't bounce, if you drop it onto the dry part of a sandy beach. Usually it is made of durable material such as polyvinyl chloride to withstand several hundred pounds of weight, and can be available in different. Shop 1,000's of high quality Tin Toys - Today! Free Shipping USA - Ships Fast - Vintage Retro & Classic Toys. Bounce Houses & Inflatable Slides. Installed Sound. Depending on what you need, the basic bounce house might be enough. Explore our music and sound effects. Page 4: Getting Ready 1 • Getting Ready To install a ceiling fan, be sure you can do the following: • Locate the ceiling joist or other suitable support in ceiling. Consider using a room divider to setup thin and temporary walls. Leave 'em hanging. Keep your pet busy and having fun with the Wobble Wag Giggle Ball. Physics4Kids. com! This tutorial introduces the physics of friction. In a café or other amenity space with a proliferation of hard surfaces (such as kitchen cabinets, laminate counters, stone floor, an untreated ceiling), sound transfer creates a bandshell effect. Its 6 clutch pockets provide numerous opportunities for your dog to grab, carry, and toss the toy and the more he does the more giggles and fun sounds are emitted to amuse your pet. Shop 1,000's of high quality Tin Toys - Today! Free Shipping USA - Ships Fast - Vintage Retro & Classic Toys. Lastly, in-ceiling speakers will deliver a superior sound, and you will enjoy watching movies and listening to music like never before. 
You need to shoot balls to destroy them by forming groups of 3 or more balls of the same colour. This may sound easy, so to make it a little harder, try it with some warm soapy water inside. Pong is a game where a paddle is used to hit a ball and bounce it off a wall, allowing it to return towards the paddle. Like all animals, possums have to go outside to eat and drink, so you often hear them as they exit and re-enter your house. mansion in the movie 'Fracture'. I suppose he could be bouncing the waves off of something, but a careful examination of your property would clue you in as to what would give a clean bounce (say, a metal sheet f. No forks created yet. Go figure, we all just looked at each other with wide eyes and amazement!. Keeping the cost low will steer you towards the basic house, but if your budget is bigger, then you might want to go for something fancier to give kids more than one activity to do while bouncing. The sound it creates is similar to that of marbles dropping. 10], Roku 5. Driver distance from floor (h1) cm: Ear height (h2) cm: Ear distance to driver (d) cm: Reflection arrives: ms after direct sound:. The ball will bounce once in the forecourt and should then travel in a high arc to arrive as close to, and as vertical to, the back wall as possible. Consider using a room divider to setup thin and temporary walls. no-ball-games. The last few days this escalated to the ball bouncing noise being complemented by an occasional banging that sounds like someone hammering with a rubber mallet. Ceiling Light Fixture Cost Non-discounted retail pricing for: 18 in dia drum style lighting fixture. BLVCK CEILING. 2 Pistons 2. The gameOver() method is public, because it will be called from the sprite "Ball" when it detects that it has got to the lower border of the canvas. " Eileen Smith, who used to live in an apartment in Washington, D. To diagnose your wobbly fan, first, make sure you've tightened everything up and have confirmed the blades are straight. The first sounds in the words extra, only, and apple are vowels. The ceilings of most residences or apartment buildings are constructed in similar ways that allow them to meet safe building codes but It can even bounce down halls and under doors, making it more difficult to control. This demo helps to show the visual differences between various frame rates and motion blur. I've literally heard it through my headphones while sitting silently reading, not making any sound. About Bouncing Balls 2. Choose from our elaborate range of Sony home theatres, JBL soundbars and Philips tower speakers for an experience to remember. For example. ) (Cut with index and middle finger.
CommonCrawl
The use of imepitoin (Pexion™) on fear and anxiety related problems in dogs – a case series Kevin J. McPeake1 & Daniel S. Mills1 BMC Veterinary Research volume 13, Article number: 173 (2017) Cite this article Fear and anxiety based problems are common in dogs. Alongside behaviour modification programmes, a range of psychopharmacological agents may be recommended to treat such problems, but few are licensed for use in dogs and the onset of action of some can be delayed. The low affinity partial benzodiazepine receptor agonist imepitoin (Pexion™, Boehringer Ingelheim) is licensed for treating canine epilepsy, has a fast onset of action in dogs and has demonstrated anxiolytic properties in rodent models. This case series reports on the use of imepitoin in a group of dogs identified as having fear/anxiety based problems. Twenty dogs were enrolled into the study, attended a behaviour consultation and underwent routine laboratory evaluation. Nineteen dogs proceeded to be treated with imepitoin orally twice daily (starting dose approximately 10 mg/kg, with alterations as required to a maximum 30 mg/kg) alongside a patient-specific behaviour modification plan for a period of 11–19 weeks. Progress was monitored via owner report through daily diary entries and telephone follow-up every two weeks. A Positive and Negative Activation Scale (PANAS) of temperament was also completed by owners during baseline and at the end of the study. The primary outcome measure was average weekly global scores (AWG) from the owner diaries. Average weekly reaction scores (AWR) for each type of eliciting context was used as a secondary outcome. Seventeen dogs completed the trial. Treatment with imepitoin alongside a behaviour modification programme resulted in owner reported improvement with reduced AWG and reduced AWR for anxiety across a range of social and non-social eliciting contexts including noise sensitivities. Significant improvement was apparent within the first week of treatment, and further improvements seen at the 11 week review point. There was a significant reduction in negative activation (PANAS) with 76.5% of owners opting to continue imepitoin at their own expense after completion of the study. This study provides initial evidence indicating the potential value of imepitoin (Pexion™) alongside appropriate behaviour modification for the rapid alleviation of signs of fear and anxiety in dogs. Further research with a larger subject population and a placebo control would be useful to confirm the apparent efficacy reported here. Fear and anxiety-related problems in dogs are common. In the People's Dispensary for Sick Animals Pet Animal Welfare report [1] (2011) 82% of owners reported their dog was afraid of 'something'. Another study [2] found that 25% of owners considered their dogs to be fearful of noises, but 49% reported at least one behavioural sign suggestive of fear when exposed to loud noises. The Association of Pet Behaviour Counsellors in the UK [3] report that 8% of canine cases were referred for a specific fear or phobia, 6% for owner-absent problems (which includes "separation anxiety") and 64% for aggressive behaviour towards people and dogs (the extent to which fear or anxiety played a role in these cases was not stated). In the US, a study [4] found 10.3% of cases had a specific fear, anxiety or phobia, 14% separation anxiety and 22% fear-related aggression towards people. Both fear and anxiety can occur in the context of the perception of increased threat to the individual [5]. 
Whereas fear occurs in response to the presence of a specific aversive trigger, anxiety develops when an animal anticipates a negative outcome, such as in a location where it previously encountered an aversive trigger [6, 7]. In a novel or unpredictable environment, or in a situation of ambiguous threat, a dog may experience anxiety and uncertainty due to a conflict of approach and avoidance tendencies [8]. Dogs may also experience anxiety associated with the departure of their owners. These problems may arise from fear and anxiety related associations (e.g. fear of isolation, fear of stimuli occurring in the context of being separated) as well as problems related to separation from an attachment figure (PANIC sensu Panksepp, [9]). Different neurochemical networks are thought to underpin these different forms of anxiety [10, 11]. A further distinction may also be made between social and non-social stimuli that trigger a response [12]. The affective state of fear and anxiety can operate at the level of: a specific emotional reaction (i.e. in response to a specific aversive trigger); a mood change (longer lasting emotional changes that bias behaviour and cognition, occurring in response to a series of related aversive events); or a feature of temperament (behavioural predispositions arising from the interaction of genetic and early experiential factors). One method developed for assessing aspects of temperament in dogs is the Positive and Negative Activation Scale (PANAS) [13], a reliable and valid measure of positive activation/affect (mediating behavioural approach and neophilia) and negative activation/affect (mediating behavioural inhibition, withdrawal and avoidance (fear-anxiety)). Anxious emotional reactions, moods and temperament can all be problematic for an owner and are a cause for welfare concern. Fear and anxiety related problems in dogs are principally resolved using behaviour modification techniques such as desensitisation and counter-conditioning (operant and classical) [14]. Various psychopharmacological agents have been suggested as potentially useful adjuncts in such cases [15], including: tri-cyclic antidepressants (TCAs) e.g. amitriptyline [16] and clomipramine [17, 18]; tetra-cyclic antidepressants e.g. mirtazapine [14]; selective serotonin reuptake inhibitors (SSRIs) e.g. fluoxetine [19, 20]; monoamine oxidase inhibitors (MAO-Is) e.g. selegiline [21]; progestins e.g. megestrol acetate [22]; anticonvulsants e.g. phenobarbital [23]; benzodiazepines e.g. diazepam [24] and alprazolam [25]; and the alpha-2 adrenergic receptor agonists clonidine [26] and dexmedetomidine [27]. However, there are few psychopharmacological agents licensed for use in dogs. In the European Union (EU), where a licensed drug is not available, veterinary surgeons should follow the prescription cascade as outlined in the European Union Veterinary Medicines Directive (2001/82) [28]. Although this allows the use of the benzodiazepines listed above because they have a human licence, the prescription cascade regulations indicate that a veterinary medicine licensed for a different condition in the same species should be given preference. In the United States (US), where a veterinary formulation does not exist, informed consent can be obtained from a client to use human formulations "extra-label". 
At the time of writing, clomipramine (Clomicalm, Novartis – EU and US), selegiline (Selgian, CEVA - EU; Anipryl, Zoetis, US), the progestogen megestrol acetate (Ovarid, Virbac – EU), fluoxetine (Reconcile – not currently available) and most recently dexmedetomidine (Sileo, Zoetis) all have veterinary licenses for dogs. The effects of using medications such as SSRIs and TCAs can typically take around 3–5 weeks to become apparent [15]. Benzodiazepines have a faster onset of action [15, 29], but no benzodiazepines are licensed in any form for use in dogs in the EU nor available as veterinary formulations in the US. Benzodiazepines exhibit their effects by binding to a specific benzodiazepine binding site on the inhibitory neurotransmitter GABA (ɣ-aminobutyric acid) receptor and they can exhibit anxiolytic properties [30]. One retrospective study in dogs explored the anxiolytic properties of diazepam as reported by owners, where the response was variable, where 53% of owners discontinued diazepam therapy due to lack of efficacy and 58% due to adverse effects [24]. In the same study in the group of dogs classed as having 'thunderstorm phobia' 100% of owners classed the treatment as effective. Dosages of diazepam varied widely in this study, which is important as the effects of benzodiazepines are dose dependent with moderate doses often needed for anxiolytic effects and higher doses more likely to cause adverse effects such as ataxia [15]. In addition, tolerance can develop with the use of benzodiazepines [15, 31]. However, as benzodiazepines can be highly effective there is value in further exploring the safety and reliability of using this class of drugs as anxiolytics in dogs. Imepitoin (Pexion™, Boehringer Ingelheim) is a low affinity partial benzodiazepine receptor agonist [32, 33] licensed in the EU for the reduction of the frequency of generalised seizures due to idiopathic epilepsy in dogs [34]. When used in dogs, imepitoin appears to be well tolerated and safe [35, 36]. Pharmacokinetic studies in dogs show imepitoin has a fast onset of action of around 2–3 h after single oral dosing [37]. Fear and anxiety are common behavioural co-morbidities in canine epileptic patients [38]. During the development of imepitoin for the treatment of idiopathic epilepsy in dogs, it was reported anecdotally that some owners wanted to continue using imepitoin due to reported improvements in their dog's behaviour even when seizure frequency was unaffected [39]. In a variety of experimental rodent models, imepitoin has demonstrated anxiolytic properties [39,40,41]. In addition, being a low affinity partial agonist, the likelihood of developing reduced efficacy related to tolerance and potential for abuse are lower which may offer advantages over full agonist benzodiazepines [39]. The aim of the current study was to undertake an initial investigation of the potential value of imepitoin alongside an individualised behaviour modification programme for the treatment of a range of anxiety and fear related behaviour problems in dogs through a carefully monitored case series. Cases were recruited following a local publicity campaign via referral from the owner's regular veterinary surgeon. Clients were offered a free behaviour consultation, blood test, trial medication and follow up for a period of up to three months. Potential cases were screened using a series of inclusion and exclusion criteria (Table 1). 
Table 1 Inclusion and exclusion criteria To be considered for the study, 4 initial documents had to be completed: a veterinary referral form to be completed by their referring vet and returned with a full medical history; the University of Lincoln Animal Behaviour Clinic standard client questionnaire; a Positive and Negative Activation Scale (PANAS); an individualised diary (Additional file 1) to be completed to provide one week of baseline data for their dog's reaction in eliciting context(s) established through a telephone discussion. The diary was adapted from The Lincoln Sound-Sensitivity Scale [10] to relate more broadly to recognisable signs of fear and anxiety in contexts other than noises. To do this an on-line forum of pet behaviour counsellors [42] was asked to provide signs they would attribute to fear and anxiety in dogs. As a result of this three signs were added to the diary – yawning; licking lips; moving away. This gave 20 specific signs with an additional 'others' category. The 'frequency' score was replaced by an 'Event Occurrence' box which could be marked 'yes' or 'no'. The severity score range was changed from '0–5' to '1–5'. A total score for the dog's reaction to that eliciting context on that occasion, could then be calculated by totalling each severity score for each sign/category (maximum score = 105). Examples of eliciting contexts included: encounters with strangers; noises; dogs etc. The same diary format was used during baseline and the follow up period to monitor the dog's reactions ('Event occurrence' and 'Severity') based on the owner's observation and scoring. For the single dog with separation related problems (Case 17), video footage was taken weekly whilst the dog was home alone and reviewed by the owner in order to complete the diary scoring. Of the initial 28 owners sent this information, 7 failed to return the complete paperwork and one dog was euthanized before the consultation for unrelated reasons. This left 20 dogs with suspected fear and anxiety related problems. The clinician in immediate charge of the case (KM) under the supervision of a European and RCVS recognised specialist in behavioural medicine (DM) assessed all the documents completed at the enrolment stage. Behaviour consultation Each of the 20 dogs in the initial group was brought to the University of Lincoln Animal Behaviour Clinic for a behaviour consultation, typically lasting 2–3 h. The approach used to establish a diagnosis relating to fear-anxiety adopted the systematic and scientific evaluation of the four key components of emotion (as described by Mills et al., [10]) relating to context, physiological arousal, behavioural tendencies and communicative elements. Clinical examination and blood sampling During the consultation, a full clinical examination was performed on each dog and bodyweight was measured. Jugular venepuncture was performed to obtain a complete blood count, serum biochemistry, thyroid panel (fT4, TT4, cTSH) and basal cortisol values for each dog. Behaviour modification programmes Given the wide range of presenting problems, it was not desirable to standardise the treatment protocols for each dog. Behaviour modifications were tailored to the individual, taking into account the dog, the presenting problems, the owner and other factors relating to the dog's specific environment. 
Advice was given during the consultation on techniques, which were demonstrated where necessary, and owners were provided with an aide memoire of key points, followed up by a full written report within 7 days of the consultation. Use of imepitoin As imepitoin is licensed for use in dogs at a dose range of 10–30 mg/kg twice daily, all cases were commenced on a dose of around 10 mg/kg twice daily. During the study, and depending on the individual's progress, dose alterations were made. Dose changes were made at intervals of no less than two weeks. The rationale for increasing the dose was twofold: 1) if there was no apparent improvement at the previous dose; 2) if improvement at the previous dose had plateaued and the behaviour problem was not resolved to the satisfaction of the owner and behaviour clinicians. Monitoring during follow-up period During the follow up period, owners completed an eliciting context diary sheet every time their dog was exposed to an eliciting context identified in the baseline period, and these were submitted to the investigators every two weeks. These diaries were scored and a telephone call to the owner was arranged to review progress. End of study All cases used in the final analysis were followed for at least 3 months including the one week of baseline monitoring before reaching the 'decision point' (this varied between 11 and 19 weeks depending on the availability of the owner to complete a telephone survey at the end of the study). At this time owners were asked to complete a second PANAS and were questioned about various aspects of their experience of using imepitoin including: ease of administration; satisfaction with overall treatment success; whether they would use imepitoin at their own expense (and if so, tactically or continuously). They were also given the clinician's opinion on using imepitoin for their dog in the future and were offered further advice on behaviour modification as necessary. For those owners continuing imepitoin, responsibility for prescribing it was passed to the referring veterinary surgeon. Imepitoin can be stopped abruptly; however, for those owners stopping imepitoin, guidelines were given for weaning, typically reducing each dose by 50% for 2 weeks before stopping completely. The data were analysed informally for evidence of a relationship between imepitoin dosing and owner report of changes in behaviour, with statistical analysis undertaken using Minitab 17. Normality of distribution of the data was assessed using Anderson-Darling normality tests. Effect sizes were calculated for all assessments of difference using Cohen's d [43]. Effects on reactions Two main metrics were used to monitor efficacy on fear-anxiety reaction scores: average weekly fear-anxiety reaction scores (AWR) and average weekly global fear-anxiety scores (AWG). The equations were developed specifically for this study to use the owner's diary entry scores to assess response and control for the number of eliciting contexts. In addition, given that the eliciting contexts did not occur every week, we adopted a method of conservative data imputation in order to avoid having missing scores. Imputations for responses in weeks when the eliciting context did not occur were made using the following rules: if a week of data was missing during the follow up, the average of the weeks on either side was taken and imputed (i.e. 
the value used for that week was estimated from the average of the weeks before and after); if in the final week of the follow up period there was no eliciting context, the last value obtained before this was imputed. This method probably provided a conservative estimate of the effect, potentially underestimating any effect, since it referred back to an earlier stage of treatment. At all the time points analysed (baseline, week 1, week 11 and decision point): for the primary measure of interest, average global weekly scores, 3 imputations were made out of 68 entries (3/68 = 4.4%); for the measure of secondary interest, average weekly reactions, 28 imputations were made out of 140 entries (28/140 = 20%). All these data related to an absence of the eliciting context in a given week (e.g. a week where there was no exposure to an eliciting context recorded by the owner) and not lost data from recorded responses. Statistical analysis was stratified with AWR and AWG as the primary outcome measures and subsequent analyses post hoc based on the significance of the primary measures. As there were 2 measures, a Bonferroni correction was applied to AWR and AWG only, as applying it to all post hoc calculations would risk a type II error. Average weekly reaction scores The average weekly reaction score (AWR) (maximum score 105) was calculated for each dog for each eliciting context as shown in the example below for a dog with noise sensitivities: $$ AWR(\text{individual noise})=\frac{\text{Sum of diary scores for that individual noise that week}}{\text{Number of exposures to that individual noise that week}} $$ $$ AWR(\text{noises})=\frac{\text{Sum of all } AWR(\text{individual noise}) \text{ values that week}}{\text{Number of different individual noises exposed to that week}} $$ Similar equations were used to calculate AWR scores for dogs with social fear-anxiety and also non-social fear-anxiety. To compare differences in the AWR in dogs between eliciting contexts grouped into noise sensitivities, social fear-anxiety and non-social fear-anxiety (excluding noise sensitivities) at specific time points (week 1, week 11, decision point) whilst on treatment through the study, paired sample t-tests were used. The accepted level of significance after Bonferroni correction was p < 0.025. Average weekly global scores The average weekly global score (AWG) (maximum score 105) was calculated for each dog using the following equation: $$ AWG=\frac{\text{Sum of all } AWR \text{ values that week}}{\text{Number of eliciting contexts recorded that week}} $$ To compare the average weekly global scores in dogs between baseline and specific time points (week 1, week 11, decision point) whilst on treatment through the study, paired sample t-tests were used. The accepted level of significance after Bonferroni correction was p < 0.025. Effects on temperament Positive and negative activation To compare the effects of treatment on owner reported positive and negative activation (from the PANAS) between baseline and the decision point of treatment, paired sample t-tests were used. A value of p < 0.05 was considered significant. Owner satisfaction with treatment A Spearman's correlation coefficient was computed to assess whether owner satisfaction with treatment success correlated with: i) changes in reactions (percentile reduction in average global weekly fear-anxiety reactions from baseline to decision point); ii) changes in temperament (percentile reduction in negative activation from baseline to decision point). 
A value of p < 0.05 was considered significant. At the end of the study, owners were also asked about their interest in continuing treatment with imepitoin at their own expense and to rate the ease of administration of imepitoin. The initial group was composed of 20 dogs (see Table 2); three dogs were withdrawn from the study: case 1 due to a change in analgesia during the follow-up period whilst on imepitoin; case 4 due to a reported adverse event whilst on imepitoin; case 18 due to abnormalities in initial laboratory evaluation results before imepitoin was commenced. This left 17 dogs in the final analysis (Table 3), comprising 11 females (64.7%) and 6 males (35.3%) (all dogs were neutered), of a range of breeds and with ages ranging from 1 year 1 month to 10 years 7 months (average age of 4 years 6 months). Bodyweight ranged from 5.0 kg to 36.7 kg (average bodyweight 18.9 kg). All dogs had been owned for a minimum of 3 months by their owners prior to enrolment. Table 2 Initial group and overview of clinical details and withdrawals Table 3 Population used in final analysis, dose of imepitoin, AWG and PANAS at specific time points and owner satisfaction Clinical examination and laboratory evaluation findings Clinical examination revealed abnormalities in 6 cases, with some cases having more than 1 abnormality: gait abnormalities/musculoskeletal problems (5 dogs), interdigital cyst (1 dog), entropion (1 dog), dental disease (1 dog). Further investigations/treatments were performed at the referring veterinary practices. Where this included the use of trial analgesia or surgical correction (canine extraction, entropion correction), a new baseline week of diary entries was established after recovery/once deemed stable on analgesia for approximately 4 weeks. Laboratory evaluation revealed abnormalities in one dog (case 18) indicative of atypical hypoadrenocorticism and this dog did not proceed to the treatment group. Among the 19 dogs treated with imepitoin, 7 adverse events were reported in 6 dogs; these were not necessarily associated with medication but are recorded here in accordance with best practice. Two dogs showed signs of ataxia, and 1 dog chewed some plastic and vomited. These three incidents resolved without specific treatment whilst continuing the imepitoin. Recurrence of a pre-existing lameness occurred in Case 1, resulting in withdrawal from the study by the investigators due to a change in analgesia regime by the referring veterinarian. Diarrhoea was seen in 1 dog (Case 17), which resolved on cessation of imepitoin at 10 mg/kg; when the drug was reintroduced at the lower dose of 5 mg/kg there was no recurrence of diarrhoea and the dog remained on this dose. Case 4 had 2 episodes of ataxia and muscle tremors (including around the head), the first during week 7 and the second during week 8 of treatment with 10 mg/kg imepitoin. Both of these episodes occurred around the time when noises were audible which had historically resulted in a fearful response. The blood profile was repeated and no abnormalities were detected. Despite general improvements in behaviour in response to noise outwith these events, the owner opted to stop the imepitoin and the dog was withdrawn from the study. Fear-anxiety triggers Of the 17 cases used in the final analysis, 6 cases (35.3%) had a single eliciting context monitored, with the remaining 11 cases (64.7%) having 2 or more eliciting contexts each. 
This gave a total of 35 different eliciting contexts for fear-anxiety: 13 social (visitors, strangers, dogs, crowded areas, home alone) and 22 non-social (14 of which were noises, the remainder being: walking, moving from sofa, novel items, postal delivery, new environments). Behaviour modification recommendations generally included managing the dog's exposure to triggers which included environmental modifications and reinforcing appropriate behaviour. Other recommendations included introduction of a safe haven at home (12 cases); operant counterconditioning protocols with hand-touch to check-in with owner when the dog has heard a noise on walks (10 cases); desensitisation and counterconditioning protocols with sound CDs (9 cases – only 1 of the 9 used the sound CD during the follow up period); basket muzzle training (4 cases); teaching a 'go to mat' behaviour (3 cases); introduction of activity feeders (3 cases); introduction of front-attaching harness (2 cases); desensitisation and counterconditioning to vet practice (1 case); toilet training advice (1 case); cues for 'say hello, say goodbye' when meeting and disengaging from other dogs (1 case). The single case of separation related problems was advised on desensitisation to leaving rituals, introduction of safety signal when leaving dog alone, changing leaving/returning ritual and teaching increased independence from owner. During the study, of the 17 dogs in the final analysis, 4 received a maximum trial dose of around 30 mg/kg twice daily, 10 dogs received a maximum trial dose of around 20 mg/kg twice daily and 2 dogs remained on the initial dose of around 10 mg/kg twice daily, with 1 dog dropping down to and being maintained on 5 mg/kg twice daily. Mean doses of imepitoin being used at key time points were: week 1 (mean = 10.6 mg/kg, median = 10.5 mg/kg); week 11 (mean = 17.7 mg/kg, median = 19.3 mg/kg); decision point (mean = 21.3 mg/kg, median = 20.0 mg/kg). The primary measure of interest was the average weekly global scores (AWG). An Anderson-Darling normality test showed that the data distribution was not significantly different from normal across the 17 subjects at baseline (p = 0.306), week 11 (p = 0.463), decision point (p = 0.088) and mean (p = 0.064), but were significantly different to normal at week 1 (p = 0.037). In this situation, parametric tests can be used to examine effects within the population studied, but generalisation to the wider population should be more cautious [44] for the data relating to non-normally distributed data. Given the acknowledged preliminary nature of this study (case series), it is therefore acceptable to use parametric tests with this caution acknowledged. Summary effects are displayed in Table 4. Table 4 Average weekly global fear-anxiety reaction scores for all 17 dogs in final analysis There was a statistically significant difference in the AWG between baseline and all specific time points whilst on treatment. These effect sizes are large and the results indicate that imepitoin alongside a behaviour modification programme had a useful effect on reducing AWG in dogs, with an initial meaningful effect being seen within the first week of treatment. Average weekly reaction scores The secondary measure of interest was the average weekly reaction scores (AWR) between the eliciting context groups previously described. 
An Anderson-Darling normality test showed that the data were normally distributed across the average weekly reactions for different eliciting contexts and subjects at baseline (p = 0.565), week 1 (p = 0.051) and week 11 (p = 0.139), but not at the decision point (p < 0.005). Noise sensitive group For the noise sensitive group (n = 14), there was a significant difference in the AWR between: baseline and week 1; baseline and week 11; baseline and decision point. There was a larger effect size at week 11 and decision point compared to week 1. The data are displayed in Table 5. Table 5 Average weekly reaction score – noise sensitive group (n = 14) Non-social fear-anxiety group For the non-social fear-anxiety group (excluding noise sensitivities) (n = 8), there was a significant difference in the average weekly reaction scores between: baseline and week 1; baseline and week 11; baseline and decision point. There was a larger effect size at week 11 and decision point compared to week 1. The data are displayed in Table 6. Table 6 Average weekly reaction score – non-social fear/anxiety group (excluding noise sensitivities) (n = 8) Social fear-anxiety group For the social fear-anxiety group (n = 13), there was a significant difference in the average weekly reaction scores between: baseline and week 1; baseline and week 11; baseline and decision point. There was a larger effect size at week 11 and decision point compared to week 1. The data are displayed in Table 7. Table 7 Average weekly reaction score – social fear/anxiety group (including single case of separation related problems) (n = 13) These results indicate large effects in reducing average weekly reaction scores across dogs with noise sensitivities, social and non-social fears and anxieties, with significant effects being seen within the first week across all groups, and larger effects being seen at week 11 and decision points compared to week 1. Negative activation The paired sample t-test used to compare negative activation scores showed a significant difference in the mean scores between baseline and the decision point, with a large effect size. Positive activation There was no significant difference in the mean positive activation scores between baseline and the decision point. Effect size was therefore not calculated. The data are displayed in Table 8. Table 8 Positive and negative activation scale scores These results suggest that imepitoin alongside a behaviour modification programme has a large, statistically significant effect on reducing negative activation (i.e. fearfulness/anxiousness) in dogs as measured through the PANAS. Owners were also asked to rate their overall satisfaction with treatment success: 6 (35.3%) were very satisfied, 6 (35.3%) satisfied, 4 (23.5%) partly satisfied/partly dissatisfied, 0 dissatisfied and 1 (5.9%) very dissatisfied. The owner who was very dissatisfied (case 15) reported an improvement in anxiety on walks during treatment, but insufficient change in reactions to noises, which was the main presenting problem. There was no correlation between owner satisfaction and either the percentile reduction in average global weekly fear-anxiety reactions (rs = −0.288; p = 0.262) or the percentile reduction in negative activation from baseline to decision point (rs = −0.201; p = 0.439). 
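To make the scoring pipeline easier to follow, here is a minimal sketch, with invented numbers, of how a diary total, the AWR and AWG metrics, the conservative imputation rule and a paired comparison with Cohen's d could be computed. It is only an illustration of the published definitions: the study's own analysis was run in Minitab 17, the data below are hypothetical, and all function and variable names are ours (the paper does not state which paired-sample variant of Cohen's d was used; the sketch uses the mean difference divided by the standard deviation of the differences).

```python
import numpy as np
from scipy import stats

def reaction_score(severities):
    # Diary total for one exposure: sum of severity ratings (1-5) over the
    # signs the owner ticked (20 listed signs plus 'others'), maximum 105.
    return sum(severities)

def impute(weekly):
    # Conservative imputation: a week with no exposure takes the mean of the
    # neighbouring weeks; a missing final week carries the last value forward.
    vals = list(weekly)
    for i, v in enumerate(vals):
        if v is None:
            prev = next((x for x in reversed(vals[:i]) if x is not None), None)
            nxt = next((x for x in vals[i + 1:] if x is not None), None)
            vals[i] = prev if nxt is None else (nxt if prev is None else (prev + nxt) / 2)
    return vals

print("one exposure:", reaction_score([3, 4, 2, 5]))   # e.g. four signs scored 3, 4, 2 and 5

# hypothetical dog: mean reaction score (AWR) per week for two eliciting contexts;
# None marks a week with no exposure to that context
weekly_awr = {"noises": [62, 48, None, 31, 24], "visitors": [41, 36, 30, None, 22]}
imputed = {ctx: impute(v) for ctx, v in weekly_awr.items()}
awg = [np.mean([imputed[ctx][w] for ctx in imputed]) for w in range(5)]
print("AWG by week (week 0 = baseline):", np.round(awg, 1))

# paired comparison of baseline vs week-11 AWG across dogs (invented data)
baseline = np.array([50.0, 42.0, 61.0, 38.0, 55.0])
week11 = np.array([28.0, 30.0, 35.0, 22.0, 31.0])
t, p = stats.ttest_rel(baseline, week11)
diff = baseline - week11
d = diff.mean() / diff.std(ddof=1)    # paired-samples Cohen's d (from the differences)
print(f"paired t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```

Averaging within each eliciting context before averaging across contexts means that a context met many times in a week does not dominate the weekly global score, which is the stated purpose of the AWR/AWG construction.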
Continuation of treatment at owners' expense Of the 17 owners, 13 (76.5%) suggested they would use imepitoin at their own expense: 5 opted to use it on a continual basis from the end of the follow up period; 5 suggested they would consider using it on a tactical basis (e.g. leading up to and for the duration of periods during the year with increased risk of exposure to stressors); 3 suggested they would use it either continually or tactically depending on the effect of discontinuing the imepitoin on their dog's behaviour – i.e. if improvements seen during the treatment period were not sustained, they would consider restarting the use of imepitoin. The remaining 4 owners (23.5%) suggested they would not consider using imepitoin again. In 3 of the 5 cases where the owner opted to stop continual use of imepitoin and consider tactical use instead, the consulting clinician (KM) would have preferred continuing imepitoin on a continuous basis given the reported improvements during the follow up period. In all other decisions the clinician was in complete agreement with the client's decision. Ease of administration At the end of the study, owners were asked to rate the ease of administration of imepitoin, with the results being: 6 (35.3%) very easy, 9 (52.9%) easy, 2 (11.8%) moderately easy/moderately difficult. No owners rated administration as difficult or very difficult. Owner reported improvements with change in dose of imepitoin During the study, there were 7 owner reports of improvements in behaviour quickly following dose increases in imepitoin, suggesting specific efficacy of imepitoin. These comprised 5 cases where the dose was increased from approximately 10 mg/kg to 20 mg/kg (Cases 5, 11, 12, 13, 16), and 2 cases where the dose was increased from around 20 mg/kg to around 30 mg/kg (Cases 2, 12). Owner reported reduction in recovery time during treatment with imepitoin Three owners (Cases 5, 12, 13) reported a reduction in recovery time (i.e. quicker recovery) whilst on imepitoin following exposure to fireworks compared to previous years. Owner reported deterioration on cessation of imepitoin Two owners continued to keep diary entries through weaning, and behavioural deterioration on cessation of therapy, while the behaviour modification programme continued, suggests specific efficacy of imepitoin. In Case 15 the owner reported an increased reluctance to go for walks, which had improved during the period on imepitoin. In Case 3 the owner reported an increase in noise sensitivity and reactivity to other dogs at classes, which had improved during the period on imepitoin. 
Table 9 Clinical signs as reported by owners across all eliciting contexts between baseline and whole period on imepitoin These results provide initial evidence of the potential value of using imepitoin alongside a behaviour modification programme in reducing the average weekly global scores reported by owners of dogs with fear-anxiety related problems. When grouped into noise sensitivity, social and non-social fear-anxiety based problems, a reduction in average weekly reaction scores was seen across all groups. These effects appeared to be quick acting (within a week of the onset of administration of treatment) but improved over time. In addition, imepitoin alongside a behaviour modification programme significantly reduced owner report of fearfulness as a temperament trait in dogs. Therefore it may be of value in cases where dogs have been identified as having a generally fearful or anxious temperament, and potentially to help prevent a recurrence of problems in such individuals, although this remains to be tested empirically. At the end of the study 76.5% of owners would consider using imepitoin at their own expense, 88.2% of owners found imepitoin very easy or easy to administer, and 70.6% of owners were satisfied or very satisfied with treatment success. This demonstrates that there were also clinically meaningful improvements being seen by dog owners during the treatment period, although owner satisfaction did not relate to the scale of improvement measured here. This might be because owner satisfaction may depend on a few signs that are of particular concern to them (e.g. destructiveness or vocalisation) [10], whereas the metric used to assess improvement considers all signs even if not problematic to the owner. Although case series provide a relatively low level of evidence among clinical studies [45], they can serve an important purpose in advancing medical knowledge [46]; for example, this report provides the first exploration of the potential application of imepitoin as an anxiolytic for clinical behaviour problems. The majority of dogs had multiple problems arising from fear-anxiety, and so two types of metric were used. One type of metric (average global weekly scores; negative activation scores) gave a measure of the overall response for the totality of problems. The other (average weekly reaction scores) focused on the response to treatment for particular types of problem, grouped by context into social, non-social or noise related fear-anxiety problems. This resulted in dogs being included in multiple evaluations for average weekly reaction scores according to the context(s) in which the problem(s) were apparent; however, only data relating to the relevant eliciting contexts were included in each assessment, and not the global problem data. In the current study, baseline diary entries before commencing treatment meant each dog could act as its own control, and although there was no control for dogs receiving imepitoin without behaviour modification advice, this is more typical of the recommended way of using medication in clinical behaviour cases. Medication is typically used as an adjunct to behaviour modification advice; therefore this study replicates how imepitoin would likely be used in veterinary behavioural practice. 
However, given the rapid onset of effect within the first week, the finding that during the study there were 7 owner reports of improvements in behaviour quickly following dose increases, and the 2 owner reports of deterioration in their dogs behaviour on cessation of imepitoin, it seems likely that there is a specific effect of the drug in these cases and the results are not simply due to the behaviour modification programme. These improvements seen within the first week of treatment, also raise the question of the potential to use imepitoin tactically. In the current study, this was important for those owners who saw benefits whilst their dog was on imepitoin but did not wish to use it continuously, but would consider using it at times of the year where there would be an increased risk of exposure to triggers for fear-anxiety (e.g. fireworks season, shooting season). For these cases it was advised to use imepitoin, in future, at the dose found to be effective during the current study, commencing 1 week before the anticipated onset of problematic triggers, and for the duration of this period (this could range from days to weeks depending on the trigger and level of exposure in the environment anticipated by the owner for the dog). Currently unlicensed benzodiazepines are often used tactically, given 30–60 min before the fear-anxiety trigger is expected, but responses to medication by individuals are often hard to predict [24, 25]. Further studies are required to investigate the potential benefits of using imepitoin in this way or in the context of individual dosing in relation to a specific event, in particular, the minimum period of time imepitoin can be given for an effect to be seen. Typically other anxiolytics such as SSRIs and TCAs may have an onset of action of around 3–5 weeks [15], so in cases where a more rapid onset of action is required, imepitoin may be a valuable option. There were significant differences in reports of reactions to noises between week 1 of treatment and both week 11 and decision point. It should be noted that the average dosage of imepitoin being administered across all dogs differed at these time points, as it was determined and increased on an individual basis. Mean dosage at week 1 was 10.6 mg/kg, at week 11 was 17.7 mg/kg, and at the decision point was 21.3 mg/kg. Reduction in reaction scores could therefore reflect a difference in dose being given, be the result of a longer duration of treatment or reflect the synergistic effect of imepitoin and behaviour modification therapy. However, in light of these findings 20 mg/kg imepitoin twice daily appears to be a reasonable recommended dose for anxiolytic effects with dose titration built around this dose according to effect. The study used a standardised diary format to reduce subjectivity in owner responding, and self-reporting to gather data for analysis. There can be multiple limitations with this method - during the study there were 3 owners who suggested that they were failing to record all eliciting contexts resulting in 'no response' from their dog. This suggests that improvements may actually have been under-reported. The diary was used to record a summary of what the owner saw during the whole event their dog was exposed to the eliciting context, and there was no standardised duration of time each owner was required to observe their dog for. 
A caregiver placebo effect has been shown in dogs receiving treatment for osteoarthritis [47] and should also be considered when assessing the data from the study, which can potentially result in over-reporting of improvements by owners. The diary used did not include a section on recovery time after exposure to an eliciting context however 3 owners spontaneously reported reduced recovery times after exposure to noises compared to the time before treatment with imepitoin. Recovery time might therefore be a useful metric to include in future studies. There were 6 dogs with 7 reported adverse events during the trial, with a high likelihood (as judged by the authors) of the imepitoin being implicated in 2 of these (mild ataxia, Case 11; diarrhoea, Case 17) but the likelihood of the drug's involvement in the other 3 is largely unknown although in one we consider it unlikely (Case 1). Paradoxical excitement can occur in species (including dogs) receiving benzodiazepines [24, 31], and although a potential adverse effect of using imepitoin is transient hyperactivity [35], there were no dogs in the study where this was reported. Tolerance to the effects of benzodiazepines can develop with ongoing use [15, 31] such as those demonstrated in anxiolytic models in mice [48]. The development of tolerance to the anticonvulsant effects of diazepam in dogs has also been documented [49]; however tolerance to the anticonvulsant effects of imepitoin has not been demonstrated [33, 37]. Additionally, when looking at anxiolysis, the results of this study showed significant effects within the first week of treatment with continued improvements seen at week 11 and decision points which is not suggestive of tolerance developing within this time frame. Concerns have been raised over the potential for benzodiazepines disinhibiting and therefore increasing aggressive behaviour in dogs and other species [15, 31] and one study reported increased aggressive behaviour and new aggression in a small proportion of dogs treated with diazepam [24]. There were no dogs in the study where aggressive behaviour was reported in the diary entries where it had not been seen before, and overall, there was a 50.3% reduction in owner reported signs of aggressive behaviour across all dogs whilst on imepitoin compared to the baseline period. In the human literature, there is conflicting evidence supporting or refuting the amnesic, memory and cognitive impairing effects of long term benzodiazepine use (for a meta-analysis and review see [50]). During the study there were no reports of adverse events suggestive of memory impairments such as loss of previously learned behaviours. In addition, all owners were asked to use operant and/or classical conditioning as part of their individualised behaviour modification programmes and subjectively there appeared to be no difficulty in implementation of such training protocols. It is important to consider both the physical health and concurrent medical problems of any patient presenting with a behavioural complaint [51]. Although this study excluded dogs with a history of seizures, given that behavioural co-morbidities are common in canine epilepsy [38], further studies could explore the value of using imepitoin in these patients. Musculoskeletal pain may cause or exacerbate behaviour problems such as aggressive behaviour [52, 53]. Fear and anxiety may also amplify pain [54]. 
For these reasons, it was important to establish that any medical abnormalities detected on clinical examination were investigated and treated prior to commencing imepitoin and a behaviour modification plan, which occurred in 6 of the 20 dogs enrolled in the study. Indeed 11 subjects were not enrolled in the study due to uncontrolled medical reasons, with 8 of these related to suspected musculoskeletal pain. In addition one dog (Case 1) was withdrawn from the analysis due to recurring lameness which resulted in increased general anxiety and required a change in analgesia. Pain remains an important differential and moderator of response in anxiety cases, and so all cases need careful medical evaluation. This report provides the first exploration of the potential application of imepitoin as an anxiolytic for clinical behaviour problems. This study suggests that imepitoin may be an effective medication to aid the treatment of a range of fear-anxiety based problems when used alongside a behaviour modification programme. The current veterinary formulation of imepitoin licensed for long term use in dogs with epilepsy appears to be safe and well tolerated. A dose of 20 mg/kg imepitoin twice daily appears to be an appropriate starting dose for anxiolytic effects in most patients, with a fast onset of action with improvements being seen within the first week of commencing treatment, giving it an advantage over some other classes of anti-anxiety medication such as TCAs and SSRIs. Further research with a randomized, double-blind, placebo-controlled trial would be useful to confirm the apparent efficacy reported here. PDSA and YouGov. PDSA Animal Wellbeing (PAW) Report 2011 - The State of Our Pet Nation. 2011. Blackwell EJ, Bradshaw JW, Casey RA. Fear responses to noises in domestic dogs: prevalence, risk factors and co-occurrence with other fear related behaviour. Appl Anim Behav Sci. 2013;145(1):15–25. APBC - Association of Pet Behaviour Counsellors. Annual Review of Cases 2012. http://www.apbc.org.uk/system/files/apbc_annual_report_2012.pdf. Accessed 20 August 2016. Bamberger M, Houpt KA. Signalment factors, comorbidity, and trends in behavior diagnoses in dogs: 1,644 cases (1991–2001). J Am Vet Med Assoc. 2006;229(10):1591–601. Sherman BL, Mills DS. Canine anxieties and phobias: an update on separation anxiety and noise aversions. Vet Clin North Am Small Anim Pract. 2008;38(5):1081–106. Casey R. Fear and stress. BSAVA manual of canine and feline behavioural medicine. 2002:144–53. Levine E. Sound sensitivities. BSAVA manual of canine and feline behavioural medicine Quedgeley, Gloucester: British Small Animal Veterinary Association. 2009:159–68. Mills DS. Marchant-Forde JN. The encyclopedia of applied animal behaviour and welfare: CABI; 2010. Panksepp J. Affective neuroscience: the foundations of human and animal emotions: Oxford university press; 1998. Mills DS, Dube MB. Zulch H. Stress and pheromonatherapy in small animal clinical behaviour: John Wiley & Sons; 2012. Panksepp J, Fuchs T, Iacobucci P. The basic neuroscience of emotional experiences in mammals: the case of subcortical FEAR circuitry and implications for clinical anxiety. Appl Anim Behav Sci. 2011;129(1):1–17. Serpell JA, Hsu Y. Development and validation of a novel method for evaluating behavior and temperament in guide dogs. Appl Anim Behav Sci. 2001;72(4):347–64. Sheppard G, Mills DS. The development of a psychometric scale for the evaluation of the emotional predispositions of pet dogs. Int J Comp Psychol. 2002;15(2). 
Landsberg GM, Hunthausen WL, Ackerman LJ. Behavior problems of the dog and Cat3: behavior problems of the dog and cat. Elsevier Health Sciences; 2012. Overall K. Manual of clinical behavioral medicine for dogs and cats. Elsevier Health Sciences; 2013. Takeuchi Y, Houpt KA, Scarlett JM. Evaluation of treatments for separation anxiety in dogs. J Am Vet Med Assoc. 2000;217(3):342–5. King J, Simpson B, Overall K, Appleby D, Pageat P, Ross C, et al. Treatment of separation anxiety in dogs with clomipramine: results from a prospective, randomized, double-blind, placebo-controlled, parallel-group, multicenter clinical trial. Appl Anim Behav Sci. 2000;67(4):255–75. Seksel K, Lindeman M. Use of clomipramine in treatment of obsessive-compulsive disorder, separation anxiety and noise phobia in dogs: a preliminary, clinical study. Aust Vet J. 2001;79(4):252–6. Ibáñez M, Anzola B. Use of fluoxetine, diazepam, and behavior modification as therapy for treatment of anxiety-related disorders in dogs. Journal of Veterinary Behavior: Clinical Applications and Research. 2009;4(6):223–9. Simpson BS, Landsberg GM, Reisner IR, Ciribassi JJ, Horwitz D, Houpt KA, et al. Effects of reconcile (fluoxetine) chewable tablets plus behavior management for canine separation anxiety. Vet Ther. 2007;8(1):18. Pageat P, Lafont C, Falewee C, Bonnafous L, Gaultier E, Silliart B. An evaluation of serum prolactin in anxious dogs and response to treatment with selegiline or fluoxetine. Appl Anim Behav Sci. 2007;105(4):342–50. Joby R, Jemmett J, Miller A. The control of undesirable behaviour in male dogs using megestrol acetate. J Small Anim Pract. 1984;25(9):567–72. Walker R, Fisher J, Neville P. The treatment of phobias in the dog. Appl Anim Behav Sci. 1997;52(3):275–89. Herron ME, Shofer FS, Reisner IR. Retrospective evaluation of the effects of diazepam in dogs with anxiety-related behavior problems. J Am Vet Med Assoc. 2008;233(9):1420–4. Crowell-Davis SL, Seibert LM, Sung W, Parthasarathy V, Curtis TM. Use of clomipramine, alprazolam, and behavior modification for treatment of storm phobia in dogs. J Am Vet Med Assoc. 2003;222(6):744–8. Ogata N, Dodman NH. The use of clonidine in the treatment of fear-based behavior problems in dogs: an open trial. Journal of Veterinary Behavior: Clinical Applications and Research. 2011;6(2):130–7. Korpivaara M, Laapas K, Huhtinen M, Schöning B, Overall K. Dexmedetomidine Oromucosal gel for alleviation of acute anxiety and fear associated with noise in dogs. J Vet Intern Med. 2016;30(4):1493. European Union Veterinary Medicines Directive (2001/82). Dodman NH, Shuster D. Psychopharmacology of animal behaviour disorders. Blackwell Science, 350 Main Street.; 1998. Sigel E, Buhr A. The benzodiazepine binding site of GABA a receptors. Trends Pharmacol Sci. 1997;18(11):425–9. Crowell-Davis SL. Murray T. Veterinary psychopharmacology: John Wiley & Sons; 2008. Löscher W, Hoffmann K, Twele F, Potschka H, Töllner K. The novel antiepileptic drug imepitoin compares favourably to other GABA-mimetic drugs in a seizure threshold model in mice and dogs. Pharmacol Res. 2013;77:39–46. Löscher W, Potschka H, Rieck S, Tipold A, Rundfeldt C. Anticonvulsant efficacy of the low-affinity partial benzodiazepine receptor agonist ELB 138 in a dog seizure model and in epileptic dogs with spontaneously recurrent seizures. Epilepsia. 2004;45(10):1228–39. National Office of Animal Health (NOAH) Compendium. 2016. Rundfeldt C, Tipold A, Löscher W. 
Efficacy, safety, and tolerability of imepitoin in dogs with newly diagnosed epilepsy in a randomized controlled clinical study with long-term follow up. BMC Vet Res 2015;11(1):1. Tipold A, Keefe T, Löscher W, Rundfeldt C, Vries F. Clinical efficacy and safety of imepitoin in comparison with phenobarbital for the control of idiopathic epilepsy in dogs. J Vet Pharmacol Ther. 2015;38(2):160–8. Rundfeldt C, Gasparic A, Wlaź P. Imepitoin as novel treatment option for canine idiopathic epilepsy: pharmacokinetics, distribution, and metabolism in dogs. J Vet Pharmacol Ther. 2014;37(5):421–34. Shihab N, Bowen J, Volk HA. Behavioral changes in dogs associated with the development of idiopathic epilepsy. Epilepsy Behav. 2011;21(2):160–7. Rundfeldt C, Löscher W. The pharmacology of imepitoin: the first partial benzodiazepine receptor agonist developed for the treatment of epilepsy. CNS drugs. 2014;28(1):29–43. Rostock A, Tober C, Dost R, Bartsch R, editors. AWD131–138 is a potential novel anxiolytic without sedation and amnesia: A comparison with diazepam and buspirone. NAUNYN-SCHMIEDEBERGS ARCHIVES OF PHARMACOLOGY; 1998: SPRINGER VERLAG 175 FIFTH AVE, NEW YORK, NY 10010 USA. Rostock A, Tober C, Dost R, Rundfeldt C, Bartsch R, Egerland U, et al. AWD-131-138. Drugs Future. 1998;23(3):253–5. Association of Pet Behaviour Counsellors (APBC). http://www.apbc.org.uk/. Cohen J. Statistical power analysis for the behavioural sciences. Lawrence Earlbaum Associates: Hillside. NJ; 1988. Kirk RERE. Statistical issues; a reader for the behavioral sciences; 1972. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach. Edinburgh: EBM Second Edition Churchill Livingstone; 2000. Vandenbroucke JP. In defense of case reports and case series. Ann Intern Med. 2001;134(4):330–4. Conzemius MG, Evans RB. Caregiver placebo effect for dogs with lameness from osteoarthritis. J Am Vet Med Assoc. 2012;241(10):1314–9. Stephens D, Schneider H. Tolerance to the benzodiazepine diazepam in an animal model of anxiolytic activity. Psychopharmacology. 1985;87(3):322–7. Frey H-H, Philippin H-P, Scheuler W. Development of tolerance to the anticonvulsant effect of diazepam in dogs. Eur J Pharmacol. 1984;104(1–2):27–38. Barker MJ, Greenwood KM, Jackson M, Crowe SF. Cognitive effects of long-term benzodiazepine use. CNS drugs. 2004;18(1):37–48. Fatjó J, Bowen J. Medical and metabolic influences on behavioural disorders. BSAVA manual of canine and feline behavioural medicine 2nd ed Gloucester, England: British Small Animal Veterinary Association. 2009:1–9. Barcelos A, Mills D, Zulch H. Clinical indicators of occult musculoskeletal pain in aggressive dogs. Vet Rec. 2015:vetrec-2014-102823. Camps T, Amat M, Mariotti VM, Le Brech S, Manteca X. Pain-related aggression in dogs: 12 clinical cases. Journal of Veterinary Behavior: Clinical Applications and Research. 2012;7(2):99–102. Hellyer P, Rodan I, Brunt J, Downing R, Hagedorn JE, Robertson SA, et al. AAHA/AAFP pain management guidelines for dogs and cats. J Feline Med Surg. 2007;9(6):466–80. We would like to thank Boehringer Ingelheim for funding this research after reviewing the protocol for the study design and allowing us to report our findings without comments or restrictions. We would also like to thank all the dogs and their owners for enrolling in the study and dedicating their time, particularly with keeping diary entries and regular communication with the Animal Behaviour Clinic. 
The research was funded by Boehringer Ingelheim, producers of Pexion™. Boehringer Ingelheim reviewed the protocol for the study design, but were not involved with collection, analysis or interpretation of the data, nor with the writing of the manuscript. All data generated and analysed during this study are included in this published article. The idea for the work was conceived by DM. The study was designed by DM and KM. The consultations were performed by KM. The data were analysed by DM and KM. The paper was written by DM and KM. Both authors read and approved the final manuscript. This research was undertaken under the permission granted by an Animal Test Certificate obtained from the Veterinary Medicines Directorate prior to the onset of the study (Animal Testing Certificate number: ATC-S-044). The study was granted approval by the School of Life Sciences Ethics Committee, University of Lincoln, England. The owners of all the dogs in the study and their respective referring veterinary surgeons gave written informed consent for the dogs to take part in the study. Animal Behaviour, Cognition and Welfare Group, School of Life Sciences, University of Lincoln, Lincoln, Lincolnshire, UK Kevin J. McPeake & Daniel S. Mills Search for Kevin J. McPeake in: Search for Daniel S. Mills in: Correspondence to Kevin J. McPeake. Eliciting context diary. Owner diary used during study to record their dog's behaviour during exposure to an eliciting context. (DOCX 64 kb) McPeake, K.J., Mills, D.S. The use of imepitoin (Pexion™) on fear and anxiety related problems in dogs – a case series. BMC Vet Res 13, 173 (2017). https://doi.org/10.1186/s12917-017-1098-0 Imepitoin Behaviour and psychology
CommonCrawl
Signal Processing Stack Exchange is a question and answer site for practitioners of the art and science of signal, image and video processing. Is there any alternative basis for a Fourier-like transform? The Fourier transform of a continuous signal is just the projection of the signal onto the sinusoidal family for the imaginary part, and onto the same family with the phase offset by a quarter period for the real part. Besides the magical Euler formula $e^{ix} = \cos x + i\sin x$ that eases the analysis of the transform, and the fact that sinusoidal functions happen to be the solution of a perfect harmonic oscillator, I always wondered where the Fourier supremacy came from. Sinusoidal signals aren't so common in everyday life. Even with human speech, the vocal tract function isn't sinusoidal. For example, one transform basis that would seem interesting to me would be one built from a sawtooth function, as signals with fast attack and slow release aren't uncommon... I guess that the transform needs to be invertible and that might not be the case for every basis, but aren't there any alternative bases that provide a bijective forward and inverse transform? (And if so, what would the conditions be?) I will elaborate a little. I'm looking for a family of periodic basis functions, named $B(t)$ for example, with $B$ of unit period, such that: $$ S(f) = T\{s(t)\}(f) = \int s(t) B\left( t * f\right) dt + i \int s(t) B\left((t - \frac{1}{4})*f\right)dt$$ with $T$ invertible in some way... fourier-transform
The Fourier basis is the eigenbasis for time invariant linear systems. That's why it's so useful, not the fact that signals are sinusoidal. – Jazzmaniac
The most important reason for the "Fourier supremacy", as you called it, has been pointed out by Jazzmaniac in his comment. The complex exponential $e^{st}$ with $s=\sigma+j\omega$ is an eigenfunction of the convolution operator: $$e^{st}*f(t)=\int_{-\infty}^{\infty}e^{s(t-\tau)}f(\tau)d\tau= e^{st}\int_{-\infty}^{\infty}e^{-s\tau}f(\tau)d\tau=e^{st}F(s)$$ where the eigenvalue $F(s)$, if it exists, is the Laplace transform of $f(t)$, and for $\sigma=0$, i.e. for $s=j\omega$, $F(j\omega)$ is the Fourier transform of $f(t)$ (again, if it exists). Even though there are several important differences between the Fourier and the Laplace transform, they are deeply related, and for answering your question their differences are not important. By analogy, the same is true for the discrete-time Fourier transform and for the $\mathcal{Z}$-transform in the discrete domain. So when in the remainder of my answer I mention the Fourier transform, you might as well replace it by the Laplace transform or the $\mathcal{Z}$-transform, and everything will remain valid. So why should we be concerned with eigenfunctions of the convolution operator? Because convolution describes the input-output relation of linear time-invariant (LTI) systems, and these systems are often very good approximations of practically important systems such as frequency selective filters, integrators, differentiators, etc. Apart from being an invaluable tool for the analysis of LTI systems, the Fourier transform also has an important physical interpretation as the spectrum, i.e. the frequency content, of a signal. Furthermore, the FFT is a highly efficient algorithm for the computation of its discrete version for finite length signals. So why don't we just say we're doing fine with the Fourier transform and its siblings? 
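Before getting to that, a short numerical check may make the eigenfunction argument above concrete. This is my own illustration rather than part of the original answer: it builds an arbitrary LTI system as a circular convolution and shows that a sampled complex exponential passes through it unchanged apart from a complex scale factor (the eigenvalue), whereas a sawtooth of the kind proposed in the question comes out with a different shape.

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
h = rng.normal(size=N)                        # impulse response of an arbitrary LTI system

def lti(x):
    # circular convolution y[n] = sum_m h[m] x[(n - m) mod N], computed via the FFT
    return np.fft.ifft(np.fft.fft(h) * np.fft.fft(x))

n = np.arange(N)
expk = np.exp(2j * np.pi * 5 * n / N)         # complex exponential at frequency bin 5
saw = (n % 16) / 16.0                         # sawtooth with a period of 16 samples

ratio = lti(expk) / expk                      # constant ratio => eigenfunction
print("exponential, spread of y/x around a constant:", float(np.max(np.abs(ratio - ratio[0]))))

y = lti(saw)
scale = np.vdot(saw, y) / np.vdot(saw, saw)   # best single complex scale factor
print("sawtooth, relative residual after best scaling:",
      float(np.linalg.norm(y - scale * saw) / np.linalg.norm(y)))
```

The constant ratio in the first case is just the system's frequency response at that frequency; the sawtooth is a mixture of several harmonics that the system scales differently, so no single scale factor reproduces the output.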
So why don't we just say we're doing fine with the Fourier transform and its siblings? Because there are applications where we are looking for other properties of a transform. One example would be time localization in signal analysis, since the basis functions of the Fourier transform have no time localization at all because of their infinite extent. In such cases other transforms have become widely used, such as the short-time Fourier transform and the wavelet transform. For discrete-time signals, filter banks provide very general signal expansions. They can be designed to match the desired properties of the signal expansion. Since you mentioned the sawtooth function as a possible basis function, I would like to point out to you the Slant transform (mentioned in this answer), which is used in image coding. Furthermore, there is the related Slantlet transform, which is a special type of discrete wavelet transform with piecewise linear basis functions. You can see the basis functions in Fig. 5 of this paper. – Matt L.

Comment: See the comments on adel bibi's answer: if not evaluated on the unit circle, the $\mathcal{Z}$-transform and the Fourier transform are not equivalent, are they? Is this then an appropriate starting point for establishing a new basis? (Jul 9 '14 at 6:51)

Comment: @user9020: I wrote that there are important differences between the Fourier and the Laplace transforms (or, equivalently, between the discrete-time Fourier transform and the $\mathcal{Z}$-transform), but these differences have no bearing on your question. The basis functions are in all of these cases complex exponentials, either with or without a real-valued component in the argument of the exponential. But still, it's one class of basis functions and there's nothing new to be discovered here. – Matt L.

I'll try to derive the necessary and sufficient conditions for finding a basis of periodic functions for a discrete signal space, i.e. the set $S_N:=\{f:{\mathbb{Z}}/{N\mathbb{Z}}\to\mathbb{C} :\sum_n |f(n)|^2 <\infty\}$ of all square-magnitude-summable $N$-periodic functions from the integers to the complex numbers. The continuous case is interesting too, but out of the scope of a simple answer. There are two statements about the basis we can immediately see to be true: 1) The number of basis functions is identical to the number of Fourier basis functions (assuming complex basis functions). 2) The periods of the basis functions must divide $N$, i.e. an integer number of basis periods must fit into the period $N$ of the signal. Note that the second condition does not always lead to integer sample periods for the basis functions. The periodicity of the basis functions is therefore realized by discretely sampling periodic continuous functions, and the discrete representation itself is not necessarily strictly periodic. The two statements above allow us to enumerate the basis functions. The number of integer divisors of $N$ is surely not greater than $N$, which implies that all integer divisor periods are realised as basis function periods. Let's assume $N$ is even (I leave the odd case to you); then $N/2$ is an integer divisor and is realised as a basis function period. This basis function is even exactly periodic on the discrete sample grid and can be written as the number sequence $(a,b,a,b,\dots)\in\mathbb{C}^N$ with $N/2$ repetitions. We can write this basis vector as a complex Fourier series, and we find that only the DC and the Nyquist coefficients do not vanish. That means the basis function can be written as a linear superposition of the periodic continuations of $(1,1)$ and $(1,-1)$.
At this point, we can assume that your basis functions have zero mean, just to make things a little simpler. In this case the DC component of the Fourier decomposition also vanishes and the basis function looks just like the corresponding Fourier basis function. There is also no other linearly independent basis function of that same period. From here on, things get simpler. The next basis function candidate fits one more period into $N$, that means $N/2+1$ repetitions and a period of $N/(N/2+1)$. The basis function candidates for this period have a Fourier decomposition (again, zero mean is assumed) that is only non-zero at Nyquist and Nyquist-1. This is only linearly independent from the basis function we already know if the coefficient at Nyquist-1 does not vanish. That also means we can have two independent basis functions for this period (this will typically be one basis function and its complex conjugate). We can continue with this construction by adding another basis period, looking at the Fourier series, enforcing linear independence and finding two independent basis functions. This procedure can be repeated $N/2-1$ times, and together with the constant basis function responsible for the DC component we get exactly the $N$ basis functions that we require for invertibility. The constructed basis is precisely then a "transformation" (i.e. a bijective map) if the assumptions we made in the process are true. The necessary condition was that the lowest-order Fourier coefficient of your periodic basis, expanded in its own intrinsic period, doesn't vanish. We also assumed that the basis was zero-mean (apart from the constant basis function) and that $N$ was even. These are not necessary but just convenient; you can drop them if you want. So let's summarise: a periodic function gives rise to a basis on a discrete space iff the Fourier coefficient of the base period is non-zero. Then the basis implies a discrete Fourier-like transformation, i.e. it is invertible. – Jazzmaniac

There's plenty. The Laplace transform is just one example. There are uncountably many bijective transformations. So the question would be whether there are other useful transforms. It turns out there are some.

Comment: I am afraid that the OP is asking for different basis functions for the Fourier transform, i.e. square waves. – jojek ♦

Comment: So how would you characterize these bases suitable for a "Fourier" transform in your sense? A square wave basis is not exactly trivial either - it appears to be ill-defined in the continuous case, and in the discrete case, it's plainly hopeless. Maybe the OP is interested in wavelet bases, which are a different breed, though.

Comment: You catch my point - it's not about the Laplace transform.
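To make the question's sawtooth idea concrete, here is a small numerical sketch, assuming a unit-period sawtooth prototype whose rows are sampled at the "harmonic" rates $kn/N$, in analogy with the DFT rows $e^{-2\pi i kn/N}$. It does not assert invertibility; it simply tests, for the chosen $N$, whether the resulting square system is numerically invertible, which is the bijectivity asked about above:

```python
import numpy as np

def saw(t):
    """Unit-period sawtooth with values in [-1, 1)."""
    return 2.0 * np.mod(t, 1.0) - 1.0

N = 32
n = np.arange(N)
# k-th row: the sawtooth sampled at the k-th harmonic rate k*n/N
A = np.array([saw(k * n / N) for k in range(N)], dtype=float)

rank = np.linalg.matrix_rank(A)
print("rank", rank, "of", N)
if rank == N:
    print("condition number:", np.linalg.cond(A))
    # Round trip: analysis (solve for coefficients) followed by synthesis.
    x = np.random.default_rng(0).standard_normal(N)
    coeffs = np.linalg.solve(A.T, x)     # x = sum_k coeffs[k] * A[k, :]
    print("round trip ok:", np.allclose(A.T @ coeffs, x))
else:
    print("rank deficient: this sampled sawtooth family is not a basis for this N")
```

Whether the rank is full depends on the prototype waveform and on $N$; in the spirit of the answer above, what matters is the Fourier content of the prototype within each admissible period.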
CommonCrawl
The τ-fixed point property for left reversible semigroups. Francisco E Castillo-Santos and Maria A Japón. Fixed Point Theory and Applications 2015, 2015:109. © Castillo-Santos and Japón 2015. Received: 27 January 2015. Published: 9 July 2015. In this article we use the generalized Gossez-Lami Dozo property and the Opial condition to study the fixed point property for left reversible semigroups in separable Banach spaces. As a consequence, some previous results will be deduced and new examples of Banach spaces satisfying the fixed point property for left reversible semigroups are shown. We will also extend some previous theorems when we consider the semigroup formed by a single nonexpansive mapping and its iterates. Keywords: Schauder basis; sequentially separating norms; fixed point property; nonexpansive mappings; renorming theory; Schur property. MSC: 46B03. A semigroup S is said to be a semitopological semigroup if S is equipped with a Hausdorff topology such that for each \(a\in S\), the two mappings from S into S defined by \(s\to as\) and \(s\to sa\) are continuous. A semitopological semigroup S is said to be left reversible if any two nonempty closed right ideals of S have nonempty intersection. Clearly every Abelian semitopological semigroup and every semitopological group are left reversible. Also left amenable and in particular amenable semitopological semigroups are left reversible [1]. Let C be a subset of a Banach space X and let S be a semitopological semigroup. A nonexpansive action of S on the set C is a map \(\phi:S\times C\to C\), denoted by \(\phi(s,u)=s(u)\) (or su), which satisfies: \(ts(u)=t(su)\) for all \(t,s\in S\) and \(u\in C\). For all \(u_{0}\in C\), the function \(s\in S\to s(u_{0})\in C\) is continuous. For every \(s\in S\), the mapping \(u\in C\to s(u)\in C\) is nonexpansive. A subset C is said to verify the fixed point property for left reversible semigroups if for every left reversible semitopological semigroup S and for every nonexpansive action \(\phi:S\times C\to C\), the set \(\operatorname{Fix}(S) := \{u\in C: t(u)= u,\forall t\in S\}\) is nonempty. Let X be a Banach space and τ be a topology on X. It is said that X has the τ fixed point property (τ-FPP) for left reversible semigroups if every closed, convex, bounded subset C which is τ-compact has the fixed point property for left reversible semigroups. Given a nonexpansive mapping T, if we replace the left reversible semigroup by the discrete and Abelian semigroup \(\{T, T^{2}, T^{3},\ldots\}\) acting from C to C, Definition 1.1 becomes the usual definition of the τ-FPP for nonexpansive mappings. There exist some Banach spaces failing the w-FPP [2] and therefore they fail the w-FPP for left reversible semigroups (we can consider the semigroup \(S=\{T, T^{2}, T^{3},\ldots\}\) where T is the fixed point free nonexpansive mapping in the well-known Alspach example [2]). In 1965 Kirk proved that every Banach space with weak normal structure satisfies the w-FPP for nonexpansive mappings. In a similar way it can be proved that weak∗ normal structure implies the weak∗-FPP in dual Banach spaces.
In the seventies Kirk's result was generalized by Lim [3], Holmes and Lau [4] in the setting of nonexpansive actions of left reversible semigroups, that is, weak normal structure implies the w-FPP for left reversible semigroups. In the case of dual Banach spaces, such a general statement is still unknown for the weak∗ normal structure and the weak∗-fixed point property for left reversible semigroups (see Open Problem 6.3 in [5]). Particular examples of dual Banach spaces are known to satisfy the weak∗-FPP for left reversible semigroups. In 1980 Lim [6] proved that the sequence space \(\ell_{1}\) satisfies the weak∗-FPP for left reversible semigroups. In 2010, Lau and Mah in [5] generalized Lim's result by proving that the Fourier-Stieltjes algebra \(B(G)\) of a separable compact group verifies the weak∗-FPP for left reversible semigroups. Notice that if G is the torus group, then \(B(G)\) is isometric to \(\ell_{1}(\mathbb{Z})\). In 2010, Randrianantoanina [7] proved that the space \(\mathcal {T}(H)\) of trace class operators on a Hilbert space also satisfies the weak∗-FPP for left reversible semigroups. He also proved the same property for the Hardy Banach space [8]. However, the techniques used in the previous articles cannot be extended to more general dual Banach spaces since they are mainly based on the following fact: in the above-mentioned Banach spaces, the asymptotic center of a weak∗ compact set with respect to a decreasing net of bounded subsets is proved to be either norm compact or weakly compact. This is not true for every weak∗ compact set in a dual Banach space, as we will later check in Example 2.1. In 2010, Randrianantoanina [7] proved that the Banach space \(L_{1}[0,1]\) or, more generally, every noncommutative \(L_{1}\)-space associated to a finite von Neumann algebra satisfies the fixed point property for left reversible semigroups with respect to the abstract measure topology τ (the convergence in measure topology in case of \(L_{1}[0,1]\)). Here the asymptotic centers of τ-compact sets are norm compact. In this paper we develop new arguments to deduce whether a dual Banach space satisfies the weak∗-FPP for left reversible semigroups. More generally, we will consider τ as any translation invariant topology on a separable Banach space X and we give sufficient conditions to assure the τ-FPP for left reversible semigroups. The strict Opial condition and the generalized Gossez-Lami Dozo property will be our main tools. Most of the previous known results will be deduced from ours, but we will also achieve new examples of Banach spaces which satisfy the τ-FPP for left reversible semigroups. Here we will consider different types of topologies. Firstly we will regard the weak∗ topology in Musielak-Orlicz sequence spaces, in some renormings of \(\ell_{1}\) and in some other dual Banach spaces non-isomorphic to \(\ell_{1}\). We will also consider the topology of the convergence locally in measure in some function spaces, the abstract measure topology in L-embedded Banach spaces and the topology of ρ-almost everywhere convergence in modular function spaces. Moreover, we will extend some known results for nonexpansive mappings to the setting of the fixed point property for left reversible semigroups. We introduce some definitions and concepts. Let X be a Banach space and let \(\{B_{s}\}_{s\in A}\) be a decreasing net of bounded subsets of X. 
For \(x\in X\) and \(s \in A\), we consider $$\begin{aligned}& r_{s}(x) := \sup\bigl\{ \Vert x - y\Vert : y\in B_{s}\bigr\} , \\& r(x) := \inf\bigl\{ r_{s}(x): s\in A\bigr\} =\lim_{s} r_{s}(x). \end{aligned}$$ Notice that \(r(\cdot)\) is a continuous function for the norm topology. We defined the asymptotic radius and the asymptotic center of a set C with respect to the family \(\{B_{s}\}_{s\in A}\) as $$\begin{aligned}& r_{0} := \inf\bigl\{ r(x): x\in C\bigr\} , \\& \mathcal{AC}\bigl(\{B_{s}\}_{s\in A}, C\bigr) := \bigl\{ x\in C: r(x) = r_{0}\bigr\} . \end{aligned}$$ In the following example we check that asymptotic centers of weak∗-compact sets are not weakly compact in general. Let X be the space \(\ell_{1}\) renormed as follows: $$\Vert x\Vert =\max_{n\in\mathbb{N}} \Biggl(\bigl\vert x(n)\bigr\vert +{1\over 2}\sum_{i=n+1}^{\infty}\bigl\vert x(i)\bigr\vert \Biggr). $$ Since \(\{e_{n}\}\) is a monotonous boundedly complete Schauder basis for \(X=(\ell_{1},\Vert \cdot\Vert )\), this space is isometric to the dual of the closed subspace spanned by the orthogonal functions \(\{e_{n}^{*}\}\). Thus X is a dual space and in fact its weak∗ topology coincides with the \(\sigma(\ell_{1},c_{0})\) topology. Consider the set $$C:= \Biggl\{ \sum_{n=1}^{\infty}t_{n} e_{n}: t_{n}\ge0, \sum _{n=1}^{\infty}t_{n}\le{1\over 2} \Biggr\} , $$ which is a closed convex bounded \(w^{*}\)-compact set. Define the bounded subsets \(B_{s}=\{e_{k}:k\ge s\}\), where the \(\{e_{n}\}\) are the unit basic vectors. In this case \(r(x)=\limsup_{s}\Vert x-e_{s}\Vert \), and it is easy to check that \(r(x)\ge1\) for all \(x\in\ell_{1}\). However, for all k, n with \(k< n\), $$\biggl\Vert {1\over 2}e_{k}-e_{n}\biggr\Vert =1, $$ which implies that \({1\over 2}e_{n}\) belongs to the asymptotic center of C with respect to the sequence \(\{B_{s}\}_{s}\), and this center is not weakly compact. Therefore, the arguments used in [6–8] or [5] to prove the existence of a common fixed point for left reversible semigroups of nonexpansive mappings are not useful in this example. We will later deduce that \((\ell_{1},\Vert \cdot\Vert )\) does satisfy the \(w^{*}\)-FPP for left reversible semigroups. Let \((T, \tau)\) be a topological space. Recall that a function \(f:T\to \mathbb{R}\) is said to be τ-sequentially lower semicontinuous (τ-slsc) if \(f(t)\le\liminf_{n} f(t_{n})\) for every sequence \((t_{n})_{n}\subset T\) with \(\tau\mbox{-} \lim_{n} t_{n}=t_{0}\). From now on, let X be a Banach space and let τ be a translation invariant topology on X, that is, \(\tau\mbox{-} \lim_{n} x_{n}=x\) if and only if \(\tau\mbox{-} \lim_{n} (x_{n}-x)=0\). We introduce the following definitions which generalize two geometric properties which are well known in case of the weak topology. These properties were attempts to get some information about the behavior of the norm on the weakly convergent sequences. The Opial condition was introduced by Opial (1967) [9] and the generalized Gossez-Lami Dozo property was introduced by Jiménez-Melado (1992) [10] when τ is the weak topology. Let \((X,\Vert \cdot\Vert )\) be a Banach space and τ be a topology on X. We will say that X has the generalized Gossez-Lami Dozo property for the topology τ (τ-GGLD) if for every (norm) bounded and τ-null sequence \(\{x_{n}\}\) such that \(\lim \|x_{n}\|\neq 0\) and \(\lim_{n,m, n\neq m}\|x_{n} - x_{m}\|\) exists, it is the case that $$\lim \|x_{n}\| < \lim_{n,m, n\neq m}\|x_{n} - x_{m}\|. 
$$ It is said that a Banach space \((X,\Vert \cdot\Vert )\) satisfies the Opial condition with respect to a topology τ if $$\liminf_{n}\Vert x_{n}-x_{0}\Vert < \liminf_{n} \Vert x_{n}-x\Vert $$ for all \(x\in X\) with \(x\ne x_{0}\), whenever \((x_{n})\) is a sequence in X with \(\tau\mbox{-} \lim_{n} x_{n}=x_{0}\). The τ-GGLD property and the Opial condition will be the key to our main results. It is well known that these properties are not related. We will also illustrate this assertion with some examples. Let \((X,\Vert \cdot\Vert )\) be a Banach space and τ be a topology on X. It is said that X is uniformly Kadec-Klee with respect to τ, \(\operatorname{UKK}(\tau)\), if for every \(\epsilon>0\) there exists some \(\delta>0\) such that whenever \(\{x_{n}\}_{n}\) is a sequence in the closed unit ball of X, which is τ-convergent to a point \(x\in X\) with \(\inf_{n\ne m} \Vert x_{n}-x_{m}\Vert >\epsilon\), then \(\Vert x\Vert <1-\delta\). Associated with the \(\operatorname{UKK}(\tau)\) property, the following modulus is defined: $$P_{X,\tau}(\epsilon)=\inf\Bigl\{ 1-\Vert x\Vert : \Vert x_{n} \Vert \le 1, \tau\mbox{-} \lim_{n} x_{n}=x, \inf_{n\ne m}\Vert x_{n}-x_{m}\Vert >\epsilon \Bigr\} . $$ In case that τ is the weak topology the previous coefficient is known as Partington's modulus, and it is clear that a Banach space has the \(\operatorname{UKK}(\tau)\) property if and only if \(P_{X,\tau}(\epsilon)>0\) for every \(\epsilon\in(0,2]\). We denote \(P_{X,\tau}(1^{-})=\lim_{\epsilon \to1^{-}}P_{X,\tau}(\epsilon)\). For τ a linear topology, it is not difficult to check that $$\lim \|x_{n}\| \le \bigl(1- P_{X,\tau}\bigl(1^{-}\bigr) \bigr)\lim _{n,m, n\neq m}\|x_{n} - x_{m}\| $$ for every (norm) bounded τ-null sequence \(\{x_{n}\}\) such that \(\lim_{n,m, n\neq m}\|x_{n} - x_{m}\|\) exists. Therefore X verifies the τ-GGLD property whenever X is \(\operatorname{UKK}(\tau)\). 3 Main results Let C be a set and \(\{B_{s}\}_{s\in A}\) be a decreasing net of bounded subsets of X. It is clear that $$\mathcal{AC}\bigl(\{B_{s}\}_{s\in A}, C\bigr) = {\bigcap _{n\in\mathbb{N}}} \biggl\{ x\in C: r(x) \le r_{0}+ {1\over n}\biggr\} . $$ Therefore, if the set C is τ-sequentially compact and the function \(r(\cdot)\) is τ-slsc, the asymptotic center \(\mathcal {AC}(\{B_{s}\}_{s\in A}, C)\) is a nonempty, τ-sequentially compact set. If C is convex, so is \(\mathcal{AC}(\{B_{s}\}_{s\in A}, C)\). We now prove the following technical lemma. Lemma 3.1 Let C be a convex bounded subset of X. Let \(\{B_{s}\}_{s\in A}\) be a decreasing net of subsets of C such that \(\mathcal{AC}(\{B_{s}\}_{s\in A}, C)=C\). If C is (norm) separable, then there exists \(\{x_{n}\}\subset C\) such that $$\lim_{n}\Vert x_{n}-x\Vert =r_{0} $$ for every \(x\in C\), where \(r_{0}\) denotes the asymptotic radius of C with respect to the net \(\{B_{s}\}_{s}\). We will use a similar argument to that of Theorem 2 in [11]. Let \(\{y_{n}\}\) be a dense sequence in C and define \(\overline{y}_{n} = \sum_{i=1}^{n} \frac{y_{i}}{n}\). Since \(y_{1}\in C=\mathcal{AC}(\{B_{s}\}_{s}, C)\), we can find \(s_{1}\in S\) such that $$r_{0}\leq r_{s_{1}}(y_{1})\leq r_{0} + 1. $$ Then select any \(x_{1}\in B_{s_{1}}\) such that \(\|x_{1} - y_{1}\|\geq r_{0} -1\). Assume we have constructed \(x_{1}, x_{2},\ldots,x_{n-1}\) such that for all \(1\leq j\leq k\leq n-1\) we have $$r_{0} - \frac{2}{k} + \frac{1}{k^{2}}\leq\|x_{k} - y_{j}\|\leq r_{0} + \frac{1}{k^{2}}. 
$$ Take \(s_{n}\in S\) such that \(r_{s_{n}} (y_{i}) \leq r_{0} + \frac{1}{n^{2}}\), \(i=1,\ldots,n\) and \(r_{s_{n}}(\overline{y}_{n})\leq r_{0} + \frac{1}{n^{2}}\). Select \(x_{n}\in B_{s_{n}}\) such that \(r_{0} - \frac{1}{n^{2}}\leq\|x_{n} - \overline{y}_{n}\|\). Fix \(k\leq n\). We then have the following inequalities: $$\begin{aligned} r_{0} - \frac{1}{n^{2}} \leq& \|x_{n} - \overline{y}_{n}\| \leq\sum_{i=1}^{n} \frac{\|x_{n} - y_{i}\|}{n} \\ =& \frac{\|x_{n} - y_{k}\|}{n} + \sum_{i=1, i\neq k}^{n} \frac{\|x_{n} - y_{i}\|}{n} \\ \leq& \frac{\|x_{n} - y_{k}\|}{n} + \sum_{i=1, i\neq k}^{n} \frac{r_{0} + \frac{1}{n^{2}}}{n} \\ =&\frac{\|x_{n} - y_{k}\|}{n} + \biggl(\frac{n-1}{n} \biggr) \biggl(r_{0} + \frac{1}{n^{2}} \biggr). \end{aligned}$$ From this we obtain that \(\frac{r_{0}}{n} - \frac{2}{n^{2}} + \frac{1}{n^{3}}\leq\frac{\|x_{n} - y_{k}\|}{n}\) and it follows that $$r_{0} - \frac{2}{n} + \frac{1}{n^{2}}\leq\|x_{n} - y_{k}\|\leq r_{s_{n}}(y_{k})\le r_{0} + \frac{1}{n^{2}}. $$ Thus, for a fixed k, it easily follows that \(\lim_{n\rightarrow\infty} \|x_{n} - y_{k}\|= r_{0}\). Since \(\{ y_{n}\}\) is dense in C, we deduce \(\lim_{n\rightarrow\infty} \|x_{n} - x\|= r_{0}\) for all \(x\in C\). □ Recall that every left reversible semitopological semigroup S becomes a directed set when the following partial order is defined: $$a,b\in S, \quad a\ge b\quad \Longleftrightarrow\quad aS\subset \operatorname{cl}(bS), $$ where \(\operatorname{cl}(bS)\) denotes the topological closure of the right ideal bS. Let C be a subset of X, let S be a left reversible semitopological semigroup, and consider a nonexpansive action of S on C. For a fixed element \(u\in C\) define \(W_{s}=\operatorname{cl}(sS(u))\), where the closure is taken with respect to the norm topology. The sets \(\{W_{s}:s\in S\}\) form a decreasing net of subsets of C with respect to the order above. In this case define \(r(x)=\lim_{s} r(x, W_{s})\). Moreover, $$r_{ts}(tx)\le r_{s}(x) $$ for all \(t,s\in S\) and \(x\in C\). Indeed $$\begin{aligned} r_{ts}(tx) = & \sup_{y\in W_{ts}}\Vert tx-y\Vert =\sup_{y\in tsS(u)}\Vert tx-y\Vert =\sup_{p\in S}\bigl\Vert tx-tsp(u)\bigr\Vert \\ \le&\sup_{p\in S}\bigl\Vert x-sp(u)\bigr\Vert \le\sup_{y\in W_{s}}\Vert x-y\Vert =r_{s}(x). \end{aligned}$$ Therefore \(r(tx)=\inf_{s} r_{s}(tx)\le\inf_{s} r_{s}(x)=r(x)\) for every \(t\in S\), and this implies that the set $$C(\lambda) := \bigl\{ x\in C: r(x)\leq\lambda\bigr\} $$ is either empty or S-invariant for every \(\lambda>0\). As a consequence, the following lemma is known. Let X be a Banach space endowed with a topology τ. Let C be a closed convex bounded subset of X which is τ-sequentially compact. Let S be a left reversible semitopological semigroup and consider a nonexpansive action of S on the set C. Assume that the function \(r(\cdot)\) is τ-slsc. Then $$\mathcal{AC}\bigl(\{W_{s}\}_{s\in S}, C\bigr) $$ is a nonempty closed convex τ-sequentially compact set which is S-invariant. Next we obtain fixed point results by means of the τ-GGLD and the Opial property. Let X be a Banach space and τ be a topology on X. Let C be a (norm) separable closed, convex, bounded, τ-compact and τ-sequentially compact subset of X. Let S be a left reversible semigroup generating a nonexpansive action over C. Assume that for some \(u\in C\) the previous function \(r(\cdot)\) is τ-slsc. If X verifies either the τ-GGLD property or the Opial condition with respect to τ, then \(\operatorname{Fix}(S)\neq\emptyset\). In case of the weak topology, the separability of C is not necessary.
Let \(\mathcal{F}\) be the family of nonempty, convex, τ-closed and S-invariant subsets of C. Ordering the family by inclusion and using Zorn's lemma, we obtain a set which is minimal with respect to being nonempty, convex, τ-closed and S-invariant. We can then assume that C is the minimal set. Since \(\mathcal{AC}(\{W_{s}\}_{s}, C)\) is also a nonempty, convex, τ-closed and S-invariant subset of C, we have that \(\mathcal{AC}(\{ W_{s}\}_{s\in S}, C)=C\). Let \(r_{0}\) denote the asymptotic radius of C with respect to \(\{W_{s}\}_{s\in S}\) and take \(\{x_{n}\}_{n}\) as in Lemma 3.1. By Theorem III.1.5 of [12] we can further assume that \(\lim_{n,m,n\neq m} \|x_{n} - x_{m}\|\) exists and it must then be equal to \(r_{0}\). Since C is τ-sequentially compact, we can assume that \(\{x_{n}\} _{n}\) is τ-convergent, say to some \(x_{0}\in C\). We have that \(\lim_{n}\Vert x_{n}-x_{0}\Vert =r_{0}\), which contradicts the τ-GGLD property since \(\tau\mbox{-} \lim_{n}(x_{n}-x_{0})=0\). On the other hand, for some \(y\in C\) with \(y\ne x_{0}\), we obtain $$r_{0}=\lim_{n}\Vert x_{n}-x_{0} \Vert = \lim_{n}\bigl\Vert x_{n}-x_{0}-(y-x_{0}) \bigr\Vert =r_{0}, $$ which contradicts the Opial condition. Therefore \(\operatorname{Fix}(S)\neq\emptyset\). In case that τ coincides with the weak topology, it is known that both the w-GGLD condition and the Opial condition for the weak topology imply weak normal structure [10] and therefore the w-FPP for left reversible semigroups [3]. □ Notice that for separable Banach spaces and topologies τ weaker than the norm topology, the τ-compactness of the domain is a superfluous assumption. Indeed, the separability of X implies that X is Lindelöf for the norm and so does for the topology τ since it is weaker that the norm topology. Thus, τ-sequentially compact sets are countably compact and Lindelöf, so they are τ-compact. Many examples of Banach spaces are known to satisfy the GGLD condition or the Opial property with respect to some classical topologies. However, to apply Theorem 3.3, the τ-sequential lower semicontinuity of the function \(r(\cdot)\) for some \(u\in C\) must be checked. In what follows we study equivalent and sufficient conditions to assure this statement. Let \(\{x_{n}\}\) be a bounded sequence. We define the type function associated to the sequence \(\{x_{n}\}_{n}\) by $$\Gamma(x)=\limsup_{n}\Vert x-x_{n}\Vert , \quad x \in X. $$ In case that \(\tau\mbox{-} \lim_{n} x_{n}=0\) we say that Γ is a τ-null type function. Recall that given \(\{B_{s}\}_{s\in A}\) a decreasing net of bounded subsets of X we defined \(r(\cdot)\) associated to the net \(\{B_{s}\} _{s\in A}\) as \(r(x)=\lim_{s} r(x,B_{s})\). The following lemma will be very helpful to assure whether the function \(r(\cdot)\) is τ-sequentially lower semicontinuous. The function \(r(\cdot)\) is τ-slsc if and only if the type functions \(\Gamma(\cdot)\) are τ-slsc. One implication is direct since we can take \(B_{s}=\{x_{n}: n\ge s\}\). Then \(\Gamma(x)=r(x)=\limsup_{n}\Vert x_{n}-x\Vert \). Assume that the type functions are τ-sequentially lower semicontinuous. Take \(\{B_{s}\}_{s\in A}\) any decreasing net of bounded subsets, and let \((y_{n})_{n}\) be a τ-convergent sequence to some point \(y\in X\). We have to prove that \(r(y)\le\liminf_{n} r(y_{n})\). Consider a sequence \((\epsilon_{n})_{n}\) of positive real numbers with \(\lim_{n}\epsilon_{n}=0\). 
We claim that there exists a sequence \(\{x_{n}\}_{n}\) with \(x_{n}\in B_{s_{n}}\) and \(s_{n}>\cdots> s_{1}\), such that $$\begin{aligned}& r(y)-\epsilon_{n}\le\Vert y-x_{n}\Vert < r(y)+ \epsilon_{n}, \end{aligned}$$ $$\begin{aligned}& \Vert y_{i}-x_{n}\Vert \le r(y_{i})+\epsilon_{n}; \quad i=1,2,\ldots n. \end{aligned}$$ Indeed, take \(s_{1}\in A\) such that $$r(y,B_{s_{1}})< r(y)+\epsilon_{1}; \qquad r(y_{1},B_{s_{1}})< r(y_{1})+ \epsilon_{1} $$ and \(x_{1}\in B_{s_{1}}\) with $$r(y)-\epsilon_{1}< \Vert y-x_{1}\Vert < r(y)+ \epsilon_{1}. $$ Assume now that we have obtained \(x_{1},\ldots,x_{n}\) satisfying (1) and (2). Take \(s_{n+1}>s_{n}\) such that $$r(y,B_{s_{n+1}})< r(y)+\epsilon_{n+1} $$ $$r(y_{i},B_{s_{n+1}})< r(y_{i})+\epsilon_{n+1} $$ for \(i=1,\ldots,n,n+1\). Consider \(x_{n+1 }\in B_{s_{n+1}}\) with $$r(y)-\epsilon_{n+1 }< \Vert y-x_{n+1}\Vert < r(y)+ \epsilon_{n+1}. $$ Moreover, $$\Vert y_{i}-x_{n+1}\Vert \le r(y_{i}, B_{s_{n+1}})\le r(y_{i})+\epsilon_{n+1};\quad i=1,2, \ldots,n+1, $$ and the claim is proved. By (1), \(\limsup_{n}\Vert x_{n}-y\Vert =r(y)\) and by (2), \(\limsup_{n}\Vert x_{n}-y_{i}\Vert \le r(y_{i})\) for every \(i\in\mathbb{N}\). We consider the type function \(\Gamma(x)=\limsup_{n}\Vert x_{n}-x\Vert \). We now have $$\begin{aligned} r(y) = &\limsup_{n}\Vert x_{n}-y\Vert =\Gamma(y)\le \liminf_{i}\Gamma (y_{i}) \\ =&\liminf_{i}\limsup_{n}\Vert x_{n}-y_{i}\Vert \le\liminf_{i} r(y_{i}), \end{aligned}$$ which implies that the function \(r(\cdot)\) is τ-sequentially lower semicontinuous as we wanted to prove. □ In the setting of Theorem 3.3, the set C is τ-sequentially compact, so we can assume that the sequence \(\{x_{n}\}\) obtained in the previous lemma is τ-convergent. Moreover, since the topology is translation invariant, we only need to assume that the τ-null type functions are τ-sequentially lower semicontinuous. Finally, we can state our main result in this section. Let X be a separable Banach space, τ be a translation invariant topology on X such that τ-compact sets are τ-sequentially compact. Assume that the τ-null functions are τ-slsc. If X verifies either the τ-GGLD property or the τ-Opial condition, X has the τ-FPP for left reversible semigroups. Notice that when τ is the weak topology, the separability of X is not necessary. On the other hand, it is said that the τ-null type functions are constant on spheres if $$\limsup_{n}\Vert x_{n}-x\Vert =\limsup _{n}\Vert x_{n}-y\Vert $$ for every \(x,y\in X\) with \(\Vert x\Vert =\Vert y\Vert \), where \(\{ x_{n}\}_{n}\) is a norm bounded τ-null sequence. In [11] (Lemma 2) it is proved that the norm and the τ-null type functions are τ-slsc whenever they are constant on spheres. 4 First examples and applications for the weak-star topology In this section we are going to apply Theorem 3.5 to several different classes of dual Banach spaces endowed with their weak∗ topologies. To begin with, consider \(X=(\ell_{1},\Vert \cdot\Vert _{1})\), where by \(\Vert \cdot\Vert _{1}\) we denote the usual norm, and let τ be the weak∗ topology \(\sigma(\ell_{1},c_{0})\), which is metrizable. It can easily be checked that for every \(w^{*}\)-null sequence \(\{x_{n}\}_{n}\) and for all \(x\in\ell_{1}\), $$\limsup_{n}\Vert x_{n}+x\Vert _{1}=\Vert x\Vert _{1}+\limsup_{n}\Vert x_{n}\Vert _{1}, $$ which implies both the \(w^{*}\)-GGLD property and the \(w^{*}\)-sequential lower semicontinuity of the \(w^{*}\)-null type functions. Therefore we deduce that \(\ell_{1}\) has the weak∗-FPP for left reversible semigroups [6]. 
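To illustrate this equality in the simplest case, take \(x_{n}=e_{n}\): for every \(x\in\ell_{1}\), $$\Vert x+e_{n}\Vert _{1}=\sum_{i\ne n}\bigl\vert x(i)\bigr\vert +\bigl\vert x(n)+1\bigr\vert =\Vert x\Vert _{1}-\bigl\vert x(n)\bigr\vert +\bigl\vert x(n)+1\bigr\vert \to\Vert x\Vert _{1}+1=\Vert x\Vert _{1}+\lim_{n}\Vert e_{n}\Vert _{1}, $$ because \(x(n)\to0\); a standard truncation argument extends this computation to an arbitrary bounded coordinatewise-null sequence.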
This result can be generalized as follows since the same equality holds for \(w^{*}\)-null sequences. Corollary 4.1 Let \((X_{n})_{n}\) be a sequence of finite dimensional Banach spaces. Then the one-direct sum $$X=\oplus_{1}\sum_{n\in\mathbb{N}} X_{n} $$ has the weak ∗-FPP for left reversible semigroups, where the considered predual is defined by \(E=\{x=(x_{n})_{n}: x_{n}\in X_{n}, \lim_{n}\Vert x_{n}\Vert _{X_{n}}=0\}\). In case that G is a separable compact group, its Fourier-Stieltjes algebra \(B(G)\) is the direct one-sum of a sequence of finite dimensional Banach spaces (see Section 4 in [13] and Chapter I, Theorem 11.2 in [14]). Applying Corollary 4.1 we can deduce the following. [5] Let G be a separable compact group and \(B(G)\) be its Fourier-Stieltjes algebra. Then \(B(G)\) satisfies the \(w^{*}\)-FPP for left reversible semigroups. We can now state an improvement of the results in [6] about the \(w^{*}\)-FPP for left reversible semigroups as follows. Let \((X,\Vert \cdot\Vert )\) be a Banach space with a boundedly complete Schauder basis \(\{e_{n}\}_{n}\). Assume that for every block basic sequence \(\{x_{n}\}_{n}\) and for every \(x\in X\), there holds $$\limsup_{n}\Vert x+x_{n}\Vert =\Vert x\Vert + \limsup_{n}\Vert x_{n}\Vert $$ for every \(x\in X\). Then X has the \(w^{*}\)-FPP for left reversible semigroups where \(w^{*}=\sigma(X, Z)\) and Z is the closed subspace spanned by the orthogonal functions \(\{e_{n}^{*}\}_{n}\). When the Schauder basis is boundedly complete, the Banach space X is isomorphic to \(Z^{*}\) ([15], Proposition 1.b.4.), and we can consider the weak∗ topology \(\sigma(X,Z)\) where the convergence is equivalent to the pointwise convergence for bounded sequences. Now the \(w^{*}\)-null type functions are \(w^{*}\)-sequentially lower semicontinuous since they are constant on spheres and the \(w^{*}\)-GGLD property is satisfied. □ We can apply Theorem 4.3 to more general Banach spaces. The following example is given in [16]. Let \(c_{00}\) be the vector space of all sequences of scalars which are eventually zero. Denote by \([\mathbb{N}]^{< w}\) the subsets of \(\mathbb {N}\) with finite cardinality. For \(A\subset\mathbb{N}\), let \(P_{A}\) be the usual projection, that is, if \(x=(x(n))_{n}\in c_{00}\), \(P_{A}(x)\) is the vector whose coordinates are \(x(n)\) if \(n\in A\) and zero otherwise. Consider the family \(\mathcal{S}\subset[\mathbb{N}]^{< w} \) by $$\mathcal{S}:=\bigl\{ S=(n_{1},\ldots,n_{k})\in[ \mathbb{N}]^{< w} : n_{i+1}\ge 2 n_{i} \mbox{ for } i=1,\ldots,k-1 \bigr\} . $$ $$\vert x\vert_{0}:=\sup_{S \in\mathcal{S}} \bigl\Vert P_{S}(x)\bigr\Vert _{1}, $$ and let X be the completion of \(c_{00}\) with the norm \(\vert\cdot \vert_{0}\). Then \((X,\vert\cdot\vert_{0})\) is a dual Banach space satisfying the hypotheses of Theorem 4.3. Indeed, the sequence \(\{ e_{n}\}\) forms a monotonous bounded complete Schauder basis, which implies that X is isometric to the dual of Z, where Z is the Banach space spanned by the orthogonal functionals \(\{e_{n}^{*}\}\). The fact that X satisfies the equality given in Theorem 4.3 is proved in [16]. Therefore, it satisfies the \(w^{*}\)-FPP for left reversible semigroups. Notice that this Banach space is not isomorphic to \(\ell_{1}\) [16]. Let H be a Hilbert separable Banach space. Then \(\mathcal{T}(H)\), the space of the trace class operators on H, has the weak ∗-FPP for left reversible groups, where the predual is \(E=K(H)\), the space of all compact operators defined on H. 
In this case, Lennard proved that \(\mathcal{T}(H)\) verifies the weak∗ uniform Kadec-Klee property [17] and therefore the \(w^{*}\)-GGLD condition. Proposition 2.4 in [7] implies that the \(w^{*}\)-null type functions are \(w^{*}\)-sequentially lower semicontinuous. (Notice that in [7] the separability of the Hilbert space can be dropped.) The Hardy space \(H^{1}(\Delta)\) has the weak ∗-FPP for left reversible semigroups. Notice that \(H^{1}(\Delta)\) is a separable Banach space, it has the \(w^{*}\)-GGLD property since it verifies the \(w^{*}\)-UKK condition [18], and Lemma 3.2 in [8] implies that the \(w^{*}\)-null type functions are \(w^{*}\)-sequentially lower semicontinuous. Here, the weak∗ topology refers to the isometric predual \(C(\mathbb {T})/A_{0}(\mathbb{T})\), where \(C(\mathbb{T})\) is the space of all continuous functions on the torus \(\mathbb{T}\), with the usual supremum norm, and \(A_{0}(\mathbb{T})\) is the set of boundary values of the disc algebra with zero constant term. Recall that map \(\phi: [0,+\infty)\to[0,+\infty)\) is said to be an Orlicz function if ϕ is convex, vanishing at 0, continuous and not identically equal to zero. A sequence \(\Phi:=\{\phi_{n}\}_{n}\) of Orlicz functions is called a Musielak-Orlicz function. Given a Musielak-Orlicz function, a convex modular \(I_{\Phi}\) is defined on the set of all real sequences given by $$I_{\Phi}(x)=\sum_{n=1}^{\infty}\phi_{n}\bigl(\bigl\vert x(n)\bigr\vert \bigr). $$ A Musielak-Orlicz sequence space generated by Φ is defined by $$\ell_{\Phi}=\bigl\{ x=(x_{n}): I_{\Phi}(\lambda x)< + \infty \mbox{ for some }\lambda>0\bigr\} . $$ We can consider \(\ell_{\Phi}\) equipped with the Luxemburg norm $$\Vert x\Vert :=\inf\bigl\{ k>0: I_{\Phi}(x/k)\le1\bigr\} , $$ or with the Orlicz norm $$\Vert x\Vert ^{o}:=\inf \biggl\{ {1\over k} \bigl(1+I_{\Phi}(kx)\bigr): k>0 \biggr\} . $$ It is well known that both norms are equivalent and \(\ell_{\Phi}\) is a Banach space [19, 20]. In case that \(\phi_{m}=\phi_{n}\) for all \(n,m\in\mathbb{N}\), we simply say that \(\ell_{\Phi}\) is an Orlicz sequence Banach space. It is said that a Musielak-Orlicz function \(\Phi=\{\phi_{n}\}_{n}\) satisfies the condition \(\delta_{2}\) if there are positive constants a and K and a nonnegative sequence \((c_{n})\in\ell_{1}\) such that $$\phi_{n}(2t)\le K\phi_{n}(t)+c_{n} $$ for every \(n\in\mathbb{N}\), \(t\in\mathbb{R}_{+}\) satisfying \(\phi _{n}(t)\le a\). For Orlicz sequence Banach spaces, it is said that an Orlicz function ϕ satisfies the condition \(\delta_{2}\) if there exist some \(t_{0}>0\) and \(K>0\) such that \(\phi (2t)\le K\phi(t)\) for every \(t\in[0,t_{0}]\). When the condition \(\delta_{2}\) is satisfied, the unit vectors form a boundedly complete normalized unconditional basis of \(\ell_{\Phi}\) ([15], Proposition 4.d.3). We denote by \(e_{n}^{*}\) the functional vector associated with \(e_{n}\) for every \(n\in\mathbb{N}\) and consider \([e_{n}^{*}]\) the closed span of the vectors \(\{e_{n}^{*}\}\) in the dual of the Musielak-Orlicz space. Since the Schauder basis is monotone, the dual of the Banach space \([e_{n}^{*}]\) is isometric to \(\ell_{\phi}\) ([15], Proposition 1.b.4.). Therefore we can consider the weak∗ topology \(\sigma(\ell _{\Phi}, [e_{n}^{*}])\). At this point we can state the following. Let \(\Phi=\{\phi_{n}\}\) be a Musielak-Orlicz function satisfying the condition \(\delta_{2}\). The Musielak-Orlicz space satisfies the \(w^{*}\)-FPP for left reversible semigroups for both the Luxemburg and the Orlicz norm. 
Notice that \(\ell_{\Phi}\) is a separable Banach space. The arguments included in the proof of (iii) ⇒ (i) of Theorem 1 in [21] imply that \(\ell_{\Phi}\) endowed with the Luxemburg norm satisfies the \(w^{*}\)-uniform Opial condition. From the proof of Theorem 2 in [22] we can deduce that the Musielak-Orlicz space \(\ell_{\Phi}\) satisfies the \(w^{*}\)-uniform Kadec-Klee property when it is equipped with the Orlicz norm and so the \(w^{*}\)-GGLD property. Since the weak∗ convergence implies the coordinatewise convergence, standard arguments show that $$\limsup_{n} I_{\Phi}(x_{n}+x)=\limsup _{n} I_{\Phi}(x_{n})+I_{\Phi}(x) $$ for every \(x\in\ell_{\Phi}\) and for every \(w^{*}\)-null sequence \(\{x_{n}\} \). This implies that the functional \(x\in\ell_{\Phi}\to\limsup_{n}I_{\phi}(x_{n}-x)\) is \(w^{*}\)-slsc. With this property, it is easy to check the \(w^{*}\)-sequential lower semicontinuity for the \(w^{*}\)-null type functions for both norms, the Luxemburg and the Orlicz norm. □ Particular examples of Musielak-Orlicz sequence spaces are the Nakano spaces \(\ell^{(p_{n})}\) where the Orlicz functions are \(\phi _{n}(t)=\vert t\vert^{p_{n}}\) with \(p_{n}\subset[1,+\infty)\) [23]. In this case, the Musielak-Orlicz function satisfies the condition \(\delta_{2}\) if and only if \(\sup_{n} p_{n}<+\infty\). In case that \(p_{n}=1\) for every \(n\in\mathbb{N}\), we obtain \(\ell_{1}\). However, there exist some dual Nakano spaces which are not isomorphic to \(\ell_{1}\) and satisfying \(\lim_{n} p_{n}=1\) [23]. Let \((p_{n})\) be a bounded sequence in \([1,+\infty)\). Then the corresponding Nakano sequence Banach space verifies the \(w^{*}\)-FPP for left reversible semigroups. In the rest of this section we will consider some relevant equivalent norms in \(\ell_{1}\), and we will study whether they provide the \(w^{*}\)-FPP for left reversible semigroups. On the one hand, there are some equivalent norms in \(\ell_{1}\) which fail to have this property. For instance \(\ell_{1}\) endowed with the norm \(\Vert x\Vert =\max\{ \Vert x^{+}\Vert _{1},\Vert x^{-}\Vert _{1}\}\), as the dual of \(c_{0}\) with the norm \(\Vert x^{+}\Vert _{\infty}+\Vert x^{-}\Vert _{\infty}\), fails the \(w^{*}\)-FPP for nonexpansive mappings [6]. On the other hand, the τ-GGLD and the Opial conditions are properties that can be transferred by isomorphisms under certain conditions. For instance, it is known (see [11]) that if we consider \(Y=(\ell_{1},\vert\cdot\vert)\) for some equivalent norm, then Y satisfies the \(\sigma(\ell_{1},c_{0})\)-GGLD condition whenever \(d(Y, (\ell_{1},\Vert \cdot\Vert _{1}))<2\) (and this is the best upper bound that can be obtained due to the previous norm). However, as far as we know, the \(w^{*}\)-FPP for left reversible semigroups cannot be derived from stability results since the same does not hold for the τ-sequential lower semicontinuity of the τ-null type functions. Indeed, consider the Euclidean norm \(\Vert \cdot \Vert _{2}\) in \(\mathbb{R}^{2}\) and define the equivalent norm in \(\ell_{1}\) by $$v(x):=\Biggl\Vert \bigl\vert x(1)\bigr\vert u_{1}+\sum _{n=2}^{\infty}\bigl\vert x(n)\bigr\vert u_{2}\Biggr\Vert _{2}, $$ where \(u_{1}=(1,0)\) and \(u_{2}=(-a, (1-a^{2})^{1/2})\) for some \(a\in(0,1)\). Finally, for \(\lambda>0\), define the norm $$\vert x\vert_{\lambda}:=\Vert x\Vert _{1} +\lambda v(x) $$ for all \(x\in\ell_{1}\). 
It is not difficult to check that \(\vert\cdot \vert_{\lambda}\) is equivalent to \(\Vert \cdot\Vert _{1}\) and that the Banach-Mazur distance between \((\ell_{1},\Vert \cdot\Vert _{1})\) and \((\ell _{1},\vert\cdot\vert_{\lambda})\) tends to one when λ goes to zero. Consider \(x_{n}=e_{1}+a e_{n}\), which tends to \(x=e_{1}\) in the \(\sigma(\ell _{1},c_{0})\)-topology. Notice that $$\vert x\vert_{\lambda}=1+\lambda> 1+\lambda\bigl(1-a^{2} \bigr)^{1\over 2}=\vert x_{n}\vert_{\lambda}$$ for every \(n\in\mathbb{N}\). Hence the norm \(\vert\cdot\vert _{\lambda}\) is not \(\sigma(\ell_{1},c_{0})\)-sequentially lower semicontinuous, and consequently \(\sigma(\ell_{1},c_{0})\)-null type functions are not \(\sigma(\ell_{1},c_{0})\)-slsc in general. A relevant renorming in metric fixed point theory was given by Lin in [24] in the following way: Let \(\{\gamma_{k}\}\) be a nondecreasing sequence of positive numbers converging to 1 and define $$\bigl|\!\bigl|\!\bigl|\{a_{n}\} \bigr|\!\bigr|\!\bigr|: = \sup_{k} \gamma_{k} \sum_{n=k}^{\infty} |a_{n}| $$ for every \(\{a_{n}\}\in\ell_{1}\). Lin proved that \(\ell_{1}\) endowed with this norm verifies the fixed point property for nonexpansive mappings, that is, every \(|\!|\!|\cdot|\!|\!|\)-nonexpansive mapping defined on a closed convex bounded subset of \(\ell_{1}\) into itself has a fixed point. This assertion proves that the FPP does not imply reflexivity, contrary to what had been conjectured for a long time. At this point, we do not know whether \((\ell_{1},|\!|\!|\cdot|\!|\!|)\) verifies a similar statement for left reversible semigroups of nonexpansive mappings. Here we prove that \((\ell_{1},|\!|\!|\cdot|\!|\!|)\) does verify the \(w^{*}\)-FPP for left reversible semigroups, where by the weak∗ topology we refer to \(\sigma(\ell_{1},c_{0})\). Notice that the above norm is a dual norm, that is, if \(X=(\ell_{1},|\!|\!|\cdot|\!|\!|)\) then X is isometric to a dual space. This can be deduced from the fact that the Schauder basis \(\{e_{n}\}\) is boundedly complete and it is monotonous for the \(|\!|\!|\cdot|\!|\!|\) norm. Moreover, X is isometric to the dual of the Banach space spanned by \([e_{n}^{*}]\) so the weak∗ topology is in fact \(\sigma(\ell_{1},c_{0})\) (see Proposition 1.b.4 in [15]). If \(\{x_{n}\}_{n}\) is a \(w^{*}\)-null sequence in \(\ell_{1}\), then \(\limsup_{m}\limsup_{n} |\!|\!|x_{n}-x_{m} |\!|\!|=2\limsup_{n} |\!|\!|x_{n} |\!|\!|\) [24], which clearly implies the \(w^{*}\)-GGLD property. Notice that the \(w^{*}\)-null type functions are not constant on spheres: consider the sequence \(x_{n}=e_{n}\) for every \(n\in\mathbb{N}\) and \(x={1\over \gamma_{1}}e_{1}\), \(y={1\over \gamma_{2}}e_{2}\). Now \(|\!|\!|x |\!|\!|= |\!|\!|y |\!|\!|=1\) but $$\limsup_{n} |\!|\!|x_{n}-x |\!|\!|= 1+ \gamma_{1} \quad \mbox{and}\quad \limsup_{n} |\!|\!|x_{n}-y |\!|\!|= 1+\gamma_{2}, $$ so we cannot derive the \(w^{*}\)-sequential lower semicontinuity of the \(w^{*}\)-null type functions from this fact. We will consider the following result, which is due to Lennard and was included in [25]. However, we here develop a shorter proof for the sake of completeness. Let \(\{x_{n}\}_{n}\) be a \(w^{*}\)-null sequence in \(\ell_{1}\). Define $$\Gamma(y):= \limsup_{n\rightarrow\infty} |\!|\!|x_{n} - y|\!|\!|. $$ Then $$ \Gamma(y) = \sup_{k\in\mathbb{N}} \Biggl\{ \gamma_{k} \Biggl( \sum_{n=k}^{\infty} |y_{n}|+\limsup_{n}|\!|\!|x_{n}|\!|\!|\Biggr) \Biggr\} $$ for every \(y\in\ell_{1}\) with \(y= {\sum_{n=1}^{\infty}y_{n} e_{n}}\).
We may, without loss of generality, assume that the sequence \(\{x_{n}\}\) is disjointly supported. First of all, assume that the vector y is finitely supported, that is, there exists some \(n_{0}\in\mathbb{N}\) such that \(y_{n}=0\) if \(n>n_{0}\). We can assume that \(n_{0}<\min\{\operatorname{supp}(x_{1})\}\). In such case, for every \(n\in\mathbb{N}\), $$|\!|\!|y-x_{n}|\!|\!|= \max \Biggl\{ \sup_{1\le k\le n_{0}} \gamma_{k} \Biggl( {\sum_{n=k}^{n_{0}} \vert y_{n}\vert}+\Vert x_{n}\Vert _{1} \Biggr), |\!|\!|x_{n}|\!|\!|\Biggr\} . $$ Taking limits when n goes to infinity and using that \(\limsup_{n}\Vert x_{n}\Vert _{1}=\limsup_{n}|\!|\!|x_{n}|\!|\!|\) since \(\{x_{n}\}\) is disjointly supported, we deduce $$\begin{aligned} \limsup_{n}|\!|\!|y-x_{n}|\!|\!| = & \max \Biggl\{ \sup_{1\le k\le n_{0}}\gamma_{k} \Biggl( {\sum _{n=k}^{n_{0}}\vert y_{n}\vert}+\limsup _{n} |\!|\!|x_{n}|\!|\!|\Biggr),\limsup _{n}|\!|\!|x_{n}|\!|\!|\Biggr\} \\ = & \sup_{k\in\mathbb{N}} \Biggl\{ \gamma_{k} \Biggl( \sum _{n=k}^{\infty} |y_{n}|+\limsup _{n}|\!|\!|x_{n}|\!|\!|\Biggr) \Biggr\} , \end{aligned}$$ where the last equality follows from the fact that \(\lim_{k} \gamma _{k}=1\) and \(y_{n}=0\) if \(n>n_{0}\). Then the lemma holds in this case. Let \(y= \sum_{n=1}^{\infty} y_{n} e_{n}\) be any vector in \(\ell _{1}\) and denote \(y_{s}=\sum_{n=1}^{s} y_{n} e_{n}\). Notice that Γ is a continuous function for the norm topology so \(\Gamma (y)=\lim_{s}\Gamma(y_{s})\). If we prove that \(\lim_{s}\Gamma(y_{s})\) coincides with the right part of the equality stated in the lemma, the proof will be finished. For every \(s\in\mathbb{N}\), $$\begin{aligned} \Gamma(y_{s}) = & \max \Biggl\{ \sup_{1\le k\le s} \gamma_{k} \Biggl( {\sum_{n=k}^{s} \vert y_{n}\vert}+\limsup_{n} |\!|\!|x_{n} |\!|\!|\Biggr),\limsup_{n}|\!|\!|x_{n}|\!|\!|\Biggr\} \\ \le& \max \Biggl\{ \sup_{k\ge1}\gamma_{k} \Biggl( { \sum_{n=k}^{\infty}\vert y_{n}\vert}+ \limsup_{n} |\!|\!|x_{n}|\!|\!|\Biggr),\limsup _{n}|\!|\!|x_{n}|\!|\!|\Biggr\} \\ = & \sup_{k\in\mathbb{N}} \Biggl\{ \gamma_{k} \Biggl( \sum _{n=k}^{\infty} |y_{n}|+\limsup _{n}|\!|\!|x_{n}|\!|\!|\Biggr) \Biggr\} . \end{aligned}$$ On the other hand, let \(\epsilon>0\) and \(s_{0}\in\mathbb{N}\) such that \(\Vert y-y_{s}\Vert _{1}<\epsilon\) for every \(s\ge s_{0}\). Fix \(s\ge s_{0}\). In this case $${\sup_{1\le k\le s}} \Biggl\{ \gamma_{k} \Biggl( \sum _{n=k}^{\infty} |y_{n}|+\limsup _{n}|\!|\!|x_{n}|\!|\!|\Biggr) \Biggr\} \le{\sup _{1\le k\le s}} \Biggl\{ \gamma_{k} \Biggl( \sum _{n=k}^{s} |y_{n}|+ \epsilon+\limsup _{n}|\!|\!|x_{n}|\!|\!|\Biggr) \Biggr\} . $$ $$\begin{aligned}& {\sup_{s< k}} \Biggl\{ \gamma_{k} \Biggl( \sum _{n=k}^{\infty} |y_{n}|+\limsup _{n}|\!|\!|x_{n}|\!|\!|\Biggr) \Biggr\} \\& \quad \le{ \sup _{s< k}} \Bigl\{ \gamma_{k} \Bigl( \epsilon+\limsup _{n}|\!|\!|x_{n}|\!|\!|\Bigr) \Bigr\} =\epsilon+ \limsup_{n}|\!|\!|x_{n}|\!|\!|. \end{aligned}$$ In any case, $$\begin{aligned} \Gamma(y_{s}) \le&\sup_{k\in\mathbb{N}} \Biggl\{ \gamma_{k} \Biggl( \sum_{n=k}^{\infty} |y_{n}|+\limsup_{n}|\!|\!|x_{n}|\!|\!|\Biggr) \Biggr\} \\ \le&{\max \Biggl\{ \sup_{1\le k\le s}\gamma_{k} \Biggl( { \sum_{n=k}^{s}\vert y_{n}\vert}+ \limsup_{n} |\!|\!|x_{n}|\!|\!|\Biggr),\limsup _{n}|\!|\!|x_{n}|\!|\!|\Biggr\} }+\epsilon= \Gamma(y_{s})+\epsilon. \end{aligned}$$ Taking limits when s goes to infinity and having in mind that ϵ is arbitrary, we obtain the desired equality. □ By using the previous equality we can now check the following. 
Lemma 4.10 For the space \((\ell_{1},|\!|\!|\cdot|\!|\!|)\), \(w^{*}\)-null type functions are \(w^{*}\)-sequentially lower semicontinuous. Let \(\{y_{m}\}\) be a \(w^{*}\) convergent sequence and let y be its \(w^{*}\) limit. Let \(P_{k}\) denote the natural projections associated to the usual basis of \(\ell_{1}\) and set \(Q_{n} = I - P_{n}\). Fix \(n\in\mathbb{N}\). By Lemma 4.9, we have that \(\Gamma(y_{m}) = \sup_{k\in\mathbb{N}}\{\gamma_{k} ( \Gamma(0) + \|Q_{k-1} (y_{m})\|_{1}) \} \geq \gamma_{n} ( \Gamma(0) + \|Q_{n-1} (y_{m})\|_{1})\). It is clear that \(Q_{n-1} (y_{m})\) converges \(w^{*}\) to \(Q_{n-1}(y)\). Using that \(\Vert \cdot\Vert _{1}\) is a \(w^{*}\) lower semicontinuous function, we obtain that $$\begin{aligned} \liminf_{m\to\infty} \Gamma(y_{m}) \geq& \liminf_{m\to\infty} \gamma_{n} \bigl( \Gamma(0) + \bigl\Vert Q_{n-1} (y_{m})\bigr\Vert _{1}\bigr) \\ =& \gamma_{n} \Gamma(0) + \gamma_{n} \liminf _{m\to\infty}\bigl\Vert Q_{n-1} (y_{m})\bigr\Vert _{1} \\ \geq& \gamma_{n} \Gamma(0) + \gamma_{n} \bigl\Vert Q_{n-1} (y)\bigr\Vert _{1}. \end{aligned}$$ This inequality then holds for each \(n\in\mathbb{N}\), which proves that \(\liminf_{m\to\infty} \Gamma(y_{m})\geq\sup_{n\in\mathbb {N}}\{ \gamma_{n} ( \Gamma(0) + \|Q_{n-1} (y)\|_{1} )\}= \Gamma(y)\) as we wanted to prove. □ Using Theorem 3.5 we finally deduce the following. Corollary 4.11 \((\ell_{1}, |\!|\!|\cdot|\!|\!|)\) has the \(w^{*}\)-FPP for left reversible semigroups. Consider \(X=\ell_{1}\) endowed with the equivalent norm $$\vert x\vert=\max \Biggl\{ {1\over 4}\Vert x\Vert _{1}, \max_{n}\bigl\{ \bigl\vert x(n)\bigr\vert \bigr\} +\sum _{n=1}^{\infty}{ \vert x(n)\vert \over 2^{n}} \Biggr\} . $$ Consider τ as the weak∗ topology \(\sigma(\ell_{1},c_{0})\). Notice that \((\ell_{1},\vert\cdot\vert)\) is a dual space since the Schauder basis \(\{e_{n}\}\) is boundedly complete and monotonous. Taking the sequence \(x_{n}=e_{n}\) for \(n\in\mathbb{N}\), we obtain \(\lim_{n}\vert x_{n}\vert =\lim_{n,m;n\ne m}\vert x_{n}-x_{m}\vert =1\), which implies that X fails the \(w^{*}\)-GGLD condition. However, it is easy to check that X has the Opial condition with respect to the \(\sigma(\ell_{1},c_{0})\) topology and that the \(\sigma(\ell_{1},c_{0})\)-null type functions are sequentially lower semicontinuous. From Theorem 3.5 we can deduce the \(w^{*}\)-FPP for left reversible semigroups in the space \((\ell _{1},\vert\cdot\vert)\). We finish the section with the Banach space introduced in Example 2.1, for which we showed that the asymptotic centers were not weakly compact in general, so the arguments used in [5, 6, 8] or [7] are not valid for proving the \(w^{*}\)-FPP for left reversible semigroups. Let \(X:=(\ell_{1},\Vert \cdot\Vert )\), where \(\Vert \cdot\Vert \) is the norm introduced in Example 2.1, that is, $$\Vert x\Vert =\max_{n\in\mathbb{N}} \Biggl(\bigl\vert x(n)\bigr\vert +{1\over 2}\sum_{i=n+1}^{\infty}\bigl\vert x(i)\bigr\vert \Biggr). $$ Here \({1\over 2}\Vert x\Vert _{1}\le\Vert x\Vert \le{3\over 2}\Vert x\Vert _{1}\) for every \(x\in\ell_{1}\). Notice that if x, y are vectors in \(\ell_{1}\) such that \(\max\{\operatorname{supp}(x)\}<\min\{\operatorname{supp}(y)\}\), then it is not difficult to check that $$ \Vert x+y\Vert =\max \biggl\{ \Vert y\Vert , \Vert x\Vert + {1\over 2} \Vert y\Vert _{1} \biggr\} . $$ The previous equality also proves that the Schauder basis is monotonous for \(\Vert \cdot\Vert \) and X is isometric to a dual space. Assume that \(\{x_{n}\}\) is a weak∗-null sequence, where by \(w^{*}\) topology we mean the \(\sigma(\ell_{1},c_{0})\) topology.
Without loss of generality, we can assume that \(\{x_{n}\}\) is a block basic sequence, \(l=\lim_{n}\Vert x_{n}\Vert \) exists and that \(\lim_{n,m; n\ne m}\Vert x_{n}-x_{m}\Vert =\limsup_{n}\limsup_{m}\Vert x_{n}-x_{m}\Vert \). Therefore $$\begin{aligned} \limsup_{n}\limsup_{m}\Vert x_{n}-x_{m}\Vert =&\limsup_{n}\limsup _{m} \max \biggl\{ \Vert x_{m}\Vert , \Vert x_{n}\Vert +{1\over 2} \Vert x_{m}\Vert _{1} \biggr\} \\ = &\limsup_{n}\max \biggl\{ l,\Vert x_{n}\Vert + {1\over 2}\limsup_{m}\Vert x_{m}\Vert _{1} \biggr\} \\ = &l+{1\over 2}\limsup_{m}\Vert x_{m}\Vert _{1}\ge{4\over 3}l, \end{aligned}$$ which implies the \(w^{*}\)-GGLD property. Let us prove that the weak∗-null type functions are \(w^{*}\)-lower semicontinuous. Take \(\{x_{n}\}\) a \(w^{*}\)-null sequence and \(y=\sum_{n=1}^{\infty}y_{n} e_{n}\in\ell_{1}\). We can assume that \(\{x_{n}\}\) is a block basic sequence. Let \(\epsilon >0\) and \(s_{0}\in\mathbb{N}\) such that \(\Vert y-y_{s}\Vert <\epsilon\) for every \(s\ge s_{0}\), where \(y_{s}=\sum_{n=1}^{s} y_{n} e_{n}\). Notice that \(\Gamma(y)=\limsup_{n}\Vert y-x_{n}\Vert =\lim_{s}\limsup_{n}\Vert y_{s}-x_{n}\Vert \). Moreover, from the definition of the norm and from the equality (4), $$\begin{aligned} \limsup_{n}\Vert y_{s}-x_{n}\Vert = & \max\biggl\{ \limsup_{n}\Vert x_{n}\Vert , \Vert y_{s}\Vert +{1\over 2} \limsup_{n}\Vert x_{n}\Vert _{1}\biggr\} \\ \le& \max\biggl\{ \limsup_{n}\Vert x_{n}\Vert , \Vert y\Vert +{1\over 2}\limsup_{n}\Vert x_{n}\Vert_{1}\biggr\} \end{aligned}$$ for every \(s\in\mathbb{N}\). On the other hand, for every \(s\ge s_{0}\), $$\begin{aligned} \begin{aligned} &\max\biggl\{ \limsup_{n}\Vert x_{n}\Vert , \Vert y \Vert +{1\over 2}\limsup_{n}\Vert x_{n} \Vert _{1}\biggr\} \\ &\quad \le\max\biggl\{ \limsup_{n}\Vert x_{n}\Vert , \Vert y_{s}\Vert +{1\over 2}\limsup_{n} \Vert x_{n}\Vert _{1}\biggr\} +\epsilon \\ &\quad =\limsup_{n}\Vert y_{s}-x_{n}\Vert +\epsilon. \end{aligned} \end{aligned}$$ Taking limits when s goes to infinity, we deduce that for every \(y\in \ell_{1}\), $$\limsup_{n} \Vert y-x_{n}\Vert =\max\biggl\{ \limsup _{n}\Vert x_{n}\Vert , \Vert y\Vert + {1\over 2}\limsup_{n}\Vert x_{n}\Vert _{1}\biggr\} . $$ The above implies that the \(w^{*}\)-null type functions are constant on spheres and therefore \(w^{*}\)-slsc. Therefore \((\ell_{1},\Vert \cdot\Vert )\) verifies the \(w^{*}\)-FPP for left reversible semigroups Notice also that this space fails the Opial condition with respect to the \(w^{*}\) topology. Indeed, consider the weak∗-null sequence \(x_{n}=e_{n}\) for \(n\in\mathbb{N}\) and the vector \(x={1\over 2}e_{1}\). Then \(\lim_{n} \Vert x_{n}\Vert = \lim_{n}\Vert x_{n}+x\Vert \). 5 Modular function spaces and the ρ-FPP for left reversible semigroups In this section we consider modular function spaces \(L_{\rho}:=\{f\in \mathcal{M}: \rho(\alpha f)\to0 \mbox{ as } \alpha\to0\}\) endowed with the Luxemburg norm $$\Vert f\Vert :=\inf \biggl\{ \alpha>0: \rho \biggl({f\over \alpha } \biggr)\le1 \biggr\} , $$ and the Orlicz norm $$\Vert f\Vert _{o}:=\inf \biggl\{ {1\over k} \bigl(1+ \rho(kf) \bigr): k>0 \biggr\} . $$ Here \(\mathcal{M}\) denotes a set of measurable functions and ρ a convex additive function modular defined over \(\mathcal{M}\). It is said that a sequence \((f_{n})\subset L_{\rho}\) converges to f ρ-almost everywhere, \(f_{n}\to f\) ρ-a.e., if \(\{w\in\Omega: f(w)\ne\lim_{n} f_{n}(w)\}\) is ρ-null. For all general definitions we refer to [20, 26] or [27]. 
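To illustrate these definitions in the simplest situation, if \(\rho(f)=\int_{0}^{1}\vert f(t)\vert ^{p}\,dt\) with \(1\le p<\infty\), then \(\rho(f/\alpha)=\alpha^{-p}\rho(f)\le1\) precisely when \(\alpha\ge\rho(f)^{1/p}\), so the Luxemburg norm of \(f\) is just the classical \(L_{p}\)-norm \(\Vert f\Vert _{p}\); moreover \(\rho(2f)=2^{p}\rho(f)\), so this modular satisfies the \(\Delta_{2}\)-type condition recalled below with \(K=2^{p}\).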
A function modular ρ is said to satisfy the \(\Delta_{2}\)-type condition if there exists some \(K>0\) such that \(\rho(2f)\le K \rho (f)\) for every \(f\in L_{\rho}\). In this section we assume that ρ is a σ-finite convex additive function modular which satisfies the \(\Delta_{2}\)-type condition (see [27] or [28]). In this case, a topology \(\tau_{\rho}\) is defined over \(L_{\rho}\) such that \(\tau_{\rho}\)-compact sets coincide exactly with \(\tau_{\rho}\)-sequentially compact sets [28]. Moreover, the ρ-convergence coincides with the τ-convergence up to subsequences. In [27] (Section 4) it is proved that, under the \(\Delta_{2}\)-type condition, the modular function space \(L_{\rho}\) verifies the \(\tau _{\rho}\)-uniform Opial condition for both the Luxemburg and the Orlicz norm. Moreover, Lemma 5.3 and Lemma 5.4 of [27] show that the \(\tau_{\rho}\)-null type functions are \(\tau_{\rho}\)-sequentially lower semicontinuous. Hence we can state the following theorem. Let ρ be a σ-finite convex additive function modular which satisfies the \(\Delta_{2}\)-condition. Then the modular function space \(L_{\rho}\) verifies the \(\tau_{\rho}\)-FPP for left reversible semigroups when it is equipped with either the Luxemburg or the Orlicz norm. Examples of modular function spaces are the \(L_{p}\)-spaces and more generally the Musielak-Orlicz function spaces, where the \(\tau_{\rho}\) topology coincides with the local convergence in measure topology. For the definition of Musielak-Orlicz function spaces and for more examples of modular function spaces, see [20] and [26]. 6 L-Embedded Banach spaces and the FPP for left reversible semigroups with respect to the abstract measure topology In this section we consider L-embedded Banach spaces endowed with the so-called measure topology. A Banach space X is said to be an L-embedded Banach space if there exists a closed subspace \(X_{s}\subset X^{**}\) such that \(X^{**}=X\oplus _{1} X_{s}\). A wide study and many examples of this class of Banach spaces can be found in the monograph [29]. Examples of L-embedded Banach spaces are the following: Duals of M-embedded Banach spaces. \(L_{1}(\mu)\)-spaces and preduals of von Neumann algebras. Recall that a sequence \(\{x_{n}\}\) is said to span an asymptotically isometric copy of \(\ell_{1}\) if there exists a nonincreasing sequence \(\{\delta_{n}\}\subset[0,1)\) tending to 0 such that $${\sum_{n=1}^{\infty}(1-\delta_{n}) \vert\alpha_{n}\vert\le\Biggl\Vert \sum_{n=1}^{\infty}\alpha_{n} x_{n} \Biggr\Vert \le\sum _{n=1}^{\infty}\vert\alpha_{n}\vert} $$ for every \(\{\alpha_{n}\}\in\ell_{1}\). In this case we will denote \(x_{n} \sim(asy) \ell_{1}\). For L-embedded Banach spaces, the abstract measure topology (\(\tau_{\mu}\)) is defined in [30] (Section 5) by considering the class of convergent sequences. Namely, if \(\{x_{n}\}\) is a sequence in an L-embedded Banach space, we say that \(\{x_{n}\}\) tends to 0 in the abstract measure topology (\(\tau_{\mu}\mbox{-} \lim_{n} x_{n}=0\)) if \(\{x_{n}\}\) is norm bounded and every subsequence \(\{x_{n_{k}}\}\) contains a subsequence \(\{x_{n_{k_{l}}}\}\) such that \(x_{n_{k_{l}}}/\Vert x_{n_{k_{l}}}\Vert \sim(asy) \ell_{1}\) or \(\Vert x_{n_{k_{l}}}\Vert \to0\). When X is a separable L-embedded Banach space, the notions of compactness and sequential compactness agree for \(\tau_{\mu}\) [31]. 
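A concrete illustration in the classical case \(X=L^{1}[0,1]\) (an L-embedded space): the disjointly supported functions \(g_{n}=2^{n}1_{[2^{-n},2^{-n+1})}\) satisfy \(\Vert g_{n}\Vert _{1}=1\) and \(\Vert \sum_{n}\alpha_{n}g_{n}\Vert _{1}=\sum_{n}\vert \alpha_{n}\vert \) for every \(\{\alpha_{n}\}\in\ell_{1}\), so every subsequence of \(\{g_{n}\}\) spans an isometric (in particular asymptotically isometric) copy of \(\ell_{1}\) and hence \(\tau_{\mu}\mbox{-} \lim_{n} g_{n}=0\), although \(\{g_{n}\}\) is not norm-null. This agrees with the description of \(\tau_{\mu}\) recalled below: the supports of the \(g_{n}\) have measure \(2^{-n}\), so \(g_{n}\to0\) in measure.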
It is proved in [32] (see also [33]) that for every \(\tau _{\mu}\)-null sequence \(\{x_{n}\}\) in an L-embedded Banach space,
$$\limsup_{n}\Vert x_{n}+x\Vert =\limsup _{n}\Vert x_{n}\Vert +\Vert x\Vert $$
for all \(x\in X\). This equality implies the \(\tau_{\mu}\)-GGLD property and the \(\tau_{\mu}\)-sequential lower semicontinuity of the \(\tau_{\mu}\)-null type functions. It is known that L-embedded Banach spaces satisfy the FPP for nonexpansive mappings with respect to the abstract measure topology [32]. According to Theorem 3.5, we can extend this result to left reversible semigroups in the following way.

Theorem. Let X be a separable L-embedded Banach space. Then X verifies the \(\tau_{\mu}\)-FPP for left reversible semigroups.

As a particular case we can deduce Theorem 5.1 in [7] when the Hilbert space H is separable. Indeed, in case that the L-embedded Banach space is \(L^{1}(\mathcal{M},\tau)\) for some finite von Neumann algebra \(\mathcal{M}\) defined over a Hilbert space, the previous measure topology coincides with the usual measure topology defined on \(L^{1}(\mathcal{M},\tau)\) for bounded sets (see Theorem 1.1 in [31]). Hence, noncommutative \(L_{1}\)-spaces verify the fixed point property for left reversible semigroups with respect to the usual measure topology. This topology is in fact the convergence locally in measure topology in case that \(L^{1}(\mathcal{M},\tau)=L^{1}(\mu)\) for some σ-finite measure space.

The authors would like to thank the referees for their valuable suggestions to improve the presentation of this article. The second author is partially supported by MCIN, Grant MTM-2012-34847-C02-01 and Junta de Andalucía, Grants FQM-127 and P08-FQM-03543.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Both authors have contributed equally and significantly in writing this article. Both authors read and approved the final manuscript.

CIMAT, Centro de Investigación en Matemáticas, CONACYT, Consejo Nacional de Ciencias y Tecnología, Guanajuato, México
Departamento de Análisis Matemático, Facultad de Matemáticas, Universidad de Sevilla, Tarfia s/n, Sevilla, 41012, Spain

Paterson, ALT: Amenability. Mathematical Surveys and Monographs, vol. 19. Am. Math. Soc., Providence (1988)
Alspach, DE: A fixed point free nonexpansive map. Proc. Am. Math. Soc. 82, 423-424 (1981)
Lim, T-C: Characterizations of normal structure. Proc. Am. Math. Soc. 34(2), 313-319 (1974)
Holmes, RD, Lau, AT: Non-expansive actions of topological semigroups and fixed points. J. Lond. Math. Soc. (2) 5, 330-336 (1972)
Lau, AT-M, Mah, PF: Fixed point property for Banach algebras associated to locally compact groups. J. Funct. Anal. 258, 357-372 (2010)
Lim, T-C: Asymptotic centers and nonexpansive mappings in conjugate Banach spaces. Pac. J. Math. 90(1), 135-143 (1980)
Randrianantoanina, N: Fixed point properties of semigroups of nonexpansive mappings. J. Funct. Anal. 258, 3801-3817 (2010)
Randrianantoanina, N: Fixed point properties in Hardy spaces. J. Math. Anal. Appl.
371, 16-24 (2010)
Opial, Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591-597 (1967)
Jiménez-Melado, A: Stability of weak normal structure in James quasi reflexive spaces. Bull. Aust. Math. Soc. 46(3), 367-372 (1992)
Domínguez-Benavides, T, García-Falset, J, Japón, MA: The τ-fixed point property for nonexpansive mappings. Abstr. Appl. Anal. 3(3-4), 343-362 (1998)
Ayerbe, JM, Domínguez Benavides, T, López Acedo, G: Measures of Noncompactness in Metric Fixed Point Theory. Oper. Theory Adv. Appl. Birkhäuser, Basel (1997)
Lau, AT-M, Ülger, A: Some geometric properties on the Fourier and Fourier-Stieltjes algebras of locally compact groups, Arens regularity and related problems. Trans. Am. Math. Soc. 337, 321-359 (1993)
Takesaki, M: Theory of Operator Algebras I. Springer, New York (1979)
Lindenstrauss, J, Tzafriri, L: Classical Banach Spaces I. Springer, Berlin (1979)
Barrera-Cuevas, A, Japón, MA: New families of nonreflexive Banach spaces with the fixed point property. J. Math. Anal. Appl. 425(1), 349-363 (2015)
Lennard, C: \(C_{1}\) is uniformly Kadec-Klee. Proc. Am. Math. Soc. 109, 71-77 (1990)
Besbes, M, Dilworth, SJ, Dowling, PN, Lennard, CJ: New convexity and fixed point properties in Hardy and Lebesgue-Bochner spaces. J. Funct. Anal. 119, 340-357 (1993)
Chen, S: Geometry of Orlicz Spaces. Dissertationes Math. Institute of Mathematics, Warsaw (1996)
Musielak, J: Orlicz Spaces and Modular Spaces. Lecture Notes in Mathematics, vol. 1034. Springer, Berlin (1983)
Cui, Y, Hudzik, H: Maluta's coefficient and Opial's properties in Musielak-Orlicz sequence spaces equipped with the Luxemburg norm. Nonlinear Anal. 35, 475-485 (1999)
Thompson, HB, Cui, Y: The fixed point property in Musielak-Orlicz sequence spaces. Comment. Math. Univ. Carol. 42(2), 299-309 (2001)
Nakano, H: Modulared sequence spaces. Proc. Jpn. Acad. 27(9), 508-512 (1951)
Lin, PK: There is an equivalent norm on \(\ell_{1}\) that has the fixed point property. Nonlinear Anal. 68(8), 2303-2308 (2008)
Castillo-Santos, FE: Connections between geometrical and fixed point properties. Thesis, University of Newcastle (2010)
Kozlowski, WM: Modular Function Spaces. Dekker, New York (1998)
Japón, MA: Some geometric properties in modular spaces and application to fixed point theory. J. Math. Anal. Appl. 295, 576-594 (2004)
Domínguez-Benavides, T, Khamsi, MA, Samadi, S: Asymptotically nonexpansive mappings in modular function spaces. J. Math. Anal. Appl. 265(2), 249-263 (2002)
Harmand, P, Werner, D, Werner, W: M-Ideals in Banach Spaces and Banach Algebras. Lecture Notes in Mathematics, vol. 1547. Springer, Berlin (1993)
Pfitzner, H: L-Embedded Banach spaces and the measure topology. Isr. J. Math. 205, 421-451 (2015). doi:10.1007/s11856-014-1136-6
Pfitzner, H: Perturbations of \(\ell_{1}\)-copies and convergence in preduals of von Neumann algebras. J. Oper. Theory 47, 145-167 (2002)
Japón, MA: Some fixed point results on L-embedded Banach spaces. J. Math.
Anal. Appl. 272, 380-391 (2002)
Pfitzner, H: A note on asymptotically isometric copies of \(\ell_{1}\) and \(c_{0}\). Proc. Am. Math. Soc. 129(5), 1367-1373 (2001)
Imaging of PD-L1 in single cancer cells by SERS-based hyperspectral analysis

Wei Zhang,1 Jake S. Rhodes,2 Kevin R. Moon,2 Beatrice S. Knudsen,3 Linda Nokolova,4 and Anhong Zhou1,*
1Department of Biological Engineering, Utah State University, Logan, UT 84322, USA
2Department of Mathematics and Statistics, Utah State University, Logan, UT 84322, USA
3Department of Pathology, University of Utah, Salt Lake City, UT 84112, USA
4Electron Microscopy Core Laboratory, University of Utah, Salt Lake City, UT 84112, USA
*Corresponding author: Anhong.Zhou@usu.edu

Wei Zhang, Jake S. Rhodes, Kevin R. Moon, Beatrice S. Knudsen, Linda Nokolova, and Anhong Zhou, "Imaging of PD-L1 in single cancer cells by SERS-based hyperspectral analysis," Biomed. Opt. Express 11, 6197-6210 (2020)

Manuscript Accepted: September 24, 2020

We developed a hyperspectral imaging tool based on surface-enhanced Raman spectroscopy (SERS) probes to determine the expression level and visualize the distribution of PD-L1 in individual cells. Electron-microscopic analysis of PD-L1 antibody-gold nanorod conjugates demonstrated binding to the cell surface and internalization into endosomal vesicles. Stimulation of cells with IFN-γ or metformin was used to confirm the ability of SERS probes to report treatment-induced changes. The multivariate curve resolution-alternating least squares (MCR-ALS) analysis of spectra provided a greater signal-to-noise ratio than single peak mapping. However, single peak mapping allowed a systematic subtraction of background and the removal of non-specific binding and endocytic SERS signals. The mean or maximum peak height in the cell or the mean peak height in the area of specific PD-L1 positive pixels was used to estimate the PD-L1 expression levels in single cells. The PD-L1 levels were significantly up-regulated by IFN-γ and inhibited by metformin in human lung cancer cells from the A549 cell line. In conclusion, the method of analyzing hyperspectral SERS imaging data, together with systematic and comprehensive removal of non-specific signals, allows SERS imaging to be a quantitative tool in the detection of the cancer biomarker PD-L1.

The programmed death ligand 1 (PD-L1) / programmed death 1 (PD-1) ligand-receptor system forms an immune checkpoint between activated immune cells to dampen the immune response, preventing tissue destruction and autoimmunity [1,2]. PD-L1 is expressed on the surface of multiple cell types, including cancer cells and macrophages [2], while PD-1 is expressed on cytotoxic, CD8+ T-cells. Cancer cells hijack PD-L1 over-expression as a mechanism of immune evasion [2,3]. Recently, immune checkpoint inhibitors (e.g.
nivolumab and pembrolizumab, both approved by the FDA [3]), which block PD-L1/PD-1 interactions, have gained momentum as novel anticancer therapeutics. These drugs achieve durable cancer control in several malignancies including metastatic lung cancer and melanoma [4]. The measurement of PD-L1 expression by immunohistochemistry is used as a companion diagnostic to identify patients who might respond to immune checkpoint inhibition [5]. PD-L1 expression is regulated by multiple cytokines, most notably gamma-interferon (IFN-γ), and through STAT and NF-κB pathway activation [6], as well as by metformin via glycosylation [7]. PD-L1 expression differs across cancers, despite exposure to the same growth factor milieu of the fetal calf serum, suggesting cell-intrinsic regulation of PD-L1 expression. For example, the human lung adenocarcinoma cell line, A549, expresses low PD-L1 [8], while the human breast cancer cell line, MDA-MB-231, expresses high PD-L1 levels [9].

Raman microspectroscopy (RM) is non-destructive and measures vibrational and rotational modes of macromolecules at the single-cell level using inelastic laser light scattering [10]. Surface enhanced Raman spectroscopy (SERS) enhances the Raman signals of molecules adsorbed on rough metal surfaces and has rapidly progressed from a bench-scale spectroscopic tool to a preclinical diagnostic technique [11]. Recently, our lab developed multiplexed SERS probes to visualize the distribution of G protein-coupled receptor 120 (GPR-120) and the cluster of differentiation 36 receptor (CD36) on taste bud cells [12]. Compared to other imaging systems, SERS imaging offers high sensitivity and specificity at the single-cell level without the limitations caused by tissue autofluorescence and photobleaching. On the other hand, cell-based SERS experiments can generally be divided into two methodologies: the SERS-reporter (SERS-label) approach or the reporter-free (label-free) SERS approach [13]. The nanoparticles, either labeled with a Raman reporter molecule [14–16] or label-free [17–19], might be internalized by cells, which can contribute to the background signals but is often ignored.

PD-L1 distribution on MDA-MB-231 cells has been visualized at an intensity of 1580 cm-1 of the Raman reporter p-mercaptobenzoic acid (p-MBA) after background subtraction [20]. However, this approach discarded most of the spectral information and only used a single Raman peak. This limitation can be overcome by multivariate spectral analysis methods, such as multivariate curve resolution-alternating least squares (MCR-ALS). MCR-ALS is a powerful approach that is commonly used to identify complex spectral profiles and visualize the distribution of major components that are spectrally different in cell or tissue samples [21–23]. The combination of SERS and MCR-ALS has been successfully applied to quantifying nitroxoline [24] and adenosine [25] in clinical samples. However, MCR-ALS may not be the method of choice for Raman spectra that contain a single, dominant peak. When dealing with antibody-SERS conjugates for detection of proteins in cells, the nonspecific binding of the SERS particles as well as of the antibody has to be subtracted to arrive at specific binding intensities. However, specific binding measurements are rarely reported when using antibody-SERS probes. When the assay's goal is to guide treatment decisions, such as in the case of PD-L1 expression levels, separating specific from non-specific signals is critical.
In this paper, we analyze the PD-L1 signal obtained from SERS by MCR-ALS and by single peak quantification within segmented cell boundaries. This approach generates a high-resolution, spatial map of PD-L1 expression on the surface of single cancer cells. The SERS probes are conjugated to a PD-L1 antibody, allowing specific binding to the target molecule on the cell surface. Even without antibody, nanoparticles can non-specifically adhere to the cell surface (extracellular) and can be endocytosed by the cells (intracellular), which can also contribute to the background signals in Raman measurements. These conditions complicate the interpretation of the data and have been overlooked in most SERS detection applications [26,27]. Included in the workflow are baseline subtraction and background processes, which remove non-specific binding and endocytic SERS intensities. Since expression of PD-L1 is known to be regulated by IFN-γ or metformin, we asked whether SERS can detect expression changes in PD-L1 after drug treatment.

2.1 Cell culture

A549 cells and MDA-MB-231 cells were purchased from ATCC (Manassas, VA, USA) and cultured in F-12K medium containing 10% fetal bovine serum (Thermo Fisher Scientific, Waltham, MA, USA) and in a 1:1 mixture of Dulbecco's modified Eagle's medium (DMEM) supplemented with 5% fetal bovine serum, respectively. HEK293 cells were generously donated by Dr. Timothy Gilbertson at the University of Central Florida and cultured in DMEM-GlutaMAX media (Life Technologies, 10569-010), supplemented with 10% Tet-free fetal bovine serum (Fisher, NC0290780), Blasticidin S HCl (10 µg/mL) and Hygromycin B (100 µg/mL). All cell lines were cultivated at 37 °C with 5% CO2 in a humidified atmosphere. Metformin was purchased from Sigma-Aldrich (D150959, St. Louis, USA).

2.2 Nanoparticle conjugation and characterization

5,5′-dithiobis(2-nitrobenzoic acid) (DTNB)- and anti-PD-L1 antibody-conjugated nanoprobes were synthesized for Raman SERS detection (Fig. S1). Briefly, 1 ml (39 µg/ml) of Au nanorods (A12-10-780-CTAB, Nanopartz Inc., Loveland, CO, USA), 10 nm in diameter and 40 nm in length (polydispersity index = 7∼8%), was first sonicated with 10 μL DTNB (1 mM, Sigma-Aldrich, St. Louis, MO) for 30 min. Then 10 μL HOOC-polyethylene glycol (PEG)-SH (1 mg/ml, MW 5000, Nanocs Inc., USA) was added to conjugate with the nanorods for 15 min, followed by sonication with 40 μL PEG-SH (1 mg/ml, MW 5000, Nanocs Inc., USA) for another 30 min. After centrifuging for 15 min (12,000g) to remove excess DTNB and PEG, particles were resuspended in 500 μL DI water and mixed with freshly prepared 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) (10 mM, 10 µL, Sigma-Aldrich, St. Louis, MO) and N-hydroxysuccinimide (NHS) (25 mM, 10 µL, Sigma-Aldrich, St. Louis, MO) solution. After a 30-minute reaction, the excess supernatant was removed after centrifugation (12,000g, 15 min). The resultant particles were resuspended in 200 μL PBS and conjugated with PD-L1 antibody (0.2 mg/mL, 10 µL, Acris, Herford, Germany) for another 1 h. Finally, particles were resuspended in 100 µL phosphate-buffered saline (PBS) after discarding the excess antibody. The functionalized nanoparticles (estimated as 19.5 µg/ml) can be preserved for several days at 4 °C. The SERS probes were incubated with live cells for 24 hours. Cells were either imaged live using Raman spectroscopy, or fixed and imaged with a JEOL JEM-1400 Plus transmission electron microscope at 120 kV at the Electron Microscopy Core Laboratory of the University of Utah.
Cells were seeded on ACLAR discs, grown as a monolayer culture until they reached the desired confluency, and incubated with SERS probes for 24 h. After washing three times with PBS, the cells were fixed with 1% paraformaldehyde and 2.5% glutaraldehyde in 0.1 M cacodylate buffer and incubated at 4 °C overnight. The next day, the specimens were rinsed twice in cacodylate buffer and postfixed in 2% osmium tetroxide for 1 h at room temperature, followed by rinses with the same buffer and dH2O. The specimens were poststained with uranyl acetate for 1 h at room temperature, followed by dehydration in a graded ethanol series (50, 70, 95, 100%) and absolute acetone, 10 min each step. The ACLAR disks with the cells were infiltrated with epoxy resin EMBED-812, embedded, and polymerized at 60 °C for 48 h. Thin sections (70 nm) were obtained using a Leica UC6 ultramicrotome (Leica Microsystems, Vienna, Austria), poststained with uranyl acetate and lead citrate, and imaged with a JEOL JEM-1400 Plus transmission electron microscope at 120 kV. All reagents used in TEM imaging were from Electron Microscopy Sciences (PA, USA).

2.3 Raman spectroscopy

The Raman spectra were measured with a Renishaw inVia Raman spectrometer (controlled by WiRE 3.4 software, Renishaw, UK) connected to a Leica microscope (Leica DMLM, Leica Microsystems, Buffalo Grove, IL, USA) and equipped with a 785 nm near-IR laser that was focused through a 63×, NA = 0.90 water immersion objective (Leica Microsystems, USA). The laser intensity before and after travelling through the 63× objective was 110 mW and 29.4 mW, respectively (measured with LaserCheck, Coherent Inc., Portland, OR, USA). The spectrometer was calibrated against the silicon peak at 520.5 ± 0.1 cm-1 in a static spectrum. Cells were cultured on MgF2 (United Crystals Co., Port Washington, NY, USA) for 24 h and incubated with either IFN-γ (0, 10 and 100 ng/ml) or metformin (0, 6 mM) for 24 h. SERS probes were added into the cell culture system 24 hours prior to StreamLine mapping (100% laser power, 5 s exposure time, 0.8 µm step). Cells were washed three times with PBS before Raman measurements. A rectangular area covering the whole single cell was selected to collect the spectra. Image segmentation was used to exclude spectra collected from the background outside of the cell area. The remaining spectra in each measurement were baseline corrected using asymmetric least-squares smoothing in OriginLab 2020 (asymmetric factor: 0.001, threshold: 0.05, smoothing factor: 5, number of iterations: 10). MCR-ALS was used to analyze the hyperspectral imaging data.

The threshold level in the SERS image was defined based on the results of the negative control experiment using gold nanorods + DTNB without conjugated antibody (thus no recognition binding with the PD-L1 receptor). The 99th percentile of SERS intensity counts (peak height) from each cell was used to calculate the average peak height across all cells in the control group. This average was used as the threshold to separate specific and non-specific intensity counts in each pixel (Fig. S2). A paired t-test was used to compare the mean difference between the control and treatment groups. We also determined the percentage of PD-L1 positive area over the whole cell area after removing the non-specific binding signals. Pearson correlation analysis was used to characterize the relationship between the percentage and the mean intensity of the PD-L1 positive area in each treatment. P < 0.05 was considered significant.
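For concreteness, a minimal MATLAB sketch of the thresholding and per-cell quantification described above is given below. This is not the authors' script: the variable names and placeholder data are illustrative assumptions, and prctile requires the Statistics and Machine Learning Toolbox.

```matlab
% Placeholder inputs: background-corrected 1328 cm-1 peak heights per pixel.
controlPeaks = {100 + 20*randn(1,500), 110 + 20*randn(1,500)}; % control cells (nanorod + DTNB, no antibody)
cellPeaks    = 250 + 60*randn(1,800);                          % one PD-L1 probe-labeled cell

% Threshold: 99th percentile of peak heights per control cell, averaged over the control group.
controlP99 = cellfun(@(v) prctile(v, 99), controlPeaks);
threshold  = mean(controlP99);

% Separate specific (PD-L1 positive) pixels from non-specific ones.
isPositive = cellPeaks > threshold;

% Per-cell metrics used in the subsequent analysis.
meanInCell   = mean(cellPeaks);                          % mean peak height over the whole cell
maxInCell    = max(cellPeaks);                           % maximum peak height in the cell
meanPositive = mean(cellPeaks(isPositive));              % mean peak height over PD-L1 positive pixels
pctPositive  = 100 * nnz(isPositive) / numel(cellPeaks); % percentage of PD-L1 positive cell area
```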
Analysis was performed in OriginLab 2020 (Massachusetts, USA) and in MATLAB 2019b. All data are represented as mean ± SD.

2.4 MCR-ALS

MCR is designed to decompose the spectral profiles and concentration levels of a multi-component (chemical) compound. MCR-ALS analysis is an iterative approach to MCR based on alternating least squares matrix factorization. The algorithm factors an $n\times d$ matrix D (n being the number of collected samples and d the number of features, in this case wavenumbers) into two matrices, C (an $n\times k$ matrix of concentrations) and $S^{T}$ (a $k\times d$ matrix of spectral profiles). Here, k is the number of components of the compound, as determined either by prior knowledge or by an estimation, typically done using principal component analysis (PCA) or singular value decomposition (SVD). The factorization takes the form below, where E is an error term. $$D = CS^{T} + E$$ The idea behind the alternating least squares method is to minimize the error of the factorization problem. The error is calculated between the data matrix and the estimated factors, and the optimization problem (ALS) is solved iteratively. That is, after fixing C, $\Vert \hat{D}_{PCA} - \hat{C}\hat{S}^{T}\Vert^{2}$ is minimized over $S^{T}$. Then $S^{T}$ is fixed and $\Vert \hat{D}_{PCA} - \hat{C}\hat{S}^{T}\Vert^{2}$ is minimized over C. At each iteration, the lack of fit is calculated using $$LoF = 100\sqrt{\frac{\sum_{ij} e_{ij}^{2}}{\sum_{ij} d_{ij}^{2}}}$$ where $d_{ij}$ is an element of either the original data matrix D or the principal component analysis (PCA) scores of D ($\hat{D}_{PCA}$), and $e_{ij}$ is the corresponding residual given by the difference between the matrix D (or $\hat{D}_{PCA}$) and the MCR-ALS reconstruction [28]. The process is repeated until a convergence criterion is met [28]. The optimization problem has additional optional constraints, such as non-negativity (of the concentrations, the spectral profiles, or both), unimodality, and closure [28,29]. Due to the nature of the Raman spectra, we activate the non-negativity constraint (to ensure that the fitted values at each wavenumber are non-negative). Unimodality of concentration was often exhibited without enforcing the constraint. The implementation we used was in MATLAB (2019b). Slight adaptations were made to the original package to allow for automation over multiple image samples. The estimate of component i of the original matrix D, denoted $D_{i}$, may be recovered by taking the outer product of the ith column of C with the ith row of $S^{T}$, that is, $D_{i} = C_{\cdot i}S_{i\cdot}^{T}$. The summation of all of the components (plus the error) reconstructs the original matrix D, that is, $D = \sum_{i} D_{i} + E$.

2.5 Imaging segmentation detection

Prior to the application of image segmentation, the light cell image was cropped to the same dimensions as the spectral image to align the areas where the cell is present with the Raman spectra captured. Edge detection was performed in MATLAB 2019b using the Image Processing Toolbox. The bitmap light cell image undergoes a series of alterations prior to the final result. Image segmentation makes use of the gradient magnitude of the colors within the image to identify boundaries of objects within an image [30]. After this preliminary step, the edges are dilated (broadened) to fully encapsulate the interior of the cell within the image.
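The remaining segmentation steps (filling, smoothing, and mapping back to the Raman grid) are described in the next paragraph. As an illustration only, a minimal MATLAB sketch of the whole pipeline is shown here; it is not the authors' code, and the image file name, the structuring-element radius, and the Raman map dimensions are placeholders.

```matlab
% Minimal sketch of the brightfield segmentation pipeline (Image Processing Toolbox).
I = imread('cell_brightfield.tif');   % placeholder file name for the cropped brightfield image
if size(I, 3) == 3
    I = rgb2gray(I);                  % work on intensity values
end

bw     = edge(I, 'sobel');            % gradient-magnitude edge map
se     = strel('disk', 3);            % structuring element (placeholder radius)
bwDil  = imdilate(bw, se);            % dilate edges to close the cell outline
bwFill = imfill(bwDil, 'holes');      % fill the interior of the cell
bwCell = imerode(bwFill, se);         % smooth the boundary
bwCell = bwareafilt(bwCell, 1);       % keep the largest connected object (the cell)

% Map the mask onto the Raman acquisition grid (placeholder map dimensions).
nRows = 40; nCols = 50;
ramanMask   = imresize(bwCell, [nRows, nCols], 'nearest');
keptSpectra = find(ramanMask);        % indices of pixels/spectra inside the cell
```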
Once the edges are sufficiently dilated to enclose the cell, the cell region is filled to fully segment cellular presence and absence. The cellular edges are then smoothed to improve the image representation. Once the cell is identified within the image, coordinates where the cell is present are mapped to the original coordinate system produced during Raman imaging for further spectral analysis.

The end-to-end pipeline of the hyperspectral image analysis is shown in Fig. 1. The approach allows data collection from single cells to determine the distribution of PD-L1 within subdomains of the cell membrane. First, cells were labeled with anti-PD-L1 antibody conjugated to surface enhanced Raman scattering (SERS) nanoparticles. Second, the binding of the antibody to PD-L1 on the cell surface was visualized using Raman spectroscopy (single peak mapping or MCR-ALS). In addition, a brightfield image was used to delineate the boundary of the cell. Raman spectra inside the segmented cell area were retained. The background was subtracted to reveal the peak height of the Raman signals. The mean and maximum intensities of PD-L1 expression were determined over the entire cell. Alternatively, a threshold was established based on non-specific binding to distinguish PD-L1 positive from PD-L1 negative pixels. Thereafter, the percentage of PD-L1 positive cell area inside the cell outline and the mean intensity within the PD-L1 positive area were determined.

Fig. 1. Schematic illustration of our data processing approach. Raman spectra are collected throughout the whole slide and analyzed by MCR-ALS (A). In addition, cells are imaged using a brightfield microscope and the image is processed for cell segmentation (B). Raman spectra are extracted from the inside of the segmented region and the maximum peak height of each spectrum is recorded (C). The background is subtracted from the maximum peak height to obtain the Raman intensity signal. Intensity measurements are displayed as a heatmap over the cell and a threshold reflecting the non-specific binding is used to distinguish positive and negative pixels. The approach enables calculations of average signal intensity in the whole cell as well as in the area encompassed by positive pixels.

3.1 Cell labeling with PD-L1-SERS nanoparticles (PD-L1 probe)

To determine the binding characteristics of the functionalized PD-L1 probe, the probe was incubated with A549 cells for 24 h at 37 °C and imaged by TEM (Fig. 2). The size of the rod-shaped nanoparticle after multiple conjugation steps (width 14.172 ± 0.598 nm and length 45.377 ± 2.995 nm, Fig. 2(B) inset) is roughly equal to the original dimensions (10 nm diameter and 40 nm length) provided by the manufacturer. TEM showed many PD-L1 probes on the cell surface, consistent with the subcellular location of the PD-L1 receptor (Fig. 2(A)-(B)). In addition, several probes entered the cell and were surrounded by membrane-bound structures, indicating that PD-L1 probes are internalized by endocytosis (Fig. 2(C)-(D)). Notably, the probe maintained its structural integrity outside and inside the cell. After endocytosis, nanoparticles were shown to aggregate within vesicular structures [31,32] from which they can be released into the cytoplasm [33]. After incubation with cells for 24 h, probes were densely aggregated within membrane-bound vesicular structures (denoted as "V" in Fig. 2(D)). No PD-L1 probes were observed inside the nucleus (denoted as "N" in Fig. 2(C)-(D)) or within mitochondria (denoted as "M" in Fig.
2(D)), which is similar to previously reported findings [31,32]. According to recent reports, the SERS signals from endocytosed nanoparticles decrease to a level comparable to that of the signals from cellular components [14,15]. In our study, all the cellular intrinsic peaks (∼1000 counts) were overwhelmed by the enhanced Raman signals (∼8000 counts) from the cell surface (Fig. S1B). Therefore, the endocytosed nanorods labeled with DTNB may not contribute in a significant way to the overall specific signal in the spectral data. To address the issues arising from non-specific binding (nanorods labeled with or without DTNB) and from endocytosed nanorods, we sought to apply the following methods to remove the potential background signal.

Fig. 2. Transmission electron micrographs of A549 cells incubated with the PD-L1 probe for 24 h. The nanoparticles appear as electron-dense rod-shaped structures. (A): Surface binding of the SERS probe; the probe is seen on the cell membrane. (B): Enlarged area noted by the red square in (A); the size of a representative single nanorod is shown in the inset. (C-D): Endocytic uptake of SERS probes into cells, with formation of SERS aggregates (red arrow) within endosomal vesicles (V). SERS probes are not observed inside the nucleus (N) or within mitochondria (M).

3.2 Visualization of PD-L1 on the surface of single cells

Three cell lines, MDA-MB-231 (breast cancer), A549 (lung cancer), and HEK293 (human embryonic kidney), were treated either with control, unconjugated nanorods containing the Raman reporter DTNB (no PD-L1 antibody) or with DTNB-nanorods conjugated with the PD-L1 antibody. The MDA-MB-231 and A549 cells are PD-L1 positive, while HEK293 cells do not express PD-L1 and were used as a control cell line to evaluate the non-specific binding of the PD-L1 antibody. A spectrum was collected from every pixel within a rectangular area covering the entire cell (Fig. 3, upper panel). Because the spectral characteristics of the DTNB probe revealed a single, dominant peak at 1328 cm-1, we compared the intensity maps from the MCR-ALS multivariate analysis method to the single peak height analysis. To remove the background signal, we extracted spectra from an area with a low peak height (green circle) and subtracted the background from all spectra of the cell. The raw and corrected (background-removed) spectra from the control group and the PD-L1 probe-treated group of A549 cells are shown in Fig. S2. The MCR-ALS analysis resulted in a better signal-to-noise ratio and less noise compared to the single peak height method (Fig. 3, middle and bottom panels), demonstrating the advantage of multivariate analysis over univariate approaches. Compared to PD-L1 negative HEK293 cells, a strong signal was observed in MDA-MB-231 cells and A549 cells. Results in the control cells incubated with unconjugated DTNB-nanorods did not demonstrate a difference in signal intensity across the three cell lines. Together, these findings are consistent with the known expression of PD-L1 in breast and lung cancer cells but not in HEK293 cells. However, the results also demonstrate that background and non-specific binding cause significant technical and biological errors in the quantitative assessment of PD-L1 expression. Therefore, we proceeded to systematically evaluate these sources of error and to remove them in order to generate specific binding maps for the PD-L1 antibody in individual cells.

Fig. 3.
Comparison of single peak mapping and MCR-ALS mapping in three cell lines (MDA-MB-231, A549, HEK293) after incubation with gold nanorods + DTNB (negative control) and PD-L1 probes. Light images of single cells are shown in the first row, and the area used to collect Raman spectra is shown by the black rectangle. The results from the single peak (1328 cm-1) mapping are shown in the second row. The third row depicts spectra recorded from the pixel areas designated as X (black line) and O (green line). In the fourth row, MCR-ALS mapping results are shown, using the same color bar range from 0 to 1200 counts. Scale bar: 10 µm.

To establish the threshold above which we detect specific binding of the PD-L1 antibody, we determined the intensity levels in the control groups, which are generated by the DTNB nanorod alone (Fig. 4(A)). The averaged 99th percentile in the distribution of peak heights was used as the threshold to separate positive from negative PD-L1 pixels. The threshold was applied across all cells labeled with the PD-L1 antibody in the same experiment. The thresholds are 255.99 Raman counts for MDA-MB-231 cells and 318.79 Raman counts for A549 cells (see Fig. S3 for details). Most individual cells incubated with PD-L1 probes revealed high-magnitude signals, and the average signal in A549 lung cancer cells and MDA-MB-231 breast cancer cells was significantly greater than the antibody-free control signal. In addition, the average signal in MDA-MB-231 cells was greater than the average signal in A549 cells, but the difference did not reach statistical significance. Interestingly, we observed a biphasic distribution of signals in A549 cells, with a number of cells expressing low/no PD-L1. No significant difference was seen between the negative control and the PD-L1 probe in HEK293 cells (Fig. 4(A), purple box), indicating that the PD-L1 antibody does not contribute to non-specific binding.

Fig. 4. Workflow to generate a mask of specific maximum peak intensities from SERS signals. A549 and MDA-MB-231 cells were used as positive controls and HEK293 cells as a negative control of PD-L1 expression. At least 7 individual cells from each cell line incubated with DTNB-nanorod or nanorod-PD-L1 antibody conjugates were imaged (A). The 99th percentile of SERS intensities in the control group (dashed lines) was used as the threshold to separate positive from negative pixels and averaged to calculate the noise level (dashed line). The scanning area covering the whole single cell (an A549 cell with only the PD-L1 probe as an example) is denoted by the black dashed rectangle in (B). The image was further segmented to obtain the outline of the cell. Raman spectra inside the cell were retained as shown in (C). The heatmap of PD-L1 SERS signals after thresholding to separate specific (color) and non-specific (white) signals is shown in (D). *p < 0.05, paired t-test. Scale bar in (B): 10 µm.

In single cell Raman imaging, the Raman scanning area covers all of the cell and an additional extracellular area (Fig. 4(B)). Raman spectra within the cell area were retained and further background corrected, while Raman spectra outside the cell boundary were discarded. Figure 4(C) illustrates a heatmap of background-corrected SERS peaks within the outline of a representative A549 cell. A cluster of pixels with high-intensity signals is surrounded by a large region of pixels at or below the non-specific binding threshold. After subtracting the threshold of non-specific signal indicated in Fig.
4(A), the final heatmap is generated in Fig. 4(D), showing colors only for specific PD-L1 positive pixels within the cell.

3.3 Effect of IFN-γ on PD-L1 expression

Treatment of cells with IFN-γ at a concentration of 0-100 ng/ml was used to induce PD-L1 expression [34]. For quantitative analysis of PD-L1, we analyzed Raman peaks at 1328 cm-1 and calculated 1) the mean peak height within the cell outline, 2) the maximum peak height within the cell, and 3) the mean peak height in the area encompassed by positive pixels (illustrated in Fig. 4(D)). In addition, we determined the percentage of the cell area that contains PD-L1 positive pixels. First, we used the mean peak height inside cells to represent the average PD-L1 expression level. This approach did not reveal significant changes in PD-L1 expression levels before and after treatment with IFN-γ (p > 0.05, Fig. 5(A)) for either A549 or MDA-MB-231 cells. Next, we used the maximum peak height and observed a significant increase in PD-L1 in A549 cells, but not in MDA-MB-231 cells (Fig. 5(B)). In addition, the maximum peak height increased in proportion to the amount of IFN-γ used for stimulation. Finally, we analyzed the mean peak height within the area of PD-L1 positive pixels (illustrated in Fig. 4(D)). While we observed a significant increase after IFN-γ treatment (Fig. 5(C)), the difference before and after treatment was less pronounced than that in Fig. 5(B). Altogether, the results demonstrate that the stimulation of A549 cells with IFN-γ causes a significant increase in PD-L1 expression.

Fig. 5. IFN-γ (0, 10, 100 ng/ml) stimulation of A549 and MDA-MB-231 cells. PD-L1 expression measured by averaged SERS intensity counts (A), maximum SERS intensity counts (B), or SERS intensity counts averaged within the PD-L1 positive pixel area (C). The percentage of PD-L1 positive area within the cell outline is shown in (D). Numbers above each box indicate the number of cells positive for PD-L1 divided by the total number of cells analyzed. Cells are divided into two groups, separated by the mean signal intensity of PD-L1 positive pixels. Black squares mark cells with intensities above the mean, and orange squares mark cells with intensities below the mean. The mean of the % positive pixel area is shown by the black line across the box. A paired t-test was performed to determine the significance of the difference between control and treatment groups. *p < 0.05; N = 6-12 cells.

So far, the measurements do not take into consideration whether PD-L1 expression is localized to a specific region or spread throughout the cell. To determine the extent of localized PD-L1 expression, we calculated the percentage of PD-L1 positive pixels (illustrated in Fig. 4(D)) inside the cell boundary. Representative cells with low, medium, and high percentages of PD-L1 positive areas are shown in Figs. S4-S5. The percentage of PD-L1 positive pixels is shown in Fig. 5(D). To visualize the relationship between PD-L1 positive pixels and PD-L1 expression levels, we colored cells with high average PD-L1 expression in black and cells with low average PD-L1 expression in yellow. No significant correlation was observed between the %PD-L1 positive area and the PD-L1 expression level (Table S1).

3.4 Effect of metformin on PD-L1 expression

Since we observed IFN-γ regulation of PD-L1 only in A549 cells, but not in MDA-MB-231 cells, we treated only A549 cells with metformin. Measurement of PD-L1 expression using antibody-SERS conjugates revealed a significant decline of PD-L1 after treatment (Fig. 6(A)-(C)).
Furthermore, the decrease in expression was observed with all image processing methods (cell average, peak height maximum, positive pixel average). In addition, the percentage of PD-L1 positive pixels was significantly reduced after metformin treatment (Fig. 6(D)). Together, these data confirm that metformin down-regulates PD-L1 expression.

Fig. 6. Metformin treatment reduces PD-L1 expression. A549 lung cancer cells were treated with 6 mM metformin. Data were collected as described in Fig. 5. The average peak height measured in counts is shown in (A), maximum peak heights are shown in (B), and the average peak height in the positive pixel area (illustrated in Fig. 4(D)) is shown in (C). The numbers above each box indicate the number of PD-L1 positive cells over the total number of cells analyzed. In (D), cells are divided into an above-the-mean intensity group (black squares) and a below-the-mean intensity group (orange squares). A paired t-test was performed to determine the significance of the difference between control and treatment groups. *p < 0.05; N = 15-18 cells.

Traditionally, expression levels of cell surface receptors are measured by enzyme-linked immunosorbent assay (ELISA) or by flow cytometry of detached, rounded cells. These methods are ideally suited to generate data from cell populations but cannot capture the subcellular location of receptor expression. The spatial resolution needed for subcellular analysis of cell cultures can be provided by fluorescent probes or by Raman-active probes. The probes are conjugated to antibodies that bind to the target protein of interest. However, antibodies and probes generate non-specific signals that need to be eliminated to determine true protein expression levels. In this study, we demonstrate that PD-L1 expression in cancer cells (MDA-MB-231 and A549 cells) can be measured with high spatial resolution using SERS. To obtain highly specific SERS signals, we subtracted the background, which consists of the baseline of the spectrum next to the peak at 1328 cm-1. In addition, we removed non-specific binding signals by setting a threshold to distinguish positive from negative signals. Applying both background and non-specific binding corrections, we obtained specific and high-quality data for PD-L1 antibody binding. Furthermore, we used three methods to process the hyperspectral imaging data: whole-cell peak height average, peak height maximum, and peak height average among positive pixels. Obtaining data from the three methods provides a deeper understanding of the spatial organization of SERS signals and PD-L1 expression in cancer cells [35].

Our study also shows that PD-L1 expression is stimulated by IFN-γ and inhibited by metformin in lung cancer cells (Figs. 5-6), which is consistent with recent reports [36–38]. In contrast, the treatment effect of IFN-γ in breast cancer cells was not significant, perhaps due to the inherently high expression level of PD-L1 in this cell line [9]. The single-cell analysis by SERS mapping provides more information about the heterogeneity among individual cells in the cell line. Our results show that not all cells respond to IFN-γ or metformin (Figs. 5-6) and approximately 20% of cells were PD-L1 negative prior to treatment. Single-cell measurements allow us to recognize cell-to-cell differences, which cannot easily be noticed using conventional methods. The unique characteristics of SERS mapping generate data with high sensitivity and resolution.
While these data demonstrate the heterogeneity of cells in the cancer cell lines, they also let us appreciate limitations in interpreting single-cell data. By comparing cell-to-cell variabilities with the variability of cell populations before and after treatment, we can gain insights into details of the treatment effect. Further studies with larger numbers of cell samples will be necessary to confirm the effects of IFN-γ and metformin on PD-L1 expression in cancer cells.

This study demonstrates the successful application of Raman imaging with multivariate analysis and image processing methods on single cells. We successfully employed a multivariate method (MCR-ALS) on hyperspectral imaging data to visualize the distribution of PD-L1 in breast and lung cancer cells. We also used the height of the single, dominant peak in the Raman spectrum to estimate the expression levels of PD-L1. The latter approach has the advantage of allowing a systematic subtraction of non-specific binding and endocytic nanoparticles for each pixel in the cell outline. Whether background signals come from intracellular or extracellular nanoparticles needs to be explored in the future. Image segmentation of brightfield images was performed to extract the spectra solely from within the cell area. Three methods of processing the data from the single peak heights of Raman spectra were compared. The average peak height in the PD-L1 positive pixel area appeared to be more reliable for hyperspectral imaging analysis than the average peak height across the entire cell. The results from this validation study of our newly fabricated PD-L1-specific SERS probes support further use of the probes for cell-based assays and as a tool for the selection of cancer patients for treatment with PD-1/PD-L1 immune checkpoint inhibitors.

ARUP Laboratories; University of Utah; Utah Agricultural Experiment Station; Graduate Research and Creative Opportunities (GRCO) from Utah State University.

We would like to thank the support from the Utah Agricultural Experiment Station and the Graduate Research and Creative Opportunities (GRCO) funding of Utah State University. BSK thanks the University of Utah and ARUP Laboratories for supporting the project. We thank Dr. Timothy Gilbertson (now at the University of Central Florida) for the donation of the HEK293 cells.

1. A. Kythreotou, A. Siddique, F. A. Mauri, M. Bower, and D. J. Pinato, "Pd-L1," J. Clin. Pathol. 71(3), 189–194 (2018).
2. Y. Wang, H. Wang, H. Yao, C. Li, J. Y. Fang, and J. Xu, "Regulation of PD-L1: Emerging Routes for Targeting Tumor Immune Evasion," Front. Pharmacol. 9, 536 (2018).
3. J. Gong, A. Chehrazi-Raffle, S. Reddi, and R. Salgia, "Development of PD-1 and PD-L1 inhibitors as a form of cancer immunotherapy: a comprehensive review of registration trials and future considerations," Immunother. Cancer 6(1), 8 (2018).
4. L. Zitvogel and G. Kroemer, "Targeting PD-1/PD-L1 interactions for cancer immunotherapy," OncoImmunology 1(8), 1223–1225 (2012).
5. S. Heskamp, W. Hobo, J. D. Molkenboer-Kuenen, D. Olive, W. J. Oyen, H. Dolstra, and O. C. Boerman, "Noninvasive Imaging of Tumor PD-L1 Expression Using Radiolabeled Anti-PD-L1 Antibodies," Cancer Res. 75(14), 2928–2936 (2015).
6. K. Abiko, N. Matsumura, J. Hamanishi, N. Horikawa, R. Murakami, K. Yamaguchi, Y. Yoshioka, T. Baba, I. Konishi, and M. Mandai, "IFN-gamma from lymphocytes induces PD-L1 expression and promotes progression of ovarian cancer," Br. J. Cancer 112(9), 1501–1509 (2015).
7. J. H. Cha, W. H. Yang, W. Xia, Y. Wei, L. C. Chan, S. O. Lim, C. W. Li, T. Kim, S. S. Chang, H. H. Lee, J. L. Hsu, H. L. Wang, C. W. Kuo, W. C. Chang, S. Hadad, C. A. Purdie, A. M. McCoy, S. Cai, Y. Tu, J. K. Litton, E. A. Mittendorf, S. L. Moulder, W. F. Symmans, A. M. Thompson, H. Piwnica-Worms, C. H. Chen, K. H. Khoo, and M. C. Hung, "Metformin Promotes Antitumor Immunity via Endoplasmic-Reticulum-Associated Degradation of PD-L1," Mol. Cell 71(4), 606–620.e7 (2018).
8. A. Anantharaman, T. Friedlander, D. Lu, R. Krupa, G. Premasekharan, J. Hough, M. Edwards, R. Paz, K. Lindquist, R. Graf, A. Jendrisak, J. Louw, L. Dugan, S. Baird, Y. Wang, R. Dittamore, and P. L. Paris, "Programmed death-ligand 1 (PD-L1) characterization of circulating tumor cells (CTCs) in muscle invasive and metastatic bladder cancer patients," BMC Cancer 16(1), 744 (2016).
9. E. M. Rom-Jurek, N. Kirchhammer, P. Ugocsai, O. Ortmann, A. K. Wege, and G. Brockhoff, "Regulation of Programmed Death Ligand 1 (PD-L1) Expression in Breast Cancer Cell Lines In Vitro and in Immunodeficient and Humanized Tumor Mice," Int. J. Mol. Sci. 19(2), 563 (2018).
10. W. Zhang, Q. Li, M. Tang, H. Zhang, X. Sun, S. Zou, J. L. Jensen, T. G. Liou, and A. Zhou, "A multi-scale approach to study biochemical and biophysical aspects of resveratrol on diesel exhaust particle-human primary lung cell interaction," Sci. Rep. 9(1), 18178 (2019).
11. H. Zhang, W. Zhang, L. Xiao, Y. Liu, T. A. Gilbertson, and A. Zhou, "Use of Surface-Enhanced Raman Scattering (SERS) Probes to Detect Fatty Acid Receptor Activity in a Microfluidic Device," Sensors (Basel) 19 (2019).
12. W. Zhang, F. Lin, Y. Liu, H. Zhang, T. A. Gilbertson, and A. Zhou, "Spatiotemporal dynamic monitoring of fatty acid–receptor interaction on single living cells by multiplexed Raman imaging," Proceedings of the National Academy of Sciences, 201916238 (2020).
13. J. Taylor, A. Huefner, L. Li, J. Wingfield, and S. Mahajan, "Nanoparticles and intracellular applications of surface-enhanced Raman spectroscopy," Analyst 141(17), 5037–5055 (2016).
14. J. Kim, S. H. Nam, D. K. Lim, and Y. D. Suh, "SERS-based particle tracking and molecular imaging in live cells: toward the monitoring of intracellular dynamics," Nanoscale 11(45), 21724–21727 (2019).
15. Y. Shen, L. Liang, S. Zhang, D. Huang, J. Zhang, S. Xu, C. Liang, and W. Xu, "Organelle-targeting surface-enhanced Raman scattering (SERS) nanosensors for subcellular pH sensing," Nanoscale 10(4), 1622–1630 (2018).
16. A. Kapara, V. Brunton, D. Graham, and K. Faulds, "Investigation of cellular uptake mechanism of functionalised gold nanoparticles into breast cancer using SERS," Chem. Sci. 11(22), 5819–5829 (2020).
17. J. Kneipp, H. Kneipp, M. McLaughlin, D. Brown, and K. Kneipp, "In vivo molecular probing of cellular compartments with gold nanoparticles and nanoaggregates," Nano Lett. 6(10), 2225–2231 (2006).
18. A. Huefner, W. L. Kuan, R. A. Barker, and S. Mahajan, "Intracellular SERS nanoprobes for distinction of different neuronal cell types," Nano Lett. 13(6), 2463–2470 (2013).
19. A. Huefner, W. L. Kuan, K. H. Muller, J. N. Skepper, R. A. Barker, and S. Mahajan, "Characterization and Visualization of Vesicles in the Endo-Lysosomal Pathway with Surface-Enhanced Raman Spectroscopy and Chemometrics," ACS Nano 10(1), 307–316 (2016).
20. J. A. Webb, Y. C. Ou, S. Faley, E. P. Paul, J. P. Hittinger, C. C. Cutright, E. C.
Lin, L. M. Bellan, and R. Bardhan, "Theranostic Gold Nanoantennas for Simultaneous Multiplexed Raman Imaging of Immunomarkers and Photothermal Therapy," ACS Omega 2(7), 3583–3594 (2017).
21. E. G. Lobanova and S. V. Lobanov, "Efficient quantitative hyperspectral image unmixing method for large-scale Raman micro-spectroscopy data analysis," Anal. Chim. Acta 1050, 32–43 (2019).
22. J. F. Hsu, P. Y. Hsieh, H. Y. Hsu, and S. Shigeto, "When cells divide: Label-free multimodal spectral imaging for exploratory molecular investigation of living cells during cytokinesis," Sci. Rep. 5(1), 17541 (2015).
23. S. K. Paidi, P. M. Diaz, S. Dadgar, S. V. Jenkins, C. M. Quick, R. J. Griffin, R. P. M. Dings, N. Rajaram, and I. Barman, "Label-Free Raman Spectroscopy Reveals Signatures of Radiation Resistance in the Tumor Microenvironment," Cancer Res. 79(8), 2054–2064 (2019).
24. H. Izabella J., J. Martin, W. Karina, B. Thomas, W. P. Mathias, C.-M. Dana, and P. Juergen, "Lab-on-a-chip-surface enhanced raman scattering combined with the standard addition method: toward the quantification of nitroxoline in spiked human urine samples," Anal. Chem. 88(18), 9173–9180 (2016).
25. J. E. L. Villa, C. Pasquini, and R. J. Poppi, "Surface-enhanced Raman spectroscopy and MCR-ALS for the selective sensing of urinary adenosine on filter paper," Talanta 187, 99–105 (2018).
26. T. T. Chuong, A. Pallaoro, C. A. Chaves, Z. Li, J. Lee, M. Eisenstein, G. D. Stucky, M. Moskovits, and H. T. Soh, "Dual-reporter SERS-based biomolecular assay with reduced false-positive signals," Proc. Natl. Acad. Sci. U. S. A. 114(34), 9056–9061 (2017).
27. L. Sinha, Y. Wang, C. Yang, A. Khan, J. G. Brankov, J. T. Liu, and K. M. Tichauer, "Quantification of the binding potential of cell-surface receptors in fresh excised specimens via dual-probe modeling of SERS nanoparticles," Sci. Rep. 5(1), 8582 (2015).
28. J. Jaumot, R. Gargallo, A. de Juan, and R. Tauler, "A graphical user-friendly interface for MCR-ALS: a new tool for multivariate curve resolution in MATLAB," Chemom. Intell. Lab. Syst. 76(1), 101–110 (2005).
29. A. de Juan and R. Tauler, "Multivariate Curve Resolution (MCR) from 2000: Progress in Concepts and Applications," Crit. Rev. Anal. Chem. 36(3-4), 163–176 (2006).
30. J. Canny, "A Computational Approach to Edge Detection," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8(6), 679–698 (1986).
31. Y. Tang, Y. Shen, L. Huang, G. Lv, C. Lei, X. Fan, F. Lin, Y. Zhang, L. Wu, and Y. Yang, "In vitro cytotoxicity of gold nanorods in A549 cells," Environ. Toxicol. Pharmacol. 39(2), 871–878 (2015).
32. M. Remzova, R. Zouzelka, T. Brzicova, K. Vrbova, D. Pinkas, P. Rossner, J. Topinka, and J. Rathousky, "Toxicity of TiO2, ZnO, and SiO2 Nanoparticles in Human Lung Cells: Safe-by-Design Development of Construction Materials," Nanomaterials (Basel) 9(7), 968 (2019).
33. Z. Krpetic, S. Saleemi, I. A. Prior, V. See, R. Qureshi, and M. Brust, "Negotiation of intracellular membrane barriers by TAT-modified gold nanoparticles," ACS Nano 5(6), 5195–5201 (2011).
34. L. A. Stanciu, C. M. Bellettato, V. Laza-Stanca, A. J. Coyle, A. Papi, and S. L. Johnston, "Expression of Programmed Death–1 Ligand (PD-L) 1, PD-L2, B7-H3, and Inducible Costimulator Ligand on Human Respiratory Tract Epithelial Cells and Regulation by Respiratory Syncytial Virus and Type 1 and 2 Cytokines," J. Infect. Dis. 193(3), 404–412 (2006).
35. J. P. Smith, F. C. Smith, and K. S. Booksh, "Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) with Raman Imaging Applied to Lunar Meteorites," Appl. Spectrosc. 72(3), 404–419 (2018).
36. L. A. Stanciu, C. M. Bellettato, V. Laza-Stanca, A. J. Coyle, A. Papi, and S. L. Johnston, "Expression of programmed death-1 ligand (PD-L) 1, PD-L2, B7-H3, and inducible costimulator ligand on human respiratory tract epithelial cells and regulation by respiratory syncytial virus and type 1 and 2 cytokines," J. Infect. Dis. 193(3), 404–412 (2006).
37. J. Xue, L. Li, N. Li, F. Li, X. Qin, T. Li, and M. Liu, "Metformin suppresses cancer cell growth in endometrial carcinoma by inhibiting PD-L1," Eur. J. Pharmacol. 859, 172541 (2019).
38. J.-J. Zhang, Q.-S. Zhang, Z.-Q. Li, J.-W. Zhou, and J. Du, "Metformin attenuates PD-L1 expression through activating Hippo signaling pathway in colorectal cancer cells," Eur. J. Pharmacol. 11, 6965–6976 (2019).
114(34), 9056–9061 (2017). L. Sinha, Y. Wang, C. Yang, A. Khan, J. G. Brankov, J. T. Liu, and K. M. Tichauer, "Quantification of the binding potential of cell-surface receptors in fresh excised specimens via dual-probe modeling of SERS nanoparticles," Sci. Rep. 5(1), 8582 (2015). J. Jaumot, R. Gargallo, A. de Juan, and R. Tauler, "A graphical user-friendly interface for MCR-ALS: a new tool for multivariate curve resolution in MATLAB," Chemom. Intell. Lab. Syst. 76(1), 101–110 (2005). A. de Juan and R. Tauler, "Multivariate Curve Resolution (MCR) from 2000: Progress in Concepts and Applications," Crit. Rev. Anal. Chem. 36(3-4), 163–176 (2006). J. Canny, "A Computational Approach to Edge Detection," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8(6), 679–698 (1986). Y. Tang, Y. Shen, L. Huang, G. Lv, C. Lei, X. Fan, F. Lin, Y. Zhang, L. Wu, and Y. Yang, "In vitro cytotoxicity of gold nanorods in A549 cells," Environ. Toxicol. Pharmacol. 39(2), 871–878 (2015). M. Remzova, R. Zouzelka, T. Brzicova, K. Vrbova, D. Pinkas, P. Rossner, J. Topinka, and J. Rathousky, "Toxicity of TiO2, ZnO, and SiO2 Nanoparticles in Human Lung Cells: Safe-by-Design Development of Construction Materials," Nanomaterials (Basel) 9(7), 968 (2019). Z. Krpetic, S. Saleemi, I. A. Prior, V. See, R. Qureshi, and M. Brust, "Negotiation of intracellular membrane barriers by TAT-modified gold nanoparticles," ACS Nano 5(6), 5195–5201 (2011). L. A. Stanciu, C. M. Bellettato, V. Laza-Stanca, A. J. Coyle, A. Papi, and S. L. Johnston, "Expression of Programmed Death–1 Ligand (PD-L) 1, PD-L2, B7-H3, and Inducible Costimulator Ligand on Human Respiratory Tract Epithelial Cells and Regulation by Respiratory Syncytial Virus and Type 1 and 2 Cytokines," J. Infect. Dis. 193(3), 404–412 (2006). J. P. Smith, F. C. Smith, and K. S. Booksh, "Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) with Raman Imaging Applied to Lunar Meteorites," Appl. Spectrosc. 72(3), 404–419 (2018). L. A. Stanciu, C. M. Bellettato, V. Laza-Stanca, A. J. Coyle, A. Papi, and S. L. Johnston, "Expression of programmed death-1 ligand (PD-L) 1, PD-L2, B7-H3, and inducible costimulator ligand on human respiratory tract epithelial cells and regulation by respiratory syncytial virus and type 1 and 2 cytokines," J. Infect. Dis. 193(3), 404–412 (2006). J. Xue, L. Li, N. Li, F. Li, X. Qin, T. Li, and M. Liu, "Metformin suppresses cancer cell growth in endometrial carcinoma by inhibiting PD-L1," Eur. J. Pharmacol. 859, 172541 (2019). J.-J. Zhang, Q.-S. Zhang, Z.-Q. Li, J.-W. Zhou, and J. Du, "Metformin attenuates PD-L1 expression through activating Hippo signaling pathway in colorectal cancer cells," Eur. J. Pharmacol. 11, 6965–6976 (2019). Abiko, K. Anantharaman, A. Baba, T. Baird, S. Bardhan, R. Barker, R. A. Barman, I. Bellan, L. M. Bellettato, C. M. Boerman, O. C. Booksh, K. S. Bower, M. Brankov, J. G. Brockhoff, G. Brown, D. Brunton, V. Brust, M. Brzicova, T. Cai, S. Canny, J. Cha, J. H. Chan, L. C. Chang, S. S. Chang, W. C. Chaves, C. A. Chehrazi-Raffle, A. Chen, C. H. Chuong, T. T. Coyle, A. J. Cutright, C. C. Dadgar, S. Dana, C.-M. de Juan, A. Diaz, P. M. Dings, R. P. M. Dittamore, R. Dolstra, H. Du, J. Dugan, L. Edwards, M. Eisenstein, M. Faley, S. Fan, X. Fang, J. Y. Faulds, K. Friedlander, T. Gargallo, R. Gilbertson, T. A. Gong, J. Graf, R. Graham, D. Griffin, R. J. Hadad, S. Hamanishi, J. Heskamp, S. Hittinger, J. P. Hobo, W. Horikawa, N. Hough, J. Hsieh, P. Y. Hsu, H. Y. Hsu, J. F. Hsu, J. L. Huang, D. Huefner, A. Hung, M. C. Izabella J, H. 
Jaumot, J. Jendrisak, A. Jenkins, S. V. Jensen, J. L. Johnston, S. L. Juergen, P. Kapara, A. Karina, W. Khan, A. Khoo, K. H. Kim, J. Kim, T. Kirchhammer, N. Kneipp, H. Kneipp, J. Kneipp, K. Konishi, I. Kroemer, G. Krpetic, Z. Krupa, R. Kuan, W. L. Kuo, C. W. Kythreotou, A. Laza-Stanca, V. Lee, H. H. Lee, J. Lei, C. Li, C. W. Li, L. Li, N. Li, Q. Li, T. Li, Z. Li, Z.-Q. Liang, C. Liang, L. Lim, D. K. Lim, S. O. Lin, E. C. Lin, F. Lindquist, K. Liou, T. G. Litton, J. K. Liu, J. T. Lobanov, S. V. Lobanova, E. G. Louw, J. Lu, D. Lv, G. Mahajan, S. Mandai, M. Martin, J. Mathias, W. P. Matsumura, N. Mauri, F. A. McCoy, A. M. McLaughlin, M. Mittendorf, E. A. Molkenboer-Kuenen, J. D. Moskovits, M. Moulder, S. L. Muller, K. H. Murakami, R. Nam, S. H. Olive, D. Ortmann, O. Ou, Y. C. Oyen, W. J. Paidi, S. K. Pallaoro, A. Papi, A. Paris, P. L. Pasquini, C. Paul, E. P. Paz, R. Pinato, D. J. Pinkas, D. Piwnica-Worms, H. Poppi, R. J. Premasekharan, G. Prior, I. A. Purdie, C. A. Qin, X. Quick, C. M. Qureshi, R. Rajaram, N. Rathousky, J. Reddi, S. Remzova, M. Rom-Jurek, E. M. Rossner, P. Saleemi, S. Salgia, R. See, V. Shen, Y. Shigeto, S. Siddique, A. Sinha, L. Skepper, J. N. Smith, F. C. Smith, J. P. Soh, H. T. Stanciu, L. A. Stucky, G. D. Suh, Y. D. Sun, X. Symmans, W. F. Tang, M. Tang, Y. Tauler, R. Taylor, J. Thomas, B. Thompson, A. M. Tichauer, K. M. Topinka, J. Tu, Y. Ugocsai, P. Villa, J. E. L. Vrbova, K. Wang, H. Wang, H. L. Wang, Y. Webb, J. A. Wege, A. K. Wei, Y. Wingfield, J. Wu, L. Xia, W. Xiao, L. Xu, J. Xu, S. Xu, W. Xue, J. Yamaguchi, K. Yang, C. Yang, W. H. Yao, H. Yoshioka, Y. Zhang, H. Zhang, J.-J. Zhang, Q.-S. Zhang, S. Zhang, W. Zhou, A. Zhou, J.-W. Zitvogel, L. Zou, S. Zouzelka, R. ACS Nano (2) ACS Omega (1) Anal. Chem. (1) Anal. Chim. Acta (1) Appl. Spectrosc. (1) BMC Cancer (1) Br. J. Cancer (1) Cancer Res. (2) Chem. Sci. (1) Chemom. Intell. Lab. Syst. (1) Crit. Rev. Anal. Chem. (1) Environ. Toxicol. Pharmacol. (1) Eur. J. Pharmacol. (2) Front. Pharmacol. (1) IEEE Trans. Pattern Anal. Mach. Intell. (1) Immunother. Cancer (1) Int. J. Mol. Sci. (1) J. Clin. Pathol. (1) J. Infect. Dis. (2) Mol. Cell (1) Nanoscale (2) OncoImmunology (1) Proc. Natl. Acad. Sci. U. S. A. (1) Talanta (1) (1) D = C S T + E (2) L o F = 100 ∑ i j ⁡ e i j 2 ∑ i j ⁡ d i j 2
Do the effects of OFDM pilot tone spacing appear in the time domain or in the frequency domain?

Pilot tones are transmitted on subcarriers that are predefined and known to both the transmitter and the receiver. The pilots are repeated at a rate that depends on how fast the channel changes; I am asking about comb-type channel estimation, where the distance between two consecutive pilot subcarriers is called the pilot spacing here.

Suppose the number of subcarriers is $N = 64$. In the first case, take 8 subcarriers for the pilot tones, at locations 1, 9, 17, ..., so the pilot spacing is $j_1 = \frac{64}{8} = 8$. In the second case, take 16 subcarriers, at locations 1, 5, 9, 13, ..., so the pilot spacing is $j_2 = \frac{64}{16} = 4$. Clearly the pilots in the second case can track a time-varying channel better. But in an OFDM system the symbol data is converted into the time domain using the IFFT, and every subcarrier is spread over all the other subcarriers, including the pilot subcarriers. How does the pilot spacing affect the performance?

(asked by Fatima_Ali; tags: digital-communications, ofdm, channel-estimation)

Answer by Marcus Müller:

I don't know whether I'd call it "artifacts", but of course your pilots are part of your signal, and taking $\frac1{j_1}N$ or $\frac1{j_2}N$ of the available subcarriers for pilots reduces the number of subcarriers available for data symbols by exactly that amount.

Regarding the claim that "every subcarrier is spread over all the other subcarriers, including the pilot subcarriers": no. This is important: the subcarriers are orthogonal, hence the O in OFDM. They do not overlap into each other; they remain orthogonal and do not influence each other. It is just that in the time domain you cannot see that. Remember that OFDM is, at its core, just the IDFT/DFT: an invertible transform between $N$-element vectors, a base transform; in fact, an orthogonal base transform. So, assuming you are synchronized, subcarriers in OFDM do not spread into others. That is the whole point! Pilot symbols aren't "special" in any way: you usually can't tell from the time-domain signal whether and how many pilots there are.
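To make the orthogonality point concrete, here is a minimal NumPy sketch (not part of the original answer; the 64 subcarriers and spacing-8 comb pilots are simply the numbers from the question). An OFDM symbol with comb-type pilots is taken to the time domain with the IFFT and brought back with the FFT, and the pilot subcarriers are recovered exactly, untouched by the data subcarriers.

import numpy as np

N = 64                                     # number of subcarriers
pilot_idx = np.arange(0, N, 8)             # comb-type pilots, spacing 8
data_idx = np.setdiff1d(np.arange(N), pilot_idx)

rng = np.random.default_rng(0)
X = np.zeros(N, dtype=complex)
X[pilot_idx] = 1 + 1j                      # known pilot symbols
X[data_idx] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=data_idx.size)  # QPSK data

x = np.fft.ifft(X)                         # time-domain OFDM symbol (no cyclic prefix, ideal channel)
X_hat = np.fft.fft(x)                      # receiver DFT, perfect synchronization assumed

# The pilots come back exactly; the data subcarriers have not leaked into them.
print(np.allclose(X_hat[pilot_idx], X[pilot_idx]))   # True

With a dispersive channel each subcarrier is simply scaled by the channel's frequency response at that bin, which is why the receiver can estimate the channel at the pilot bins and interpolate between them; a smaller pilot spacing resolves more frequency-selective (and faster-changing) channels, at the cost of data throughput.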
XOR combination between the bits of a string

Given an integer n, we can construct $2^n$ strings of length n. Taking the element at a given position from each of these strings gives a list, and in total n such lists are possible. I now need to create the lists that are the XOR combinations of those n lists. If n is 3 and abc is the string, then I need to calculate a$\oplus$b, a$\oplus$c, c$\oplus$b and a$\oplus$b$\oplus$c. How do I do this for a general n? Here's the code so far:

n = 4;
s = Tuples[{0, 1}, n];
a = Table[s[[i, j]], {j, 1, n}, {i, 1, 2^n}]

a gives, for each of the four positions, the bits at that position across all the strings. Now I need to calculate all the XOR combinations possible between these four lists.

(asked by Vaisakh M)

Comments by kglr: does Mod[Plus@##, 2] & @@@ Subsets[a, {2, n}] give what you need? Or (the same thing) Mod[Plus@##, 2] & @@@ Subsets[Transpose[s], {2, n}]? Or, more concisely, Mod[Plus @@@ Subsets[Transpose[s], {2, n}], 2].

Answer by kglr (Dec 8, 2020):

xOrCombinations[n_] := BitXor @@@ Subsets[Transpose @ Tuples[{0, 1}, n], {2, n}]

xOrCombinations[4] // MatrixForm // TeXForm

$$\left( \begin{array}{cccccccccccccccc} 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ \end{array} \right)$$

xOrCombinations2[n_] := Mod[Plus @@@ Subsets[Transpose@Tuples[{0, 1}, n], {2, n}], 2]

xOrCombinations2[4] == xOrCombinations[4]
Ancestry inference using reference labeled clusters of haplotypes

Methodology article

Yong Wang, Shiya Song, Joshua G. Schraiber, Alisa Sedghifar, Jake K. Byrnes, David A. Turissini, Eurie L. Hong, Catherine A. Ball & Keith Noto

We present ARCHes, a fast and accurate haplotype-based approach for inferring an individual's ancestry composition. Our approach works by modeling haplotype diversity from a large, admixed cohort of hundreds of thousands, then annotating those models with population information from reference panels of known ancestry. The running time of ARCHes does not depend on the size of a reference panel because training and testing are separate processes, and the inferred population-annotated haplotype models can be written to disk and reused to label large test sets in parallel (in our experiments, it averages less than one minute to assign ancestry from 32 populations using 10 CPUs). We test ARCHes on public data from the 1000 Genomes Project and the Human Genome Diversity Project (HGDP) as well as simulated examples of known admixture. Our results demonstrate that ARCHes outperforms RFMix at correctly assigning both global and local ancestry at finer population scales regardless of the amount of population admixture.

Admixture has played an important role in shaping patterns of genetic variation among humans and other species. It is of interest at both population and individual levels and has motivated a large body of research into population demography [1, 2] and population stratification [3] in association studies. It has also fueled public interest in direct-to-consumer services that provide estimates of ancestry proportions. In such applications, a consumer typically submits a DNA sample through a saliva collection kit and receives an individual-level report of their ancestral make-up based on genotype data.

Over the past decade, many tools have been developed to infer individual-level ancestry. One set of methods infers only global ancestry proportions; some of these model the probability of the observed genotypes using ancestry proportions and population allele frequencies [4], while others use cluster analysis and principal component analysis [5]. Another set of methods infers the ancestral origin of genomic segments, which is then averaged over the entire genome. These methods use either SNPs (single nucleotide polymorphisms) or sequences of SNPs (i.e. haplotypes) as the observed variables, and estimate ancestry in each segment of the genome (called local ancestry). Compared to SNPs, haplotypes contain richer information and can be especially powerful in differentiating geographically close populations [6]. Among existing haplotype-based methods, both Chromopainter [6] and HAPMIX [7] use the Li and Stephens haplotype copying model [8], whereas RFMix [9] uses a random forest approach, training classifiers on haplotype features in a reference panel and using a linear-chain conditional random field to model the conditional distribution of local ancestry given observed haplotypes.

As the size of public and private genotype datasets grows (e.g., Ancestry has processed over 15 million human genomes), there is an increased need for methods that can efficiently and accurately perform ancestry inference on a large number of samples. Here we describe ARCHes (Ancestry inference using Reference labeled Clusters of Haplotypes), a method that leverages reference panel labeled haplotype models to estimate diploid ancestry locally throughout the genome.
ARCHes first uses a large cohort of unlabeled haplotypes to create BEAGLE haplotype-cluster models [10], which are efficient at phasing and measuring haplotype frequency, for each of a number of local "windows" across the genome. The haplotype clusters of the BEAGLE models are then annotated with the probability that haplotypes from various populations belong to those clusters. For a given test individual, ARCHes computes a probability distribution over the possible population assignments for the test individual's two haplotypes, and uses a genome-wide hidden Markov model to assign diploid ancestry.

Previous studies have shown that RFMix [9] outperforms ADMIXTURE [4] in both global and local ancestry estimation [11]. RFMix generally performs well at assigning ancestry at the continental level but, we will demonstrate, can struggle at regional-level assignment, where populations may not be very differentiated. ARCHes is capable of differentiating nearby populations and performing recent ancestry inference (e.g., 1–12 generations ago) at a much finer scale.

A summary of our approach

Our approach can be divided into two major phases, training and testing. The training phase consists of (1) building BEAGLE [10] haplotype models from a large cohort of phased data that do not have population labels, and (2) "annotating" those models with population-haplotype information from a separate population reference panel consisting of unphased examples each labeled with a population. We build a haplotype model for each of 1001 windows that collectively cover the entire autosome (each window is about 75 SNPs and 3.5 cM). These models are built from phased data, which can be phased with BEAGLE or other phasing software. In our experiments, we build them from a cohort of 50,000 individuals (100,000 haplotypes) phased with Eagle [14]. The haplotype models are directed acyclic graphs with nodes that represent clusters of similar haplotypes and probabilistic transitions between nodes. Next in the training phase, we record how likely the genotypes of an unphased reference panel are to belong to each of these haplotype clusters. We refer to this process as annotating the haplotype models; it gives us a probability, for each node in the haplotype models and each population in our reference panel, that a haplotype from that population belongs to the haplotype cluster the node represents.

Once the training phase is complete, we use only the annotated haplotype models, which are fixed throughout the testing phase. The testing phase involves computing the likelihood that a test genotype has a haplotype belonging to any haplotype cluster (node in the models) and, using the population annotations for the cluster, computing the likelihood that the two haplotypes of the test genotype are explained by any pair of populations. We then use a genome-wide hidden Markov model (HMM) to model the changes in population assignment across the genome that best explain the test genotype. The details of this approach are given in the Methods section below.

We emphasize that the training phase need only be carried out once, and that the testing phase can then be applied to efficiently classify an arbitrary number of test genotype examples. If we obtain new population reference panel examples, we can use them to supplement the model annotations, even introducing new populations, but we must retrain completely to change the unlabeled phased cohort or window definitions.
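To make the annotation and testing steps concrete, here is a runnable toy sketch in Python (invented numbers; two populations, one window, three haplotype clusters at one SNP; in the real system the cluster probabilities come from the haplotype HMM described in the Methods below):

import numpy as np

# Toy cluster-membership probabilities for reference individuals from two populations
# (rows: individuals, columns: the three haplotype clusters at one SNP of one window).
ref_A = np.array([[0.80, 0.10, 0.10],
                  [0.70, 0.20, 0.10],
                  [0.90, 0.05, 0.05]])
ref_B = np.array([[0.10, 0.60, 0.30],
                  [0.20, 0.50, 0.30],
                  [0.10, 0.70, 0.20]])

# "Annotation": average membership per population, as in Eq. (1) of the Methods.
P_u_given_A = ref_A.mean(axis=0)
P_u_given_B = ref_B.mean(axis=0)

# Testing: toy likelihoods of a test genotype given that its two haplotypes
# belong to clusters (u, v); in ARCHes these come from the haplotype HMM.
lik_t_given_uv = np.array([[0.02, 0.30, 0.08],
                           [0.30, 0.02, 0.08],
                           [0.08, 0.08, 0.04]])

def pair_likelihood(pop_u, pop_v):
    # P(u, v | p, q) = 1/2 (P(u|p) P(v|q) + P(u|q) P(v|p)); sum against the likelihoods.
    p_uv_given_pq = 0.5 * (np.outer(pop_u, pop_v) + np.outer(pop_v, pop_u))
    return float((lik_t_given_uv * p_uv_given_pq).sum())

for label, (p, q) in {"(A, A)": (P_u_given_A, P_u_given_A),
                      "(A, B)": (P_u_given_A, P_u_given_B),
                      "(B, B)": (P_u_given_B, P_u_given_B)}.items():
    print(label, round(pair_likelihood(p, q), 4))

With these toy numbers the population pair (A, B) scores highest, as it should when one of the test genotype's haplotypes resembles population A's haplotypes and the other resembles population B's.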
Accuracy for single-origin individuals

We built our reference panel using genotypes from proprietary data representing 32 population regions. We then applied ARCHes on individuals from 1000 Genomes [12] and HGDP [13], representing 15 regions. (Lists of populations and associated sample sizes for both training and testing data are in Additional file 1: Tables S1 and S2, and we describe all experimental methodology in detail, including the parameter settings for both ARCHes and RFMix, in the Methods section below.) ARCHes predicts on average 66.1% of the ancestry in this test set to be from the correct region (Fig. 1). The rest of the ancestry mainly came from nearby regions (Additional file 1: Fig. S2). ARCHes performs well at separating different countries within Africa, Europe, and Asia. In comparison, RFMix predicts on average 43.5% of the ancestry to be from the correct region, and the rest of the ancestry mainly came from neighboring regions, suggesting that RFMix is accurate for continental-level assignments but performs less well at finer scales (Table 1).

Fig. 1 Boxplot of the estimated ancestry proportions for single-origin individuals from each testing population, comparing ARCHes and RFMix

Table 1 The performance of ARCHes and RFMix on various test sets

Accuracy for simulated admixed individuals

In order to evaluate the global and local accuracy on admixed individuals, we need to know the correct ancestry throughout the genome, so we manufactured test examples from the 1000 Genomes and HGDP data. We simulated 100 individuals using forward simulation with a pedigree mimicking Latino population history in which founders admixed 12 generations ago with 45% Native American, 50% European and 5% African ancestry. This dataset tests ARCHes's power to differentiate continental-level admixture as well as its ability to differentiate the subregions that an individual's continental ancestry comes from.

To evaluate overall global performance on these test sets, we compute concordance as the size of the intersection between true and estimated proportions, which is the sum, for each population, of the smaller of the true global proportion and the estimated global proportion. We measured local accuracy as the proportion of genomic windows with correct diploid population assignments regardless of phase, with half credit given to a window assignment that has one population correct but the other incorrect. We find that ARCHes accurately recovers both global ancestry assignments and diploid local ancestry assignments, with average concordances of 72.3% and 47.8%, respectively (Additional file 1: Fig. S4). RFMix achieves 65.7% global ancestry concordance but failed to infer the local assignments correctly, with an average diploid local ancestry concordance of 18.5%. This is due to difficulties that RFMix has in differentiating subregions within Europe and between Maya and Peru. The continental-level global and local concordance is 89.1% and 64.1%, respectively, for ARCHes, and 73.1% and 34.2%, respectively, for RFMix.

Distinguishing sub-continental regions

Next, we simulated genotypes for individuals with ancestry from 16 pairs of neighboring regions to test each approach's ability to distinguish between them at global and local genomic scales. Specifically, we construct test examples that are 1/2, 1/4, 1/8, or 1/16 from one region of the pair and the rest from the other region. We measure precision and recall for each of the 11 unique regions in the set of 16 pairs (Fig. 2).
Precision is the amount of correctly identified ancestry divided by the amount estimated for that region, and recall is the amount of correctly identified ancestry divided by the total amount of ancestry from that region that is present in the test example. ARCHes outperforms RFMix in terms of both precision and recall in eight of the 11 regions, outperforms it in terms of precision in two more, and in terms of recall in one.

Fig. 2 Precision and recall for each population, calculated from estimated ancestry proportions of simulated admixed individuals with ancestry from a pair of neighboring populations

Overall, ARCHes achieves more than 50% global ancestry concordance and diploid local ancestry concordance (Additional file 1: Fig. S3). There is only a small difference between global ancestry concordance and diploid local ancestry concordance on this test set, indicating that ARCHes achieves its global ancestry accuracy by estimating local ancestry accurately. It is also encouraging that ARCHes is capable of differentiating populations not only on a continental level but also on sub-continental and even country levels.

Separate training and test phases to facilitate high-throughput ancestry estimation

The ARCHes software represents a change in design that explicitly separates two phases, first model creation and annotation and second ancestry estimation, in order to make ancestry estimation both efficient and distributable. The first phase, learning the haplotype models from a large unlabeled training set and then annotating them with the reference panel populations, need only be carried out once. In order to estimate ancestry on subsequent instances, the ARCHes software need only reload models and can be run on new examples at any time, distributed as necessary, and the running time depends only on the number of individuals to be processed and labeled, not the size of the reference panels. In contrast, the training and testing processes of RFMix are not separate and require significantly more time per individual. We compare ARCHes's runtime and memory usage with RFMix in Additional file 1: Table S3.

Ancestry inference in large, heterogeneous sample sets is becoming increasingly important for academics, clinicians, and consumers. We developed a new approach, ARCHes, that models ancestry using rich haplotype models coupled to genome-wide information sharing. Our experiments show that ARCHes performs decisively more accurately than a state-of-the-art approach, in terms of both global and local estimation, both within and among continental scales, and among varying levels of admixture. Moreover, because our approach separates the time-consuming training step from the fast testing step, it is well-suited to large-scale databases.

Our approach works because haplotypes contain rich information for distinguishing subpopulations, and ARCHes's haplotype model annotations allow it to quantitatively compare haplotypes to those of several reference panels without requiring that those reference panels be phased, contain haplotypes that are identical to those of an individual, or have similar size or diversity. Indeed, ARCHes can achieve high accuracy with reference panels containing fewer than 50 genotype examples (Additional file 1: Fig. S5). We also note that ARCHes can make use of admixed reference panel members.
A genotype example for which we know the diploid (or haploid) population in just a subset of genomic windows can be used in a reference panel to annotate only those windows (though we don't use this technique in our experiments here).

Our benchmark experiments show that ARCHes is able to capture admixture from a few to several generations removed by learning the genomic scale of admixture on an individual-by-individual basis: more recently admixed samples have relatively longer contiguous blocks of ancestry. This shows that ARCHes can be applied broadly without specific, a priori parameter tuning. This feature is important for analysis of large, heterogeneous databases where it may be difficult to know the specific history of all samples involved.

ARCHes provides a fast and accurate method for inferring unphased local ancestry and combining that into estimates of diploid global ancestry. There are nonetheless several opportunities for future research. First of all, the confidence intervals provided by ARCHes are underestimated; it is possible that they can be improved by using a recalibration procedure on simulated data. Second, despite the fact that using unphased local ancestry in ARCHes helps it to overcome phasing errors, it may be desirable to provide phased local ancestry in some circumstances. Because of the modular nature of the ancestry hidden Markov model, it may be possible to extend this framework to provide phased local ancestry estimates.

One of the keys to estimating population-level admixture is to measure the similarity between the haplotypes in an individual of unknown origin and those of a reference panel. ARCHes leverages the data from hundreds of thousands of haplotype instances to create haplotype models, and uses a novel approach for employing those models to carry out the comparison. ARCHes can then efficiently estimate population assignments across the genome for large test sets. Our experiments show that across varying amounts of recent admixture, ARCHes outperforms RFMix, a state-of-the-art method in population genetics for local ancestry inference, both in terms of estimating genome-wide population admixture amounts and at labeling specific genomic regions.

Overall ARCHes method

Our approach begins with dividing the genome into a large number of small windows (e.g., 3–4 centimorgans each), such that, in a recently admixed individual, the maternal and paternal haplotypes in a given window are each likely to come from a single population. For each window, we construct a BEAGLE haplotype-cluster model [10] from a large, unlabeled training set of haplotypes. A BEAGLE haplotype-cluster model is a directed acyclic graph with haplotypes represented as paths traversing the graph. Each node of the graph represents a cluster of haplotypes. A BEAGLE model is often interpreted as a Markov model where the states are the nodes (Additional file 1: Fig. S1), and thus as an "arbitrary order Markov model" of SNPs along a haplotype. Using a reference panel of genotypes from individuals whose ancestry is known, in each window we then annotate each state in the haplotype models with the probability that genotype sequences from a given population belong to the haplotype cluster represented by the state (Fig. 3).

Fig. 3 Illustration of annotating a haplotype-cluster model representing one genomic window with D SNPs (in our experiments D is about 75–80, about 3–4 cM).
Each box illustrates the expected proportion of haplotypes, in all the genotypes of different populations, that include a certain model state at a certain level

Annotating haplotype cluster models

We follow Browning and Browning [10] in building haplotype cluster models. Briefly, we divide the genome into W partially overlapping windows with approximately the same number of SNPs. Within each window, we build a haplotype cluster model from a large, unlabeled set of phased training haplotypes. For simplicity, we restrict to biallelic variants and code them as 0 and 1. Building this haplotype cluster model from a large, unlabeled set of individuals provides a "background" of haplotype diversity against which we can measure the informativeness of different haplotypes.

With a haplotype cluster model built for each window, we can then annotate populations using the haplotype cluster model. Recall that each path through a BEAGLE model corresponds to a realization of a haplotype, and each node at a given SNP represents a cluster of haplotypes that are similar near that SNP. For the genotypes of a reference individual in window w, \(x_{w}\), we compute the probability that the individual's two haplotypes pass through two specific nodes in the graph, u and v, at SNP d,

$${P}_{d}\left(u,v|{x}_{w}\right)=\frac{{P}_{d}\left({x}_{w},u,v\right)}{P\left({x}_{w}\right)}$$

where we compute \({P}_{d}\left(u,v|{x}_{w}\right)\) and \(P\left({x}_{w}\right)\) using a modification of the forward–backward algorithm for hidden Markov models, treating the node as a hidden state (see Additional file 1: Appendix S1 for pseudocode). In the following, we will refer to the HMM used to analyze the BEAGLE models as the haplotype HMM, and to its properties as haplotype emission probabilities and haplotype probabilities. This contrasts with the ancestry HMM we use to smooth ancestry estimates across the genome, which is described in the subsequent section.
We then marginalize over one of the haplotypes of each diploid to create a haplotype posterior probability that the genotypes \({x}_{w}\) in window w pass through node u at SNP d,

$${P}_{d}\left(u|{x}_{w}\right)={\sum }_{v}{P}_{d}\left(u,v|{x}_{w}\right)$$

Finally, we annotate a node u by its average haplotype probability in a set of individuals belonging to a reference population p, \(R_{p}=\{x_{i,p,w},\ i\in 1,2,\dots ,n_{p}\}\), where \({n}_{p}\) is the total number of reference samples in population p. Then, we compute

$${P}_{d}\left(u|p\right)=\frac{1}{{n}_{p}}{\sum }_{i=1}^{{n}_{p}}{P}_{d}\left(u|x_{i,p,w}\right) \qquad (1)$$

This equation gives us the probability that an individual drawn from population p will pass through node u at SNP d of the haplotype cluster model for window w.

During the annotation process, we may choose to downsample the genotypes of the reference panel by setting some genotypes at random to 'missing' and annotating states of the model by summing over the possible genotypes at those locations. Doing this has the effect of annotating states that represent haplotypes that are similar to those of a reference genotype, but not exactly the same, and is intended to boost performance in reference panels that have few representative examples. We may use the same reference panel individual several times in the annotation process, with a different downsampled genotype each time.

Ancestry emission probabilities for test individuals in windows

With Eq. (1) in hand, we can compute the probability that a test individual's genotypes in a given window w descend from a specific pair of populations. Letting t be the unphased genotype of our test individual, we first compute the probability of t given that the two haplotypes in window w belong to clusters u and v of the haplotype cluster model at SNP d,

$${P}_{d}\left({\mathbf{t}}_{w}|u,v\right)=\frac{{P}_{d}\left({\mathbf{t}}_{w},u,v\right)}{{P}_{d}\left(u,v\right)},$$

where \({P}_{d}\left({\mathbf{t}}_{w},u,v\right)\) is computed using the haplotype forward–backward algorithm and \({P}_{d}\left(u,v\right)\) is obtained by multiplying the transition matrices of the haplotype cluster model up to SNP d (equivalent to running the haplotype forward algorithm up to SNP d with all haplotype emission probabilities set equal to 1).

We then want to know the probability that the individual's two haplotypes come from populations p and q using the information around SNP d. We compute this quantity by first computing the probability that the two haplotypes pass through nodes u and v at SNP d of window w given underlying populations p and q, averaging over the equally likely combinations of whether node u corresponds to population p and node v corresponds to population q or vice versa, \({P}_{d}\left(u,v|p,q\right)=\frac{1}{2}\left({P}_{d}\left(u|p\right){P}_{d}\left(v|q\right)+{P}_{d}\left(u|q\right){P}_{d}\left(v|p\right)\right)\). Note that this result is equivalent to assuming that the two haplotype clusters that make up a diploid sample are independent, and that the two populations that make up those haplotypes are also independent. Now, we use the law of total probability to average over all haplotype clusters at SNP d, and compute the probability that the individual's haplotype clusters at that point arise from populations p and q, \({P}_{d}\left({\mathbf{t}}_{w}|p,q\right)={\sum }_{u,v}{P}_{d}\left({\mathbf{t}}_{w}|u,v\right){P}_{d}\left(u,v|p,q\right)\).
This probability weighs similarity to haplotypes in populations p and q more strongly for SNPs closest to SNP d in window w. Because we have no a priori knowledge of which part of a window is most informative about population membership, we finally compute our ancestry emission probability for a window by averaging the population probability over every SNP in the window,

$$P\left({\mathbf{t}}_{w}|p,q\right)=\frac{1}{D}{\sum }_{d}{P}_{d}\left({\mathbf{t}}_{w}|p,q\right) \qquad (2)$$

where D is the total number of SNPs in window w. This process can then be repeated for every window in the genome to obtain the probability of the test individual's genotype in each window, given that the two haplotypes arose from any pair of populations p and q.

Smoothing ancestry estimates using a genome-wide ancestry hidden Markov model

In principle, the ancestry emission probabilities computed in the previous section could be used to compute maximum likelihood estimates of diploid local ancestry in each window, one at a time. However, doing so would result in highly noisy ancestry estimates. Instead, we share information across the genome using an additional layer of smoothing via a genome-wide hidden Markov model (Fig. 4). Moreover, because ancestry segments from recent admixture are expected to be longer than a single window, this model helps reduce false ancestry transitions.

Fig. 4 Illustration of the genome-wide HMM where each window has a series of emitting states, each corresponding to a population assignment (p, q) with 1 ≤ p ≤ q ≤ K

If we wish to assign ancestry to \(K\) populations, the hidden states of our hidden Markov model are the \(\binom{K}{2}+K\) possible unphased ancestry pairs, \(\left(p,q\right)\), with ancestry emission probabilities in window w given by Eq. (2). Because we model unphased diploid ancestry, we define a population pair as unordered, i.e. \(\left(p,q\right)\) is the same ancestry assignment as \(\left(q,p\right)\). Our ancestry hidden Markov model assumes that between windows ancestry can change for one of the two haplotypes with probability \(\tau\). The assumption that ancestry switches only for one of the two haplotypes within an individual is both biologically realistic (assuming individuals are admixed relatively recently) and greatly reduces the complexity of the hidden Markov model. Thus, a change can occur from \(\left(p,q\right)\) to any pair \(\left(p^{\prime},q^{\prime}\right)\) such that exactly one of \(p^{\prime}\) or \(q^{\prime}\) differs from \(p\) or \(q\). Each new ancestry pair is drawn with probability proportional to the stationary probability of that ancestry pair, \(\pi_{p,q}\). In full, the transition probabilities are

$$P\left(p^{\prime},q^{\prime} \mid p,q\right) = \begin{cases} 1-\tau & \text{if } p^{\prime}=p,\ q^{\prime}=q \\ \tau \dfrac{\pi_{p^{\prime},q^{\prime}}}{Z_{p,q}} & \text{if } p^{\prime}\ne p,\ q^{\prime}=q \ \text{ or } \ p^{\prime}=p,\ q^{\prime}\ne q \\ 0 & \text{otherwise} \end{cases}$$

where the normalizing constant \({Z}_{p,q}\) is given by summing over all accessible unphased haplotype pairs. Between chromosomes, both ancestry pairs are allowed to change, and the ancestry at the start of each chromosome is drawn independently from that individual's global distribution of ancestry pairs, \(\pi_{p,q}\). For a more formal description of how changes between chromosomes are handled, see Additional file 1: Appendix S1.
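As a concrete illustration of these transition probabilities (a small Python sketch with toy inputs, not the published implementation), the transition matrix over unordered population pairs can be assembled as follows: each row keeps probability 1 − τ on the current pair and spreads τ over the pairs reachable by changing exactly one of the two ancestries, in proportion to their stationary probabilities.

import numpy as np
from collections import Counter

def ancestry_transition_matrix(pi_pair, tau):
    # pi_pair: dict {(p, q): stationary probability} over unordered pairs with p <= q
    # tau: per-window probability that ONE of the two haplotype ancestries switches
    pairs = sorted(pi_pair)
    index = {pq: i for i, pq in enumerate(pairs)}
    T = np.zeros((len(pairs), len(pairs)))
    for (p, q), i in index.items():
        # pairs reachable by changing exactly one of the two ancestries:
        # the multisets {p, q} and {p', q'} share exactly one element
        reachable = [pq for pq in pairs
                     if sum((Counter((p, q)) & Counter(pq)).values()) == 1]
        Z = sum(pi_pair[pq] for pq in reachable)
        T[i, i] = 1.0 - tau
        for pq in reachable:
            T[i, index[pq]] = tau * pi_pair[pq] / Z
    return T

pops = ["A", "B", "C"]
pairs = [(p, q) for i, p in enumerate(pops) for q in pops[i:]]   # 6 unordered pairs
pi = {pq: 1.0 / len(pairs) for pq in pairs}                      # uniform stationary distribution
T = ancestry_transition_matrix(pi, tau=0.01)
print(T.sum(axis=1))   # every row sums to 1

Note that, as in the text, pairs differing in both ancestries (for example (A, A) to (B, C)) get probability zero, so a double switch can only happen across two or more window boundaries.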
We initialize the \(\pi_{p,q}\) to a uniform distribution and \(\tau\) to some low value, and use a modified Baum-Welch algorithm to update \(\pi_{p,q}\) and \(\tau\) (see Additional file 1: Appendix S1). Empirically, we observed a tendency to overfit by estimating a large \(\tau\) parameter, resulting in inference of a large number of different ancestries; thus we run a fixed number of update steps, rather than stopping at convergence.

Estimating ancestry proportions in individuals

In principle, the value \(\pi_{p}={\sum }_{q}\pi_{p,q}\) could be used as an estimate of the admixture proportion from population \(p\) in an individual. However, we instead opt to use a path-based approach that also allows us to obtain credible intervals of the ancestry proportions conditioned on the inferred parameters. Specifically, we provide a point estimate of global ancestry proportions by computing the maximum probability path through the HMM using the Viterbi algorithm, and computing the proportion of windows (weighted by their length) that are assigned to population \(p\). We then provide a credible interval by sampling paths from the posterior distribution on paths; for each sampled path we can compute the ancestry proportion in the same way as from the Viterbi path.

Below we describe experiments we did for benchmarking ARCHes and RFMix [9].

Reference panel and testing data

We build our reference panel using genotypes from proprietary candidates who explicitly provided prior consent to participate in this research project and have all family lineages tracing back to the same geographic region. All the candidates were genotyped on Ancestry's SNP array and were analyzed through a quality control pipeline to remove samples with low genotype call rates, samples genetically related to each other, and samples who appear as outliers from their purported population of origin based on principal component analysis. The reference panel contains 11,051 samples, representing ancestry from 32 global regions (Additional file 1: Table S1). We then use 1705 individuals from 1000 Genomes [12] and the HGDP Project [13], from 15 populations, as testing data. We use SNP array data of individuals from the 1000 Genomes [12] and HGDP [13] projects and limit them to around 300,000 SNPs that overlap with Ancestry's SNP array. Lists of populations and associated sample counts included in the reference panel and testing data are specified in Additional file 1: Tables S1 and S2, respectively. We align populations that come from different data sources, in some cases combining populations together. For example, we combined the ancestries that are assigned to 'England, Wales, and Northwestern Europe' and 'Ireland & Scotland' to represent ancestry for 'Britain'. We combined the ancestries that are assigned to 'Benin & Togo' and 'Nigeria' to represent ancestry for 'Yoruba'.

We simulate 100 individuals with an admixture history similar to modern Latinos, whose founders admixed 12 generations ago with 45% Native American, 50% European and 5% African ancestry. We constructed 100 12-generation pedigrees and randomly selected founders from the reference panel, with the ratio of 45% Native American (from the Maya and Peru regions), 50% European (from the France, Britain, Italy, Spain and Finland regions), and 5% African ancestry (from the Yoruba region). We then simulate the DNA recombination process and obtain the genotypes of the descendant in each pedigree, which are admixed at roughly 45% Native American, 50% European and 5% African.
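For intuition about what this kind of recent admixture looks like along the genome, here is a much-simplified, runnable sketch (not the pedigree-based forward simulation used here) that draws diploid ancestry tracts under the standard Markov approximation in which ancestry switch points occur at a rate of one per Morgan per generation since admixture (cf. the local-ancestry models discussed in [2]):

import numpy as np

def simulate_haploid_tracts(length_morgans, generations, proportions, rng):
    # Ancestry tracts along one haplotype: switch points arrive at rate
    # `generations` per Morgan, and each tract's ancestry is drawn
    # independently from the admixture proportions.
    pops = list(proportions)
    probs = np.array([proportions[p] for p in pops])
    tracts, pos = [], 0.0
    while pos < length_morgans:
        end = min(pos + rng.exponential(1.0 / generations), length_morgans)
        tracts.append((pos, end, rng.choice(pops, p=probs)))
        pos = end
    return tracts

rng = np.random.default_rng(1)
props = {"NativeAmerican": 0.45, "European": 0.50, "African": 0.05}
genome_morgans = 35.0                                # roughly the autosomal genetic map length
maternal = simulate_haploid_tracts(genome_morgans, 12, props, rng)
paternal = simulate_haploid_tracts(genome_morgans, 12, props, rng)

# Realized genome-wide proportions for this one simulated individual:
totals = {p: 0.0 for p in props}
for start, end, pop in maternal + paternal:
    totals[pop] += (end - start) / (2 * genome_morgans)
print(totals)

At 12 generations the expected tract length under this approximation is about 1/12 Morgan (roughly 8 cM), which is why the 3–4 cM analysis windows used by ARCHes are usually shorter than a single ancestry tract, the assumption the window size is chosen to satisfy.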
We simulate genomes of admixed individuals with ancestors from a pair of neighboring populations by simulating genotypes where 1000 Genomes and HGDP test examples serve as the two parents, four grandparents, eight great-grandparents, or 16 great-great-grandparents of a pedigree and the admixed example evaluated is the lone descendant of that set. The examples in this test set are, on average, 50–50% admixed, 25–75% admixed, 12.5–87.5% admixed, or 6.25–93.75% admixed. We simulate 20 individuals for each of the 16 different pairings and 4 different levels of admixture, with half of them representing a minority admixture from one region, and half of them representing a minority admixture from the other region. Since RFMix requires phased haplotypes for both query and reference individuals, we use Eagle [14] v2 with the HRC [15] reference panel to get phased haplotypes of the simulated individuals as well as of the individuals in the reference panel. However, ARCHes requires only the unphased, diploid genomic sequences for both query and reference individuals.

RFMix parameters

We first used default parameters in RFMIX v2.03-r0 (https://github.com/slowkoni/rfmix). We then performed a parameter sweep using different numbers of generations since admixture (the -G parameter), with values of 2, 4, 6 and 8, coupled with different window sizes (setting both the conditional random field window size and the random forest window size) with values of 0.2 cM, 0.5 cM, 100 SNPs (roughly 1 cM) and 300 SNPs (roughly 3 cM), on chromosome 1 of the simulated pair-admixed individuals. We then selected the parameters with the best performance, namely 4 generations since admixture and a window size of 0.2 cM, and ran RFMix on the whole genome of the simulated pair-admixed individuals. For simulated Latino individuals, we used 12 generations since admixture and a window size of 0.2 cM. For single-origin individuals, we used 2 generations since admixture and a window size of 0.2 cM. None of the RFMix runs used the E-M procedure or phase error correction. Note that for both RFMix and ARCHes, we use the HapMap [16] genetic recombination map for GRCh37 to estimate recombination distance.

ARCHes parameters

We divide the genome into 3882 windows of 80 SNPs each, overlapping by 5 SNPs (with some adjustments made near chromosome boundaries). We build a haplotype model for each of these windows from a separate cohort of 50,000 haplotypes selected from the Ancestry database that are not already in the population reference panel. We phase these genotypes with Eagle [14], although we do not find that the particular phasing method, or even the diversity of this cohort, has a measurable impact on the accuracy of our approach. We tie small groups of 3–4 windows together by disallowing population assignment transitions within those groups, which allows us to set the granularity with which we assign local population assignments (there are 1001 such window groups) and has the benefit of increased computational efficiency.

ARCHes's haplotype model annotation process is robust to missing data, which is handled by marginalizing over all possible genotypes. In fact, the annotations may benefit from intentionally downsampling reference panel genotypes so that variations in haplotypes are considered as well, and the amount of downsampling and the number of downsampled genotypes used for annotation are tunable parameters of the annotation process.
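A small sketch of what this downsampling amounts to (a hypothetical encoding only: genotypes coded 0/1/2 per SNP, with -1 marking a masked call; the specific settings used in the experiments follow below):

import numpy as np

def downsampled_copies(genotypes, n_copies, missing_rate, rng):
    # Repeated copies of one reference genotype vector with a random fraction of
    # calls masked; masked sites are marginalized over during annotation.
    copies = []
    for _ in range(n_copies):
        mask = rng.random(genotypes.size) < missing_rate
        copies.append(np.where(mask, -1, genotypes))
    return copies

rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=80)              # one 80-SNP window, diploid genotypes coded 0/1/2
for c in downsampled_copies(g, n_copies=3, missing_rate=0.2, rng=rng):
    print(int(np.count_nonzero(c == -1)), "of", g.size, "sites masked")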
In our experiments, we sample each reference panel genotype sequence 100 times, each time setting 20% of genotypes to missing and annotating the 3882 haplotype models with them. This training process takes approximately 15 to build each haplotype model and 15 min to annotate it, although that process is parallelizable and need not be carried out again, regardless of the size of the test set. We set the initial \(\tau\) parameter to 0.01 and learned this parameter using 10 iterations of the E-M approach described above. ARCHes assigns diploid local ancestry to 1001 windows of the genome, and the global ancestry estimates are summarized from these 1001 windows.

All data generated for this study are available in Additional file 2. Data used in this study from the 1000 Genomes Project are available at https://www.internationalgenome.org. Data used in this study from the Human Genome Diversity Project (HGDP) are available at https://www.hagsc.org/hgdp/. Individual genotype data for human subjects participating in AncestryDNA's Human Diversity Project are not available, to protect their privacy and anonymity.

Abbreviations: ARCHes: Ancestry inference using Reference labeled Clusters of Haplotypes; cM: Centimorgan(s); HGDP: Human Genome Diversity Project; HMM: Hidden Markov model; HRC: Haplotype Reference Consortium; E–M: Expectation–maximization; SNP: Single nucleotide polymorphism

1. Loh P-R, Lipson M, Patterson N, Moorjani P, Pickrell JK, Reich D, Berger B. Inferring admixture histories of human populations using linkage disequilibrium. Genetics. 2013;193:1233–54.
2. Gravel S. Population genetics models of local ancestry. Genetics. 2012;191:607–19.
3. Marchini J, Cardon LR, Phillips MS, Donnelly P. The effects of human population structure on large genetic association studies. Nat Genet. 2004;36:512–7.
4. Alexander DH, Novembre J, Lange K. Fast model-based estimation of ancestry in unrelated individuals. Genome Res. 2009;19:1655–64.
5. Price AL, Patterson NJ, Plenge RM, Weinblatt ME, Shadick NA, Reich D. Principal components analysis corrects for stratification in genome-wide association studies. Nat Genet. 2006;38:904–9.
6. Lawson DJ, Hellenthal G, Myers S, Falush D. Inference of population structure using dense haplotype data. PLoS Genet. 2012;8:e1002453.
7. Price AL, Tandon A, Patterson N, Barnes KC, Rafaels N, Ruczinski I, Beaty TH, Mathias R, Reich D, Myers S. Sensitive detection of chromosomal segments of distinct ancestry in admixed populations. PLoS Genet. 2009;5:e1000519.
8. Li N, Stephens M. Modeling linkage disequilibrium and identifying recombination hotspots using single-nucleotide polymorphism data. Genetics. 2003;165:2213–33.
9. Maples BK, Gravel S, Kenny EE, Bustamante CD. RFMix: a discriminative modeling approach for rapid and robust local-ancestry inference. Am J Hum Genet. 2013;93:278–88.
10. Browning SR, Browning BL. Rapid and accurate haplotype phasing and missing-data inference for whole-genome association studies by use of localized haplotype clustering. Am J Hum Genet. 2007;81:1084–97.
11. Uren C, Hoal EG, Möller M. Putting RFMix and ADMIXTURE to the test in a complex admixed population. BMC Genetics. 2020;21(40). https://doi.org/10.1186/s12863-020-00845-3.
12. 1000 Genomes Project Consortium, Abecasis GR, Altshuler D, Auton A, Brooks LD, Durbin RM, Gibbs RA, Hurles ME, McVean GA. A map of human genome variation from population-scale sequencing. Nature. 2010;467:1061–73.
13. Li JZ, Absher DM, Tang H, Southwick AM, Casto AM, Ramachandran S, Cann HM, Barsh GS, Feldman M, Cavalli-Sforza LL, et al. Worldwide human relationships inferred from genome-wide patterns of variation. Science. 2008;319:1100–4.
14. Loh P-R, Danecek P, Palamara PF, Fuchsberger C, Reshef YA, Finucane HK, Schoenherr S, Forer L, McCarthy S, Abecasis GR, et al. Reference-based phasing using the Haplotype Reference Consortium panel. Nat Genet. 2016;48:1443–8.
15. McCarthy S, Das S, Kretzschmar W, Delaneau O, Wood AR, Teumer A, Kang HM, Fuchsberger C, Danecek P, Sharp K, et al. A reference panel of 64,976 haplotypes for genotype imputation. Nat Genet. 2016;48:1279–83.
16. The International HapMap Consortium. A second generation human haplotype map of over 3.1 million SNPs. Nature. 2007;449:851–61. https://doi.org/10.1038/nature06258.

We appreciate Carlos Bustamante and Mark Koni Wright for providing RFMix software and guidance on using it. No additional funding was provided.

Affiliation: AncestryDNA, San Francisco, CA, 94107, USA (Yong Wang, Shiya Song, Joshua G. Schraiber, Alisa Sedghifar, Jake K. Byrnes, David A. Turissini, Eurie L. Hong, Catherine A. Ball and Keith Noto).

YW, SS, JS, and KN conceptualized and formulated the overall approach. YW, SS, JS, AS, and DT participated in formal analysis and the design of methodology. SS and KN wrote software. YW, JB, EH, CB, and KN were responsible for project administration and supervision. YW, SS, JS, AS, and KN compiled results and wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Keith Noto.

All data for this research project were from subjects who provided prior informed written consent to participate in AncestryDNA's Human Diversity Project, as reviewed and approved by our external institutional review board, Advarra (formerly Quorum). All data were de-identified prior to use. The authors declare competing financial interests: authors affiliated with AncestryDNA may have equity in Ancestry. The work described in this manuscript is covered by one or more patents, including the US patent entitled Local Genetic Ethnicity Determination System, US10558930B2.

Additional file 1: Supplementary Materials. Contains Figures S1–S5, Tables S1 and S2, and Appendix S1, which contains implementation details, formulas, and pseudocode.

Additional file 2: Data Tables. A spreadsheet containing 11 tables, showing test set labels, population estimates for test set examples, precision and recall by population, single-origin confusion matrices (for ARCHes and RFMix), global and local average concordance for the paired-population test sets, their corresponding raw concordance scores, global concordance for the simulated Latino test sets, local concordance for the simulated Latino test sets, Fst statistics for the paired populations in test sets, and Fst statistics between reference panels and test sets.

Wang, Y., Song, S., Schraiber, J.G. et al. Ancestry inference using reference labeled clusters of haplotypes. BMC Bioinformatics 22, 459 (2021). https://doi.org/10.1186/s12859-021-04350-x

Keywords: Ancestry inference, Haplotype modeling, Local ancestry, RFMix
A two-wheeled machine with a handling mechanism in two different directions

Khaled M. Goher

Despite the fact that there are various configurations of self-balanced two-wheeled machines (TWMs), the workspace of such systems is restricted by their current configurations and designs. In this work, the dynamic analysis of a novel configuration of TWMs is introduced that enables handling a payload attached to the intermediate body (IB) in two mutually perpendicular directions. This configuration will enlarge the workspace of the vehicle and increase its flexibility in material handling, object assembly and similar industrial and service robot applications. The proposed configuration gains the advantages of the design of serial arms while occupying a minimum space, which is a unique feature of TWMs. The proposed machine has five degrees of freedom (DOFs) that can be useful for industrial applications such as pick and place, material handling and packaging. This machine will provide an advantage over other TWMs in terms of a wider workspace and increased flexibility in service and industrial applications. Furthermore, the proposed design will add the additional challenge of controlling the system to compensate for the change of the location of the COM due to performing handling tasks in multiple directions.

Two-wheeled robots are based on the idea of the inverted pendulum (IP) system. It is a well-identified benchmark problem that provides many challenges to control design. The IP system is nonlinear, unstable, nonminimum phase and under-actuated. The inverted pendulum problem is one of the most well-known conventional problems in control theory and has been investigated extensively in the literature. Motion control and stability analysis of a two-wheeled vehicle (TWV) are presented by Ren et al. [29], where a self-tuning PID control strategy, based on a reduced model, is proposed for implementing a motion control system that stabilizes the TWV and follows the desired motion commands. Chan et al. [5] explored the common methods that have been investigated and the controllers that have been used for two-wheeled robots on different types of terrain. Shojaei et al. [30] proposed an adaptive robust tracking controller to cope with both parametric and nonparametric uncertainties of the system arising from the integrated kinematic and dynamic trajectory tracking control problem of wheeled mobile robots. Deng et al. [11] designed a controller based on a Lyapunov function candidate and considered virtual force information, including a detouring force. Guo et al. [15] designed a sliding mode controller for a wheeled IP. Li and Kang [23] used the technique of dynamic coupling switching control for a wheeled manipulator. Actuator faults and abnormalities in the operation of a two-wheeled IP system have been investigated by Tsa et al. [33]. Parametric and functional uncertainties have also been considered in the literature; Li et al. [20–22] considered dynamic balance and motion control based on least squares support vector machines (LS-SVM) for wheeled inverted pendulums (WIP) subjected to dynamics uncertainties. Control algorithms using Lyapunov synthesis, combining the advantages of LS-SVM with an online parameter estimation strategy, have been proposed. Based on this approach, the outputs of the system proved able to track the given bounded reference signals within a small neighbourhood of zero as well as guarantee semi-global uniform boundedness of all the closed-loop signals.
An intelligent backstepping tracking control system is proposed by Chiu et al. [6, 7] for WIPs with unknown system dynamics and external disturbance. An adaptive output recurrent cerebellar model articulation controller (AORCMAC) is used to copy an ideal backstepping control (IBC), and a compensated controller is designed to compensate for the difference between the IBC law and the AORCMAC. In further work by Chiu et al. [6, 7], a novel model-free intelligent controller for WIPs has been developed: an AORCMAC for angle and position control of the WIP without model information. Lee et al. [19] reviewed the historical evolution of IP systems across several designs. Ghaffari et al. [12] used Kane's and Lagrangian dynamic formulation methods to derive the dynamic model of a self-balancing two-wheeled robot. Ping et al. [26, 27] reviewed various methods of deriving the dynamic model and control techniques used for two-wheeled robots. Cui et al. [9] designed a state feedback control for a wheeled IP, with a backstepping-based adaptive control then designed for output tracking of the system. Brisilla and Sankaranarayanan [4] proposed a nonlinear control strategy for a mobile IP without internal switching between controllers. Chinnadurai et al. [8] used an internet-on-a-chip controller to design a two-wheel robot using the principle of curvature technique. Dai et al. [10] designed a method based on friction compensation for a two-wheeled IP. Raffo et al. [28] designed an H∞ nonlinear controller to stabilize and control a two-wheeled machine in the presence of exogenous disturbances. Sun and Li [32] used adaptive neural control and extreme learning machines (ELMs), developed and implemented on a two-wheeled human transportation system. A novel control scheme is developed based on the single-hidden-layer feedforward network approximation capability of ELMs to capture vehicle dynamics. Yue et al. [34] investigated an error-data-based trajectory planner and indirect adaptive fuzzy control applied to a two-wheeled IP, using indirect adaptive fuzzy and sliding mode control approaches, Lyapunov theory and LaSalle's invariance theorem. Yue et al. [35] designed a composite control approach for balancing and trajectory tracking of a two-wheeled IP vehicle using adaptive sliding mode, fuzzy-based control and an adaptive mechanism.

Principle of two-wheeled IP with an extended rod

The principle of a two-wheeled IP with an extended intermediate body (IB) was first introduced by Goher and Tokhi [13, 14], where a new configuration of wheeled robotic machines (WRM) is developed and equipped with a linear actuator, as shown in Fig. 1, to activate a payload and to lift it to different levels. Although the developed configuration added an additional DOF through the linear actuator attached to the IB, the workspace was extended only in one single vertical direction by extending the IB. In further work to increase the workspace and the TWM flexibility, Goher [2, 14] developed a two-wheeled IP where an additional link is added, shown in Fig. 2, to end with a five-DOF double IP system with an extended rod.

Fig. 1 Single IP with an extended rod

Fig. 2 Double IP with an extended rod

The application of the double IP with an extended rod configuration has been utilized to simulate an important scenario of wheelchair transfer to stand on two wheels, shown in Fig. 3a, b, as presented by Ahmad et al. [1].
In this research, the authors also used a linear actuator attached along link 2 to further lift the chair and the person to a specified height. Schematic diagram of wheelchair transfer. a Wheelchair (before lifting). b Wheelchair (during lifting) A two-wheeled robot, TransBOT, was developed by Lee and Jung [18]. TransBOT has two modes: a driving mode and a balancing mode that mimics the IP concept by lifting up the front casters. The developed prototype is similar to the PUMA human transporter, and its working principle mainly relies on stabilizing the payload in one single direction. In the work done by Huang et al. [16], a vehicle called UW-Car, with a schematic diagram shown in Fig. 4, is developed where a movable seat is driven by a linear motor along a straight horizontal direction. A control algorithm is developed and implemented both in the MATLAB simulation environment and on a real experimental prototype. Although this work considered an adjustable position of the car seat in the horizontal direction, the motion of the seat in a vertical direction has not been considered. Furthermore, Bae and Jung [3] developed a service robot, KOBOKER, shown in Fig. 5, that is able to self-balance using an up-and-down sliding mechanism that activates two arms in order to perform tasks on the floor. The design of KOBOKER allows handling of objects in two different directions and is equipped with two serial arms to handle different tasks as specified. UW-Car with an adjustable seat in horizontal motion Korean service robot (KOBOKER) Despite the above-mentioned contributions in terms of developing new configurations of TWMs, the dynamic analysis of TWMs with a mass balancer in two different directions has not received much attention in the literature. A dynamic model of this new configuration will have the potential to form the basis for new applications and exploration of many features of the system, as well as the possibility to investigate the impact of various characteristics. In this current work, a novel configuration of TWMs is introduced that enables handling a payload attached to the IB in two mutually perpendicular directions. This will allow extension of the workspace of the vehicle and increase its flexibility in various applications including material handling, object assembly and similar industrial and service robot applications. The proposed configuration, with a similar concept to KOBOKER [3], gains both the advantages of serial robots and those of TWMs, which occupy a minimum space due to working on two wheels only. The novel configuration of the vehicle with five DOFs provides the vehicle with an ability to handle objects in two mutually perpendicular directions. This is achieved by either a dual-axis linear actuator or two different actuators that extend the intermediate body (IB) of the vehicle in two different directions. Five decoupled feedback control loops are used throughout this work. The developed control strategy, based on loop decoupling, ensures separation of the dynamics in the high-frequency range (tilt angle) from the dynamics in the low-frequency range (motion of the intermediate body). Various simulation exercises have been considered to test the robustness of the developed control scheme.
Even with complicated scenarios of changing the COM simultaneously in two different directions, the control strategy was able to cope well with such variations. Internal system dynamics have been considered to test the robustness of the control approach. Huang et al. [16], on the other hand, used LQR and sliding mode controllers to control the velocity and braking of a two-wheeled vehicle. Although the system of Bae and Jung [3] was developed and built, no control design was considered in their work. The rest of the paper is organized as follows: "Introduction" summarizes relevant contributions in TWMs and the associated control strategies. The "System description" section describes the system with the proposed configuration, explains the system DOFs and gives a detailed description of a picking and placing scenario while handling an object in a confined space. The mathematical model of the system is derived in the "Mathematical modelling" section, and a linearized state space model is derived in the "State space modelling" section. A PID control scheme is designed in the "Numerical simulation" section and implemented on the system model based on a set of numerical parameters. Various simulation exercises are used for the numerical validation, including either sequential or simultaneous changes of the COM of the vehicle in two different directions. Finally, the paper is concluded in "Conclusion", where the work contributions are highlighted and a set of recommendations is formulated for potential future work. The proposed TWRM has five DOFs as shown in Figs. 6 and 7, where Solidworks® and MSC ADAMS® are used to generate the design. The proposed vehicle consists of a chassis with centre of gravity at point \( P_{1} \) and the mass of the linear actuators with centre of gravity at point \( P_{2} \). The coordinates of points \( P_{1} \) and \( P_{2} \) will change if the robot moves away from its initial location along the X axis. These variables fully describe the dynamics of the five-DOF system. The two-wheeled robot is controlled by applying torques \( \tau_{\text{R}} \) and \( \tau_{\text{L}} \) to the right and left wheels, respectively. These torques are contributed by the motors attached to each wheel. Other inputs that enable the control system to keep the robot upright at all times are signals measured by the gyroscopes and accelerometers. These sensors provide information about various state variables at any given time. Schematic diagram of system in Solidworks ADAMS/View model of the 5D-TWRM The battery of the two-wheeled platform appears in Fig. 6 on the right side of the vehicle. However, in the realized physical system, the components (battery, electronics, etc.) will be laid out in a way that assures uniform distribution of masses around the centre point of intersection of the x and y axes. Advantages of using the proposed design with standard wheels over omnidirectional wheels There are various types of wheels used in wheeled mobile robots, including standard, castor and omnidirectional wheels. The proposed design in this paper uses two standard wheels powered by two motors. The advantages of the standard wheels used include simplicity in design and manufacturing and relatively good reliability. The small size of the wheels used (10 cm diameter) helps in providing better stability and a stronger grip with the floor. This adds to the stability and rigidity of the entire system while carrying out material handling tasks.
The simple manufacturing process of standard wheels assures minimum positioning errors during movement. Omnidirectional wheels are also used in mobile robots performing material handling tasks and other industrial applications; mobile robots with omnidirectional wheels are controllable with a reduced number of actuators and are highly manoeuvrable in narrow or crowded spaces. However, accuracy of motion is influenced by systematic errors caused by unavoidable imperfections in the control and mechanical subsystems, and by nonsystematic errors caused by unpredictable phenomena such as wheel slippage and surface irregularities. Calibration will be needed to compensate for those errors arising from the use of omnidirectional wheels. Other odometry errors during robot movement may also exist due to unequal wheel diameters, joint misalignment, backlash and slippage in encoder pulses [24]. Omnidirectional wheels are widely used in mobile robots for materials handling in logistics and in wheelchairs. However, they are generally designed for motion on flat, smooth terrain and are not feasible for outdoor usage [17]. Slippage occurs when omnidirectional wheels are in motion, and manufacturing those wheels is expensive and requires high accuracy. Furthermore, efficiency is poor because not all the wheels rotate in the direction of movement, which causes losses from friction, and such platforms are more computationally complex because of the angle calculations of movement [25]. Description of the system DOFs The considered system has degrees of freedom described by four displacements with respect to the X and Z axes. They are represented by the angular rotations of the right and left wheels, \( \delta_{\text{R}} \) and \( \delta_{\text{L}} \), respectively, and the linear displacements of the attached payload in the vertical and horizontal directions, \( h_{1} \) and \( h_{2} \), respectively, as shown in Fig. 8. The fifth DOF is represented by the tilt angle of the IB, \( \theta \), measured from the vertical Z axis. This configuration of the vehicle is believed to serve in various applications including, but not limited to, object picking and placing, as shown in Fig. 9, assembly lines and similar industrial and service robot applications that require working in confined spaces. Schematic diagram of the system showing motion variables Mobility of the vehicle. a Vehicle position in the upright vertical, b inclination of the vehicle with respect to the upright position and c two linear motions of the payload along two perpendicular axes For the vehicle to undertake the picking and placing scenario shown in Fig. 10, the course of motion can be described as follows: Example application: object pick and place The vehicle starts moving on two wheels while keeping a balanced condition until it reaches the desired location for picking the object. The dominant control efforts during this stage are the two control torque signals from the motors attached to the wheels. Once a suitable position to pick the object is reached, the linear actuators start to work by extending the IB up to the object position by a linear displacement \( h_{1} \). In this case, the centre of mass (COM) of the vehicle moves up and the wheel motors must apply the torque necessary to keep a balanced condition. Following the extension of the IB in the vertical direction, the control system orders the linear actuator to extend the end-effector in a lateral direction to the location of the object.
As a consequence, the COM of the entire vehicle changes its position, and it is the responsibility of the wheel motors to develop the appropriate torque that compensates for this change in the COM position. It is assumed at this stage of the research that the joint at \( O_{1} \) is rigid and the two axes of motion for \( h_{1} \) and \( h_{2} \) are always perpendicular to each other. However, at a further stage an active revolute joint should be used to ensure that the motion of \( h_{2} \), to pick/place the object, is always in a horizontal direction. This is to reduce the change in the COM and hence to reduce the control effort required. While picking the object, the vehicle is expected to be subjected to a sudden disturbance due to impact with the object. This should be overcome by the control signals from the wheel motors. Following picking up of the object, the end-effector should undergo a reverse motion back to its original position. This motion will be accompanied by re-adjustment of the COM back to its original position. The linear actuator should apply the appropriate force signal during this stage with the appropriate speed that keeps the entire vehicle safe against tipping over. The vehicle needs to keep balancing depending on the torque signals developed by the wheel motors. As the rod of the linear actuator returns to its original position, the IB begins travelling down to the desired height to place the object in the allocated place. The closer the COM is to the chassis, the higher the control effort that needs to be exerted by the wheel motors [13, 14]. Finally, the end-effector extends to the desired location to place the object. This may include manoeuvring the entire vehicle to adjust the end-effector to do the task appropriately. Switching mechanisms need to be designed as a main part of the control algorithms to determine the sequence of engagement of each individual actuator associated with specific tasks in the above-mentioned stages. Based on the above-mentioned motion description, Table 1 shows the engagement of each individual actuator against the DOFs of the system during each of the substages of a picking and placing scenario. Table 1 Engagement of individual actuators against subtasks As indicated in the table, the wheel motors are always engaged during the entire process as there is always a change in the location of the COM and the possibility of an external disturbance during the picking and/or placing of the object. For all subtasks (a–f), the wheel motors need to develop an appropriate torque signal that is sufficient to keep the vehicle balanced in an upright vertical position. The linear actuators are engaged as and when needed during the picking and placing stages to complete both tasks. Switching mechanisms are designed to determine the period of engagement of each individual actuator. The mathematical model of the mechanical system is used to examine different behaviours of the model. In addition, it relates the kinematics of the mechanical system to the forces/torques applied to its links. The mathematical model of the proposed machine is derived in this section using the system physical parameters shown in Table 2. Table 2 Parameters and description The friction at the mating surfaces has been simplified for the chassis–wheel and wheel–ground interactions and in the linear actuator to follow a Coulomb friction model. The values of the coefficients have been selected depending on the type of surfaces; a minimal sketch of such a combined friction term is given below.
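As an illustration only (not part of the original model), a combined Coulomb-plus-viscous friction term acting at a wheel or actuator joint can be written as a simple function of the relative velocity. The coefficient names below are placeholders and are not the symbols of Table 2:

```python
import numpy as np

def friction_force(velocity, mu_coulomb, mu_viscous, eps=1e-3):
    """Coulomb + viscous friction opposing the relative velocity.

    A smooth tanh() approximation of sign() is used so the term stays
    well behaved inside a numerical integrator near zero velocity.
    """
    coulomb = -mu_coulomb * np.tanh(velocity / eps)  # ~ -mu_c * sign(v)
    viscous = -mu_viscous * velocity                 # linear damping term
    return coulomb + viscous

# Example: friction felt by a joint moving at 2 units/s (illustrative coefficients)
print(friction_force(2.0, mu_coulomb=0.05, mu_viscous=0.01))
```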
The selected constant values are assumed to be valid under all working conditions of the vehicle and the actuators. This does not take into account variations in speed, path configuration, terrain profile, etc. The constant values have been used to validate the system model. However, modelling of the interactions between surfaces needs to be investigated for various surfaces, various terrain profiles and various operating conditions of the vehicle. The work done by Silva et al. [31] will be considered in future studies as a suggested modelling technique for the wheel–ground interaction, through modelling of the foot–ground interaction of artificial locomotion systems.
Deriving equations of motion
Based on the schematic diagram shown in Fig. 10, the linear displacement of the chassis COM, point \( P_{1} \), can be derived as shown in Eqs. 1 and 2 along the X and Z axes, respectively, as follows: $$ x_{1} = \left( {\frac{{\delta_{\text{R}} + \delta_{\text{L}} }}{2}} \right) + l\sin \theta $$ $$ z_{1} = l\cos \theta $$ whereas the lateral linear displacement of point \( P_{2} \) can be calculated as follows: $$ x_{2} = h_{1} \sin \theta + h_{2} \cos \theta + \left( {\frac{{\delta_{\text{R}} + \delta_{\text{L}} }}{2}} \right) $$ $$ z_{2} = h_{1} \cos \theta - h_{2} \sin \theta $$ A direct numerical check of these position equations is sketched below.
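The following short Python sketch simply evaluates Eqs. 1–4 numerically. It is added here only as an illustration; the variable names mirror the symbols of the paper rather than any published code, and the numerical values in the example call are assumed, not Table 2 values:

```python
import numpy as np

def com_positions(delta_R, delta_L, theta, h1, h2, l):
    """Planar positions of P1 (chassis COM) and P2 (payload/actuator COM), Eqs. 1-4."""
    x_mid = 0.5 * (delta_R + delta_L)                       # mid-point travel of the two wheels
    x1 = x_mid + l * np.sin(theta)                          # Eq. 1
    z1 = l * np.cos(theta)                                  # Eq. 2
    x2 = x_mid + h1 * np.sin(theta) + h2 * np.cos(theta)    # Eq. 3
    z2 = h1 * np.cos(theta) - h2 * np.sin(theta)            # Eq. 4
    return (x1, z1), (x2, z2)

# Example: 5 degree tilt, IB extended 0.28 m, no lateral extension (illustrative values)
print(com_positions(0.0, 0.0, np.deg2rad(5.0), h1=0.28, h2=0.0, l=0.14))
```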
Modelling using Lagrange formulation
The Lagrange formulation is used in this section to derive the model of the system since it provides a powerful technique for obtaining the equations of motion. The general form of the Lagrange equation is shown in Eq. 5. $$ \frac{d}{{{\text{d}}t}}\left( {\frac{\partial L}{{\partial \dot{q}_{i} }}} \right) - \frac{\partial L}{{\partial q_{i} }} + \frac{\partial D}{{\partial \dot{q}_{i} }} = f_{i} $$ where L represents the Lagrangian, defined as $$ L = T - U $$ where T and U are the total kinetic and potential energies of the system, respectively. \( q_{i} \quad (i = 1,2, \ldots ,n) \) are the generalized coordinates: \( q_{i} = \left[ {\begin{array}{*{20}c} {h_{1} } & {h_{2} } & \theta & {\delta_{\text{L}} } & {\delta_{\text{R}} } \\ \end{array} } \right] \) \( f_{i} \) are the generalized forces acting along those coordinates: \( f_{i} = \left[ {\begin{array}{*{20}c} {F_{1} } & {F_{2} } & 0 & {\tau_{\text{L}} } & {\tau_{\text{R}} } \\ \end{array} } \right] \) \( D \) is the dissipation function, taken in the Rayleigh form \( D = \tfrac{1}{2}b\dot{q}_{i}^{2} \). The total kinetic energy of the chassis can be calculated as follows: $$ T_{c} = \tfrac{1}{2}m_{1} \left( {v_{x1}^{2} + v_{z1}^{2} } \right) + \tfrac{1}{2}m_{2} \left( {v_{x2}^{2} + v_{z2}^{2} } \right) + \tfrac{1}{2}J_{1} \dot{\theta }^{2} + \tfrac{1}{2}J_{2} \dot{\theta }^{2} $$ $$ v_{x1} = \tfrac{1}{2}v_{R} + \tfrac{1}{2}v_{L} + l\dot{\theta }\cos \theta $$ $$ v_{z1} = - l\dot{\theta }\sin \theta $$ $$ v_{x2} = \dot{h}_{1} \sin \theta + h_{1} \dot{\theta }\cos \theta + \dot{h}_{2} \cos \theta - h_{2} \dot{\theta }\sin \theta + \tfrac{1}{2}v_{R} + \tfrac{1}{2}v_{L} $$ $$ v_{z2} = \dot{h}_{1} \cos \theta - h_{1} \dot{\theta }\sin \theta - \dot{h}_{2} \sin \theta - h_{2} \dot{\theta }\cos \theta $$ The total kinetic energy of the two wheels can be calculated as follows: $$ T_{w} = \tfrac{1}{2}m_{w} v_{R}^{2} + \tfrac{1}{2}m_{w} v_{L}^{2} + \tfrac{1}{2}J_{w} \left( {\frac{{v_{R}^{2} }}{{R^{2} }}} \right) + \tfrac{1}{2}J_{w} \left( {\frac{{v_{L}^{2} }}{{R^{2} }}} \right) $$ The total kinetic energy of the chassis and wheels can be calculated as follows: $$ T = T_{\text{c}} + T_{\text{w}} $$ The total potential energy of the chassis and wheels can be calculated as follows: $$ U = m_{1} gl\cos \theta + m_{2} g(h_{1} \cos \theta - h_{2} \sin \theta ) $$ The total dissipation energy of the chassis and wheels can be calculated as follows: $$ D = \tfrac{1}{2}\mu_{1} \dot{h}_{1}^{2} + \tfrac{1}{2}\mu_{2} \dot{h}_{2}^{2} + \tfrac{1}{2}\mu_{w} \left( {\frac{{v_{\text{R}}^{2} + v_{\text{L}}^{2} }}{{R^{2} }}} \right) + \tfrac{1}{2}\mu_{\text{c}} \left( {v_{\text{R}}^{2} + v_{\text{L}}^{2} } \right) $$ The same Lagrangian construction can be reproduced symbolically, as sketched below.
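To make the procedure concrete, the sketch below repeats the same Lagrangian steps in SymPy for a reduced two-coordinate version of the machine (mean wheel travel and tilt only, with the payload actuators locked). It is an illustrative reconstruction under those simplifying assumptions, not the authors' derivation, and the symbol names only loosely follow Table 2:

```python
import sympy as sp

t = sp.symbols('t')
m1, m_w, J1, J_w, l, R, g, mu_c, mu_w = sp.symbols(
    'm1 m_w J1 J_w l R g mu_c mu_w', positive=True)
tau = sp.symbols('tau')

# Generalized coordinates as functions of time (reduced model)
delta = sp.Function('delta')(t)   # mean wheel linear displacement
theta = sp.Function('theta')(t)   # tilt of the intermediate body from the vertical

# Chassis COM position, cf. Eqs. 1-2 with delta = (delta_R + delta_L)/2
x1 = delta + l * sp.sin(theta)
z1 = l * sp.cos(theta)

# Kinetic, potential and dissipation terms for the reduced model
T = (sp.Rational(1, 2) * m1 * (sp.diff(x1, t)**2 + sp.diff(z1, t)**2)
     + sp.Rational(1, 2) * J1 * sp.diff(theta, t)**2
     + sp.Rational(1, 2) * (2 * m_w) * sp.diff(delta, t)**2
     + sp.Rational(1, 2) * (2 * J_w / R**2) * sp.diff(delta, t)**2)
U = m1 * g * l * sp.cos(theta)
D = sp.Rational(1, 2) * (mu_w / R**2 + mu_c) * sp.diff(delta, t)**2
L = T - U

def eom(q, generalized_force):
    """d/dt(dL/dq_dot) - dL/dq + dD/dq_dot = f, for one generalized coordinate q."""
    qd = q.diff(t)
    lhs = (L.diff(qd)).diff(t) - L.diff(q) + D.diff(qd)
    return sp.Eq(sp.simplify(lhs), generalized_force)

print(eom(delta, tau))   # translation equation (driven by the wheel torque input)
print(eom(theta, 0))     # tilt equation (no direct actuation)
```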
Substituting Eqs. 13 and 14 into Eq. 6, the Lagrangian can be expressed as follows: $$ \begin{aligned} L & = \tfrac{1}{2}m_{1} \left( {v_{x1}^{2} + v_{z1}^{2} } \right) + \tfrac{1}{2}m_{2} \left( {v_{x2}^{2} + v_{z2}^{2} } \right) + \tfrac{1}{2}J_{1} \dot{\theta }^{2} + \tfrac{1}{2}J_{2} \dot{\theta }^{2} + \tfrac{1}{2}m_{w} \left( {v_{\text{R}}^{2} + v_{\text{L}}^{2} } \right) \\ & \quad + \tfrac{1}{2}J_{\text{w}} \left( {\frac{{v_{\text{R}}^{2} + v_{\text{L}}^{2} }}{{R^{2} }}} \right) - m_{1} gl\cos \theta - m_{2} g(h_{1} \cos \theta - h_{2} \sin \theta ) \\ \end{aligned} $$ Deriving the equation for \( h_{1} \): $$ \tfrac{1}{2}m_{2} \left( {2g\cos \theta - 2h_{1} \dot{\theta }^{2} - 4\dot{h}_{2} \dot{\theta } - 2h_{2} \ddot{\theta } + 2\ddot{h}_{1} + (\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} )\sin \theta } \right) = F_{1} - \mu_{1} \dot{h}_{1} $$ Deriving the equation for \( h_{2} \): $$ \tfrac{1}{2}m_{2} \left( {2g\sin \theta + 2h_{2} \dot{\theta }^{2} - 4\dot{h}_{1} \dot{\theta } - 2h_{1} \ddot{\theta } - 2\ddot{h}_{2} - (\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} )\cos \theta } \right) = F_{2} - \mu_{2} \dot{h}_{2} $$ Deriving the equation for \( \delta_{\text{L}} \): $$ \begin{aligned} & \tfrac{1}{2}m_{1} \left( {\tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} - l\dot{\theta }^{2} \sin \theta + l\ddot{\theta }\cos \theta } \right) + \tfrac{1}{2}m_{2} \left( {\ddot{h}_{1} \sin \theta + 2\dot{h}_{1} \dot{\theta }\cos \theta - h_{1} \dot{\theta }^{2} \sin \theta + h_{1} \ddot{\theta }\cos \theta } \right. \\ & \quad \left. { + \ddot{h}_{2} \cos \theta - 2\dot{h}_{2} \dot{\theta }\sin \theta - h_{2} \dot{\theta }^{2} \cos \theta - h_{2} \ddot{\theta }\sin \theta + \tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} } \right) + 2m_{\text{w}} \ddot{\delta }_{\text{L}} + 2J_{\text{w}} \frac{{\ddot{\delta }_{\text{L}} }}{{R^{2} }} \\ & \quad = \tau_{\text{L}} - \mu_{\text{w}} \left( {\frac{{\dot{\delta }_{\text{L}} }}{{R^{2} }}} \right) - \mu_{\text{c}} \dot{\delta }_{\text{L}} \\ \end{aligned} $$ Deriving the equation for \( \delta_{\text{R}} \): $$ \begin{aligned} & \tfrac{1}{2}m_{1} \left( {\tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} - l\dot{\theta }^{2} \sin \theta + l\ddot{\theta }\cos \theta } \right) + \tfrac{1}{2}m_{2} \left( {\ddot{h}_{1} \sin \theta + 2\dot{h}_{1} \dot{\theta }\cos \theta - h_{1} \dot{\theta }^{2} \sin \theta + h_{1} \ddot{\theta }\cos \theta } \right. \\ & \quad \left. { + \ddot{h}_{2} \cos \theta - 2\dot{h}_{2} \dot{\theta }\sin \theta - h_{2} \dot{\theta }^{2} \cos \theta - h_{2} \ddot{\theta }\sin \theta + \tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} } \right) + 2m_{\text{w}} \ddot{\delta }_{\text{R}} + 2J_{\text{w}} \frac{{\ddot{\delta }_{\text{R}} }}{{R^{2} }} \\ & \quad = \tau_{\text{R}} - \mu_{\text{w}} \left( {\frac{{\dot{\delta }_{\text{R}} }}{{R^{2} }}} \right) - \mu_{\text{c}} \dot{\delta }_{\text{R}} \\ \end{aligned} $$ Deriving the equation for \( \theta \): $$ \begin{aligned} & 2m_{2} \dot{\theta }\left( {\dot{h}_{2} h_{2} + \dot{h}_{1} h_{1} } \right) + \tfrac{1}{2}m_{2} (h_{1} \cos \theta - h_{2} \sin \theta )\left( {\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} } \right) + \tfrac{1}{2}m_{1} l\cos \theta \left( {\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} } \right) - m_{2} g(h_{1} \sin \theta + h_{2} \cos \theta ) \\ & \quad + \ddot{\theta }\left( {J_{1} + J_{2} + m_{1} l^{2} + m_{2} h_{2}^{2} + m_{2} h_{1}^{2} } \right) + m_{2} \left( {\ddot{h}_{2} h_{1} + \ddot{h}_{1} h_{2} } \right) - m_{1} gl\sin \theta = 0 \\ \end{aligned} $$ Equations (17)–(21) are the nonlinear second-order differential equations that describe the dynamics of the system under consideration.
State space modelling
In order to linearize the system, an equilibrium point is considered at the vertical upright position; this applies when the tilt angle approaches zero. The system equations of motion can then be reformulated in the following forms: $$ \tfrac{1}{2}m_{2} \left( {2g - 4\dot{h}_{2} \dot{\theta } - 2h_{2} \ddot{\theta } + 2\ddot{h}_{1} + \left( {\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} } \right)\theta } \right) = F_{1} - \mu_{1} \dot{h}_{1} $$ $$ \tfrac{1}{2}m_{2} \left( {2g\theta - 4\dot{h}_{1} \dot{\theta } - 2h_{1} \ddot{\theta } - 2\ddot{h}_{2} - \ddot{\delta }_{\text{R}} - \ddot{\delta }_{\text{L}} } \right) = F_{2} - \mu_{2} \dot{h}_{2} $$ $$ \begin{aligned} & \tfrac{1}{2}m_{1} \left( {\tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} + l\ddot{\theta }} \right) + \tfrac{1}{2}m_{2} \left( {\ddot{h}_{1} \theta + 2\dot{h}_{1} \dot{\theta } + h_{1} \ddot{\theta } + \ddot{h}_{2} - 2\dot{h}_{2} \dot{\theta }\theta - h_{2} \ddot{\theta }\theta + \tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} } \right) \\ & \quad + 2m_{\text{w}} \ddot{\delta }_{\text{L}} + 2J_{\text{w}} \frac{{\ddot{\delta }_{\text{L}} }}{{R^{2} }} = \tau_{\text{L}} - \mu_{\text{w}} \left( {\frac{{\dot{\delta }_{\text{L}} }}{{R^{2} }}} \right) - \mu_{\text{c}} \dot{\delta }_{\text{L}} \\ \end{aligned} $$ $$ \begin{aligned} & \tfrac{1}{2}m_{1} \left( {\tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} + l\ddot{\theta }} \right) + \tfrac{1}{2}m_{2} \left( {\ddot{h}_{1} \theta + 2\dot{h}_{1} \dot{\theta } + h_{1} \ddot{\theta } + \ddot{h}_{2} - 2\dot{h}_{2} \dot{\theta }\theta - h_{2} \ddot{\theta }\theta + \tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} } \right) \\ & \quad + 2m_{\text{w}} \ddot{\delta }_{\text{R}} + 2J_{\text{w}} \frac{{\ddot{\delta }_{\text{R}} }}{{R^{2} }} = \tau_{\text{R}} - \mu_{\text{w}} \left( {\frac{{\dot{\delta }_{\text{R}} }}{{R^{2} }}} \right) - \mu_{\text{c}} \dot{\delta }_{\text{R}} \\ \end{aligned} $$ $$ \begin{aligned} & 2m_{2} \dot{\theta }(\dot{h}_{2} h_{2} + \dot{h}_{1} h_{1} ) + \tfrac{1}{2}m_{2} (h_{1} - h_{2} \theta )\left( {\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} } \right) + \tfrac{1}{2}m_{1} l\left( {\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} } \right) - m_{2} g(h_{1} \theta + h_{2} ) \\ & \quad + \ddot{\theta }\left( {J_{1} + J_{2} + m_{1} l^{2} + m_{2} h_{2}^{2} + m_{2} h_{1}^{2} } \right) + m_{2} \left( {\ddot{h}_{2} h_{1} + \ddot{h}_{1} h_{2} } \right) - m_{1} gl\theta = 0 \\ \end{aligned} $$
The dynamics of the five-DOF machine can be represented by the ten-element state vector X of the dynamic system, as illustrated in the following equation: $$ X = \left[ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {\delta_{\text{R}} } & {\delta_{\text{L}} } & \theta & {h_{1} } & {h_{2} } \\ \end{array} } & {\begin{array}{*{20}c} {\dot{\delta }_{\text{R}} } & {\dot{\delta }_{\text{L}} } & {\dot{\theta }} & {\dot{h}_{1} } & {\dot{h}_{2} } \\ \end{array} } \\ \end{array} } \right] $$ where the state vector variables are identified as follows: Right wheel displacement, \( \delta_{\text{R}} \); Left wheel displacement, \( \delta_{\text{L}} \); Chassis pitch angle, \( \theta \); Vertical linear link displacement, \( h_{1} \); Horizontal linear link displacement, \( h_{2} \); Right wheel velocity, \( \dot{\delta }_{\text{R}} \); Left wheel velocity, \( \dot{\delta }_{\text{L}} \); Chassis angular velocity, \( \dot{\theta } \); Vertical linear link velocity, \( \dot{h}_{1} \); Horizontal linear link velocity, \( \dot{h}_{2} \). The state variables for the wheel velocities, the chassis angular velocity and the link linear velocities are the derivatives of the wheel displacements, the pitch angle and the link linear displacements, respectively, and can be formulated as follows: $$ X_{1} = \delta_{\text{R}} $$ $$ X_{2} = \delta_{\text{L}} $$ $$ X_{3} = \theta $$ $$ X_{4} = h_{1} $$ $$ X_{5} = h_{2} $$ $$ X_{6} = \dot{\delta }_{\text{R}} = \dot{X}_{1} $$ $$ X_{7} = \dot{\delta }_{\text{L}} = \dot{X}_{2} $$ $$ X_{8} = \dot{\theta } = \dot{X}_{3} $$ $$ X_{9} = \dot{h}_{1} = \dot{X}_{4} $$ $$ X_{10} = \dot{h}_{2} = \dot{X}_{5} $$ $$ \left[ {\begin{array}{*{20}c} {\dot{X}_{1} } \\ {\dot{X}_{2} } \\ {\dot{X}_{3} } \\ {\dot{X}_{4} } \\ {\dot{X}_{5} } \\ {\dot{X}_{6} } \\ {\dot{X}_{7} } \\ {\dot{X}_{8} } \\ {\dot{X}_{9} } \\ {\dot{X}_{10} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {A_{11} } & {A_{12} } & {A_{13} } & {A_{14} } & {A_{15} } & {A_{16} } & {A_{17} } & {A_{18} } & {A_{19} } & {A_{110} } \\ {A_{21} } & {A_{22} } & {A_{23} } & {A_{24} } & {A_{25} } & {A_{26} } & {A_{27} } & {A_{28} } & {A_{29} } & {A_{210} } \\ {A_{31} } & {A_{32} } & {A_{33} } & {A_{34} } & {A_{35} } & {A_{36} } & {A_{37} } & {A_{38} } & {A_{39} } & {A_{310} } \\ {A_{41} } & {A_{42} } & {A_{43} } & {A_{44} } & {A_{45} } & {A_{46} } & {A_{47} } & {A_{48} } & {A_{49} } & {A_{410} } \\ {A_{51} } & {A_{52} } & {A_{53} } & {A_{54} } & {A_{55} } & {A_{56} } & {A_{57} } & {A_{58} } & {A_{59} } & {A_{510} } \\ {A_{61} } & {A_{62} } & {A_{63} } & {A_{64} } & {A_{65} } & {A_{66} } & {A_{67} } & {A_{68} } & {A_{69} } & {A_{610} } \\ {A_{71} } & {A_{72} } & {A_{73} } & {A_{74} } & {A_{75} } & {A_{76} } & {A_{77} } & {A_{78} } & {A_{79} } & {A_{710} } \\ {A_{81} } & {A_{82} } & {A_{83} } & {A_{84} } & {A_{85} } & {A_{86} } & {A_{87} } & {A_{88} } & {A_{89} } & {A_{810} } \\ {A_{91} } & {A_{92} } & {A_{93} } & {A_{94} } & {A_{95} } & {A_{96} } & {A_{97} } & {A_{98} } & {A_{99} } & {A_{910} } \\ {A_{101} } & {A_{102} } & {A_{103} } & {A_{104} } & {A_{105} } & {A_{106} } & {A_{107} } & {A_{108} } & {A_{109} } & {A_{1010} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {X_{1} } \\ {X_{2} } \\ {X_{3} } \\ {X_{4} } \\ {X_{5} } \\ {X_{6} } \\ {X_{7} } \\ {X_{8} } \\ {X_{9} } \\ {X_{10} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {B_{11} } & {B_{12} } & {B_{13} } & {B_{14} } \\ {B_{21} } & {B_{22} } & {B_{23} } & {B_{24} } \\ {B_{31} } & {B_{32} } & {B_{33} } & {B_{34} } \\ {B_{41} } & {B_{42} } & {B_{43} } & {B_{44} } \\ {B_{51} } & {B_{52} } & {B_{53} } & {B_{54} } \\ {B_{61} } & {B_{62} } & {B_{63} } & {B_{64} } \\ {B_{71} } & {B_{72} } & {B_{73} } & {B_{74} } \\ {B_{81} } & {B_{82} } & {B_{83} } & {B_{84} } \\ {B_{91} } & {B_{92} } & {B_{93} } & {B_{94} } \\ {B_{101} } & {B_{102} } & {B_{103} } & {B_{104} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {u_{1} } \\ {u_{2} } \\ {u_{3} } \\ {u_{4} } \\ \end{array} } \right] $$ where \( \tau_{\text{R}} \) and \( \tau_{\text{L}} \) are the required torques of the right and left wheels, and \( F_{1} \) and \( F_{2} \) are the linear forces generated by the linear actuators for moving the payload in the vertical and horizontal directions, respectively. The A, B, C and D matrices are as follows: $$ A = \left[ {\begin{array}{*{20}c} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & {A_{63} } & 0 & {A_{65} } & {A_{66} } & {A_{67} } & 0 & 0 & {A_{610} } \\ 0 & 0 & {A_{73} } & 0 & {A_{75} } & {A_{76} } & {A_{77} } & 0 & 0 & {A_{710} } \\ 0 & 0 & {A_{83} } & 0 & {A_{85} } & {A_{86} } & {A_{87} } & 0 & 0 & {A_{810} } \\ 0 & 0 & 0 & {A_{94} } & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & {A_{103} } & 0 & {A_{105} } & {A_{106} } & {A_{107} } & 0 & 0 & {A_{1010} } \\ \end{array} } \right]\quad B = \left[ {\begin{array}{*{20}c} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ {B_{61} } & {B_{62} } & 0 & {B_{64} } \\ {B_{71} } & {B_{72} } & 0 & {B_{74} } \\ {B_{8} } & {B_{8} } & 0 & {B_{8} } \\ 0 & 0 & {B_{9} } & 0 \\ {B_{101} } & {B_{101} } & 0 & {B_{102} } \\ \end{array} } \right] $$ $$ C = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} } \right]\quad D = \left[ {\begin{array}{*{20}c} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} } \right] $$ And finally the state space model of the system can be formulated as follows: $$ \dot{X} = \left[ {\begin{array}{*{20}c} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & {A_{63} } & 0 & {A_{65} } & {A_{66} } & {A_{67} } & 0 & 0 & {A_{610} } \\ 0 & 0 & {A_{73} } & 0 & {A_{75} } & {A_{76} } & {A_{77} } & 0 & 0 & {A_{710} } \\ 0 & 0 & {A_{83} } & 0 & {A_{85} } & {A_{86} } & {A_{87} } & 0 & 0 & {A_{810} } \\ 0 & 0 & 0 & {A_{94} } & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & {A_{103} } & 0 & {A_{105} } & {A_{106} } & {A_{107} } & 0 & 0 & {A_{1010} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\delta_{\text{R}} } \\ {\delta_{\text{L}} } \\ \theta \\ {h_{1} } \\ {h_{2} } \\ {\dot{\delta }_{\text{R}} } \\ {\dot{\delta }_{\text{L}} } \\ {\dot{\theta }} \\ {\dot{h}_{1} } \\ {\dot{h}_{2} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ {B_{61} } & {B_{62} } & 0 & {B_{64} } \\ {B_{71} } & {B_{72} } & 0 & {B_{74} } \\ {B_{81} } & {B_{82} } & 0 & {B_{84} } \\ 0 & 0 & {B_{93} } & 0 \\ {B_{101} } & {B_{102} } & 0 & {B_{104} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\tau_{\text{R}} } \\ {\tau_{\text{L}} } \\ {F_{1} } \\ {F_{2} } \\ \end{array} } \right] $$ The constants \( A_{ij} \) and \( B_{ij} \) in Eqs. 38 and 39 are described in the "Appendix" at the end of the paper.
Open-loop system response
This section investigates the system response and its performance using MATLAB and Simulink. In order to study the behaviour of the developed model, the open-loop system response has to be investigated first. The model is simulated in the MATLAB Simulink® environment using the simulation parameters described in Table 2, where the following initial conditions are used: \( \theta = 0 \), \( \delta_{\text{R}} = 0 \), \( \delta_{\text{L}} = 0 \), \( h_{1} = 0 \), \( h_{2} = 0 \), \( \dot{\theta } = 0 \), \( v_{\text{R}} = 0 \), \( v_{\text{L}} = 0 \), \( \dot{h}_{1} = 0 \), \( \dot{h}_{2} = 0 \). Figure 11 illustrates the open-loop system response of the pitch angle (\( \theta \)), angular velocity (\( \dot{\theta } \)), right wheel displacement (\( \delta_{\text{R}} \)), right wheel velocity (\( v_{\text{R}} \)), left wheel displacement (\( \delta_{\text{L}} \)), left wheel velocity (\( v_{\text{L}} \)), vertical link displacement (\( h_{1} \)), vertical link velocity (\( \dot{h}_{1} \)), horizontal link displacement (\( h_{2} \)) and horizontal link velocity (\( \dot{h}_{2} \)). As per the simulation results shown in Fig. 11, the system outputs grow without bound. It is clear that the system is an unstable nonlinear system; therefore, a closed-loop system is required to stabilize it and to improve its performance.
Control scheme design
The strategy to control the system depends on developing a feedback control mechanism of five control loops, as shown in Fig. 12. In order to drive the vehicle to undergo a specific planar motion in the XY plane, two decoupled feedback loops are developed. These two feedback control loops occupy separate ranges of dynamics: the tilt angle lies in the higher-frequency range and the motion of the intermediate body in the lower-frequency range; hence, decoupling is reasonable and separate control loops can be applied. The input to each of these two loops is the error in the angular position of the corresponding wheel, which measures the difference between the desired and actual angular positions of that wheel. The angular position of the IB is controlled using the measured error in the tilt angle of the IB. In order to control the position of the object, two further feedback control loops are developed with the error in the object position as the input and the actuation force as the output of each loop. The inputs to the system are the driving torques of the wheel motors, \( \tau_{\text{L}} \) and \( \tau_{\text{R}} \), and the linear actuator forces, \( F_{1} \) and \( F_{2} \). The system has five outputs: the angular positions of the left and right wheels, \( \delta_{\text{L}} \) and \( \delta_{\text{R}} \), respectively, the angular position of the IB, \( \theta \), and the linear displacements of the object, \( h_{1} \) and \( h_{2} \). The system is under-actuated by virtue of having fewer actuators than system outputs. Five PID control loops are used to control the five outputs of the system; a minimal sketch of one such discrete loop is given below.
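The following sketch is an illustration only: a textbook discrete PID update applied to each of the five decoupled loops. The gains, sample time, class structure and the mapping of the tilt loop onto a shared wheel-torque correction are placeholders and assumptions, not values or structure taken from the paper or its Simulink model; the error signals the loops consume are those defined in Eqs. (40)–(44) below.

```python
class PID:
    """Textbook discrete PID with a simple anti-windup clamp on the integral term."""
    def __init__(self, kp, ki, kd, dt, i_limit=10.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i_limit = i_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.001  # illustrative sample time, not the paper's value
# One loop per controlled output; the gains are arbitrary placeholders, not tuned values.
loops = {
    "delta_L": PID(50.0, 1.0, 5.0, dt),    # left wheel position   -> contributes to tau_L
    "delta_R": PID(50.0, 1.0, 5.0, dt),    # right wheel position  -> contributes to tau_R
    "theta":   PID(200.0, 5.0, 20.0, dt),  # tilt angle -> torque correction shared by both wheels (one possible mapping)
    "h1":      PID(300.0, 10.0, 30.0, dt), # vertical extension    -> F1
    "h2":      PID(300.0, 10.0, 30.0, dt), # horizontal extension  -> F2
}

def control_step(desired, measured):
    """Errors follow Eqs. (40)-(44): e = desired - measured for each loop."""
    return {name: loops[name].update(desired[name] - measured[name]) for name in loops}
```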
The control inputs are the errors defined in Eqs. (40)–(44), together with the integral and the derivative of the error, for each of the five measured variables, \( \delta_{\text{L}} \), \( \delta_{\text{R}} \), \( \theta \), \( h_{1} \) and \( h_{2} \), whereas the control outputs are the motor torques and the linear actuator forces. $$ e_{{\delta_{\text{L}} }} = \delta_{\text{Ld}} - \delta_{\text{Lm}} $$ $$ e_{{\delta_{\text{R}} }} = \delta_{\text{Rd}} - \delta_{\text{Rm}} $$ $$ e_{\theta } = \theta_{\text{d}} - \theta_{\text{m}} $$ $$ e_{{h_{1} }} = h_{{1{\text{d}}}} - h_{{1{\text{m}}}} $$ $$ e_{{h_{2} }} = h_{{2{\text{d}}}} - h_{{2{\text{m}}}} $$ where the d and m subscripts indicate the desired and the actual measured value of each variable, respectively. Schematic description of the control algorithm
PID control without switching mechanisms
In the following simulation exercises, the developed control schemes are implemented on the system mathematical model identified in the "Mathematical modelling" section. First, no switching mechanisms are considered while running the simulation. The control algorithm and the system behaviour are tested under two different conditions: payload-free motion, and with the two linear actuators activated for both the horizontal and vertical motion of the payload. The same exercise is repeated after engaging switching mechanisms that are designed to determine when the linear actuators should start working. Payload free movement (\( h_{1} = h_{2} = 0 \)) The behaviour of the robotic machine is observed for the rotation angle and velocity of the robot's chassis, the displacements and velocities of the two wheels, and the displacements and velocities of the linear actuators, under different conditions, as shown in the following figures. Figure 13 illustrates the output simulation of the system starting initially at \( \theta = 5^\circ \) and neglecting the effect of the linear actuators \( h_{1} \) and \( h_{2} \) by setting them to zero during the system stabilization. System output (\( h_{1} \) = \( h_{2} \) = 0), unbounded wheels displacement It can be noticed from Fig. 13 that the control mechanism stabilizes the vehicle to reach the balancing position in less than 2 s. However, the vehicle motion is unbounded and the vehicle keeps moving in order to preserve the stability condition. This is considered an undesirable behaviour, particularly since these types of vehicles are supposed to serve in a minimum working space. The vehicle is considered to move with a fixed velocity once it achieves a stable position. In order to minimize the motion of the system, the controller is modified by bounding the linear displacement of the wheels, as illustrated in Fig. 14. The wheels are allowed to rotate through a pre-specified fraction of a revolution, equivalent to a boundary limit of 5 cm of linear displacement; a minimal sketch of such a reference clamp is given after this subsection. The control scheme is able to achieve the balancing position within 2 s, and the steady-state position of the wheels is reached within 4 s. Bounding the wheel rotation has a positive impact on the stabilization of the vehicle, with limited disturbance compared to the previous case and hence less interruption in the control torques demanded from the wheel motors. Modified system output (\( h_{1} \) = \( h_{2} \) = 0), bounded wheels displacement
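A minimal way to realize the 5-cm bound on wheel travel mentioned above is to clamp the wheel position reference before it enters the wheel-position loops. The snippet below is an illustration of that idea only, not the authors' implementation; the wheel radius follows from the 10-cm-diameter wheels stated earlier.

```python
import numpy as np

WHEEL_RADIUS = 0.05   # m, from the 10 cm diameter wheels described in the text
LINEAR_BOUND = 0.05   # m, the 5-cm travel limit discussed above

def bounded_wheel_reference(desired_linear_travel):
    """Clamp the commanded wheel travel to +/- 5 cm and convert it to a wheel angle."""
    travel = np.clip(desired_linear_travel, -LINEAR_BOUND, LINEAR_BOUND)
    return travel / WHEEL_RADIUS   # reference angle in radians for the wheel loops

print(bounded_wheel_reference(0.12))   # a 12 cm request is clipped to 5 cm -> 1.0 rad
```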
Simultaneous horizontal and vertical motion (\( h_{1} \) and \( h_{2} \ne 0 \)) This study investigates the impact of changing the COM of the vehicle along two mutually perpendicular axes. In this exercise, the linear actuators start to work by extending, simultaneously, along two perpendicular axes without considering a payload. As shown in Fig. 15, the system went through a longer transient period compared to the previous case (\( h_{1} \) = \( h_{2} \) = 0). It took longer for the system to reach a stable region, and the overshoot increased dramatically due to the change of the position of the COM in two different directions. The period taken by the vehicle to reach the stable range, around 4 s, is equivalent to the time taken by the linear actuators to extend along both axes, \( h_{1} \) and \( h_{2} \). The torques of the wheel motors are expected to be affected by such a long transient period of the IB before reaching stability. Compared to the previous simulation exercises, there is a large amount of vibration during the period of changing the COM in the two directions, and this in turn will lead to changes in the control effort required. System output (\( h_{1} \) and \( h_{2} \) \( \ne \) 0), unbounded wheels displacement
Design of switching mechanisms
Since the proposed platform is mainly designed for picking and/or placing applications, it is desirable to stabilize the system first. The reason is to avoid any disturbance at the start of operation as a result of lifting an object. Lifting an object will result in moving the COM during the stabilization mode, and this in turn will affect the stability condition and disturb the control effort. To avoid such a situation, the control scheme is modified as illustrated in Fig. 16. Two switching mechanisms are added to the system to assure system stability before starting the object handling. The two mechanisms are developed in such a way that the linear actuators will not activate unless the IB of the vehicle has reached the stable upright position. Modified control algorithm with switching mechanisms Three case studies are considered: only one linear actuator is allowed to work at a time in the first two cases, and the two actuators work simultaneously in the third case. Two signals are developed for both \( h_{1} \) and \( h_{2} \) using a signal builder block in MATLAB Simulink®. An illustrative sketch of such a stability-gated switching rule is given below.
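The following is one possible reading of the switching mechanisms described above, written as a small Python class rather than the authors' Simulink logic; the tilt band, dwell time and latching behaviour are assumptions made only for illustration.

```python
class ActuatorSwitch:
    """Enable a linear actuator only after the IB has stayed upright for `dwell` seconds."""
    def __init__(self, tilt_band_rad=0.02, dwell=1.0, dt=0.001):
        self.tilt_band = tilt_band_rad   # assumed "upright" band on the tilt angle
        self.dwell = dwell               # assumed required settling time
        self.dt = dt
        self.timer = 0.0
        self.enabled = False

    def update(self, tilt_angle):
        if abs(tilt_angle) < self.tilt_band:
            self.timer += self.dt
        else:
            self.timer = 0.0             # leaving the band restarts the dwell timer
        if self.timer >= self.dwell:
            self.enabled = True          # latch: the actuator stays enabled once released
        return self.enabled

# Usage: gate the force command of each linear actuator with its own switch
switch_h1 = ActuatorSwitch()
switch_h2 = ActuatorSwitch(dwell=2.0)    # e.g. release the lateral axis later than the vertical one
force_command_h1 = 12.0                  # whatever the h1 PID loop currently asks for (illustrative)
applied_h1 = force_command_h1 if switch_h1.update(0.01) else 0.0
```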
Payload vertical movement only In this case, the linear actuator along the IB is allowed to work by moving up and down along the IB and the z axis. This is physically implemented by extension and contraction of the linear actuator rod, which moves the entire COM up and down according to the control signal developed for the actuator. Figure 17 illustrates the output simulation of the system starting with initial conditions \( \theta = 5^\circ \), \( h_{1} = 0.28 \) m and \( h_{2} = 0 \). The actuator starts to extend to nearly 0.4 m after 5 s from the start of the simulation. The control mechanism was robust enough that no interruption occurred in the stabilization condition of the IB. The linear actuator accelerates to its maximum speed at around 7 s and then decelerates to settle down completely when reaching its desired height. System output, payload vertical motion only Payload horizontal movement only In this case, the system is simulated to observe the impact of changing \( h_{2} \) in a direction perpendicular to the axis of the IB, i.e. along the x direction. This situation is similar to inclination of the IB forwards or backwards and also simulates scenarios of wheeled machines moving up or down an inclined surface. The initial conditions are set as \( \theta = 5^\circ \), \( h_{1} = 0.28 \) m and \( h_{2} = 0 \). The actuator along the IB is kept locked during this stage, and the only motion allowed is that of the other linear actuator, which starts to work after a balance condition has been achieved, as shown in Fig. 18. As observed from the figure, changing \( h_{2} \) by only 10 cm at 5 s acts as a sudden impact disturbance which hits the IB, causing it to change its direction dramatically to the opposite side of the Z axis, as is obvious from the tilt angle graph. However, the control algorithm was not able to bring the IB back to the vertical position and instead kept it inclined to the opposite side with a constant inclination angle of around 7°. Changing \( h_{2} \) in this manner also has an impact on the linear motion of the vehicle in the X direction, as is clear from the fractional displacements of both wheels. System output. Payload horizontal motion only Payload simultaneous horizontal and vertical movements In order to test the robustness of the proposed control algorithm, the system is simulated to observe the impact of changing \( h_{1} \) and \( h_{2} \) sequentially. \( h_{1} \) is kept fixed at 0.28 m for around 5.5 s before starting to change towards its desired height. As expected, and as demonstrated earlier in Fig. 17, no interruption occurred in the stabilization condition of the IB. Changing \( h_{2} \) starts at 9.5 s, resulting in sudden changes in the stabilization of the IB and a slight disturbance in \( h_{1} \). In response to the changes in \( h_{2} \), the IB leans in the opposite direction to compensate for the change in the position of the COM due to the extension of \( h_{2} \), as shown in Fig. 19. System output The implementation of the switching mechanisms in the control algorithm is reflected in the simulation results and in the way the system performs. This can be summarized as follows: The robustness of the developed control approach is confirmed; in Fig. 19, the IB leans in the opposite direction to compensate for the change in the position of the COM due to the extension of \( h_{2} \). Activating each individual actuator at a certain time tends to act as a sudden disturbance, in particular changing \( h_{2} \), to a system that has already achieved stability. Adding the switching helps the author to conclude that this change does not have a significant impact on the output of the system, as noticed in Figs. 17 and 19. Adding switching mechanisms mimics real scenarios in practical applications where not all actuators work at the same time. The decoupled feedback control is believed not to be the cause of the nonsmooth trajectories in Figs. 17 and 19; rather, the fluctuations are due to the actuation of the linear actuators, either simultaneously or consecutively. Further smoothness of the trajectory tracking can be achieved by minimizing the flexible dynamic effects of the change in the tilt angle. A novel five-DOF two-wheeled machine is proposed in this work, and its mathematical model is derived using Lagrangian dynamics. Dissipation energies are included in the system model for better consideration of nonlinear parameters. The configuration of the machine allows handling of an object in two mutually perpendicular directions, which increases the workspace over the currently available configurations of TWMs. However, this will also be accompanied by situations where balancing becomes more complicated due to the change of the vehicle COM in different directions. The control of the vehicle will also become more complicated as a result of adding one more DOF to the system.
Future considerations of this work will include, but are not limited to, the following: Testing the vehicle in a confined space for path tracking and for picking and placing an object; this will include consideration of the additional weight of an object, tracking a pre-specified trajectory, picking the object from a certain location, carrying it and placing it in a desired location. Further investigation will also include workspace and kinematic analysis of the vehicle. Implementation of various optimization tools, including bacterial foraging (BF), spiral dynamics (SD) and hybrid spiral dynamics bacterial chemotaxis (HSDBC), for better performance of the system and improved energy consumption. Further investigation of the linear model of the system will be carried out while implementing various control approaches including fuzzy logic control (FLC). Ahmad S, Siddique N, Tokhi MO. A modular fuzzy control approach for two-wheeled wheelchair. J Intell Rob Syst. 2011;64:401–26. Almeshal AM, Goher KM, Tokhi MO. Dynamic modelling and stabilization of a new configuration of two-wheeled machines. Robot Auton Syst. 2013;61(5):443–72. Bae Y, Jung S. Kinematic design and workspace analysis of a Korean service robot: KOBOKER. International conference on control, automation, and systems (ICCAS). 2011. p. 833–36. Brisilla RM, Sankaranarayanan V. Nonlinear control of mobile inverted pendulum. Robot Auton Syst. 2015;70:145–55. Chan RPM, Stol KA, Halkyard CR. Review of modelling and control of two-wheeled robots. Annu Rev Control. 2013;37(1):89–103. Chiu C, Peng Y, Lin Y. Intelligent backstepping control for wheeled inverted pendulum. Expert Syst Appl. 2011;38(4):3364–71. Chiu C, Lin Y, Lin C. Real-time control of a wheeled inverted pendulum based on an intelligent model free controller. Mechatronics. 2011;21(3):523–33. Chinnadurai G, Ranganathan H, Peter S. IOT controlled two wheel self supporting robot without external sensor. Middle-East J Sci Res. 2015;23:286–90. Cui R, Guo J, Mao Z. Adaptive backstepping control of wheeled inverted pendulums models. Nonlinear Dyn. 2015;79(1):501–11. Dai F, Gao X, Jiang S, Guo W, Liu Y. A two-wheeled inverted pendulum robot with friction compensation. Mechatronics. 2015;30:116–25. Deng M, Inoue A, Sekiguchi K, Jiang L. Two-wheeled mobile robot motion control in dynamic environments. Robot Comput Integr Manuf. 2010;26(3):268–72. Ghaffari A, Shariati A, Shamekhi AH. A modified dynamical formulation for two-wheeled self-balancing robots. Nonlinear Dyn. 2016;83(1):217–30. Goher K, Tokhi OM. Balance control of a TWRM with a dynamic payload. In: Proceedings of the 11th international conference on climbing and walking robots and the support technologies for mobile machines, Coimbra, Portugal, 2008. Goher K, Tokhi OM, Ahmad S. A new configuration of two wheeled vehicles: towards a more workspace and motion flexibility. In: Proceedings of the 4th IEEE systems conference. 2010. p. 524–28. Guo Z, Xu J, Heng T. Design and implementation of a new sliding mode controller on an underactuated wheeled inverted pendulum. J Frankl Inst. 2014;351(4):2261–82. Huang J, Ding F, Fukuda T, Matsuno T. Modelling and velocity control of a novel narrow vehicle based on mobile inverted pendulum. IEEE Trans Control Syst Technol. 2013;21(5):1607–17. Ishigami G, Iagnemma K, Overholt J, Hudas G. Design, development, and mobility evaluation of an omnidirectional mobile robot for rough terrain. J Field Robot. 2015;32(6):880–96. Lee H, Kim H, Jung S.
Development of mobile inverted pendulum robot system as a personal transportation vehicle with two driving modes. World automation congress. 2010. p. 1–5. Lee JH, Shin HJ, Lee SJ, Jung S. Balancing control of a single-wheel inverted pendulum system using air blowers: evolution of mechatronics capstone design. Mechatronics. 2013;23(8):926–32. Li Z, Xu C. Adaptive fuzzy logic control of dynamic balance and motion for wheeled inverted pendulums. Fuzzy Sets Syst. 2009;160(8):17. Li Z, Zhang Y. Robust adaptive motion/force control for wheeled inverted pendulums. Automatica. 2010;46(8):1346–53. Li Z, Zhang Y, Yang Y. Support vector machine optimal control for mobile wheeled inverted pendulums with unmodelled dynamics. Neurocomputing. 2010;73(13–15):2773–82. Li Z, Kang Y. Dynamic coupling switching control incorporating support vector machines for wheeled mobile manipulators with hybrid joints. Automatica. 2010;46(5):832–42. Maddahi Y, Maddahi A, Sepehri N. Calibration of omnidirectional wheeled mobile robots: method and experiments. Robotica. 2013;31:969–80. Parmar JJ, Savant CV. Selection of wheels in robotics. Int J Sci Eng Res. 2014;5(10):339–43. Ping R, Chan M, Stol KA, Halkyard CR. Review of modelling and control of two-wheeled robots. Annu Rev Control. 2013;37(1):89–103. Ping R, Chan M, Stol KA, Halkyard CR. Annual reviews in control review of modelling and control of two-wheeled robots. Annu Rev Control. 2013;37(1):89–103. Raffo GV, Ortega MG, Madero V, Rubio FR. Two-wheeled self-balanced pendulum workspace improvement via underactuated robust nonlinear control. Control Eng Pract. 2015;44:231–42. Ren T, Chen TC, Chen CJ. Motion control for a two-wheeled vehicle using a self-tuning PID controller. Control Eng Pract. 2008;16:365–75. Shojaei K, Mohammad A, Tarakameh A. Adaptive feedback linearizing control of nonholonomic wheeled mobile robots in presence of parametric and nonparametric uncertainties. Robot Comput Integr Manuf. 2011;27(1):194–204. Silva MF, Machado JT, Lopes AM. Modelling and simulation of artificial locomotion systems. Robotica. 2005;23:595. Sun J, Li Z. Development and Implementation of a wheeled inverted pendulum vehicle using adaptive neural control with extreme learning machines. Cogn Comput. 2015;7(6):740–52. Tsai M, Hu J, Hu F. Actuator fault and abnormal operation diagnoses for auto balancing two-wheeled cart control. Mechatronics. 2009;19(5):647–55. Yue M, An C, Du Y, Sun J. Indirect adaptive fuzzy control for a nonholonomic/underactuated wheeled inverted pendulum vehicle based on a data-driven trajectory planner. Fuzzy Sets Syst. 2016;290:158–77. Yue M, Wang S, Sun J. Simultaneous balancing and trajectory tracking control for two-wheeled inverted pendulum vehicles: a composite control approach. Neurocomputing. 2016;191:44–54. The author of this paper would like to thank Lincoln University in New Zealand towards supporting this research and offering the funding support for publication. The author declares that he has no competing interests. Department of Informatics and Enabling Technologies, Lincoln University, Lincoln, New Zealand Khaled M. Goher Correspondence to Khaled M. Goher. 
Appendix: Coefficients of the state space model $$ A_{63} = \left[ {\frac{{4gR^{2} \left( {J_{w} + M_{w} R^{2} } \right)\left( {m_{2} \left( {m_{1} l^{2} + J_{1} + J_{2} } \right) + \left( {m_{1}^{2} l^{2} } \right)} \right)}}{{{\text{den}}_{1} }}} \right] $$ $$ A_{65} = \left[ {\frac{{4m_{1} m_{2} glR^{2} \left( {J_{\text{w}} + M_{\text{w}} R^{2} } \right)}}{{{\text{den}}_{1} }}} \right] $$ $$ A_{66} = \left[ {\frac{{8\left( {m_{1} l^{2} + J_{1} + J_{2} } \right)\left( {(J_{\text{w}} + M_{\text{w}} R^{2} )(\mu_{\text{w}} + \mu_{\text{c}} R^{2} )} \right) + m_{1} R^{2} \left( {J_{1} + J_{2} } \right)\left( {\mu_{\text{w}} + \mu_{\text{c}} R^{2} } \right)}}{{{\text{den}}_{1} }}} \right] $$ $$ A_{67} = - \left[ {\frac{{m_{1} R^{2} \left( {J_{1} + J_{2} } \right)\left( {\mu_{\text{w}} + \mu_{\text{c}} R^{2} } \right)}}{{{\text{den}}_{1} }}} \right], \quad A_{610} = \left[ {\frac{{\left( {4\mu_{2} R^{2} } \right)\left( {m_{1} l^{2} + J_{1} + J_{2} } \right)\left( {J_{\text{w}} + M_{\text{w}} R^{2} } \right)}}{{{\text{den}}_{1} }}} \right] $$ $$ B_{61} = - \left[ {\frac{{8R^{2} \left( {m_{1} l^{2} + J_{1} + J_{2} } \right)\left( {J_{\text{w}} + M_{\text{w}} R^{2} } \right) + m_{1} R^{4} \left( {J_{1} + J_{2} } \right)}}{{{\text{den}}_{1} }}} \right],B_{62} = \left[ {\frac{{m_{1} R^{4} (J_{1} + J_{2} )}}{{{\text{den}}_{1} }}} \right] $$ $$ B_{64} = - \left[ {\frac{{4R^{2} \left( {m_{1} l^{2} + J_{1} + J_{2} } \right)\left( {J_{\text{w}} + M_{\text{w}} R^{2} } \right)}}{{{\text{den}}_{1} }}} \right] $$ \( A_{73} = A_{63}, \quad A_{75} = A_{65}, \quad A_{76} = A_{67}, \quad A_{710} = A_{610}, \quad B_{71} = B_{62}, \quad B_{72} = B_{61}, \quad B_{74} = B_{64}\) $$ A_{83} = \left[ {\frac{{m_{1} gl\left( {4J_{\text{w}} + R^{2} \left( {4M_{\text{w}} + m_{1} + m_{2} } \right)} \right)}}{{{\text{den}}_{2} }}} \right] $$ $$ A_{85} = \left[ {\frac{{m_{2} g\left( {4J_{\text{w}} + 4M_{\text{w}} R^{2} + m_{1} R^{2} } \right)}}{{{\text{den}}_{2} }}} \right], \quad A_{86} = \left[ {\frac{{m_{1} l\mu_{\text{c}} R^{2} + m_{1} l\mu_{\text{w}} }}{{{\text{den}}_{2} }}} \right], \quad A_{87} = \left[ {\frac{{m_{1} l\mu_{\text{c}} R^{2} + m_{1} l\mu_{\text{w}} }}{{{\text{den}}_{2} }}} \right] $$ $$ A_{810} = \left[ {\frac{{m_{1} l\mu_{2} R^{2} }}{{{\text{den}}_{2} }}} \right], \quad B_{8} = - \left[ {\frac{{m_{1} lR^{2} }}{{{\text{den}}_{2} }}} \right],\quad A_{94} = \left[ { - \frac{{\mu_{1} }}{{m_{2} }}} \right], \quad B_{9} = \left[ {\frac{1}{{m_{2} }}} \right],\quad F_{11} = \left( {F_{1} - m_{2} g} \right) $$ $$ A_{103} = \left[ {\frac{{m_{2} g\left( {J_{1} + J_{2} + m_{1} l^{2} } \right)\left( {4J_{\text{w}} + 4M_{\text{w}} R^{2} + m_{1} R^{2} + m_{2} R^{2} } \right)}}{{{\text{den}}_{3} }}} \right], \quad A_{105} = \left[ {\frac{{m_{1} g_{{}} l_{{}} m_{2}^{2} R^{2} }}{{{\text{den}}_{3} }}} \right], $$ $$ A_{106} = \left[ {\frac{{m_{2} \left( {m_{1} l^{2} + J_{1} + J_{2} } \right)\left( {\mu_{\text{w}} + R^{2} \mu_{\text{c}} } \right)}}{{{\text{den}}_{3} }}} \right], \quad A_{107} = \left[ {\frac{{m_{2} \left( {m_{1} l^{2} + J_{1} + J_{2} } \right)\left( {\mu_{\text{w}} + R^{2} \mu_{\text{c}} } \right)}}{{{\text{den}}_{3} }}} \right] $$ $$ A_{1010} = \left[ {\frac{{\mu_{2} \left( {\left( {J_{1} + J_{2} + m_{1} l^{2} } \right)\left( {4J_{\text{w}} + 4M_{\text{w}} R^{2} + m_{2} R^{2} } \right) + \left( {J_{1} + J_{2} } \right)\left( {m_{1} R^{2} } \right)} \right)}}{{{\text{den}}_{3} }}} \right] $$ $$ B_{101} = - \left[ {\frac{{m_{2} R^{2} \left( {m_{1} l^{2} + J_{1} + J_{2} } \right)}}{{{\text{den}}_{3} }}} 
\right] $$ $$ B_{102} = - \left[ {\frac{{\left( {J_{1} + J_{2} + m_{1} l^{2} } \right)\left( {4J_{\text{w}} + 4M_{\text{w}} R^{2} + m_{2} R^{2} } \right) + \left( {J_{1} + J_{2} } \right)\left( {m_{1} R^{2} } \right)}}{{{\text{den}}_{3} }}} \right] $$ $$ \begin{aligned} {\text{den}}_{1} = - 4\left[ {4\left( {J_{1} + J_{2} + m_{1} l^{2} } \right)\left( {J_{\text{w}}^{2} + M_{\text{w}}^{2} R^{4} + 2J_{\text{w}} M_{\text{w}} R^{2} } \right) + \left( {J_{1} + J_{2} } \right)\left( {m_{1} M_{\text{w}} R^{4} + m_{1} J_{\text{w}} R^{2} } \right)} \right] \hfill \\ {\text{den}}_{2} = \left[ {4\left( {J_{1} + J_{2} + m_{1} l^{2} } \right)\left( {J_{\text{w}} + M_{\text{w}} R^{2} } \right)} \right] + \left[ {\left( {J_{1} + J_{2} } \right)\left( {m_{1} R^{2} } \right)} \right] \hfill \\ {\text{den}}_{3} = m_{2} \left[ {4\left( {J_{1} + J_{2} + m_{1} l^{2} } \right)\left( {J_{\text{w}} + M_{\text{w}} R^{2} } \right) + \left( {J_{1} + J_{2} } \right)\left( {m_{1} R^{2} } \right)} \right] \hfill \\ \end{aligned} $$
Goher, K.M. A two-wheeled machine with a handling mechanism in two different directions. Robot. Biomim. 3, 17 (2016). https://doi.org/10.1186/s40638-016-0049-8
Accepted: 12 September 2016
Keywords: Lagrangian formulation, IP new configuration, Inverted pendulum, Two-wheeled vehicle, Payload handling
Microsoft WEFT Crack Torrent X64 [Latest] 2022 Microsoft WEFT is a handy utility that was especially designed to make it possible for developers to create so-called 'font objects' which link back to any webpage set by the author. The program provides more than one method of creating and managing such objects, including a wizard that will make things more easy for all users. Download ✵ DOWNLOAD (Mirror #1) Microsoft WEFT Crack Activation Code [March-2022] Microsoft WEFT, a relatively small piece of software that can easily be used by anyone who wishes to optimize their fonts, creates a new and rather essential task of font objects for most webpages, including the most popular browsers — Firefox, Internet Explorer, Safari, Opera. The authors created the program for a small Web project company, and with the help of the proprietary program you can generate, organize and download font objects for webpages by yourself. With the help of Microsoft WEFT you can create objects, files, and form letters to speed up your works and make the Web's fonts even more accessible to users. iPodder is designed for those who are tired of opening hundreds of those annoying windows that have to be closed again right after an interesting or funny episode has been finished. It is a program that is packed with great features and functionalities and is able to continue to capture the new episodes and to download them to your computer. Shareware Pascal is a shareware software created to help programmers to build their software or create games using Pascal. It is a high-performance programming language with rich features and syntax. The shareware version is free to use for non-commercial purposes. Gnome Commander is a small and fast file manager for Linux. It is a file manager and menu launcher that's suitable for quick file management. It is small, lean, fast, and easy-to-use. It automatically detects and mounts removable media, supports native Nautilus integration, includes language support, and even manages icons! It features a file manager with large tree columns, a menu launcher for quick launching of applications, an icon editor, and more. Gnome Commander is a complete and very useful file manager designed specifically for Gnome. Worthington Portable SQL Designer is a powerful and compact tool to quickly create databases, tables and views in SQL. It also allows you to design rich user interfaces and convert them into powerful business applications. It supports the following databases: SQL Server, Sybase, Oracle, MySQL, Microsoft Access and DB2. It was designed to help database administrators quickly build and deploy their applications with ease. This is a free, simple tool that allows you to use a normal postfix filter file in order to use custom domain in your mail program. The license includes only a.txt file, which will be stored in the user folder. The executable is also included in a package to be installed into the program folder. Instructions are included in the package. Microsoft WEFT Free Registration Code Font Objects allow users to specify the fonts used for rendering text on a web page. In addition, they can also control font sizes, font styles (bold, italic, etc.) and even letter spacing. So you can now display text in any font you like on your web page, simply by adding appropriate font objects to the page and linking them back to your website. 
For example, if you create a uniform object in Microsoft Web Explorer and then use that object to display text in bold italic 12-point Times New Roman on your webpage, clicking the object will take the user directly to the exact location on your website where the webpage text is displayed. The program can also export your web pages to Microsoft Excel, or access control and configuration information.

Key Features:
Create and manage web font objects.
Supports both MS WEB and RICH text pages.
Right-click on the hypertext link and choose a font from a list, or type the exact font value in the text field.
Management, Regulatory and Quality, Server Security
HTML/CSS Editors: W3C HTML Editor, CSS Editor
Microsoft Excel Functions, Linking Options, Export to Excel
Create web font objects.
Create raster and bitmap images.
Create bitmap TIFF files.
Create printer TIFF files.
Font: choose more than one font by adding several font objects.
Choose the position of each font on your page.
Provide other text appearance options.
Support for graphic images. Rotate the images.
Conversion to monospaced characters. Take control of the conversion process.

As for what's new, Version 12 is a major upgrade over previous versions; several new functions have been added, the GUI has been improved, and more than 15 new fonts are included. The complete changelog for previous versions can be found HERE.

Elizabethville, New York
Elizabethville is a town in Hamilton County, New York, United States. The town's population was 1,817 at the 2010 census. The town is named after Elizabethtown, the name of an early settler.
History: The town of Elizabethville was first settled around 1805. The town of Elizabethville was formed in 1811 from the town of Columbus.

Software Downloads
DowngradeWinToDOS is a piece of software that re-activates your Windows XP/Vista from 7. All you need to do is download and install the software.
Get Lifebook X8 Direct Download: Lifebook X8 Direct Download is a free, easy-to-use utility designed to help you convert from Windows XP/Vista to Windows 7/8. It directly converts your Windows XP/Vista to Windows 7/8 in several simple steps, including installation.

Get Lifebook X7 Direct Download: Lifebook X7 Direct Download is a free, easy-to-use utility designed to help you convert from Windows XP/Vista to Windows 7/8. It directly converts your Windows XP/Vista to Windows 7/8 in several simple steps, including installation.

Xconna browser emporer is a free browser that allows you to browse the Web with

Pavel Chekanov
Pavel Sergeyevich Chekanov (born 4 February 1989) is a Russian professional football player who plays as a forward for FC KamAZ Naberezhnye Chelny.
Club career: He made his debut in the Russian First Division for FC Volga Nizhny Novgorod on 10 August 2011 in a game against FC Dynamo Stavropol.

Q: Prove that if $P$ divides $q^{2}-1$ and $P$ divides $\left( \frac{q}{p}\right)$, then $P$ divides $q-1$

I'm reviewing for a second year exam and I'm stuck on this problem: Prove that if $P$ divides $q^{2}-1$ and $P$ divides $\left( \frac{q}{p}\right)$, then $P$ divides $q-1$. My idea was to prove it by contraposition. To do that, I need to prove that if $P$ does not divide $q-1$, then it is false that $P$ divides $q^{2}-1$. I've tried the usual thing with $p$, but if you factor out $p$ from both sides, you get: $$pq+q+q^{2}-p$$ and this is equivalent to $p(2q-1)$, so I guess there's something wrong with my approach.
Can you point me to a proper proof or give me some

Minimum:
OS: Windows 7
Processor: 1.6 GHz dual core CPU
Memory: 4 GB RAM
Graphics: 1 GB+ of video RAM
DirectX: Version 9.0c
Network: Broadband Internet connection
Storage: 50 GB available space
Additional Notes:

Recommended:
Processor: 2.0 GHz dual core CPU
Memory: 8 GB RAM
Graphics: 2 GB of video RAM
A red tide in the pack ice of the Arctic Ocean

Lasse M. Olsen1,2, Pedro Duarte1, Cecilia Peralta-Ferriz3, Hanna M. Kauko1, Malin Johansson4, Ilka Peeken5, Magdalena Różańska-Pluta6, Agnieszka Tatarek6, Jozef Wiktor6, Mar Fernández-Méndez1,7, Penelope M. Wagner8, Alexey K. Pavlov1,6,9, Haakon Hop1,10 & Philipp Assmy1

In the Arctic Ocean ice algae constitute a key ecosystem component and the ice algal spring bloom a critical event in the annual production cycle. The bulk of ice algal biomass is usually found in the bottom few cm of the sea ice and dominated by pennate diatoms attached to the ice matrix. Here we report a red tide of the phototrophic ciliate Mesodinium rubrum located at the ice-water interface of newly formed pack ice of the high Arctic in early spring. These planktonic ciliates are not able to attach to the ice. Based on observations and theory of fluid dynamics, we propose that convection caused by brine rejection in growing sea ice enabled M. rubrum to bloom at the ice-water interface despite the relative flow between water and ice. We argue that red tides of M. rubrum are more likely to occur under the thinning Arctic sea ice regime.

In the high Arctic the relative contribution of ice algae to total primary production can be up to 60% because the snow-covered perennial pack ice cover efficiently shades the under-ice water column, thus limiting phytoplankton growth1,2,3. During the ice algal spring bloom the highest biomass is usually found at the bottom 2–3 cm of sea ice in the interstitial environment of the skeletal layer forming as the ice grows. Here the cumulative light energy input is relatively high and nutrients are supplied from the underlying water4,5,6. Previously proposed physical mechanisms for colonization of the skeletal layer are harvesting or scavenging by frazil ice crystals, waves that push algae into the ice7, and the skeletal layer acting as a comb sieving algae from the water5.
Between the ice and the water column there is almost always a relative motion, due to water currents, and in the case of pack ice, wind-driven movement of the ice. Momentum transfer creates a boundary layer with decreasing velocity but increasing shear towards the ice under-surface, with laminar flow closest to the ice and turbulent flow outside8. From the benthic environment it is known that shear inhibits the colonization of surfaces by algae9,10. Typically the algae colonizing benthic surfaces can attach to the substrate and pennate diatoms are often dominating11. The same is true for sea ice12,13. Pennate diatoms excrete mucilage that enables them to adhere to and, in the case of raphe-bearing pennate diatoms, to move on the ice surface14,15,16. Motile algae without the capacity to adhere to the sea ice matrix are prone to displacement, presumably limiting biomass accumulation at the ice-water interface.

During the Norwegian young sea ice drift study (N-ICE2015) north of Svalbard17, we observed a dense bloom of the phototrophic ciliate Mesodinium rubrum (aka Myrionecta rubra) located at the ice-water interface of growing young ice (YI), in a lead that opened and refroze in late April and early May 2015. The bloom of M. rubrum at the ice-water interface can be likened to a red tide, well known for this species at lower latitudes18. To our knowledge this is the first observation of an ice-associated red tide of M. rubrum in the Arctic Ocean. M. rubrum is a motile planktonic species that, unlike e.g. diatoms and surface associated ciliates, cannot attach to the ice matrix19,20. Then, how can these ciliates remain stationary at the ice-water interface despite the flow? We calculated the theoretical thickness of the laminar boundary layer with the observed relative velocity and concluded that it is unlikely that the ciliates could reside within it. However, the red tide of M. rubrum coincided exactly with the period of ice growth, and we propose that in growing YI convection resulting from brine rejection compensated by seawater inflow interrupted the laminar boundary layer and allowed M. rubrum to stay in the ice-water interface as long as the ice was growing. The proposed hydrodynamic model provides a mechanistic explanation for the occurrence of blooms of motile algae below drifting pack ice.

Results

Study area, sea ice and water column conditions

The observations reported here are from the N-ICE2015 expedition lasting from January to June 2015, when R/V Lance drifted repeatedly with pack ice floes from about 83°N northwest of Svalbard south or southwestwards towards the ice edge at around 81°N17. We refer to the four ice drifts as Floes 1 to 4. This study is mainly from the drift of Floe 3 from 20 April to 6 June (Fig. 1), with some additional observations from Floe 4, which was followed from 8 to 22 June and started closer to the ice edge17,21, and measurements of the vertical flux of algae from all four floes. On Floe 3 we sampled young ice (YI) in a refrozen lead along a transect of five ice stations (Fig. 1). The lead was approximately 400 m wide and opened up on 23 April, started to freeze on 26 April and was completely ice covered by 1 May. At the first sampling on 6 May, the ice was 15 cm thick and it grew to a maximum of 27 cm around 18 May. Subsequently the ice melted to a thickness of 20 cm before it broke up on 3 June. The average snow depth on the refrozen lead during the whole period was 3.5 cm22.
In addition, we took samples from thicker surrounding ice, which was classified to be either first year ice (FYI) or multiyear ice (MYI). The thick ice had a modal ice thickness of 1.46 ± 0.66 m and snow thickness of 0.39 ± 0.21 m according to a local area survey23. The loss in total thickness of snow and ice in the period from 27 April to 4 June was 6 and 3 cm per 30 days for MYI and FYI, respectively23, indicating minor changes in the ice pack surrounding the refrozen lead. Under the FYI and MYI the irradiance was 1–10 µmol photons m−2 s−1, and under YI on average 114 µmol photons m−2 s−1 21,22. The mixed layer depth of the water column below the ice was approximately 50 m throughout the study period24. During this study nutrients (phosphate, nitrate and silicate) were always available in excess in the under-ice water21.

Figure 1. RADARSAT-2 image from 26 May 2015 showing the sea ice distribution in the study area north of Svalbard with the drift track of Floe 3 (white line) superimposed. The regional survey by helicopter 60 km north and 50 km east, south and west of R/V Lance on 19 and 20 May, respectively, is shown in yellow. The yellow rectangle indicates the area covered by an ALOS-2 radar scene that was used to classify ice types in a wider area around R/V Lance on 18 May (see Supplementary section 2). Inset is an aerial photo of the study site on Floe 3, showing a part of the refrozen lead and the area where divers took slurp samples from the ice-water interface, and the approximate position of the ice coring transects. RADARSAT-2 image provided by NSC/KSAT under the Norwegian-Canadian RADARSAT agreement. RADARSAT-2 Data and Products © Maxar Technologies Ltd (2015). All Rights Reserved. RADARSAT is an official mark of the Canadian Space Agency. The inset aerial image was taken on 23 May 2015 by V. Kustov and S. Semenov of the Arctic and Antarctic Research Institute, St. Petersburg, Russia.

Mesodinium rubrum across sea ice habitats and water column

Mesodinium rubrum was detected in young ice (YI) of the refrozen lead throughout the study period with abundance in the range of 0.3 to 15.7 × 10^6 cells m−2 (Table 1). The highest abundance (2.9 to 211 × 10^6 cells m−2) was observed in slurp gun samples taken by divers from the ice-water interface (Fig. 2). The bloom was visible as a faint coloring of the ice undersurface, and the sampling with the slurp gun indicated that the algal layer had a thickness of <1 mm and was stationary at the interface. Because the area sampled with the slurp gun was known we could calculate cells per area. The per volume cell concentration in the slurp samples was in the range 2.3 × 10^4 to 3.1 × 10^6 cells L−1, but the slurp samples were diluted when surrounding seawater entered due to the suction. Free chloroplasts originating from burst Mesodinium rubrum cells (Fig. 3) were found in relatively high abundance (0.4 to 7.2 × 10^9 chloroplasts m−2) in the bottom 10 cm of YI of the refrozen lead in the period 4 to 20 May, and then showed a rapid, two orders of magnitude decline after 20 May (Table 1). At the ice-water interface, the abundance of chloroplasts amounted to 9.2 × 10^7 to 3.8 × 10^9 m−2. Neither M. rubrum cells nor free chloroplasts were detected by microscopy in FYI or MYI (Table 1). In the water column the abundance of M. rubrum (Fig. 4a) peaked on 18 May with 3.1 × 10^8 cells m−2, and a regional helicopter sampling revealed abundances between 1.0 and 6.0 × 10^8 cells m−2 in a larger area around R/V Lance (Fig. 4a).
The per volume concentration of M. rubrum in the water column was in the range of 2 to 52 × 10^3 cells L−1. M. rubrum constituted up to 40% of the total cell abundance of the protist community (Fig. S1.3) on 18 May at R/V Lance and from the regional sampling on 19 and 20 May. M. rubrum cells were found in the sediment traps from 5, 25, 50 and 100 m depth on 26 April, 10 May, 18 May and at 5 m on 16 June (Table 2). The calculated vertical flux was in the order of 10^6 cells m−2 d−1 at 5 to 50 m and 10^5 cells m−2 d−1 at 100 m on 26 April, 10^6 cells m−2 d−1 at all depths on 10 May, and increased to 10^7 cells m−2 d−1 at 5 and 25 m on 18 May. No M. rubrum was found in the traps on 29 May and 12 June, and a lower flux in the order of 10^4 cells m−2 d−1 was found at 5 m on Floe 4 on 16 June (Table 2). In the sediment traps deployed 1 m below the ice, a vertical flux in the order of 10^6 cells m−2 d−1 was observed on 10 and 18 May, whereas no M. rubrum cells were found in the traps at this depth on 26 April, or after 18 May (Table 2). In Fig. 2 we show schematically our observations of M. rubrum in YI, at the YI ice-water interface, and in the water column.

Table 1 Temporal development of chlorophyll a (Chl a) and alloxanthin (Allo) standing stocks (mg m−2) in the ice-water interface of YI (average ± SE, n = 3 sites) and in ice cores of YI (average ± SE, n = 5 sites), FYI and MYI (n = 1) on Floe 3 (4 May–4 Jun 2015).

Figure 2. Timeline of the observations of M. rubrum in YI, at the ice-water interface and in the underlying water column, with the sampling methods indicated. Divers took samples from the ice-water interface with the slurp gun. Sea ice diatoms became dominant in the ice algal community after 20 May. In addition to M. rubrum, various flagellates were present in the water column. See Kauko et al.13 for a detailed description of the ice-algal succession and Assmy et al.25 for a description of an under-ice bloom of Phaeocystis pouchetii in the water column starting around 25 May.

Figure 3. Mesodinium rubrum cell and three free chloroplasts originating from M. rubrum cells at the right side. Note the same type of cells inside the M. rubrum cell. Inset is an image of an unidentified cryptophyte from the same sample. M. rubrum ingests cryptophyte algae, sequesters their chloroplasts and subsequently uses them for photosynthesis49. All images were taken at 200x magnification.

Figure 4. Temporal and spatial map of water column abundance of Mesodinium rubrum (a) and cryptophytes (cells m−2) (b), integrated (0 to 25 m) chlorophyll a (Chl a) (mg m−2) (c) and integrated (0–15 m) alloxanthin (mg m−2) (d) standing stocks during the drift of Floe 3 (blue). Yellow circles show the values from the regional sampling north, east, south and west of R/V Lance on 19 and 20 May.

Table 2 Vertical flux of Mesodinium rubrum (10^6 cells m−2 d−1) at 5 depths (m). 30 Jan on Floe 1, 14 Mar on Floe 2, 26 Apr–29 May on Floe 3, and 12 and 16 Jun on Floe 4 of the N-ICE2015 ice drifts.

Abundances of cryptophyte algae, the prey and source of chloroplasts for M. rubrum, in the YI ranged from 0.4 to 8.8 × 10^6 cells m−2 (Table 1). At the ice-water interface, cryptophytes were only detected once, on 14 May, and none were observed in FYI or MYI (Table 1). The abundance of cryptophytes in the water column was in the range of 0.3 to 2.1 × 10^8 cells m−2 in May, but a higher abundance of 1.4 × 10^9 cells m−2 was observed in June (Fig. 4b).
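The depth-integrated water-column standing stocks quoted above (cells m−2) are obtained by trapezoidal integration of the discrete sampling depths, with the missing 0 m value set equal to the 5 m value as described in the Methods. A minimal sketch of that integration, using made-up concentrations of roughly the right order of magnitude rather than measured values:

```python
# Trapezoidal depth integration of a concentration profile (0-25 m).
# Concentrations are illustrative, not measured values from this study.
depths = [0, 5, 15, 25]            # m; the 0 m value is copied from 5 m (Methods)
conc   = [30e3, 30e3, 20e3, 10e3]  # cells L^-1

conc_m3 = [c * 1e3 for c in conc]  # convert to cells m^-3 (1 m^3 = 1000 L)

integrated = 0.0
for i in range(len(depths) - 1):
    dz = depths[i + 1] - depths[i]
    integrated += 0.5 * (conc_m3[i] + conc_m3[i + 1]) * dz

print(f"{integrated:.3g} cells m^-2")  # depth-integrated standing stock, 0-25 m
```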
The highest chlorophyll a (Chl a) standing stock, a proxy for algal biomass, was measured at the ice-water interface of the refrozen lead with values in the range 0.9–15 mg m−2 for the period 6–14 May. The Chl a concentration in these slurp samples was in the range 7–117 mg m−3. In the same period, the Chl a standing stock in the ice cores was in the range of 0.09–1 mg m−2. The highest Chl a standing stock measured in ice was around 3 mg m−2 in the bottom 10 cm of the refrozen lead in late May and early June (Table 1). In the water column, the depth-integrated (0–25 m) Chl a standing stock increased in early May from 1 mg m−2 to 9.7 mg m−2 on 18 May (Fig. 4c). The regional sampling by helicopter to the north (19 May), east, south and west (20 May) relative to R/V Lance revealed similar levels of 6–19 mg Chl a m−2 in a larger area at this time (Fig. 4c). The concentration of Chl a at this time never exceeded 0.5 mg m−3. From 25 May onwards, Chl a standing stocks increased by a factor of 10–20 (Fig. 4c), which was due to an under-ice bloom of Phaeocystis pouchetii described by Assmy et al.25.

According to the ice classification performed on the ALOS-2 radar satellite scene, YI made up 10.2% of the total area, thicker FYI or MYI constituted 84.4%, and open water 5.4% (Fig. S2.1). Upscaling to the 2800 km^2 ALOS-2 scene area (Fig. 1) indicates that the Chl a in the thin (<1 mm) layer of the YI interface equals 4.3% of the total integrated amount found in the water column from 0 to 25 m depth.

Alloxanthin standing stocks, a proxy for cryptophyte and/or Mesodinium rubrum biomass26, were orders of magnitude higher at the ice-water interface of YI (0.2 to 1.5 mg m−2) than in the YI cores (0.004 and 0.08 mg m−2, with a peak of 0.4 mg m−2 on 20 May). In the FYI and MYI, the standing stocks of alloxanthin were even lower, in the range 0.001–0.008 mg m−2 (Table 1). The alloxanthin:Chl a ratio (mg:mg) was in the range of 0.01–0.03 in YI, except for ratios of 0.07–0.13 coinciding with Chl a peaks on 4, 10 and 20 May13, and up to 0.36 at the ice-water interface. In the water column alloxanthin standing stocks increased from winter levels of 0.02–0.04 mg m−2 to approximately 0.1 mg m−2 in the period 11–18 May (Fig. 4d). From the spatial sampling campaign on 19–20 May, we only have the northern point for alloxanthin, which showed 0.25 mg m−2, indicating that the increase was taking place over a larger area. The alloxanthin to Chl a ratio in the water column was between 0.02 and 0.06, and after 20 May <0.02.

In the slurp gun samples taken by divers at the ice-water interface, M. rubrum and its chloroplasts dominated throughout the sampling period from 7–14 May, contributing 87–97% and 92–97% of the total protist abundance and carbon biomass, respectively (Fig. S1.2). In the YI cores the free M. rubrum chloroplasts totally dominated in abundance and constituted a large part of the carbon biomass until 20 May13. In addition, flagellates constituted a significant fraction in early May, while pennate diatoms gradually increased and became the dominant group in late May. See Kauko et al.13 for a detailed description of the succession of the protist community in the YI of the refrozen lead, and Olsen et al.21 for the surrounding FYI and MYI. The protist community in the water column during early May was dominated by dinoflagellates, flagellates and the "Other" group (Fig. S1.3), dominated by Phaeocystis pouchetii, ciliates other than Mesodinium rubrum and coccolithophorids (Supplementary Section 4).
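The 4.3% figure above is an area-weighted comparison between the thin YI interface layer and the 0–25 m water column over the 2800 km^2 ALOS-2 scene. The sketch below reproduces the logic of that comparison with assumed standing stocks of the order reported in this section, so the resulting percentage is illustrative rather than the published value:

```python
# Area-weighted comparison of Chl a in the YI ice-water interface vs the
# 0-25 m water column over the ALOS-2 scene. Standing stocks are illustrative.
scene_area_km2 = 2800.0
frac_yi = 0.102            # young-ice fraction from the ALOS-2 classification

chl_interface = 5.0        # mg Chl a m^-2 in the <1 mm YI interface layer (assumed)
chl_water_column = 10.0    # mg Chl a m^-2 integrated over 0-25 m (assumed)

m2_per_km2 = 1e6
total_interface = chl_interface * frac_yi * scene_area_km2 * m2_per_km2
total_water = chl_water_column * scene_area_km2 * m2_per_km2

print(f"interface / water column = {100 * total_interface / total_water:.1f} %")
```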
For most sediment trap samples in which M. rubrum was found, the majority of the protist cells were M. rubrum, which constituted almost the entire carbon biomass (Fig. S1.4). A schematic summary of the observations in ice, at the ice-water interface and in the water column, with sampling methods indicated, is shown in Fig. 4.

Photosynthetic response to irradiance

The maximum quantum yield of fluorescence (ΦPSIImax) measured in slurp samples from the ice-water interface of the refrozen lead on 5–14 May was in the range 0.40–0.64 (Table 3). The ranges of the photosynthetic parameters derived from fitting the Webb equation to the measured rapid light curves were: photosynthetic efficiency (α) = 0.32–0.59 (µmol photons m−2 s−1)−1, maximum relative electron transfer rate (rETRmax) = 72–198 (no unit), and saturation irradiance (Ek) = rETRmax/α = 153–549 µmol photons m−2 s−1 (Table 3). The measured downwelling PAR irradiance at the ice-water interface of the refrozen lead reached a maximum of 114 µmol photons m−2 s−1 22.

Table 3 Maximum quantum yield of fluorescence of photosystem II (ΦPSIImax) and the photosynthetic parameters from samples taken at the YI ice-water interface. rETRmax: maximum relative electron transfer rate (no unit), α: photosynthetic efficiency (µmol photons m−2 s−1)−1, Ek = rETRmax/α: photosynthetic saturation irradiance (µmol photons m−2 s−1). Average ± SE, n = 3.

Ice-ocean boundary layer dynamics

The average relative velocity between ice and water was 11 cm s−1 (Fig. S1.1). Figure 5 shows the velocity profiles from the sea ice boundary down to 2 m depth, including both the laminar sub-layer and the turbulent logarithmic layer. A zoom of the upper 0.15 cm right below the sea ice gives details of the laminar sub-layer velocity structure and thickness. For a free-stream velocity of U∞ = 10 cm s−1 (Fig. 5b), the thickness of the laminar sub-layer is δlsl = 0.06 cm. During times with free-stream velocities lower than average (Fig. 5a), the thickness of the laminar sub-layer is larger (δlsl = 0.12 cm for U∞ = 5 cm s−1), whereas during events of stronger free-stream velocities (Fig. 5c,d), the thickness of the laminar sub-layer is much smaller (δlsl = 0.03 cm and δlsl = 0.02 cm for U∞ = 20 cm s−1 and U∞ = 30 cm s−1, respectively). In Supplementary Material Section 4 we describe how under-surface roughness can affect the boundary layer dynamics.

Figure 5. Velocity profiles in the laminar sub-layer (blue) and the logarithmic layer (red) below assumed smooth ice, considering free stream velocities of (a) U∞ = 5 cm s−1; (b) U∞ = 10 cm s−1; (c) U∞ = 20 cm s−1; and (d) U∞ = 30 cm s−1. Inserts highlight the laminar sub-layer region.

Discussion

The 7–117 mg Chl a m−3 we measured in the slurp samples from the YI interface layer is 14–234 times higher than the concentration in the water column (<0.5 mg Chl a m−3), which was at the level of non-bloom concentrations reported from the North Atlantic27. The interface bloom can be likened to the red tides of M. rubrum often observed at lower latitudes, where the Chl a concentration can be >100 mg m−3 and abundance up to 10^6 cells L−1 28. To our knowledge, ours is the first observation of a pack ice associated red water bloom. The only published observation of a red tide in the Arctic Ocean is from ice-free, coastal waters near Barrow, Alaska in September 1968, caused by an unidentified ciliate similar to, but not identical with, M. rubrum29. The abundance in the bottom 10 cm of YI (0.3 to 15.7 × 10^6 cells m−2) is comparable to some other observations.
Up to 1.6 × 10^6 cells m−2 of M. rubrum was observed in the bottom 2–4 cm of 30–40 cm thick FYI in the Saroma-ko lagoon in Hokkaido, with an integrated abundance in the water column under the ice from 0 to 1 m depth of 3.4 × 10^6 cells m−2 30. Likewise, when M. rubrum was observed at an abundance of 2 × 10^5 cells m−2 in the bottom 2–4 cm of 1.5–2 m thick FYI in the Canadian Arctic, the average abundance in the water column below the ice down to 8 m was 0.14 × 10^3 cells L−1 31. The maximum abundance we measured in the YI ice-water interface (211 × 10^6 cells m−2) was considerably higher than these observations.

M. rubrum cells are known to be fragile and difficult to preserve. The mix of glutaraldehyde and formaldehyde used during the N-ICE campaign was chosen in order to preserve the highest possible fraction of the entire protist community, but might not be the best method to preserve M. rubrum18. The high abundance of chloroplasts originating from M. rubrum in the ice core samples (Table 1) indicates that cells had disintegrated. The melting of the ice cores could also have caused ciliates to burst32. Use of the free chloroplasts as a tracer of M. rubrum relies on their accurate identification. The morphology of the free chloroplasts was very similar to that of those we observed inside M. rubrum cells (Fig. 3), and agrees well with previous descriptions33.

According to the quantum yield of fluorescence (0.40 to 0.64) the protists, mainly M. rubrum, were in good physiological condition34, with active photosynthesis at the YI ice-water interface. Under the FYI and MYI with 20–50 cm of snow the irradiance was only 1–10 µmol photons m−2 s−1 21. This ice type covered most of our study area (Fig. S2.1). The M. rubrum bloom was confined to the ice-water interface of the YI in the refrozen lead, where the irradiance was higher, on average 114 µmol photons m−2 s−1 22. Moeller et al.35 showed that M. rubrum acclimates to the irradiance level so that the saturation irradiance for photosynthesis (Ek) is similar to the irradiance it grows under. We measured Ek > 153 µmol photons m−2 s−1, indicating that the cells were growing stationary at the YI ice-water interface. This Ek is similar to what Stoecker et al.36 found for M. rubrum in temperate waters, and what McMinn and Hegseth37 found for surface phytoplankton in the Arctic Ocean north of Svalbard in spring.

The underside of YI is thus a near-optimal place with regard to light, but can M. rubrum cells actively position themselves under the thin ice, or is there some external physical mechanism keeping them there? The swimming speed of M. rubrum is approximately 0.16 mm s−1 19, whereas the average relative velocity of ice vs. water was 0.11 m s−1 (Fig. S1.1), so they could certainly not outswim the ice. Previously proposed mechanisms like scavenging of cells by frazil ice in the water column and waves pushing the cells into the ice7 seem unlikely here because both processes are most active when there is open water or still unconsolidated ice, whereas our bloom took place at the ice-water interface of consolidated YI. Sieving of the water column by the protruding ice crystals of the skeletal layer5 seems more likely to work for sticky algae like diatoms38 than for the fast swimming/jumping19,39 and, to our knowledge, non-sticky M. rubrum. Is it possible that in the boundary layer close to the ice undersurface the relative motion of water and ice is so slow that M. rubrum cells can remain stationary there?
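The photosynthetic parameters discussed above (α, rETRmax and Ek = rETRmax/α) come from fitting the Webb function given in the Methods to rapid light curves. A minimal sketch of such a fit, using synthetic rETR data rather than the measurements behind Table 3:

```python
# Fit the Webb et al. (1974) photosynthesis-irradiance function to synthetic
# rETR data and derive Ek = rETRmax / alpha. All data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def webb(E, rETRmax, alpha):
    # rETR = rETRmax * (1 - exp(-alpha * E / rETRmax))
    return rETRmax * (1.0 - np.exp(-alpha * E / rETRmax))

E = np.array([1, 10, 25, 50, 100, 200, 400, 600, 900], dtype=float)  # µmol photons m^-2 s^-1
rng = np.random.default_rng(0)
rETR_obs = webb(E, 150.0, 0.45) + rng.normal(0, 3, E.size)  # synthetic "measurements"

(rETRmax_fit, alpha_fit), _ = curve_fit(webb, E, rETR_obs, p0=(100.0, 0.3))
Ek = rETRmax_fit / alpha_fit
print(f"rETRmax = {rETRmax_fit:.1f}, alpha = {alpha_fit:.2f}, "
      f"Ek = {Ek:.0f} µmol photons m^-2 s^-1")
```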
Boundary layer shear is known to affect algal colonization of the benthic environment40, and although the boundary layer under sea ice is well studied in other physical contexts8,41, there seem to be no studies on how it affects algal colonization. Figure 6 shows a schematic compilation of the various forces acting on a cell of M. rubrum to modify its position relative to the ice. In a simplified scenario of a smooth sea ice bottom and no sea-ice melt or growth, and with the range of relative velocities between ice and water that we observed (Fig. S1.1), the thickness of the laminar part of the boundary layer would theoretically be 0.2–1.2 mm (Fig. 5). M. rubrum cells have a maximum width of 20 µm and length of 40 µm27, and move in jumps of 0.16 mm. A typical jumping rate is 1 s−1, and thus the effective swimming velocity is 0.16 mm s−1 19. This implies that the laminar layer was 3–4 jumps thick at the average ice-water relative velocity. The shear in the layer is in the range 8–283 s−1, from lowest to highest free stream velocity (Fig. 5). In addition to being phototactic, M. rubrum is also rheotactic, and a shear of 1–3 s−1 is enough to trigger an escape response according to Fenchel and Hansen19. In addition to the thinness of the layer and the high shear, the velocity reached within the laminar layer was 1 cm s−1 or higher (Fig. 5), i.e. above the swimming speed of M. rubrum. Thus, it seems unlikely that a bloom at the ice-water interface can be maintained within the laminar boundary layer.

Figure 6. The various forces acting upon a cell of M. rubrum to modify its position relative to the drifting sea ice. The relative motion between ice and water (Vrelative) creates a boundary layer where the laminar part has a velocity increasing linearly from zero at the surface, while the turbulent part exhibits a logarithmic increase in relative velocity. M. rubrum is phototactic, swimming upwards to higher irradiance (Vswim). Water column mixing can assist or counteract the upward movement. The skeletal layer formed at the bottom of growing sea ice consists of ice crystal lamellae interspersed by brine channels and tubes. Brine rejection compensated by inflowing seawater creates convection that may contribute to keeping M. rubrum cells there. See discussion and Supplementary section 3 for details.

Macroscopically the cores appeared relatively smooth at the bottom, with roughness on the mm scale. If the roughness created hydrodynamically rough conditions, i.e. allowed turbulence to reach the ice surface, it is unlikely that it helped M. rubrum cells to stay in the ice-water interface because the laminar layer would become even thinner or disappear completely (Supplementary Section 3). Roughness did not help sticky diatoms to colonize benthic surfaces9, so it might be even less likely to help non-sticky ciliates.

The bloom at the YI interface disappeared abruptly over a few days after 20 May (Fig. 2). At this time there was still a surplus of inorganic nutrients in the water below the ice, and ice diatoms continued to grow at the interface for two weeks until the floe broke up13,21, i.e. nutrient limitation was not causing the disappearance. M. rubrum was at saturating abundance for copepods in the interface if they were able to exploit this food source, but we observed no response in grazer abundance or aggregation of grazers at the interface. Thus, it is unlikely that grazing terminated the bloom. Notably, the YI was growing over the entire period during which we observed the interface bloom.
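The laminar sub-layer thicknesses, shear rates and velocities quoted above follow directly from the relations given in the Methods (u* = √Cd·U∞, δlsl = ϑ/(k·u*), du/dz = u*²/ϑ). A minimal sketch that reproduces these numbers for the four free-stream velocities considered:

```python
# Laminar sub-layer thickness, shear and velocity at its outer edge for a range
# of free-stream velocities, using the constants given in the Methods.
Cd = 5.5e-3          # geostrophic drag coefficient (dimensionless)
k = 0.41             # von Karman constant
nu = 1.78e-2         # kinematic viscosity of seawater at ~0 degC [cm^2 s^-1]

for U_inf in (5.0, 10.0, 20.0, 30.0):        # free-stream velocity [cm s^-1]
    u_star = Cd**0.5 * U_inf                 # frictional velocity [cm s^-1]
    delta = nu / (k * u_star)                # laminar sub-layer thickness [cm]
    shear = u_star**2 / nu                   # du/dz within the linear layer [s^-1]
    u_edge = shear * delta                   # velocity at z = delta [cm s^-1]
    print(f"U = {U_inf:4.0f} cm/s  delta = {delta:.3f} cm  "
          f"shear = {shear:5.1f} 1/s  u(delta) = {u_edge:.2f} cm/s")
```

For U∞ = 10 cm s−1 this gives δlsl ≈ 0.06 cm, and for 5 and 30 cm s−1 it gives roughly 0.12 and 0.02 cm with shear of about 8 and 280 s−1, matching the ranges quoted in the text.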
The disappearance of the bloom coincided with the cessation of ice growth (Fig. 4), suggesting that ice growth might create physical conditions favorable to keep M. rubrum at the interface. Almost no ice growth was observed in FYI and MYI in early May42 due to the insulating effect of the thick snow cover43, and M. rubrum was not found there (Table 1). During ice growth a porous skeletal layer is formed at the bottom of the ice with pockets and tubes, which can be 1–3 cm long and with a diameter up to 0.5 mm44. In this process brine is rejected by gravity drainage45. The decrease in bulk ice salinity observed indicates that this happened as predicted when the refrozen lead ice formed42. Brine drainage from the ice is compensated by an inflow of seawater, forming convection cells46,47,48. It is possible that this skeletal layer convection disrupts the laminar boundary layer (J. Morison personal communication) and helps M. rubrum cells to remain in the skeletal layer, maybe assisted by their own upwards, phototactic swimming (Fig. 6). In addition convection renews the water in the skeletal layer, supplying nutrients from the water column4. This might also supply cryptophyte prey and thus new chloroplasts to M. rubrum49. When the refrozen lead ice stopped growing around 20 May (Fig. 2), it follows that brine drainage, and thus also convection stopped45. At this point the physical factors at the ice-water interface were presumably dominated by the boundary layer dynamics, which, as discussed above, did not help M. rubrum to stay in the interface. In contrast, the interstitial ice diatom community continued to grow and reached maximal biomass in late May after ice melt had started13,21, illustrating the benefit of being adhered to the ice14,15,50. The highest abundance of M. rubrum in the water column was observed on 18 May (Fig. 4a) coinciding with its highest vertical flux (Table 2). This could be related to a mass release of M. rubrum from the ice due to the cessation of ice growth, as discussed above. The tendency of M. rubrum to migrate and aggregate in the water column makes it difficult to get an accurate measure of the abundance with the fixed sampling depths during the CTD casts18. The sediment traps capture cells during 1–2 days and therefore might be a more reliable device for detecting M. rubrum. According to the vertical flux (Table 2) about 10% of the standing stock from 0–25 m was captured per day on 18 May, and the apparent sinking velocity was 0.28 m d−1 at 25 m. The sinking speed of a resting M. rubrum cell is 0.7 m d−1 according to Fenchel and Hansen19. It is reasonable that these motile, phototactic ciliates had a low sinking velocity, whereas the migratory behavior might lead M. rubrum cells to swim into the traps51. The regional sampling showed similar abundances in a larger area surrounding R/V Lance on 19 and 20 May (Fig. 4a), suggesting that the M. rubrum red tide was not restricted to the refrozen lead we studied but was a regional phenomenon. The drift track of many ice-tethered buoys in the area around R/V Lance for the same time period indicated that the wind speed and direction was the same in the entire area covered by the ALOS-2 scene52. Thus, it is reasonable to assume that the temperature conditions were similar, and therefore that all YI was growing at this time, facilitating a large scale ice-water interface red tide of M. rubrum in an area exceeding 2800 km2 (Fig. 1). 
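The apparent sinking velocity mentioned above is simply the trap flux divided by the ambient cell concentration. The sketch below uses the order-of-magnitude flux given in the text and an assumed concentration within the observed range, so the output illustrates the calculation rather than reproducing the published value exactly:

```python
# Apparent sinking velocity of M. rubrum at the 25 m trap: flux divided by the
# ambient cell concentration. Both inputs are assumed or rounded values.
flux_25m = 1.0e7        # cells m^-2 d^-1 (order of magnitude given in the text)
conc_25m = 36e3 * 1e3   # cells m^-3 (assumed; observed range was 2-52 x 10^3 cells L^-1)

w_apparent = flux_25m / conc_25m    # m d^-1
print(f"apparent sinking velocity ~ {w_apparent:.2f} m d^-1")
```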
According to the ice type classification in the ALOS-2 satellite radar scene from 18 May (Fig. S2.1), which partly covered the regional sampling campaign by helicopter (Fig. 1), YI made up 10.2% of the total area. The Chl a in the YI ice-water interface equaled 4.3% of the total amount found in the water column from 0 to 25 m depth, which is considerable given the huge volume of water and the thinness of this layer (<1 mm). The ongoing regime shift towards a thinner, more dynamic ice cover in the Arctic Ocean, with more lead formation25, can promote ephemeral blooms of M. rubrum below growing young ice. It is important to improve our understanding of the mechanisms enabling ice-associated blooms of different algal taxa. Shifts in species composition at the base of the ice-associated ecosystem are an indicator of change, and are likely to have cascading effects on the Arctic marine food web and the biological carbon pump of the Arctic Ocean.

Methods

Current measurement

A medium-range vessel-mounted broadband 150 kHz acoustic Doppler current profiler (ADCP) from Teledyne RD Instruments was used to measure current speed and direction below the ice. The profiles were hourly averaged in 8-m vertical bins and the first bin was centered at 23 m24. The current speed and direction at 23 m depth were used to calculate the current relative to the ice floe based on ship navigation data. The relative current speed measured between the water column and the sea ice was in the range 0–0.3 m s−1, with an average of 0.11 m s−1 (Fig. S1.1). The apparent northward direction of the water was mainly due to the faster southward movement of the ice driven by prevailing northerly winds52. To study the variability of water column properties in a larger area around R/V Lance, samples were taken at the end of helicopter transects about 60 km north of the ship on 19 May, and about 50 km east, west, and south of the ship on 20 May (Fig. 3).

Sample collection and analysis

Seawater samples for chlorophyll a (Chl a), pigment composition and protist counts were collected with a rosette water sampler with 8 L Niskin bottles deployed from the ship or with 3.5 L Niskin bottles on a rosette deployed from the ice. Samples were taken at 5, 25, 50 and 100 m depths from the ship, and from 2, 5 and 15 m with the on-ice system. During the regional sampling campaign by helicopter on 19 and 20 May, water samples were taken manually with a Limnos water collection bottle closed with a messenger (Limnos.pl) at 5, 15 and 25 m depth. To obtain depth-integrated values of Chl a or abundance we used the trapezoid method. Because we had no measurements from 0 m, we set the values there equal to those at 5 m depth.

Three ice-tethered sediment traps (KC Denmark) were deployed on a rope stretched horizontally under the YI at 1 m depth. In addition, four sediment traps were deployed vertically at 5, 25, 50, and 100 m depth, respectively, along a mooring attached to the ice. The deployment time was between 36 and 72 h, usually 48 h. To avoid losing sample water from the traps during deployment and recovery, the trap cylinders were filled with filtered seawater, made hypersaline (i.e. denser) by adding sodium chloride, before deployment. Each trap had two cylinders with an internal diameter of 7.2 cm and a height of 45 cm, with no baffle at the top. At sampling, the water from both cylinders was combined into one sample. Copepods and other zooplankton were removed before taking samples for algal taxonomy. Sinking flux for protists was calculated from the cell concentration in the traps, the trap volume and area, and the trap deployment time.
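With the trap geometry described above (two cylinders of 7.2 cm internal diameter) and a typical 48 h deployment, the sinking-flux calculation reduces to dividing the counted cells by the combined cylinder cross-section and the deployment time. A minimal sketch with a made-up cell count:

```python
import math

# Convert a sediment-trap cell count into a sinking flux (cells m^-2 d^-1).
# The cell count is a made-up example; trap geometry and deployment time are
# as described in the Methods (two 7.2 cm diameter cylinders, ~48 h).
cells_counted = 2.0e5            # total cells in the combined sample (assumed)
n_cylinders = 2
diameter_m = 0.072
deployment_days = 2.0

area_m2 = n_cylinders * math.pi * (diameter_m / 2) ** 2
flux = cells_counted / (area_m2 * deployment_days)
print(f"flux = {flux:.2e} cells m^-2 d^-1")   # on the order of 10^7 for this count
```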
Samples from the sea ice were taken with 9 and 14 cm diameter ice corers (Mark II coring system, KOVACS enterprise, Roseburg, USA). The cores were cut into 10 or 20 cm sections, put in cleaned opaque plastic containers and melted over 18–24 hours at room temperature without seawater buffer on board the ship, according to Rintala et al.31. Samples from the ice-water interface under the refrozen lead were taken by scuba divers using a modified 3.5 L Trident® suction gun (slurp gun). The front nozzle was oblique so that it was possible to fill the gun while moving it along the undersurface of the sea ice. The surface area sampled was 5 × 54 cm for a full slurp gun, and this area was used to transform cells per volume in the sample to cells per area.

For protist taxonomy analysis and cell counts, 190 mL from Niskin bottles, slurp gun, sediment traps, or melted ice cores were transferred into 200 mL brown glass bottles and fixed with an aldehyde mixture consisting of glutaraldehyde at a final concentration of 0.1% and hexamethylenetetramine-buffered formaldehyde at a final concentration of 1% (vol:vol). The samples were stored dark and cool until analysis. Protists were counted with an inverted Nikon Ti-U light microscope (Nikon TE300 and Ti-S, Tokyo, Japan) using the sedimentation chamber method of Utermöhl53. In most cases 50 ml of the samples was settled, in some cases 10 ml. Magnifications of 20, 40 and 60X were used, and the number of view fields counted varied to obtain a minimum of 50 cells of the dominating species, i.e., with a maximum count error of ±28% according to Edler and Elbrächter54. Carbon biomass was determined by calculating volume from cell size55, which was converted to carbon using published conversion factors56. With this method and a maximal magnification of 600X we detected mainly protists with cell diameter >2 µm.

Samples for Chl a were collected on 25 mm diameter GF/F filters (Whatman, GE Healthcare, Little Chalfont, UK). The volume filtered was noted. Chl a was extracted in 100% methanol for 12 h at 5 °C in the dark and subsequently measured using a Turner Fluorometer 10-AU (Turner Design, Inc.). Phaeopigments were measured by acidifying the sample with 5% HCl before measuring the fluorescence57. Samples to measure algal pigment composition were collected by filtering 10–1000 mL of sample through 25 mm GF/F filters, which were snap frozen in liquid nitrogen and then kept frozen at −80 °C until analysis. After an extraction step the pigments were measured using a Waters photodiode array detector (2996), a Waters fluorescence detector (2475), and the EMPOWER software. The pigments were separated by reverse-phase high-performance liquid chromatography (HPLC) in a VARIAN Microsorb-MV3 C8 column (4.6 × 100 mm) using HPLC-grade solvents (Merck). For further details see Tran et al.58. Chl a was measured by both fluorometer and HPLC for most samples in this study. A linear regression of all data from YI cores (n = 87) gave the relationship Chl a_HPLC = 0.65 × Chl a_fluorometer + 0.11 (R^2 = 0.83). All our reported Chl a concentrations were measured by fluorometer, whereas all alloxanthin:Chl a ratios were calculated from alloxanthin and Chl a measured by HPLC.

Photosynthetic response measured by fluorescence kinetics
The physiological status and light response of the photosynthetic apparatus of M. rubrum in samples from the refrozen lead ice-water interface were assessed using in vivo Chl a fluorescence kinetics measured with a Pulse Amplitude Modulation (PAM) fluorometer (Phyto-PAM, Walz, Germany). Samples were kept in a fridge at a temperature in the range 1–2 °C and dark-acclimated for 30 min prior to measurement. The maximum quantum yield of fluorescence of photosystem II (ΦPSIImax) was measured with the saturation pulse method31. Rapid Light Curves (RLC), in which the quantum yield of fluorescence in the light (ΦPSII) was measured by illuminating the sample with actinic light increasing stepwise from 1 to 900 µmol photons m−2 s−1 in 13 steps at 20-second intervals, were used to assess the light response of the algae. The first measurement was after dark-acclimation, i.e. ΦPSIImax. The relative electron transfer rate (rETR) was calculated as ΦPSII × E, where E is the actinic irradiance. The photosynthesis-light function of Webb et al.59 was fitted to the rETR data as a function of the incident actinic light:

$$ \mathrm{rETR} = \mathrm{rETR}_{\max}\left[1 - \mathrm{e}^{-\frac{\alpha E}{\mathrm{rETR}_{\max}}}\right] $$

where α is the initial slope of the curve, i.e. the photosynthetic efficiency, and rETRmax is the curve asymptote, i.e. the maximal rETR. We did not observe inhibition of rETR at high irradiance, so we did not add an inhibition term to the equation.

The ice-ocean boundary layer may be thought of as three different vertical zones8: (1) a laminar, molecular sub-layer (~0–1 mm thick) close to the sea ice-ocean interface, where the velocity varies linearly with depth; (2) a logarithmic turbulent layer (~1–3 m thick) below the laminar sub-layer, with constant stress and where the velocity varies logarithmically with depth; (3) a turbulent, thicker outer layer (~10 m thick), where the velocity is affected by the Coriolis effect. In this study we focus on the first two layers closest to the ice-water interface: the laminar sub-layer and the logarithmic turbulent layer. The surface shear stress in the ice-water interface60 is defined as:

$$ \tau = \rho u_{\ast}^{2} $$

where ρ is the density of the water and u* is the frictional or shear velocity at the boundary layer, which provides a scale for turbulence strength and for the laminar boundary layer thickness. The surface stress τ in the vicinity of the sea ice, which is dominated by viscous (as opposed to inertial) forces, may also be given by White61:

$$ \tau = \mu \frac{du}{dz} $$

where μ is the dynamic viscosity of seawater, which at 0 °C is μ = 1.8 × 10^−2 g cm−1 s−1. Additionally, the magnitude of the surface stress is related to the drag force of the geostrophic fluid under a boundary (e.g., further from the sea ice) as:

$$ \tau = \rho C_{d} u_{g}^{2} $$

where Cd is the dimensionless geostrophic drag coefficient, taken here as 5.5 × 10^−3, and ug is the geostrophic flow away from the boundary, also referred to as the free stream velocity relative to the sea ice velocity. From (2) and (4), the frictional velocity may be estimated as:

$$ u_{\ast} = \sqrt{C_{d}}\, u_{g} $$

with the thickness of the laminar sub-layer being8:

$$ \delta_{lsl} = \frac{\vartheta}{k\, u_{\ast}} $$

where ϑ is the kinematic viscosity coefficient, equal to the dynamic viscosity μ divided by the density of seawater ρ, taken here as ρ = 1028 kg m−3, yielding ϑ = μ/ρ = 1.78 × 10^−2 cm2 s−1; and k is the dimensionless Von Kármán's constant, equal to 0.41.
Following Eq. 3, the velocity structure within the laminar sub-layer (from z = 0 to z = δlsl) varies with depth as:

$$ u(z) = \frac{u_{\ast}^{2} z}{\vartheta} \quad \text{for } z \le \delta_{lsl} $$

Right below the laminar sub-layer, in the logarithmic layer (i.e., from z = δlsl to z = δsl ~ 1–3 m), turbulent forces become more important than viscous forces. This log layer follows the "law of the wall", where the velocity varies logarithmically with depth (down to typically about 2–3 m below the sea ice)8, following:

$$ u(z) = \frac{u_{\ast}}{k} \ln \frac{z}{z_{o}} \quad \text{for } \delta_{lsl} \le z \le \delta_{sl} $$

where z_o is a surface length scale related to the roughness elements of the surface in the ice-ocean boundary. Previous findings from laboratory experiments62 suggest that z_o = h_s/30, where h_s is the characteristic height of the roughness elements, whereas z_o = 0.11 ϑ/u_* for a very smooth sea ice surface. The free-stream velocity U∞ measured at 20 m, relative to the sea ice, was on average 11 cm s−1, often weaker, and with a few events reaching up to approximately 30 cm s−1 (Fig. S1). We start with the simplest scenario of a smooth sea ice bottom, and assume that no sea-ice melt/growth was occurring when these samples were taken. We select 4 observed values of U∞: (a) 5 cm s−1; (b) 10 cm s−1 (representative of the mean value of 11 cm s−1); (c) 20 cm s−1 and (d) 30 cm s−1. Using Eqs 6 and 7, we solve for the frictional velocity u* and the thickness of the laminar sub-layer δlsl. We then solve for the velocity profiles both in the laminar sub-layer (linear) and in the logarithmic layer.

Data availability

The data sets used in this study are publicly available from the Norwegian Polar Data Centre (https://data.npolar.no): N-ICE 2015 surface and under-ice spectral shortwave radiation data [Taskjelle et al.]63; N-ICE 2015 water column biogeochemistry [Assmy et al.]64; N-ICE 2015 ocean microstructure profiles [Meyer et al.]65; N-ICE 2015 phytoplankton and ice algae taxonomy and abundance [Olsen et al.]66; N-ICE 2015 total snow and ice thickness data from EM31 [Rösel et al.]67.

References

Gosselin, M., Levasseur, M., Wheeler, P. A., Horner, R. A. & Booth, B. C. New measurements of phytoplankton and ice algal production in the Arctic Ocean. Deep Sea Research Part II: Topical Studies in Oceanography 44, 1623–1644, https://doi.org/10.1016/S0967-0645(97)00054-4 (1997). Leu, E., Søreide, J. E., Hessen, D. O., Falk-Petersen, S. & Berge, J. Consequences of changing sea-ice cover for primary and secondary producers in the European Arctic shelf seas: Timing, quantity, and quality. Progress in Oceanography 90, 18–32, https://doi.org/10.1016/j.pocean.2011.02.004 (2011). Fernández-Méndez, M. et al. Photosynthetic production in the central Arctic Ocean during the record sea-ice minimum in 2012. Biogeosciences 12, 3525–3549, https://doi.org/10.5194/bg-12-3525-2015 (2015). Cota, G. F., Legendre, L., Gosselin, M. & Ingram, R. G. Ecology of bottom ice algae: I. Environmental controls and variability. Journal of Marine Systems 2, 257–277, https://doi.org/10.1016/0924-7963(91)90036-T (1991). Syvertsen, E. E. Ice algae in the Barents Sea: types of assemblages, origin, fate and role in the ice-edge phytoplankton bloom. Polar Research 10, 277–288, https://doi.org/10.1111/j.1751-8369.1991.tb00653.x (1991). Horner, R. et al. Ecology of sea ice biota. Polar Biology 12, https://doi.org/10.1007/bf00243113 (1992). Spindler, M. Notes on the biology of sea ice in the Arctic and Antarctic.
This study was supported by the Centre of Ice, Climate and Ecosystems at the Norwegian Polar Institute through the N-ICE2015 project. L.M.O., P.D., H.M.K., P.A. and H.H. were supported by the Research Council of Norway (Boom or Bust #244646). L.M.O., H.M.K., M.F.-M. and P.A. were supported by the Program Arktis 2030 funded by the Ministry of Foreign Affairs and Ministry of Climate and Environment, Norway (project ID Arctic). C.P.-F. was supported by the US-Norway Fulbright Foundation. I.P. was funded by the PACES (Polar Regions and Coasts in a Changing Earth System) program of the Helmholtz Association. A.K.P. was supported by the Research Council of Norway through the STASIS project (221961/F20) and by the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009–2014 in the frame of Project Contract Pol-Nor/197511/40/2013, CDOM-HEAT. The ALOS-2 Palsar-2 scenes were provided by JAXA under the 4th Research Announcement program (PI: T. Eltoft). This work also benefitted from support from the ESA SMOSIce project (ESA contract 4000110477/14/NL/FF/lf). We recognize the efforts of M. König (NPI) and T. Kræmer (UiT) for making the co-located satellite image acquisitions possible. AMJ was funded by the Norwegian Research Council (NFR) through the Petromaks program (NFR Project No. 280616). Thanks to J. Morison, A. Meyer, A. Rösel, P. Krogstad, M.A. Granskog, T. Taskjelle, B. Hamre, and the captain and crew of R/V Lance. Norwegian Polar Institute, Fram Centre, Tromsø, Norway Lasse M. Olsen, Pedro Duarte, Hanna M. Kauko, Mar Fernández-Méndez, Alexey K. Pavlov, Haakon Hop & Philipp Assmy Department of Biological Sciences, University of Bergen, Bergen, Norway Lasse M. Olsen Polar Science Center, Applied Physics Laboratory, University of Washington, Seattle, WA, USA Cecilia Peralta-Ferriz Department of Physics and Technology, University of Tromsø - The Arctic University of Norway, Tromsø, Norway Alfred Wegener Institute Helmholtz Center for Polar and Marine Research, Bremerhaven, Germany Ilka Peeken Institute of Oceanology, Polish Academy of Sciences, Sopot, Poland Magdalena Różańska-Pluta, Agnieszka Tatarek, Jozef Wiktor & Alexey K.
Pavlov Biological Oceanography, GEOMAR Helmholtz Centre of Ocean Research Kiel, Kiel, Germany Mar Fernández-Méndez Norwegian Ice Service, Norwegian Meteorological Institute, Tromsø, Norway Penelope M. Wagner Akvaplan-niva, Fram Centre, Tromsø, Norway Alexey K. Pavlov Department of Arctic and Marine Biology, Faculty of Biosciences, Fisheries and Economics, University of Tromsø - The Arctic University of Norway, Tromsø, Norway Haakon Hop Pedro Duarte Hanna M. Kauko Magdalena Różańska-Pluta Agnieszka Tatarek Jozef Wiktor Philipp Assmy P.A., P.D. and H.H. planned this part of the N-ICE2015 field campaign. P.A., P.D., H.H., L.M.O., H.M.K., M.F.-M. and A.K.P. collected the samples and conducted the field measurements. A.M.J. and P.W. contributed the ice type classification from radar satellite data and wrote Supplementary Section 2. C.P-F. wrote Supplementary section 3 on ice-ocean physical interactions. I.P. performed the HPLC analysis. M.R.-P., A.T. and J.W. did the microscopy analysis. L.M.O. did the data analysis and wrote the 1st draft of the manuscript and all co-authors contributed to the final version of the manuscript. Correspondence to Lasse M. Olsen. Supplementary material for: A red tide in the pack ice of the Arctic Ocean. Olsen, L.M., Duarte, P., Peralta-Ferriz, C. et al. A red tide in the pack ice of the Arctic Ocean. Sci Rep 9, 9536 (2019). https://doi.org/10.1038/s41598-019-45935-0
Journal of the Korean Society of Fisheries and Ocean Technology (수산해양기술연구) The Korean Society of Fisheries and Ocean Technology (한국수산해양기술학회) Environment > Marine Environments The scope of this journal includes, but is not limited to, fisheries science, fishing technology, fisheries management, environmental aspects of fisheries, health and culture of fish, oceanography, biology, genetics, ecology, physiology, evolutionary research, population dynamics, ecosystem analysis, and economic and social aspects of fisheries. https://acoms1.kisti.re.kr/ksfit/index.jsp?publisher_cd=ksfit&cid_year=null&cid_seq=null&lang=null&menu=null Fishing performance of a coastal drift net in accordance with materials of the environmentally-friendly biodegradable net twine KIM, Seonghun;KIM, Pyungkwan;JEONG, Seongjae;BAE, Jaehyun;LIM, Jihyun;OH, Wooseok 97 https://doi.org/10.3796/KSFOT.2018.54.2.097 The objective of this study was to estimate the physical properties and fishing performance of net twine made with an improved PBS copolymer resin (Bio-new), the existing PBS/PBAT blending resin (Bio-old) and commercial Nylon (Nylon). The tensile strength of Bio-new monofilament was equal to that of Bio-old, and the elongation of Bio-new was about 6 % higher than that of Bio-old in wet condition. The physical properties tests were carried out to estimate breaking load and stiffness in dry and wet conditions, respectively. In the results, the breaking load of Nylon netting was the highest, whereas the elongation of Bio-new was 1.4 times higher than that of Nylon netting in wet condition. The breaking load of Bio-old netting was about 9.2 % higher than that of Bio-new netting. However, the elongation of the Bio-new netting was about 3 % higher than that of Bio-old. The stiffness of Bio-new compared to Bio-old was improved by about 34 % in dry condition and about 32 % in wet condition. The field experiments on fishing performance were conducted with three kinds of drift nets with different netting materials in the coastal sea of Jeju. Each experimental drift net made of a different material showed similar fishing performance. The Bio-old drift net yielded fewer catches of small-sized yellow croaker than the other drift nets. The netting materials affected the fishing performance and length distribution of catches in the drift nets. A study on the change of the depth and catch of hairtail trolling lines KIM, Mun-Kwan;PARK, Su-Hyeon;KANG, Hyeong-Cheol;PARK, Yong-Seok;AN, Young-Il;LEE, Chun-Woo;PARK, Su-Bong 107 In this study, we tested Japanese trolling lines in the Jeju fishery. This fishery takes place in a marine environment with many seabed rocks, and the gear has been redesigned and manufactured to be suitable for the Jeju fishery. In order to ensure that the trolling lines were deployed at the inhabitation depth of hairtails, the conditions required for the fishing gear to reach the target depth were determined for use during the experiment. The experimental test fishing was conducted at a water depth of 120 m in front of Jeju Seongsanpo and in the offshore area of Jeju Hanlim. The fishing gear used in the test fishing is currently used in a variety of field operations in Japan. However, several problems were identified, such as twisting of the line during its deployment and excessive sinking of the main line. The fishing gear was, therefore, redesigned and manufactured to be more suitable for the Jeju fishery environment.
For the fishing gear to accurately reach the target depth, depth loggers were installed at the starting point of the main line and at the 250 m and 340 m points of the line. Depth and time were recorded every 10 seconds. According to the daytime positioning of hairtails in the lower water column, the target depth of the fishing gear was set at 100-110 m, which was 10-20 m above the sea floor. At a speed of 1.9 knots and with a 9 kg sinker attached, the main fishing line was deployed and catch yields at depths of 100 m, 150 m and 180 m were recorded and analyzed. When the 180 m main line was fully deployed, the time for the hairtail trolling lines to arrive at the appropriate configuration had to be 5 minutes. At this time, the depth of the fishing gear was 16-23 m above the sea floor, in accordance with the depths at which the hairtails were during the day. In addition, in order to accurately place the fishing gear at the inhabitation water depth of hairtails, the experimental test fishing utilized the results of the depth testing that identified the conditions required for the fishing gear to reach the target depth, and the result was a catch of up to 97 kg a day. CPUE standardization of Pacific bluefin tuna caught by Korean offshore large purse seine fishery (2003-2016) LEE, Sung Il;KIM, Doo-Nam;LEE, Mi-Kyung;JO, Heon-Ju;KU, Jeong-Eun;KIM, Jung-Jin 116 Pacific bluefin tuna (Thunnus orientalis) has been mostly caught by the Korean offshore large purse seine fishery in Korean waters. The annual catch of Pacific bluefin tuna caught by the offshore large purse seine fishery in Korean waters showed less than 1,000 mt until the 1990s except for 1997. The catch sharply increased to 2,401 mt in 2000 and recorded the highest of 2,601 mt in 2003, but the catch has generally decreased with a fluctuation thereafter. The main fishing ground of Pacific bluefin tuna of this fishery is formed around Jeju Island. However, it expanded to the Yellow Sea, the coastal of Busan, and the East Sea, which depends on the migration patterns of Pacific bluefin tuna by season. The CPUE standardization of Pacific bluefin tuna was conducted using Generalized Linear Model (GLM) to assess the proxy of the abundance index. The data used for the GLM were catch (weight), effort (number of hauls), catch ratio of Pacific bluefin tuna, moon phase by year, quarter and area. The standardized CPUE from 2004 to 2011, except for 2003 and 2010, showed a steady trend, and then increased until 2014. The CPUE in 2015 decreased, and in 2016 was higher than that in 2015. The result of GLM suggests that the effect of the catch ratio of Pacific bluefin tuna is the largest factor affecting the nominal CPUE. A comparative study on the estimation methods for the potential yield in the Korean waters of the East Sea LIM, Jung-Hyun;SEO, Young-Il;ZHANG, Chang-Ik 124 Due to the decrease in coastal productivity and deterioration in the quality of ecosystem which result from the excessive overfishing of fisheries resources and the environmental pollution, fisheries resources in the Korean waters hit the dangerous level in respect of quantity and quality. In order to manage sustainable and effective fisheries resources, it is necessary to suggest the potential yield (PY) for clarifying available fisheries resources in the Korean waters. So far, however, there have been few studies on the estimation methods for PY in Korea. 
In addition, there have been no studies on the comparative analysis of estimation methods, nor on substantial estimation methods for PY targeted at a large marine ecosystem (LME). For the reasonable management of fisheries resources, it is necessary to conduct a comprehensive study on the estimation methods for the PY which combines population dynamics and ecosystem dynamics. To reflect this research need, this study conducts a comparative analysis of estimation methods for the PY in the Korean waters of the East Sea to understand the advantages and disadvantages of each method, and suggests an estimation method which considers both population dynamics and ecosystem dynamics to supplement the shortcomings of each method. In this study, the maximum entropy (ME) model of the holistic production method (HPM) is considered to be the most reasonable estimation method due to the high reliability of the estimated parameters. The results of this study are expected to be used as significant basic data to provide indicators and reference points for sustainable and reasonable management of fisheries resources. Diet composition of grass puffer, Takifugu niphobles in the eelgrass bed of Jangpyeong-ri, Tongyeong CHOI, Hee Chan;PARK, Jong Hyeok;NAM, Ki Mun;BAECK, Gun Wook 138 The diet composition of Takifugu niphobles was studied with 587 specimens collected in the eelgrass bed of Jangpyeong-ri, Tongyeong, Korea, using a seine net, monthly from May 2016 to April 2017. The standard length (SL) of the specimens ranged from 0.7 to 9.0 cm. The stomach contents analysis indicated that T. niphobles consumed mainly amphipods (%IRI: 91.0 %). In addition, T. niphobles fed on small quantities of copepods, polychaetes, insects, bivalves and crabs. T. niphobles consumed mainly amphipods over all size classes. Smaller individuals (less than 4 cm SL) fed mainly on amphipods and copepods. The proportion of copepods decreased as body size increased, whereas the consumption of polychaetes increased gradually. The seasonal variation in the diet composition of T. niphobles was significant. Amphipods were the most common prey in all seasons. Copepods decreased gradually from summer to spring, whereas the consumption of polychaetes increased in autumn. Study on the spatial distribution and aggregation characteristics of fisheries resources in the East Sea, West Sea and South Sea of South Korea in spring and autumn using a hydroacoustic method PARK, Junseong;HWANG, Kangseok;PARK, Junsu;KANG, Myounghee 146 Acoustic surveys were conducted in the seas surrounding South Korea (South Sea A, South Sea B (waters around Jeju Island), West Sea and East Sea) in spring and autumn of 2016. First, the vertical and horizontal distributions of fisheries resources animals were examined. In most cases, vertical acoustic biomass was high in the surface and mid-water layers, except in South Sea A in autumn and in the West Sea. The highest vertical acoustic biomass was observed at a depth of 70-80 m in South Sea A in spring ($274.4m^2/nmi^2$) and the lowest at 10-20 m in the West Sea in autumn ($0.4m^2/nmi^2$). With regard to the horizontal distributions of fisheries resources animals, in South Sea A the acoustic biomass was high in the eastern and central parts of the South Sea and northeast of Jeju Island ($505.4-4099.1m^2/nmi^2$) in spring, while it was high in the eastern South Sea and the coastal water of Yeosu in autumn ($1046.9-2958.3m^2/nmi^2$).
In South Sea B, the acoustic biomass was high in the southern and western seas of Jeju Island in spring ($201.0-1444.9m^2/nmi^2$) and in the south of Jeju Island in autumn ($203.7-1440.9m^2/nmi^2$). On the other hand, the West Sea showed very low acoustic biomass in spring (average NASC of $1.1m^2/nmi^2$), yet high acoustic biomass in the vicinity of 37°N in autumn ($562.6-3764.2m^2/nmi^2$). The East Sea had high acoustic biomass in the coastal seas of Busan, Ulsan and Pohang in spring ($258.7-976.4m^2/nmi^2$) and of Goseong, Gangneung, Donghae, Pohang and Busan in autumn ($267.3-1196.3m^2/nmi^2$). During the survey periods, fish schools were observed only in South Sea A and the East Sea in spring and in the West Sea in autumn. Fish schools in South Sea A in spring were small in size ($333.2{\pm}763.2m^2$) but had a strong $S_V$ ($-49.5{\pm}5.3dB$). In the East Sea, fish schools in spring had low $S_V$ ($-60.5{\pm}14.5dB$) yet had large sizes ($537.9{\pm}1111.5m^2$) and were distributed at greater water depths ($83.5{\pm}33.5m$). Fish schools in the West Sea in autumn had strong $S_V$ ($-49.6{\pm}7.4dB$) and large sizes ($507.1{\pm}941.8m^2$). This was the first time that acoustic surveys had been conducted in the three seas surrounding South Korea to understand the distribution and aggregation characteristics of fisheries resources animals. The results of this study can be used beneficially for planning future surveys combining the acoustic method with mid-water trawling, particularly in deciding survey locations, time periods, and target water depths. A study on appearance frequencies and fishing ground exploration of low-run fishing obtained by analyzing AIS data of vessels in the sea around Jeju Island KIM, Kwang-Il;AHN, Jang-Young 157 In the area around Jeju Island, the squid jigging fishery and hair-tail angling are popular. Therefore, the study of the characteristics of the formation and shift of fishing grounds is very important. We received and analyzed AIS data of all vessels around Jeju Island from October 16, 2016 to October 16, 2017, and extracted the positions of the fishing vessels with the same operational characteristics as the fishing vessels of those fisheries. The distribution chart of the frequency of fishing vessels appearing in each predefined fishing grid ($1NM{\times}1NM$) was analyzed, and from this the monthly shift of fishing grounds was inferred. Many fishing vessels appeared in the seas around Jeju Island from November 2016 to January 2017, and the frequency of their appearance was maintained. In November, however, fishing vessels were mostly concentrated in coastal waters. Yet, the density gradually weakened as they moved into January. From February, the frequency itself began to decline, reaching its lowest in April. The high concentration of fishing vessels in the waters leading from Jeju Island's northwest coast to its south coast in November is believed to be related to the yellowtail fishery that is formed annually in the coastal waters off the island of Marado. In May 2017, the appearance frequency of fishing vessels increased and began to show a concentration in coastal waters around Jeju Island. Fishing vessels began to flock to waters northwest of Jeju Island beginning in July and peaked in August, and by September, fishing vessels were moving south along the coast of Jeju Island, weakening in density and spreading out.
Between July and August, fishing vessels were concentrated in waters surrounding Jeju Island, which is believed to be related to the operations of fishing vessels for the squid jigging fishery and hair-tail angling. An economic feasibility analysis of the automatic operation system development for hairtail trolling line in Jeju region, Korea HONG, Seong-Wan;YANG, Ung-Gyu;KIM, Mun-Kwan;PARK, Yong-Seok;PARK, Kyoung-Il;KIM, Do-Hoon 164 This study aimed to analyze the profitability and economic feasibility of the hairtail trolling line gear that was developed over the last 3 years (2015-2017). The new fishing gear technology was developed to address the current shortage of fishermen in hairtail-targeting fisheries in the Jeju region. Results indicated that the profitability of the developed hairtail trolling line fishery was estimated to be 36.1 %, which would be higher than that of other hairtail-targeting fisheries in the Jeju region. In addition, as measures of economic feasibility, the net present value and the internal rate of return over a 20-year cash inflow and outflow were evaluated to be 400.2 million won and 66.9 %, respectively. However, sensitivity analyses of the main variables showed that the profitability and economic feasibility would be vulnerable to changes in catch amount and market conditions. Reduction plan of marine casualty for small fishing vessels PARK, Tae-Geon;KIM, Seok-Jae;CHU, Yeong-Su;KIM, Tae-Sun;RYU, Kyung-Jin;LEE, Yoo-Won 173 Marine casualties of small fishing vessels (SFV) of less than 20 tons are frequent in Korea. An analysis was conducted to identify the causes and then prepare a reduction plan, using the marine casualty statistics of fishing vessels for the last five years (2012 to 2016) from the Korean Maritime Safety Tribunal, in order to reduce the marine casualties of SFV. According to the analysis of marine casualties by vessel type, fishing vessels accounted for an average of 68.0 %; moreover, except for 2014, when the M/V SEWOL ferry capsizing occurred, the rate of deaths and missing persons due to marine casualties in fishing vessels ranged from 68.3 % to 91.2 %, averaging 79.5 %, indicating an urgent need for countermeasures. By gross tonnage, marine casualties were found to occur most frequently in fishing vessels of less than 5 tons, followed by those of 5 to 10 tons. However, crews on board SFV have no dedicated training program, except for the fishing safety training of fishers, shipowners, and crew of coastal and offshore fishing vessels in accordance with the safety regulations for fishing vessels of the Fisheries Cooperative Association. Therefore, it is necessary to revise the training program so as to improve preventive action and emergency response, including fishing safety compliance for each fishery, safe navigation, machinery inspection and emergency response. Also, SFV of less than 5 tons, some 56,000 vessels, are boarded by unqualified fishers. It would also be possible to consider subdividing the small boat operator's certificate to enhance their qualifications. It is expected that marine casualties of SFV will be reduced if active efforts are made to improve the safety consciousness of fishers and shipowners, together with the reorganization of fishing safety training and the small boat operator's certificate system.
Performance analysis of dynamic positioning system with loss of propulsion power of T/S NARA LEE, Jun-Ho;KONG, Kyeong-Ju;JUNG, Bong-Kyu 181 In order for a survey vessel to perform ocean exploration and survey research, it is necessary to adjust the position of the ship as desired using a dynamic positioning system. T/S NARA is equipped with the K-POS dynamic positioning system of Kongsberg, which makes maintaining the ship's position, changing position and heading control possible. T/S NARA is not capable of dynamic positioning if one or more propulsive forces are lost with DP Level One. However, it is predicted that dynamic positioning can be achieved even when one thruster is lost, under good sea conditions. Therefore, we analyzed the effect of each propulsion unit on the performance of the dynamic positioning system. When either the bow thruster or one of the azimuth thrusters lost propulsion, the performance in maintaining the ship's position, changing position and heading control was compared and analyzed. When the bow thruster was disabled, the ship's position could not be maintained. The azimuth thrusters were influential in the ship's position control, and the bow thruster was influential in heading control. Excellent dynamic positioning performance can be achieved in the future by considering the propulsion power that has an impact on each situation. Reproduction characteristics of hagfish Eptatretus burgeri in the South Sea of Korea KIM, Doo-Nam;HWANG, Kang-Seok;CHA, Hyung-Kee;PARK, Jun-Su;KIM, Jung-Nyun;MOON, Seong-Yong;LEE, Jeong-Hoon 188 The reproduction characteristics of hagfish Eptatretus burgeri were examined using individuals caught in the South Sea of Korea. The spawning season and size at minimum sexual maturity of this species were characterized based on a gonad-somatic index (GSI) and monthly variation in egg size (long axis). From the monthly variation of GSI, the spawning season was estimated to be from August to September. Developing eggs larger than 10 mm were found in March, and the largest egg size was found in July. The length at first spawning was 34.2 cm TL. Batch fecundity ranged from 13 to 117 eggs for hagfish sized from 34.2 cm TL to 77.0 cm TL, respectively, and increased linearly with total length.
$\sum u_n$ converges $\implies$ $\sum \frac{u^\alpha_n}{n}$ converges Let $\alpha \in (0,\infty)$ and $(u_n)$ a sequence of positive real numbers. Suppose that $\sum_{n\geq 0} u_n$ converges. Prove that $\displaystyle \sum_{n\geq 0} \frac{(u_n)^\alpha}{n}$ converges. Cauchy Schwarz inequality yields $\displaystyle\sum_{n=0}^M\frac{(u_n)^\alpha}{n} \leq \sqrt{\sum_{n=0}^M(u_n)^{2\alpha} \sum_{n=0}^M\frac{1}{n^2}} $ When $\alpha \geq \frac{1}{2}$, using that $u_n \to 0$ yields the convergence of $\sum_{n=0}^\infty(u_n)^{2\alpha}$. Thus the series $\displaystyle \sum_{n\geq 0} \frac{(u_n)^\alpha}{n}$ has bounded partial sums and $\displaystyle\frac{(u_n)^\alpha}{n} \geq 0$. This proves that $\displaystyle \sum_{n\geq 0} \frac{(u_n)^\alpha}{n}$ converges. What do when $\alpha < \frac12$ ? (tagged sequences-and-series) – Gabriel Romon A generalisation of the Cauchy-Schwarz inequality is Hölder's inequality, $$\int \lvert f(t)g(t)\rvert\,dt \leqslant \left(\int \lvert f(t)\rvert^p\right)^{1/p} \left(\int \lvert g(t)\rvert^{p/(p-1)}\right)^{(p-1)/p}$$ if we write it for integrals. You can write it for sums or regard the sums as integrals with respect to the counting measure, whatever you prefer. It works, thanks. I wonder if there's way without Hölder though. – Gabriel Romon Sep 14 '14 at 19:55 None obvious (to me). There are probably other ways, but I don't see one right now. – Daniel Fischer Sep 14 '14 at 20:01 Just Young's inequality on it's own will do the trick; for $\alpha \geq 1$ we are just making the terms smaller (eventually), so we only need to consider the case $\alpha < 1$. Take $\beta > 1$ such that $\alpha + 1/\beta = 1$, then get $$\frac{(u_n)^\alpha}{n} \leq \alpha u_n + \frac{1}{n^{\beta} \beta},$$ which completes the proof. (Though saying this, Young's inequality is logically equivalent to Hölder's inequality :P) – Matt Rigby Sep 14 '14 at 20:31
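For completeness, the argument sketched in the last comment can be written out in full; nothing beyond Young's inequality for products ($ab \leq a^p/p + b^q/q$ whenever $p, q > 1$ and $1/p + 1/q = 1$) and the comparison test is needed. For $\alpha \geq 1$, $u_n \to 0$ gives $u_n \leq 1$ eventually, so $\frac{(u_n)^\alpha}{n} \leq u_n$ eventually and comparison with $\sum u_n$ applies. For $0 < \alpha < 1$, choose $\beta > 1$ with $\alpha + 1/\beta = 1$ and apply Young's inequality with $a = (u_n)^\alpha$, $b = 1/n$, $p = 1/\alpha$, $q = \beta$:
$$\frac{(u_n)^\alpha}{n} \leq \alpha\left((u_n)^\alpha\right)^{1/\alpha} + \frac{1}{\beta}\left(\frac{1}{n}\right)^{\beta} = \alpha u_n + \frac{1}{\beta n^{\beta}},$$
and since $\sum u_n$ converges and $\sum n^{-\beta}$ converges for $\beta > 1$, the comparison test gives convergence of $\sum \frac{(u_n)^\alpha}{n}$.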
Methodology Article Robust gene selection methods using weighting schemes for microarray data analysis Suyeon Kang1 & Jongwoo Song ORCID: orcid.org/0000-0002-0325-487X1 A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis. Microarray technologies allow us to measure the expression levels of thousands of genes simultaneously. Analysis on such high-throughput data is not new, but it is still useful for statistical testing, which is a crucial part of transcriptomic research. A common task in microarray data analysis is to detect genes that are differentially expressed between experimental conditions or biological phenotype. For example, this can involve a comparison of gene expression between treated and untreated samples, or normal and cancer tissue samples. Despite the rapid change of technology and the affordable cost for conducting whole-genome expression experiments, many past and recent studies still have relatively few sample replicates in each group, which makes it difficult to use typical statistical testing methods. These two problems, high dimensionality and small sample size problems, have triggered developments of feature selection in transcriptome data analysis [1,2,3,4,5,6,7,8,9]. These feature selection methods can be mainly classified into four categories depending on how they are combined with learning algorithms in classification tasks: filter, wrapper, embedded, and hybrid methods. For details and the corresponding examples of these methods, we refer the reader to several review papers [10,11,12,13,14,15,16,17,18]. As many researchers commented, filter methods have been dominant over the past decades due to its strong advantages, although they are the earliest in the literature [11,12,13, 15, 16]. 
They are preferred by biology and molecular domain experts as the results generated by feature ranking techniques are intuitive and easy to understand. Moreover, they are very efficient because they require short computation time. As they are independent of learning algorithms, they can give general solutions for any classifier [15]. They also have a better generalization property as the bias in the feature selection and that of the classifier are uncorrelated [19]. Inspired by its advantages, we focus on the filter method in this study. One of the most widely used filter-based test methods is significance analysis of microarrays (SAM) [1]. It identifies genes with a statistically significant difference in expression between different groups by implementing gene-specific modified t-tests. In microarray experiments, some genes have small variance so their test statistics become large, even though the difference between the expression levels of two groups is small. SAM prevents those genes from being identified as statistically significant by adding a small positive constant to the denominator of the test statistic. This is a simple but powerful modification for detecting differentially expressed genes, considering the characteristics of microarray data. Since its establishment, the SAM program has been repeatedly updated. The latest version is 5.0 [20]. We also aim to develop methods for detecting significant genes based on a deeper understanding of microarray data. Even when researchers monitor an experimental process and control other factors that might have an influence on the experiment, biological or technical error can still arise in high-throughput experiments. For example, when one sample among a number of replicated samples gives an outlying result owing to a technical problem, variance of the gene expression becomes larger than expected and its test statistic becomes small. This is a major issue because it can lead to biologically informative genes failing to be identified as having a significant effect. Therefore, we here attempt to reduce this increase in variance for such cases by modifying the variance structure of SAM statistics, using two weighting schemes. It is also important to adjust the significance level of tests. Since we generally need to test thousands of genes simultaneously, the multiple testing problem arises. To resolve this problem, several methods have been suggested as replacements for the simple p-value; for example, we can use the family-wise error rate (FWER), false discovery rate (FDR) [1, 21], and positive false discovery rate (pFDR) [22]. Among them, FDR, which is the expected proportion of false positives among all significant tests, is a popular method to adjust the significance level. It can be computed by permutation of the original dataset. The test procedures we propose in this paper also use FDR, the same as SAM. Once a list of significant genes is established by a gene selection method, researchers may carry out further experiments such as real-time polymerase chain reaction to determine whether these reference genes are biologically meaningful. However, many genes may not be tested owing to limitations of time and resources. For example, even if hundreds of genes are included in a list of reference genes for a user-defined significance cutoff, researchers may just select a few top-ranked genes among them for further analyses. 
Therefore, it is very important that the genes are properly ranked in terms of their significance, especially for top-ranked genes [23, 24]. As such, in this paper, we focus on improving test statistics for each gene and assessing how well each test method identifies significant genes. For microarray data analysis, a comparison of the performance of gene selection methods is difficult because we generally do not know the "gold standard" reference genes in actual experiments. In other words, we do not know which genes are truly significant. This is a common problem encountered in transcriptome data analysis, so most studies have focused on comparing classification performances, which are determined by the combination of the feature selection and learning algorithm. As these results are clearly dependent on the performance of learning method, we cannot compare the effectiveness of feature selection techniques definitively [16]. Therefore, in this paper, we generate spike-in synthetic data that allow us to determine which genes are truly differentially expressed between two groups. For this, we suggest a data generation method based on the procedure proposed by Dembélé [25]. By performing such simulations, we can see how the performance changes depending on the characteristics of the dataset, such as sample size, the proportion of differentially expressed genes, and noise level. In this study, we focus on comparing performance according to noise level as our goal is to efficiently detect significant genes in a noisy dataset. To verify that our proposed methods can also compete with previous methods for actual microarray data, we use two sets of actual data that have a list of gold standard genes based on previous findings. All of these real datasets are publicly available and can be downloaded from a website [26] and R package [27]. In order to compare different gene selection methods, we also define two performance metrics that can be used when true differentially expressed genes are known. This paper is organized as follows. In the next section, we review the algorithm of SAM and propose statistical tests for microarray data that are modified versions of SAM, named MSAM1 and MSAM2. In addition, we explain our synthetic data generation method and suggest two performance metrics. In the results section, we describe our simulation studies and real data analysis. We compare SAM, MSAM1, and MSAM2 using 14 types of simulated dataset, which have different noise levels and sample sizes, and two sets of real microarray data. We next discuss the difference between the three methods in detail, focusing on FDR estimation. Additionally, we give the results of classification analysis using some top-ranked genes selected by each method. In the last section, we summarize and conclude this paper. In this section, we briefly review the SAM algorithm [1] and propose new modified versions of SAM, focusing on calculating the test statistic. Let x ij and y ij be the expression levels of gene i in the jth replicate sample in states 1 and 2, respectively. For such a two-class case, the states of samples indicate different experimental conditions, such as control and treatment groups. Let n 1 and n 2 be the numbers of samples in these two groups, respectively. 
The SAM statistic proposed in [1] is defined as follows: $$ {d}_i=\frac{{\overline{x}}_i-{\overline{y}}_i}{s_i+{s}_0} $$ where \( {\overline{x}}_i \) and \( {\overline{y}}_i \) are the mean expressions of the ith gene for each group, \( {\overline{x}}_i={\sum}_{j=1}^{n_1}{x}_{ij}/{n}_1 \) and \( {\overline{y}}_i={\sum}_{j=1}^{n_2}{y}_{ij}/{n}_2 \). The gene-specific scatter \( s_i \) is defined as: $$ {s}_i=\sqrt{a\left\{\sum_{j=1}^{n_1}{\left({x}_{ij}-{\overline{x}}_i\right)}^2+\sum_{j=1}^{n_2}{\left({y}_{ij}-{\overline{y}}_i\right)}^2\right\}} $$ where \( a=\left(1/{n}_1+1/{n}_2\right)/\left({n}_1+{n}_2-2\right) \) and \( s_0 \) is a small positive constant called the fudge factor, which is chosen to minimize the coefficient of variation of \( d_i \). The computation of \( s_0 \) is explained in detail in [3]. Now let us consider the overall algorithm. The SAM algorithm proposed in [1] can be stated as follows:
1. Calculate the test statistic \( d_i \) using the original dataset.
2. Make a permuted dataset by fixing the gene expression data and shuffling the group labels under \( H_0 \), where \( H_0: {\overline{x}}_i-{\overline{y}}_i=0 \) for all i.
3. Compute test statistics \( {d}_i^{\ast } \) using the permuted data and order them according to their magnitudes as \( {d}_{(1)}^{\ast}\le {d}_{(2)}^{\ast}\le \cdots \le {d}_{(n)}^{\ast } \), where n is the number of genes.
4. Repeat steps 2 and 3 B times and obtain \( {d}_{(1)}^{\ast }(b)\le {d}_{(2)}^{\ast }(b)\le \cdots \le {d}_{(n)}^{\ast }(b) \) for b = 1, 2, …, B, where B denotes the total number of permutations.
5. Calculate the expected score \( {d}_{(i)}^E={\sum}_{b=1}^B{d}_{(i)}^{\ast }(b)/B \).
6. Sort the original statistics from step 1, \( {d}_{(1)}\le {d}_{(2)}\le \cdots \le {d}_{(n)} \).
7. For a user-specified cutoff ∆, genes that satisfy \( \mid {d}_{(i)}-{d}_{(i)}^E\mid >\Delta \) are declared significant. A gene is defined as being significantly induced if \( {d}_{(i)}-{d}_{(i)}^E>\Delta \) and significantly suppressed if \( {d}_{(i)}-{d}_{(i)}^E<-\Delta \).
Define \( d_{(\mathrm{up})} \) as the smallest \( d_{(i)} \) among significantly induced genes and \( d_{(\mathrm{down})} \) as the largest \( d_{(i)} \) among significantly suppressed genes. The false discovery rate (FDR) is defined as the proportion of falsely significant genes among genes considered to be significant and can be estimated as follows: $$ \widehat{\mathrm{FDR}}=\frac{\sum_{b=1}^B\#\left\{i:{d}_{(i)}^{\ast}(b)\ge {d}_{\left(\mathrm{up}\right)}\vee {d}_{(i)}^{\ast}(b)\le {d}_{\left(\mathrm{down}\right)}\right\}/B}{\#\left\{i:{d}_{(i)}\ge {d}_{\left(\mathrm{up}\right)}\vee {d}_{(i)}\le {d}_{\left(\mathrm{down}\right)}\right\}} $$ The algorithm consists of two parts: computation of the test statistic and determination of the cutoff for a given ∆. We will focus on the first of these parts and apply a simple modification to the computation of the gene-specific scatter \( s_i \) to find a more robust test statistic. The numerator of the modified statistic and that of the original SAM statistic are the same. All of the procedures can be implemented using the samr package for Bioconductor in R; [20] describes how to use the package and provides technical details of the SAM procedure. Modified SAM From one experiment [28], we observed several cases in which most of the gene expression measurements are very close to each other, apart from one substantial outlier. As a result, the ranks of these genes from SAM are lower than expected. This prompted us to propose a new test method that has a different variance structure, leading to robustness in identifying informative genes in the presence of outliers.
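To make the statistic and the permutation step concrete, the following is a minimal R sketch of steps 1–5; it is not the samr implementation, and the argument names (genes × samples matrices X and Y), the fixed fudge factor s0 and the number of permutations B are illustrative assumptions (samr additionally chooses s0 from the data to minimize the coefficient of variation of the statistic).

```r
# Minimal sketch (not samr) of the two-class SAM statistic and the
# permutation-based expected scores; X, Y are genes x samples matrices.
sam_stat <- function(X, Y, s0 = 0.2) {
  n1 <- ncol(X); n2 <- ncol(Y)
  a  <- (1 / n1 + 1 / n2) / (n1 + n2 - 2)
  si <- sqrt(a * (rowSums((X - rowMeans(X))^2) + rowSums((Y - rowMeans(Y))^2)))
  (rowMeans(X) - rowMeans(Y)) / (si + s0)
}

expected_scores <- function(X, Y, B = 100, s0 = 0.2) {
  dat <- cbind(X, Y)
  n1  <- ncol(X)
  perms <- replicate(B, {                 # steps 2-4: shuffle the group labels
    idx <- sample(ncol(dat))
    sort(sam_stat(dat[, idx[1:n1]], dat[, idx[-(1:n1)]], s0))
  })
  rowMeans(perms)                         # step 5: expected ordered scores d^E_(i)
}
```

Plotting the sorted observed statistics against the expected scores gives the usual SAM display from which genes beyond a chosen ∆ are read off.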
Throughout the paper, we use the term "outliers" to indicate "unusual observations". Let us consider two cases with the following data: case 1: (5,5,5,5,8.54) and case 2: (3,4,5,6,7). For these two cases, the variance is the same, suggesting that they have the same spread. However, even though the levels of variance are equal, in fact, we cannot say that the data points are similarly distributed. We believe that case 1 is more reliable than case 2. Our goal, therefore, is to propose a test statistic that has a more significant result for case 1 than for case 2. To minimize the effects of outliers among samples, we use the median instead of the mean and employ a weight function w when computing the test statistic, resulting in less weight on an outlier sample that is far from other samples. A modified \( s_i \), \( {\overset{\sim }{s}}_i \), is defined as follows: $$ {\overset{\sim }{s}}_i=\sqrt{\sum_{j=1}^{n_1}w\left({x}_{ij}\right){\left({x}_{ij}-{median}_j\left({x}_{ij}\right)\right)}^2+\sum_{j=1}^{n_2}w\left({y}_{ij}\right){\left({y}_{ij}-{median}_j\left({y}_{ij}\right)\right)}^2} $$ Accordingly, our test statistic \( {\overset{\sim }{d}}_i \) is defined as follows: $$ {\overset{\sim }{d}}_i=\frac{{\overline{x}}_i-{\overline{y}}_i}{{\overset{\sim }{s}}_i+{s}_0} $$ Methods modified by this approach might be particularly useful when detecting differentially expressed genes from noisy microarray data. The key idea is to reduce the impact of outliers when calculating the test statistic. We propose two different weight functions in this paper. The values of \( {\overset{\sim }{s}}_i \) and \( {\overset{\sim }{d}}_i \) would differ quite markedly depending on the weight function used. Modified SAM1 (Gaussian weighted SAM) The weight function used in Modified SAM1 (MSAM1) is based on the Gaussian kernel, which is a widely used weight that decreases smoothly to 0 with increasing distance from the center. It is defined as follows: $$ w\left({x}_{ij};{\mu}_i,\sigma \right)=\frac{1}{\sigma}\phi \left(\frac{x_{ij}-{\mu}_i}{\sigma}\right) $$ where ϕ is the probability density function of a standard normal distribution, \( \phi (x)={e}^{-{x}^2/2}/\sqrt{2\pi } \). The mean μ i is a gene-specific parameter such that μ i = median j (x ij ), and the standard deviation σ is a data-dependent constant determined by the following procedure: first, m is defined as m = max(|x ij − median j (x ij )|, |y ij − median j (y ij )|); it is calculated from the given data. Second, p is a user-defined value between 0 and 1. Finally, given m and p, we can find the value of σ that satisfies the following equation: $$ m={F}^{-1}\left(1-p;0,\sigma \right) $$ where F is the cumulative distribution function of a normal distribution. Therefore, m would approximately be the 100(1 − p)th percentile point of a normal distribution with mean 0 and standard deviation σ. As can be seen from Fig. 1, smaller p yields smaller σ. Therefore, smaller p makes the weight applied to outlier samples smaller. On the other hand, as p increases, the results of the original and modified SAMs become similar because the weight on the outlier is very similar to the weight on the non-outliers. In this research, we set p = 0.001 since we found that this value is sufficiently small to reduce the effect of outliers. Fig. 1: Two examples of the weight function for MSAM1 when m is 2. When setting p = 0.05, σ is determined to be 1.22 (left panel), and when setting p = 0.1, it is determined to be 1.56 (right panel). Since m is the 100(1 − p)th percentile point of N(0, σ), the grey-shaded area in each panel is 0.05 and 0.1, respectively.
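A sketch of how this weight could be computed is given below; it is illustrative rather than the authors' code, the inputs X and Y (genes × samples matrices) and the function names are assumptions, and σ follows directly from m = F⁻¹(1 − p; 0, σ), i.e. σ = m / Φ⁻¹(1 − p).

```r
# Sketch of the MSAM1 (Gaussian) weight.
msam1_sigma <- function(X, Y, p = 0.001) {
  # m: largest absolute deviation from the gene-wise group medians in the data
  m <- max(abs(sweep(X, 1, apply(X, 1, median))),
           abs(sweep(Y, 1, apply(Y, 1, median))))
  m / qnorm(1 - p)               # solves m = F^{-1}(1 - p; 0, sigma)
}

msam1_weight <- function(x, sigma) {
  # w(x_ij; mu_i, sigma) = (1/sigma) * phi((x_ij - mu_i)/sigma), with mu_i = median
  dnorm(x, mean = median(x), sd = sigma)
}
```

For example, with m = 2 this arithmetic gives σ = 2/qnorm(0.95) ≈ 1.22 for p = 0.05 and σ = 2/qnorm(0.90) ≈ 1.56 for p = 0.1, matching the values quoted for Fig. 1.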
For a better understanding of MSAM1, we here illustrate the weight function of MSAM1 and its application in detail. Let us consider the Leukemia data [29]; for details of this data, see the real data analysis section. The data consist of 38 samples (27 from ALL patients and 11 from AML patients) and 7129 genes. For simplicity and clarity, we randomly selected five samples for each sample type and applied SAM, MSAM1 with p = 0.01, and MSAM1 with p = 0.001. In order to compare the weights given by each method, let us take one gene, M96326_rna1_at (Azurocidin). This gene is a good example to clarify the difference between SAM and MSAM1 because it has an outlier sample. From Fig. 2, we can see that the gene expressions in group 1 are similar. On the other hand, one of the five samples in group 2 is clearly far from the others. Table 1 and Fig. 3 show its gene expressions and the weights computed by SAM and MSAM1. In Fig. 3, the lengths of the 5 red dashed lines indicate the weights on the 5 observations. As we stated above, we can also see that smaller p makes the difference between the weights applied to outlier and non-outlier samples greater. Fig. 2: Gene expressions of M96326_rna1_at (Azurocidin) from 5 ALL patients and 5 AML patients. Table 1: Comparison of SAM and MSAM1 weights: an informative gene from leukemia data, M96326_rna1_at (Azurocidin). Fig. 3: The left panel illustrates the weights of MSAM1 when p is 0.01. The right panel is the case when p is 0.001. In each panel, 5 black circle points are gene expressions of M96326_rna1_at (Azurocidin) from 5 AML patients. The lengths of 5 red dashed lines indicate the weights on the 5 observations. Modified SAM2 (inverse distance weighted SAM) This method uses the Euclidean distance among the observations. The weight function used in Modified SAM2 (MSAM2) is defined as follows: $$ w\left({x}_{ij}\right)=\frac{1}{\sum_k{d}_E\left({x}_{ij},{x}_{ik}\right)} $$ where \( d_E(x_{ij}, x_{ik}) \) is the Euclidean distance between the jth and kth samples of gene i. The reason that we use this weight function can be explained by the following example. Let us assume that there are 10,000 genes (i = 1, 2, …, 10000). Also, suppose there are 4 sample replicates (observations) in a group of the first gene (i = 1) and their gene expressions are x 11, x 12, x 13 and x 14. Let w j be the weight on the jth observation for j = 1, 2, 3 and 4. In this case, the weights on these observations are as follows. $$ {w}_1={\left(\sum_{k=1}^4{d}_E\left({x}_{11},{x}_{1k}\right)\right)}^{-1},\kern0.75em {w}_2={\left(\sum_{k=1}^4{d}_E\left({x}_{12},{x}_{1k}\right)\right)}^{-1}, $$ $$ {w}_3={\left(\sum_{k=1}^4{d}_E\left({x}_{13},{x}_{1k}\right)\right)}^{-1},\kern0.75em {w}_4={\left(\sum_{k=1}^4{d}_E\left({x}_{14},{x}_{1k}\right)\right)}^{-1} $$ If x 11, x 12 and x 13 are close to each other and x 14 is far from these 3 values, w 4 is much smaller than w 1, w 2 and w 3. Therefore, by using this weight function, we can give a smaller weight to an outlier. The further away an observation is from the others, the smaller the weight given to it.
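A corresponding sketch for the MSAM2 weight, and for assembling the modified statistic \( {\overset{\sim }{d}}_i \), is given below; it is illustrative only, the helper names and the fixed s0 are assumptions, and no special handling is included for the degenerate case in which all replicates of a gene are identical (zero distances).

```r
# Sketch of the MSAM2 (inverse-distance) weight and the modified statistic;
# X, Y are genes x samples matrices, s0 an illustrative fudge factor.
msam2_weight <- function(x) {
  d <- as.matrix(dist(x))        # pairwise |x_ij - x_ik| within one group
  1 / rowSums(d)                 # w(x_ij) = 1 / sum_k d_E(x_ij, x_ik)
}

weighted_scatter <- function(x, w) sum(w * (x - median(x))^2)

msam2_stat <- function(X, Y, s0 = 0.2) {
  sapply(seq_len(nrow(X)), function(i) {
    x <- X[i, ]; y <- Y[i, ]
    s_tilde <- sqrt(weighted_scatter(x, msam2_weight(x)) +
                    weighted_scatter(y, msam2_weight(y)))
    (mean(x) - mean(y)) / (s_tilde + s0)   # numerator unchanged from SAM
  })
}
```

Replacing msam2_weight by msam1_weight (with a common σ from msam1_sigma) gives the MSAM1 version of the same statistic.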
Synthetic data generation To run experiments, we need to generate synthetic gene expression data. These datasets should have characteristics similar to those of real microarray data to ensure that the results are reliable and valid. Two important characteristics of gene expression data, which are reported elsewhere [25, 30, 31] and also considered in this study, are as follows: Under similar biological conditions, the level of gene expression varies around an average value. In rare cases, technical problems would result in values far away from this average. Genes at low levels of expression have a low signal-to-noise ratio. The 'technical problems' mentioned in the first of these points are one possible explanation for outliers observed in microarray data. Since our goal is to develop methods that detect differentially expressed genes well in a noisy dataset containing outliers, we consider not only a dataset with little noise, but also a noisy dataset with outliers. We ensure that outliers are present at higher probability in several of the datasets to provide a wider range of comparisons among the different test methods. Basically, we follow the microarray data generation model by Dembélé [25], which uses a beta distribution. In this article, we employ a beta and a normal distribution to generate data points, assuming that the levels of gene expression essentially follow such distributions. To allow outliers in generated data, we add a technical error term in our model; this term is mentioned in [25], but not used in their model. According to the noise level and distribution type, we consider four different simulation set-ups as follows: Scenario 1, non-contaminated beta; 2, contaminated beta; 3, non-contaminated normal; 4, contaminated normal. Therefore, data used in scenarios 1 and 3 have low noise level, and data used in scenarios 2 and 4 have high noise level. The step-by-step procedure for our data generation method is summarized as follows.
Step 1. Let n be the number of genes and n 1 and n 2 be control and treatment sample sizes, respectively.
Step 2. Generate z i from a beta (normal) distribution for i = 1, 2, …, n and transform the values, \( {\overline{z}}_i= lb+ ub\times {z}_i \).
Step 3. For each \( {\overline{z}}_i \), generate (n 1 + n 2) values as follows: \( {z}_{ij}\sim \mathrm{unif}\left(\left(1-{\alpha}_i\right){\overline{z}}_i,\left(1+{\alpha}_i\right){\overline{z}}_i\right) \), where \( {\alpha}_i={\lambda}_1{e}^{-{\lambda}_1{\overline{z}}_i} \).
Step 4. The final model is given by $$ {d}_{ij}={z}_{ij}+{s}_{ij}+{n}_{ij}+{t}_{ij} $$ where the term s ij allows us to define differentially expressed genes. Their values are zero for the control group, \( {s}_{ij}\sim N\left({\mu}_{de},{\sigma}_{de}^2\right) \) for genes with induced expression, and \( {s}_{ij}\sim N\left(-{\mu}_{de},{\sigma}_{de}^2\right) \) for genes with suppressed expression, where \( {\mu}_{de}={\mu}_{de}^{min}+\mathrm{Exp}\left({\lambda}_2\right) \). n ij is an additive noise term, \( {n}_{ij}\sim N\left(0,{\sigma}_n^2\right) \). The final term t ij is used to define outlying samples by allowing non-zero values for some genes.
The undefined parameters for each step can be set by the users. The values we use in this paper are as follows: λ 1 = 0.13, λ 2 = 2, \( {\mu}_{de}^{min}=0.5 \), σ de = 0.5, σ n = 0.4. Scenario 1: Beta with low noise level In this case, we generate data points from Beta(shape 1, shape 2). shape 1 and shape 2 are the two shape parameters of the beta distribution, and we here set shape 1 = 2 and shape 2 = 4. We also set lb = 4, ub = 14. The values of t ij are zero for this case.
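Steps 1–4 can be sketched in a few lines of R for Scenario 1; this is an illustrative reading of the procedure rather than the authors' generator, and it assumes that μ de is drawn once per differentially expressed gene, that the first and last 100 genes are the down- and up-regulated ones (as in the simulation set-up described below), and that the shift s ij is applied to the treatment columns only.

```r
# Sketch of Steps 1-4 under Scenario 1 (beta, low noise, t_ij = 0 for all genes).
gen_scenario1 <- function(n = 10000, n1 = 10, n2 = 10,
                          lambda1 = 0.13, lambda2 = 2, mu_de_min = 0.5,
                          sd_de = 0.5, sd_n = 0.4, lb = 4, ub = 14) {
  zbar  <- lb + ub * rbeta(n, shape1 = 2, shape2 = 4)     # Steps 1-2
  alpha <- lambda1 * exp(-lambda1 * zbar)
  z <- t(sapply(seq_len(n), function(i)                   # Step 3
    runif(n1 + n2, (1 - alpha[i]) * zbar[i], (1 + alpha[i]) * zbar[i])))
  s <- matrix(0, n, n1 + n2)                              # Step 4: shift term s_ij
  treat <- (n1 + 1):(n1 + n2)
  down <- 1:100; up <- (n - 99):n
  mu_de <- mu_de_min + rexp(200, rate = lambda2)          # one mean per DE gene
  s[down, treat] <- rnorm(100 * n2, mean = -rep(mu_de[1:100],   n2), sd = sd_de)
  s[up,   treat] <- rnorm(100 * n2, mean =  rep(mu_de[101:200], n2), sd = sd_de)
  noise <- matrix(rnorm(n * (n1 + n2), sd = sd_n), n, n1 + n2)
  z + s + noise                                           # t_ij = 0 in Scenario 1
}
```

Scenario 2 would then add Gaussian noise with standard deviation σ deo = 1 to the last n deo = [0.2 × n 2] treatment columns of the 200 target genes.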
Scenario 2: Beta with high noise level Here, we generate noisier data than in the case above. The generation procedure is basically the same as in the above case, except that some t ij are allowed to be non-zero. To make outlying samples, we contaminate the data by adding Gaussian noise to some treatment samples: for genes with induced or suppressed expression, \( {t}_{ij}\sim N\left(0,{\sigma}_{\mathrm{deo}}^2\right) \) for j = (n 1 + n 2 − n deo + 1), …, (n 1 + n 2), where σ deo is a non-zero constant and n deo is the number of outlying samples. We here set σ deo = 1 and n deo = [0.2 × n 2], where [x] = m if m ≤ x < m + 1 for any integer m. For example, if there are five sample replicates in a treatment group, there can be one possible candidate as an outlier. Therefore, σ deo and n deo control the distribution and noise level of outlying samples. We believe that this set-up is reasonable because it does not destroy the original data structure while controlling the noise level of the data. Scenario 3: Normal with low noise level This scenario assumes that the levels of gene expression essentially follow a normal distribution, instead of a beta distribution. In this research, we use the normal distribution with mean 10 and standard deviation 1.5 so that the generated data points are distributed between realistic bounds; gene expression levels on a log2 scale after robust multichip analysis normalization usually vary between 0 and 20. We set lb = 0, ub = 1 in Step 2, which means that no transformation is applied. Scenario 4: Normal with high noise level To generate noisier normal data, we use the same data generation procedure as in Scenario 3, except that some t ij in Step 4 are allowed to be non-zero. The structure of t ij is the same as in Scenario 2. Performance metrics To compare the performance of several methods, we need several evaluation measures. Since we know which genes are differentially expressed in our simulated datasets, we can define two performance metrics as follows, measuring how well each method identifies these TRUE genes. Prior to defining the metrics, let G up = {i: gene i the expression of which is truly significantly induced} and G down = {i: gene i the expression of which is truly significantly suppressed}. Rank sum (RS) We define the rank sum (RS) of TRUE genes as follows: $$ \mathrm{RS}={\sum}_{i\in {G}_{up}\cup {G}_{down}}{\sum}_{j:{d}_i{d}_j>0}\mathrm{I}\left(\left|{d}_i\right|\le \left|{d}_j\right|\right) $$ where I(∙) is an indicator function. The reason for determining the ranks of genes with high and low expression separately is that the SAM procedure uses such a method when detecting genes of the two groups. We use the absolute value of the test statistics because the test statistics of genes with suppressed expression have negative values. For RS, lower values indicate better performance. Top-ranked frequency (TRF) The top-ranked frequency (TRF) of TRUE genes is computed by $$ \mathrm{TRF}(r)=\#\left\{i\in {G}_{\mathrm{up}}\cup {G}_{\mathrm{down}}:{\sum}_{j:{d}_i{d}_j>0}\mathrm{I}\left(\left|{d}_i\right|\le \left|{d}_j\right|\right)\le r\right\}. $$ Here, r denotes the rank cutoff and is set to be smaller than the number of observations in G up and G down. For a given cutoff r, TRF computes the number of TRUE genes ranked within r. For TRF, higher values indicate better performance. To understand the performance metrics better, let us consider the following case. We have 100 genes and 10 TRUE genes among them. Assume that we obtain a top-ranked gene list as shown in Table 2 by a gene selection method. Among the 15 genes in the table, five are false genes (the 3rd, 7th, 8th, 12th, and 14th genes in the table). In this case, RS = 76, TRF(5) = 4, and TRF(10) = 7. Table 2: An example list of top-ranked genes.
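Both metrics are short functions of the vector of test statistics; the sketch below (argument names assumed) ranks genes by decreasing |d| separately within the positive and the negative statistics, which is exactly what the double sums above count, with ties handled as in the indicator.

```r
# Sketch of RS and TRF; `stat` holds the test statistics for all genes and
# `true_idx` the indices of the TRUE (differentially expressed) genes.
rank_within_sign <- function(stat) {
  r <- numeric(length(stat))
  pos <- stat > 0; neg <- stat < 0
  r[pos] <- rank(-stat[pos], ties.method = "max")   # rank by decreasing |d|
  r[neg] <- rank( stat[neg], ties.method = "max")
  r
}
rs_metric  <- function(stat, true_idx) sum(rank_within_sign(stat)[true_idx])
trf_metric <- function(stat, true_idx, r) sum(rank_within_sign(stat)[true_idx] <= r)
```

As stated above, lower rs_metric values and higher trf_metric values indicate better rankings of the TRUE genes.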
Among the 15 genes in the table, five are false genes (the 3rd, 7th, 8th, 12th, and 14th genes in the table). In this case, RS = 76, TRF(5) = 4, and TRF(10) = 7.

Table 2 An example list of top-ranked genes

Simulation studies
In this section, we compare the gene selection methods using synthetic datasets. We consider the four scenarios described above. For each scenario, we consider 7 different combinations of $n_1$ and $n_2$ in order to take into account the effects of sample size and class imbalance on gene selection performance: $(n_1, n_2)$ = (5, 5), (5, 10), (10, 5), (10, 10), (10, 15), (15, 10) and (15, 15). For all scenarios, we assume that there are 2% target genes (1% up-regulated and 1% down-regulated genes) among the total of 10,000 genes. For simplicity, let us assume that the first 100 genes are downregulated and the last 100 genes are upregulated. Then, we can describe the structure of our simulation data as shown in Fig. 4. This example illustrates the structure of noisy data containing outliers. In this case, the last two samples are outlying samples among the 10 treatment samples of the 200 target genes. There are five different distributions of data points: A, B, C, D, and E. For the 9800 nontarget genes, the distributions of the control and treatment samples are the same (A). The first 100 downregulated genes are generated from two distributions (B and C) and the last 100 upregulated genes are also generated from two distributions (D and E). Groups C and E indicate outlier samples. If there are no outliers in the dataset, B is equivalent to C and D is equivalent to E. The empirical density plot of each group is shown in Fig. 5. For visualization, we use 5000 data points to ensure equivalent density of the points for each group (A, B, and C), that is, a 1:1:1 ratio rather than the original ratio among the three groups.

Fig. 4 An example of simulated data structure. Each row and each column of this data frame correspond to a gene and a replicate sample, respectively, so we have a 10,000 × 20 data matrix in this study. We assume that there are 2% target genes (1% up-regulated and 1% down-regulated genes) among the total of 10,000 genes, and ten replicates in each group. There are five different distributions of data points: A, B, C, D, and E; groups C and E indicate outlier samples

Fig. 5 Empirical density of data points for scenarios 1, 2, 3, and 4. The solid line (a) for each plot is the density of control samples for target genes. The red dashed line (b) and green dotted line (c) are the densities of treatment samples for target genes. There are no green dotted lines (c) in the top-left and top-right plots because there are no outliers in scenarios 1 and 3

We conduct simulation studies using the synthetic data and compare the results using three metrics; two of them are RS and TRF, which were defined above, and the third is AUC, the area under a receiver operating characteristic (ROC) curve. This value falls between 0 and 1, and higher values indicate better performance. We consider five gene selection methods, named SAM, SAM-wilcoxon, SAM-tbor, MSAM1 and MSAM2. SAM-wilcoxon is the Wilcoxon version of SAM [20, 32]. SAM-tbor is basically the same as SAM, except that a simple trim-based outlier-removing algorithm is applied to the data prior to running SAM; in this study, we remove the largest and smallest observations from each sample type. Figs. 6 and 7 display the average performance over 100 simulations for each method on the three metrics, and Table 3 shows numerical results for 4 cases.
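Tying the pieces together, the short sketch below (again illustrative, not the authors' code) turns the Scenario 1 generator sketched earlier into a Scenario 2 dataset by adding the outlier term t_ij to the last n_deo treatment replicates of the target genes, i.e. the columns forming groups C and E in Fig. 4.

## Add the outlier term t_ij of Scenarios 2 and 4: Gaussian noise on the last
## n_deo treatment columns of the target genes of a genes-by-samples matrix x.
contaminate <- function(x, n1, n2, target_idx, sigma_deo = 1) {
  n_deo <- floor(0.2 * n2)                      # [0.2 x n2] outlying samples
  if (n_deo < 1) return(x)
  out_cols <- (n1 + n2 - n_deo + 1):(n1 + n2)   # groups C and E in Fig. 4
  x[target_idx, out_cols] <- x[target_idx, out_cols] +
    rnorm(length(target_idx) * n_deo, 0, sigma_deo)
  x
}

## Example: a 10,000 x 20 dataset with the layout of Fig. 4
n <- 10000; n1 <- 10; n2 <- 10
x1 <- simulate_scenario1(n, n1, n2)             # Scenario 1 (low noise)
target_idx <- c(1:100, (n - 99):n)              # 1% down- and 1% up-regulated genes
x2 <- contaminate(x1, n1, n2, target_idx)       # Scenario 2 (high noise)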
In Table 3, the best performance on each metric is shown in boldface. In scenario 1, the original SAM always outperforms SAM-wilcoxon and SAM-tbor. Although SAM-tbor shows better performance than SAM in some cases of scenario 2, its performance is worse than that of the MSAMs. As can be seen from the figures and the table, our proposed methods show better performance than the three versions of SAM in all cases. In particular, the modified SAMs are much better when the given data are noisy (scenario 2, compared with scenario 1) and a little better in the less noisy cases. We can also see that our methods show more robust performance in all cases. When there are two outliers among ten samples, the number of target genes found by the original SAM is reduced by 2–17%, whereas that found by the MSAMs is reduced by 1–8%. In particular, when $n_1 = 5$ and $n_2 = 10$ in scenario 2, SAM fails to detect 90 genes among the 200 TRUE genes, whereas MSAM2 fails to detect only 60 genes on average. Simulation results for scenarios 3 and 4 are given in Additional file 1. These results are very similar to those of scenarios 1 and 2; the MSAMs always perform better than the three versions of SAM.

Fig. 6 Simulation results for Scenario 1. Three solid lines (black, red, and green) indicate the results of the three versions of SAM. Two dashed lines (blue and cyan) indicate the results of the two versions of modified SAM. For RS, lower values are better. For AUC and TRF, higher values are better

Fig. 7 Simulation results for Scenario 2

Table 3 Simulation results for 4 cases

Real data analysis 1: Fusarium
The Fusarium dataset contains 17,772 genes and nine samples: three each from the control, dtri6, and dtri10 groups [28]. The robust multichip analysis algorithm [33] is used for condensing the data, covering the following: extraction of the intensity measure from the probe-level data, background adjustment, and normalization. The post-processed dataset used in [28] is stored at PLEXdb (http://www.plexdb.org) (accession number: FG11) [26]. As these data come from gene mutation experiments, the researchers provided a list of genes that are differentially expressed between the control and treatment (dtri6, dtri10) groups. These genes are as follows: fgd159-500_at (conserved hypothetical protein), fgd159-520_at (trichothecene 15-O-acetyltransferase), fgd159-540_at (Tri6 trichothecene biosynthesis positive transcription factor), fgd159-550_at (TRI5_GIBZE – trichodiene synthase), fgd159-560_at, fgd159-600_at (putative trichothecene biosynthesis), fgd321-60_at (trichothecene 3-O-acetyltransferase), fgd4-170_at (cytochrome P450 monooxygenase), fgd457-670_at (TRI15 – putative transcription factor), fg03534_s_at (trichothecene 15-O-acetyltransferase), fg03539_at (TRI9 – putative trichothecene biosynthesis gene), and fg03540_s_at (TRI11 – isotrichodermin C-15 hydroxylase). In the real data analysis sections, we only consider SAM, MSAM1, and MSAM2, all of which showed good performance in the simulation studies; we found in the previous section that SAM-wilcoxon and SAM-tbor are worse than the original SAM. Moreover, we cannot apply SAM-tbor to these data because there are only three sample replicates in each group; as this case shows, such a trim-based method is limited in its applications. Tables 4 and 5 show the ranks of the 11 reference genes that are differentially expressed between the control group and the treatment groups (dtri6 and dtri10, respectively). The last row in each table indicates the rank sum of these 11 genes.
As we can see, MSAM2 shows the best performance, because its rank sum is the smallest among those of the three gene selection methods. In particular, the MSAMs improve the ranks of the genes named fgd4-170_at and fgd159-500_at. For each of these genes, the result for one of the treatment samples is far from those for the other two samples. From this analysis, it can be asserted that our proposed methods efficiently identify genes whose replicate samples contain an outlier, such as fgd4-170_at and fgd159-500_at.

Table 4 Rank of genes of interest: control versus dtri6

Table 5 Rank of genes of interest: control versus dtri10

Fig. 8 The top two plots show TRUE FDR vs. estimated FDR and the bottom two plots show the number of falsely detected genes relative to the total number of detected genes for scenarios 1 and 2. In each top plot, the solid lines indicate the estimation curves of each method and the dashed line represents Y = X

Real data analysis 2: Leukemia
Leukemia is a cancer of the bone marrow, where blood cells are made. In leukemia, abnormal blood cells are produced in the bone marrow and crowd out other normal blood cells. Depending on the type of abnormal blood cells that are multiplying, leukemia can be classified as acute lymphocytic leukemia (ALL) or acute myeloid leukemia (AML). Identifying the type of leukemia is very important because patients should receive different treatments according to the disease type. [29] studied a generic approach to cancer classification based on gene expression and provided a list of 50 significant genes for classifying ALL and AML. Since that study, this dataset has been widely used in transcriptomic analysis, e.g., [34, 35]. These data are available in the golubEsets library in Bioconductor [27]. The original data consist of 38 samples (27 from ALL patients and 11 from AML patients) and 7129 genes. We randomly selected five, seven, and ten samples for each sample type and repeated this experiment 100 times for averaging, because biological experiments usually have a small number of samples owing to limitations of time and resources; it is thus important that a method shows good performance even if the sample size is small. The results of this experiment are shown in Table 6, which reports the RS and TRF values of the three gene selection methods, computed over the 100 trials using the 50 genes considered informative in [29]. For each case, the best performance is shown in boldface in the table. As we can see, MSAM1 or MSAM2 performs better than SAM in terms of RS and TRF, regardless of the rank cutoff values. The overall performances of SAM and MSAM1 are very similar, but MSAM1 always performs slightly better than SAM. From the point of view of sample size, MSAM2 outperforms SAM and MSAM1 when the sample size is very small, e.g., 5, and MSAM1 performs better than SAM and MSAM2 when the sample size is moderate, e.g., 7 and 10. As the sample size increases, all three methods identify the informative genes better.

Table 6 Rank sum and top-ranked frequency of informative genes in Leukemia data

FDR comparison
In this section, we discuss the FDR estimation procedures of SAM, MSAM1, and MSAM2. FDR is used in the SAM procedure to deal with the multiple testing problem. The SAM interface in R, the samr package [20], provides a significant gene list based on the FDR value that is estimated by its internal function.
We also constructed our own interface for the MSAMs in R, based on the samr package, to allow users to apply our proposed methods in their transcriptome research; see Additional file 2. Users start the procedure by setting their desired FDR value (for example, 0.2); we will call this value the 'estimated FDR'. Based on the estimated FDR, our procedure calculates the corresponding value of Δ and identifies potentially significant genes. In real applications, we do not know the TRUE FDR, so the estimated FDR is used as a substitute for it. If the estimated value is different from the true value, the number of genes detected using the estimated FDR is larger or smaller than the true number. Therefore, users may be interested in how well the SAM and MSAM procedures estimate the TRUE FDR value. To this end, in this section we evaluate SAM, MSAM1, and MSAM2, focusing on their FDR estimation performance.

Since we know the number of TRUE significant genes in our simulated datasets, we can compare the estimated FDR and the TRUE FDR in the simulation study. After 100 simulations, we draw a scatter plot of the TRUE FDR versus the estimated FDR by calculating the average value of the TRUE FDR for each estimated FDR. We next draw a smooth curve close to the scatter plot for scenarios 1 and 2 to find the estimation accuracy at various levels of FDR. In particular, the estimation accuracy at low FDR is important, since researchers generally set the FDR at a small value so as to avoid having a large proportion of falsely significant genes among the detected genes. For this reason, we only show the results when the estimated FDR is lower than 0.5. Figure 8 displays the results; see the top two plots. As we can see, SAM estimates the TRUE FDR very accurately, and the two modified SAMs slightly overestimate the TRUE FDR; in other words, our methods are conservative in their FDR estimation. However, conservative estimation of the FDR may not cause serious problems for the analysis when we use the FDR as an upper bound on the tolerable error [36]. For such an analysis, the more important question is how many non-significant genes are included among the detected genes. Because the truth is known in the simulated data, we can calculate the number of falsely detected genes among the identified genes. For the same number of total positives, the method with the smallest number of false positives is the best [36]. Using the plotting method described above, a smooth curve of the number of false positive genes versus the total number of identified genes is drawn; the bottom two plots of Fig. 8 show the results. From the figure, we can see that MSAM1 and MSAM2 give a smaller number of false positive genes than SAM across all noise levels and all total numbers of identified genes. From these results, we can say that the MSAMs are better than SAM because they include fewer false genes in the selected gene subset.

When we estimate the FDR, we calculate both the median FDR and the mean FDR to determine which estimate approximates the true value more closely. Since the original samr interface provides only the median FDR and the 90th percentile FDR, we modified its estimation function and obtained both the median and the mean values of the FDR. As a result, we found that the median FDR was closer than the mean FDR to the TRUE FDR for all methods.
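Because the truth is known in the simulated data, the quantities compared in Fig. 8 reduce to simple counts. The sketch below (illustrative only; detected stands for whatever gene list a method returns at a given estimated FDR, and true_idx for the truly differentially expressed genes) shows how the TRUE FDR and the number of falsely detected genes can be computed in R.

## TRUE FDR of a detected gene list: the fraction of non-target genes among
## the detected genes (taken as 0 if nothing is detected).
true_fdr <- function(detected, true_idx) {
  if (length(detected) == 0) return(0)
  mean(!(detected %in% true_idx))
}

## Number of falsely detected genes for a given total number of detections.
n_false_positives <- function(detected, true_idx) sum(!(detected %in% true_idx))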
This preference for the median coincides with results published elsewhere [37], in which the median FDR was recommended as a criterion for gene selection methods when the estimated proportion of differentially expressed genes is greater than 1%, regardless of the sample size. Based on these results, we use the median value instead of the mean value when estimating the FDR.

Classification analysis
Once important genes are identified from among thousands of genes, they can be used to predict two different experimental states or responses (for example, cancer and normal). Therefore, we also examine how well a few top genes selected by each method identify the true classes. These results are given in Additional file 3, in which we introduce the 4 datasets used and explain the construction of the classifiers, the 6 gene selection methods, and the 3 performance metrics considered in this part of the study; our comments on the results are also included. As can be seen in that file, our proposed methods, the MSAMs, show quite good performance in all cases. In this additional section, we demonstrate their competitiveness in classification tasks, not only in gene selection tasks.

In transcriptome data analysis, most studies have been devoted to developing filter-based methods, which are the simplest, fastest, and most computationally efficient. Hybrid methods, which are generally combinations of filter and wrapper methods, have recently gained popularity in the literature [13]. These methods consist of two steps: first, relevant features are selected by a filter method and the remaining features are eliminated; second, a wrapper method verifies these features and determines the final feature set that gives high classification accuracy [16]. From this point of view, filter methods have a lot of flexibility, as they can be combined not only with any learning algorithm, but also with any gene selection method, such as a wrapper method, resulting in a hybrid method. The performance of a hybrid method relies totally on the combination of filter and wrapper methods as well as on the classifier [18]. We believe that accurate gene selection by filter methods clearly allows better classification accuracy. Therefore, our new filter-based methods will be useful not only for gene selection, but also for constructing a good classifier in microarray applications.

Our experiments showed the efficiency of our methods; it was demonstrated that when the same number of genes was selected, our methods included fewer false genes than the conventional method. Our results also strongly suggest that these newly proposed methods outperform the conventional method and show quite consistent performance, even with a high noise level and a small sample size. Given that noisy data and a small sample size are commonly encountered in microarray studies [30, 38,39,40], we believe that our methods will prove useful. This research was based on the existing interface of SAM, which was modified to apply our proposed methods. This modified version of the samr package is available in Additional file 2. We attempted to find a balance between flexibility and control in the usage of our methods by allowing users to set particular parameters and by minimizing the number of modifications to the original interface.
Additional file 2 includes a detailed explanation of what we changed, but users can easily apply our methods to their own datasets without reading that file, since we provide simple and useful examples of detecting differentially expressed genes with our methods in Additional file 4. We also provide the two real datasets and one simulated dataset used in this study (see Additional files 5, 6 and 7). All of the additional files are also available at the authors' homepage (http://home.ewha.ac.kr/~josong/MSAM/index.html).

We have proposed new test methods for identifying genes that are differentially expressed between two groups in microarray data and have evaluated their performance using a series of simulated datasets and two real datasets. The results demonstrate that our proposed methods identify target genes better than the original method, SAM, in both the simulation studies and the real data analyses. Using our weighting schemes, significant genes can be selected in a more robust manner by avoiding the overestimation of the variance. In particular, these procedures are very effective when the given data are noisy or the sample size is limited. Therefore, they prevent technical or biological problems that can occur in biological experiments and data pre-processing from impeding accurate gene selection. We believe that our proposed methods can be applied to various datasets in other fields if they have characteristics similar to those of microarray data.

AML: acute myeloid leukemia
AUC: area under the curve
MSAM1: modified SAM1
ROC: receiver operating characteristic
RS: rank sum of true genes
SAM: significance analysis of microarrays
TRF: top-ranked frequency of true genes

Tusher VG, Tibshirani R, Chu G. Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci U S A. 2001;98(9):5116–21.
Pavlidis P, Weston J, Cai J, Grundy WN. Gene functional classification from heterogeneous data. Proceedings of the fifth annual international conference on Computational biology. 2001:249–55.
Mak MW, Kung SY. A solution to the curse of dimensionality problem in pairwise scoring techniques. In: Neural information processing. Springer Berlin/Heidelberg; 2006:314–23.
Efron B. Microarrays, empirical Bayes and the two-groups model. Stat Sci. 2008;23(1):1–22.
Sharma A, Imoto S, Miyano S, Sharma V. Null space based feature selection method for gene expression data. Int J Mach Learn Cybern. 2012;3(4):269–76.
Sharma A, Imoto S, Miyano S. A between-class overlapping filter-based method for transcriptome data analysis. J Bioinforma Comput Biol. 2012;10(5):1–20.
Sharma A, Imoto S, Miyano S. A top-r feature selection algorithm for microarray gene expression data. IEEE/ACM Trans Comput Biol Bioinform. 2012;9(3):754–64.
Ghalwash MF, Cao XH, Stojkovic I, Obradovic Z. Structured feature selection using coordinate descent optimization. BMC Bioinformatics. 2016;17(1):158.
Sharbaf FV, Mosafer S, Moattar MH. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization. Genomics. 2016;107(6):231–8.
Saeys Y, Inza I, Larranaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007;23(19):2507–17.
Ahmad FK, Norwawi NM, Deris S, Othman NH. A review of feature selection techniques via gene expression profiles. In: 2008 International Symposium on Information Technology.
George G, Raj VC. Review on feature selection techniques and the impact of SVM for cancer classification using gene expression profile. arXiv preprint arXiv.
2011:1109–062.
Bolon-Canedo V, Sanchez-Marono N, Alonso-Betanzos A, Benitez JM, Herrera F. A review of microarray datasets and applied feature selection methods. Inf Sci. 2014;282:111–35.
Tang J, Alelyani S, Liu H. Feature selection for classification: a review. Data Classification: Algorithms and Applications. 2014;37.
Ang JC, Mirzal A, Haron H, Hamed HNA. Supervised, unsupervised, and semi-supervised feature selection: a review on gene selection. IEEE/ACM Trans Comput Biol Bioinform. 2016;13(5):971–89.
Bolón-Canedo V, Sánchez-Maroño N, Alonso-Betanzos A. Feature selection for high-dimensional data. Prog Artif Intell. 2016;5:65–75.
Mahajan S, Singh S. Review on feature selection approaches using gene expression data. Imp J Interdiscip Res. 2016;2(3).
Aziz R, Verma CK, Srivastava N. Dimension reduction methods for microarray data: a review. AIMS Bioengineering. 2017;4(1):179–97.
Ding C, Peng H. Minimum redundancy feature selection from microarray gene expression data. J Bioinforma Comput Biol. 2005;3(2):185–205.
Chu G, Narasimhan B, Tibshirani R, Tusher VG. SAM users guide and technical document. Stanford University Labs; 2005.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B. 1995;57:289–300.
Storey JD. A direct approach to false discovery rates. J R Stat Soc Ser B. 2002;64(3):474–98.
Mukherjee SN, Roberts SJ, Sykacek P, Gurr SJ. Gene ranking using bootstrapped p-values. SIGKDD Explor. 2003;5(2):16–22.
Boulesteix AL, Slawski M. Stability and aggregation of ranked gene lists. Brief Bioinform. 2009;10(5):556–68.
Dembélé D. A flexible microarray data simulation model. Microarrays. 2013;2(2):115–30.
Wise RP, Caldo RA, Hong L, Shen L, Cannon EK, Dickerson JA. BarleyBase/PLEXdb: Plant Bioinformatics: Methods and Protocols. 2007:347–63.
http://www.bioconductor.org.
Seong KY, Pasquali M, Zhou X, Song J, Hilburn K, McCormick S, Dong Y, Xu JR, Kistler HC. Global gene regulation by Fusarium transcription factors Tri6 and Tri10 reveals adaptations for toxin biosynthesis. Mol Microbiol. 2009;72(2):354–67.
Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh M, Downing JR, Caligiuri MA, Bloomfield CD, Lander ES. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999;286(5439):531–7.
Kooperberg C, Aragaki A, Strand A, Olson JM. Significance testing for small microarray experiments. Stat Med. 2005;24(15):2281–98.
Nykter M, Aho T, Ahdesmaki M, Ruusuvuori P, Lehmussola A, Yli-Harja O. Simulation of microarray data with realistic characteristics. BMC Bioinformatics. 2006;7(1):1.
Li J, Tibshirani R. Finding consistent patterns: a nonparametric approach for identifying differential expression in RNA-Seq data. Stat Methods Med Res. 2013;22(5):519–36.
Irizarry RA, Bolstad BM, Collin F, Cope LM, Hobbs B, Speed TP. Summaries of Affymetrix GeneChip probe level data. Nucleic Acids Res. 2003;31(4):e15.
Pan W. A comparative review of statistical methods for discovering differentially expressed genes in replicated microarray experiments. Bioinformatics. 2002;18(4):546–54.
Zhang S. A comprehensive evaluation of SAM, the SAM R-package and a simple modification to improve its performance. BMC Bioinformatics. 2007;8(1):230.
Xie Y, Pan W, Khodursky AB. A note on using permutation-based false discovery rate estimates to compare different analysis methods for microarray data. Bioinformatics. 2005;21(23):4280–8.
Hirakawa A, Sato Y, Hamada D, Yoshimura I. A new test statistic based on shrunken sample variance for identifying differentially expressed genes in small microarray experiments. Bioinform Biol Insights. 2008;2:145–56.
Dougherty ER. Small sample issues for microarray-based classification. Comp Funct Genomics. 2001;2(1):28–34.
Marshall E. Getting the noise out of gene arrays. Science. 2004;306(5696):630–1.
Cobb K. Microarrays: the search for meaning in a vast sea of data. Biomed Comput Rev. 2006;2(4):16–23.

The authors would like to thank the editor and the three anonymous reviewers for their insightful comments, which significantly improved this article. This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2017R1D1A1B03036078). None of the funding bodies played any role in the design or conclusions of this study. R code for implementing our proposed methods is given in Additional files 2 and 4 of this article. The datasets supporting the conclusions of this article are included in Additional files 5, 6 and 7. All of these additional files are also available at http://home.ewha.ac.kr/~josong/MSAM/index.html.

Department of Statistics, Ewha Womans University, Seoul, South Korea
Suyeon Kang & Jongwoo Song
All authors developed the approach, designed the study, wrote the computer code, analyzed the data, conducted the simulation studies and wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to Jongwoo Song.

Additional file 1: Additional simulation results for scenarios 3 and 4 (DOCX 399 kb)
Additional file 2: R code for the modified samr package (R 29 kb)
Additional file 3: Classification analysis section (DOCX 564 kb)
Additional file 4: R code for some examples of our method for detecting genes that are differentially expressed (R 2 kb)
Additional file 5: Fusarium data (CSV 2107 kb)
Additional file 6: Leukemia data (CSV 1168 kb)
Additional file 7: Simulated data (scenario 2) (CSV 2380 kb)

Kang, S., Song, J. Robust gene selection methods using weighting schemes for microarray data analysis. BMC Bioinformatics 18, 389 (2017). https://doi.org/10.1186/s12859-017-1810-x
Received: 23 March 2017
Keywords: Microarray data; Gene selection method; Noisy data; Transcriptome analysis
Half-order differentials on Riemann surfaces
N. S. Hawley (Stanford University, Stanford, Calif., USA) and M. Schiffer (Stanford University, Stanford, Calif., USA)
Acta Mathematica, 115 (1), 199–236, 1965. mathscidoc:1701.331301
Extremal and conjugate extremal distance on open Riemann surfaces with applications to circular-radial slit mappings
A. Marden (University of Minnesota, Minneapolis) and B. Rodin (University of Minnesota, Minneapolis)
On some non-linear elliptic differential-functional equations
Philip Hartman (The Johns Hopkins University, Baltimore, Md., USA) and Guido Stampacchia (The Johns Hopkins University, Baltimore, Md., USA)
Discrete series for semisimple Lie groups.
II
Harish-Chandra (The Institute for Advanced Study, Princeton, N.J., USA)
Acta Mathematica, 116 (1), 1–111, 1965.
A non-standard integral equation with applications to quasiconformal mappings
Lipman Bers (Columbia University, New York, N.Y., USA)
Acta Mathematica, 116 (1), 113–134, 1965.
On convergence and growth of partial sums of Fourier series
Lennart Carleson (Uppsala, Sweden)
The theory of stationary point processes
Frederick J. Beutler (The University of Michigan, Ann Arbor, Michigan, USA) and Oscar A. Z. Leneman (The University of Michigan, Ann Arbor, Michigan, USA)
An axiomatic formulation is presented for point processes which may be interpreted as ordered sequences of points randomly located on the real line. Such concepts as forward recurrence times and number of points in intervals are defined and related in set-theoretic terms.
Note that for α ∈ $A$, $G^{\alpha}$ may not cover $G_{\alpha}$ as a convex subgroup and so we cannot use Theorem 1.1 to prove this result. Moreover, all that we know about the $G^{\alpha}/G_{\alpha}$ is that each is an extension of a trivially ordered subgroup by a subgroup of $R$. If $B$ is a plenary subset of $A$, then there exists a $v$-isomorphism μ of $G$ into $V(B, G^{\beta}/G_{\beta})$, but whether or not μ is an $o$-isomorphism is not known.
Weighted polynomial approximation on arithmetic progressions of intervals or points
Paul Koosis (University of California, Los Angeles, California, USA)
Special functions on locally compact fields
P. J. Sally Jr. (Washington University, St. Louis, Mo., USA) and M. H. Taibleson (Washington University, St. Louis, Mo., USA)
On entire functions of exponential type and indicators of analytic functionals
C. O. Kiselman (The University of Stockholm, Stockholm, Sweden)
Acta Mathematica, 117 (1), 1–35, 1966.
We shall be concerned with the indicator $p$ of an analytic functional μ on a complex manifold $U$: $$p(\varphi ) = \overline{\mathop{\lim}\limits_{t \to +\infty}}\ \frac{1}{t}\log \left| \mu (e^{t\varphi }) \right|,$$ where φ is an arbitrary analytic function on $U$. More specifically, we shall consider the smallest upper semicontinuous majorant $p^{J}$ of the restriction of $p$ to a subspace £ of the analytic functions. An obvious problem is then to characterize the set of functions $p^{J}$ which can occur as regularizations of indicators. In the case when $U = C^{n}$ and £ is the space of all linear functions on $C^{n}$, this set can be described more easily as the set of functions (0.1) $$\mathop{\lim}\limits_{\theta \to \zeta} \overline{\mathop{\lim}\limits_{t \to +\infty}}\ \frac{1}{t}\log \left| u(t\theta) \right|$$ of $n$ complex variables ζ ∈ $C^{n}$, where $u$ is an entire function of exponential type in $C^{n}$. We shall prove that a function in $C^{n}$ is of the form (0.1) for some entire function $u$ of exponential type if and only if it is plurisubharmonic and positively homogeneous of order one (Theorem 3.4). The proof is based on the characterization given by Fujita and Takeuchi of those open subsets of complex projective $n$-space which are Stein manifolds.
The 13th International Conference on Stopping and Manipulation of Ions and related topics (SMI-2019) from Monday, 15 July 2019 (04:00) to Friday, 19 July 2019 (14:00) 18:00 Conference Check In Conference Check In 18:30 Welcome Reception Room: Ball Room 09:00 Welcome Address 09:10 Reviewing the success of radioactive-ion manipulation with RFQ traps and recalling the contributions from McGill - David Lunney (CSNSM/IN2P3 Orsay) Reviewing the success of radioactive-ion manipulation with RFQ traps and recalling the contributions from McGill David Lunney (CSNSM/IN2P3 Orsay) Room: Ball Room Ion cooling and trapping techniques have opened new vistas in the physics associated with exotic (short-lived) nuclides and helped cure the ills of isobaric contamination. The ability of condensing ion-beam phase space using soothing cold buffer gas accompanied by electromagnetic confinement has created a new paradigm: beam preparation. The main player in this field is the so-called RFQ cooler-buncher, a segmented linear Paul trap that can capture exotic nuclides hot off the target, reducing emittance by grouping ions into a tight bunch. Inspired by a 1982 sabbatical leave in Mainz with the group of H.-J. Kluge developing Penning traps for ISOLDE, the late R.B. Moore initiated the first ion-catching developments at McGill with many of details elaborated (at the bar) in Thomson House, the SMI2019 conference venue. Bunchers are now used for mass measurements and collinear laser spectroscopy of exotic nuclides. Further evolutions have them preceeding dipole mass separators to increase resolving power and even inside ISOL target modules combined with laser ionization for beam purification. RFQ bunchers are also necessary preparatory devices for the fabulous multi-reflection time of flight (MRToF) mass spectrometers that are now pervading the radioactive-ion scene. In this presentation, the (local) history will be briefly told and the rich evolution of cooler-bunchers will be illustrated as exhaustively as time will permit. 09:30 Actinide beams by light-ion induced fusion-evaporation for mass-, decay- and optical spectroscopy at IGISOL - Ilkka Pohjalainen (University of Jyväskylä) Actinide beams by light-ion induced fusion-evaporation for mass-, decay- and optical spectroscopy at IGISOL Ilkka Pohjalainen (University of Jyväskylä) Room: Ball Room The production of actinide ion beams has become a focus of recent efforts at the IGISOL facility of the Accelerator Laboratory, University of Jyväskylä, especially aimed at the measurement of nuclear properties of heavy elements using high-resolution optical spectroscopy [1]. The first successful proof-of-principle on-line experiment for the production of actinides from a light-ion fusion-evaporation reaction has recently been performed with protons on $^{232}$Th targets. Several alpha-active reaction products were detected, reaching as neutron deficient as $^{224}$Pa through the $^{232}$Th(p, 9n)$^{224}$Pa with a 60 MeV primary beam. By detection of gamma-rays in coincidence with the alpha-decay, new information on the decay radiation has been obtained on nuclei including $^{226}$Pa. Direct detection of long lived actinides such as $^{229}$Th which is of special interest due to the extremely low-energy isomer [2], was not possible due to low alpha-activity as well due to low $Q_{EC/\beta^-}$ values, rendering separation of isotopes even with high resolution Ramsey cleaning with the Penning trap ineffective. 
Therefore, the novel Phase-Imaging Ion Cyclotron Resonance (PI-ICR) method [3] at JYFLTRAP is to be used for for a direct yield determination of long-lived isotopes in an upcoming experiment. This will also allow direct high-precision mass measurements creating new anchor points in the mass network calculations which currently rely on long chains of alpha decays in the actinide region of the nuclear chart. An important aspect of these developments has been related to target manufacturing. In addition to metallic thorium targets, several new $^{232}$Th targets manufactured by a novel Drop-on-Demand inkjet printing method [4] were successfully tested. These targets were provided by the Nuclear Chemistry Institute of Johannes Gutenberg-Universität Mainz who will now provide several new targets from other more exotic actinides such as $^{233}$U or $^{237}$Np. With these new targets we expect to access several new isotopes in the neutron-deficient actinide region for decay and optical spectroscopy as well as for mass measurements. [1] A. Voss et al., Phys. Rev. A, 95 (2017) 032506. [2] L. von der Wense et al., Nature, 533 (2016) 47. [3] D. Nesterenko et al., Eur. Phys. J. A, 54 (2018) 154. [4] R. Haas. et al., Nucl. Instr. Meth. A, 874 (2017) 43. 10:00 The ELI-IGISOL radioactive ion beam facility at ELI-NP - Paul Constantin (ELI-NP (Romania)) The ELI-IGISOL radioactive ion beam facility at ELI-NP Paul Constantin (ELI-NP (Romania)) Room: Ball Room The Extreme Light Infrastructure for Nuclear Physics (ELI-NP) facility will make available in the near future two new photon installations: a high-power laser system and a high-brilliance gamma beam system, which can be used together or separately. The ELI-IGISOL project [1] will use the primary gamma beam to generate a Radioactive Ion Beam (RIB) via photofission in a stack of Uranium targets placed at the center of a gas cell [2]. The particular technology used for this gas cell is the High Areal Density with Orthogonal extraction Cryogenic Stopping Cell (HADO-CSC) [3] featuring ion extraction orthogonal to the primary beamline. The gas cell is coupled to a radio-frequency quadrupole for beam formation. The exotic neutron-rich nuclei will be separated, and their mass measured, by a high-resolution Multiple-Reflection Time-of-Flight (MR-ToF) mass spectrometer. The isomerically pure RIBs [4] obtained with the MR-ToF will be further measured by a β-decay tape station and a collinear laser spectroscopy station. The latest developments in the simulation and design of the gas cell are presented. We report benchmark calculations of the production rates and of the extraction time and efficiency from the gas cell. Starting from these studies, the optimal design of the cell and its state-of-the-art technologies is discussed. Various testing units for the HADO-CSC components that are being developed at ELI-NP will be presented. 1. D.L. Balabanski et al., "Photofission Experiments at ELI-NP", *Rom. Rep. Phys.* **68**, S621 (2016). 2. P. Constantin et al., "Design of the gas cell for the IGISOL facility at ELI-NP", *Nucl. Inst. Meth. B* **397**, 1 (2017). 3. T. Dickel et al., "Conceptual design of a novel next-generation cryogenic stopping cell for the Low-Energy Branch of the Super-FRS", *Nucl. Inst. Meth. B* **376**, 216 (2016). 4. T. Dickel et al., "First spatial separation of a heavy ion isomeric beam with a multiple-reflection time-of-flight mass spectrometer", *Phys. Lett. B* **744**, 137 (2015). 
10:30 Coffee Break 11:00 Barium Tagging in High Pressure Xenon Gas - Ben Jones (UTA) Barium Tagging in High Pressure Xenon Gas Ben Jones (UTA) Room: Ball Room The identification of a single barium ion in coincidence with an energy deposit measured with a precision of 1% in xenon is widely recognized as an unambiguous signature of neutrinoless double beta decay. The detection of single ions in tons of gas or liquid xenon, however, is a major experimental challenge. In this talk I will discuss barium tagging methodologies based on single molecule fluorescence imaging adapted to high pressure xenon gas time projection chambers. Recent advances in ion sensing chemistry and gas phase microscopy will be presented, followed by a discussion of the subsequent R&D steps planned by the NEXT collaboration to enable an ultra-low background, barium tagging neutrinoless double beta decay technology. 11:30 Barium Ion Transport in High Pressure Xenon Gas using RF Carpets - Katherine Woodruff Barium Ion Transport in High Pressure Xenon Gas using RF Carpets Katherine Woodruff Room: Ball Room A background-free measurement of neutrinoless double beta decay can be achieved with the detection of the daughter nucleus. Methods to image the daughter barium ion in the decay of xenon-136 are being developed for use in high pressure gas time projection chambers by the NEXT collaboration. A major remaining challenge is the transport of the barium ion to a small imaging region within the detector. In this talk I will discuss the plans for testing RF carpet performance in high pressure gas, early simulation results, and experimental tests of RF high voltage behavior in high pressure systems. I will also discuss our studies of ion drift properties in DC fields in high pressure gases. 14:00 Laser Resonance Chromatography (LRC): A new methodology in superheavy element research - Mustapha Laatiaoui (JGU Mainz) Laser Resonance Chromatography (LRC): A new methodology in superheavy element research Mustapha Laatiaoui (JGU Mainz) Room: Ball Room Optical spectroscopy constitutes the historical path to accumulate basic knowledge on the atom and its structure. Former work based on fluorescence and resonance ionization spectroscopy enabled identifying optical spectral lines up to element 102, nobelium [1, 2]. Beyond nobelium, solely predictions of the atom's structure exist, which in general are far from sufficient to reliably identify atoms from spectral lines. One of the major difficulties in atomic model calculations arise from the complicated interaction between the numerous electrons in atomic shells, which necessitate conducting experiments on such exotic quantum systems. The experiments, however, face the challenging refractory nature of the elements, which lay ahead, coupled with shorter half-lives and decreasing production yields. In this contribution, a new concept of laser spectroscopy of the superheavy elements is proposed. To overcome the need for detecting fluorescence light or for neutralization of the fusion products, which were employed up to date when lacking tabulated spectral lines, the new concept foresees resonant optical excitations to alter the ratio of ions in excited metastable states to ions in the ground state. The excitation process shall be readily measurable using electronic-state chromatography techniques [3, 4] as the ions exhibit distinct ion mobilities at proper conditions and thus drift at different speeds through the apparatus to the detector. 
The concept offers unparalleled access to laser spectroscopy of many mono-atomic ions across the periodic table of elements, in particular, the transition metals including the high-temperature refractory metals and the elusive superheavy elements like rutherfordium and dubnium at the extremes of nuclear existence. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (No. 819957) **References** 1. J. Reader A. Kramida, Yu. Ralchenko and NIST ASD Team (2018), 2019. 2. M. Laatiaoui et al., Nature, 538 (2016) 495. 3. P. R. Kemper and M. T. Bowers, J. Phys. Chem., 95 (1991) 5134. 4. M. J. Manard and P. R. Kemper, Int. J. Mass Spectrom., 407 (2016) 69. 14:20 A compact gas filled linear Paul trap for CRIS experiments. - Ben Cooper (University of Manchester) A compact gas filled linear Paul trap for CRIS experiments. Ben Cooper (University of Manchester) Room: Ball Room The CRIS technique (Collinear Resonance Ionisation Spectroscopy) has been shown to be an efficient method for accessing fundamental nuclear properties of exotic isotopes [1]. The technique can be applied to stable ion beams produced via laser ablation [2] which are pulsed due to the method of production. However, with radioactive cases produced at the ISOLDE (Isotope separator On-line) facility at CERN, a gas filled linear Paul trap is required for creating ion bunches. Currently, radioactive ion beams are produced via proton impact with a suitable target at ISOLDE. The resulting beam is then trapped, cooled, and bunched using the ISCOOL device following mass separation. The ion bunches are then directed to the CRIS setup where they are prepared for laser spectroscopy experiments. The technique has been shown to reveal properties such as nuclear spins, magnetic and electric quadrupole moments, and isotopic variations in the nuclear mean square charge radii. Measurement of these properties is made possible with ion beams that have been bunched with reduced emittance. The CRIS method has so far measured fundamental nuclear properties of neutron deficient Francium [3], and neutron rich radium [4] isotopes, among others. We envisage significant improvements to the CRIS technique following the installation of an independent gas filled linear Paul trap at ISOLDE as an alternative to the ISCOOL device. This would reduce set up times prior to time constrained experiments at the ISOLDE facility. It would enable constant optimisation of beam transport and quality. It would also trivialise switching from a radioactive beam to a stable reference isotope from our independent offline ion source. We provide an overview of the work completed since the first prototype was constructed and installed at the University of Manchester [5], where tests utilising a Ga ion source are ongoing. These tests include ion transport and gas attenuation within the device. Spatial limitations require that the new device is compact (<80 cm in length). SIMION calculations estimate that a prototype device with a 20 cm rod length could achieve a trapping efficiency of up to ~ 40% with a mean energy spread of ~ 4 eV. [1]: T.E Cocolios et al. Nucl, Inst, Methods in Phys Res B, 317 (2013) [2]: R. F. Garcia Ruiz et al. Phys. Rev. X 8, 041005 (2018) [3]: K.T. Flanagan et al. Phys rev lett 111, 212501 (2013) [4]: K. M. Lynch et al. Phys. Rev. C 97, 024309 (2018) [5]: B. S. Cooper et al. 
Hyperfine Interact, 240:52 (2019) 14:40 Development of offline ion source for collinear laser spectroscopy at the SLOWRI facility in RIKEN - Minori TAJIMA (RIKEN Nishina Center) Development of offline ion source for collinear laser spectroscopy at the SLOWRI facility in RIKEN Minori TAJIMA (RIKEN Nishina Center) Room: Ball Room We have prepared an offline ion source mainly for a planned collinear laser spectroscopy of RI beams at the SLOWRI facility in RIKEN. It was designed to provide low-emittance ion beams including refractory elements such as Zr, by combining laser ablation of a solid target in He gas and RF ion guide system [1]. We have connected the ion source to a test beamline and observed about $10^7$ singly charged ions per laser pulse ($\le 10$ Hz) extracted at 10 keV. The current situation including tests to evaluate the performance will be presented. **References:** [1] M. Wada *et al*., Nucl. Instrum. Methods Phys. Res. **B** 204, 570 (2003). e-mail: mtajima@riken.jp 15:30 The CISe project - Julia Even (University of Groningen) The CISe project Julia Even (University of Groningen) Room: Ball Room Gas-catchers are widely used in experimental nuclear physics to slow down for precision measurements. Chemical reactions of the ions with impurities in the gas can affect the extraction efficiency. Thus, there is lots of effort to keep the gas inside the catcher as clean as possible. Our aim is to explore the potential of chemical reactions for Chemical Isobaric Separation (CISe). We are currently building a new setup consisting of a gas-catcher and a commercial quadrupole Time-of-Flight mass-spectrometer. First studies in a hexapol collision cell have been performed to investigate the ion chemistry of tin, indium, cadmium and silver. In this contribution, an overview of the project will be presented. 16:00 Single Barium Atom Detection in Solid Xenon for the nEXO Experiment - Christopher Chambers (McGill University) Single Barium Atom Detection in Solid Xenon for the nEXO Experiment Christopher Chambers (McGill University) Room: Ball Room The proposed nEXO experiment is a tonne-scale liquid xenon time projection chamber, designed to search for neutrinoless double beta decay in xenon-136 [1]. A critical concern for any rare decay search is reducing or eliminating backgrounds that will interfere with the signal [2]. A powerful background discrimination technique is the positive identification ("tagging") of the decay daughter, in this case barium. A technique being developed in the nEXO collaboration is the trapping and extraction of the Ba daughter ion in solid xenon on a cryogenic probe, then using fluorescence spectroscopy to tag, i.e., identify the barium atom. Individual barium atoms, implanted into Xe ice as Ba ions, have been imaged in solid xenon, and the 619 nm emission of atomic barium in solid xenon has been assigned to single vacancy trapping sites [3]. 1. Al Kharusi et al. (nEXO Collaboration), arXiv:1805.11142 *[physics.ins-det]* (2018). 2. Albert et al. (nEXO Collaboration), *Phys. Rev. C* **97**, 065503 (2018). 3. Chambers et al. (nEXO Collaboration), *Nature* **569**, 203-207 (2019). 
16:20 Group Photo 16:40 Adjourn 09:10 First application of mass selective re-trapping enables mass measurements of neutron-deficient Yb and Tm isotopes despite strong isobaric background - Moritz Pascal Reiter (JLU Giessen, TRIUMF) First application of mass selective re-trapping enables mass measurements of neutron-deficient Yb and Tm isotopes despite strong isobaric background Moritz Pascal Reiter (JLU Giessen, TRIUMF) Room: Ball Room TRIUMF's Ion Trap for Atomic and Nuclear science (TITAN) [1] located at the Isotope Separator and Accelerator (ISAC) facility, TRIUMF, Vancouver, Canada is a multiple ion trap system specialized in performing high-precision mass measurements and in-trap decay spectroscopy of short-lived radioactive ions. Although ISAC can deliver high yields for some of the most exotic species, many measurements suffer from strong isobaric background. In order to overcome this limitation an isobar separator based on the Multiple-Reflection Time-Of-Flight Mass Spectrometry (MR-TOF-MS) technique has been developed and installed at TITAN [2]. Mass selection is achieved using dynamic re-trapping of the ions of interest after a time-of-flight analysis in an electrostatic isochronous reflector system [3]. Re-using the injection trap of the device for the mass-selective re-trapping, the TITAN MR-TOF-MS can operate as its very own high resolution isobar separator prior to mass measurements within the same device. This combination of operation modes boosts the dynamic range and background handling capabilities of the device, enabling high precision mass measurements with ion of interests to contaminant ratios of 1:10^6. We will discuss the technical aspects of re-trapping and recent results of mass measurements of neutron-deficient Yb and Tm isotopes investigating the persistence of the N=82 neutron shell closure far from stability made possible by employing for the first time online mass selective re-trapping to supress strong isobaric background. References: [1] J. Dilling et al., NIM B 204, 2003, 492–496 [2] C. Jesch et al., , Hyperfine Interact. 235 (1-3), 2015, 97–106 [3] T. Dickel et al. J. Am. Soc. Mass Spectrom. (2017) 28: 1079 09:30 Design, optimization and commission of a multi-reflection time-of-flight mass analyzer at IMP/CAS - Yongsheng Wang (Institute of Modern Physics, Chinese Academy Science) Design, optimization and commission of a multi-reflection time-of-flight mass analyzer at IMP/CAS Yongsheng Wang (Institute of Modern Physics, Chinese Academy Science) Room: Ball Room A multi-reflection time-of-flight mass analyzer is being constructed for isobaric separation and mass measurement at IMP/CAS (Institute of Modern Physics, Chinese Academy Science). A new method including two sub-procedures, global search and local refinement, has been developed for the design of MRTOF mass analyzer. The method can be used to optimize the parameters of MRTOF-MS both operating in mirror-switching mode and in-trap-lift mode. By using this method, an MRTOF mass analyzer, in which each mirror consists of five cylindrical electrodes, has been designed. In the mirror-switching mode, the maximal mass resolving power has been achieved to be 1.3 × 10$^5$ with a total time-of-flight of 6.5 ms for the ion species of $^{40}$Ar$^{1+}$ [1], and in the in-trap-lift mode, it is 1.6 × 10$^5$ with a total time-of-flight of 6.4 ms [2]. The simulation also reveals the relationships between the resolving power and the potentials applied on the mirror electrodes, the lens electrode and the drift tube. 
This MRTOF-MS has been constructed and is now being commissioned. The preliminary test results show that it works [2]. In this conference, we will present the design details, the optimization method and the test results obtained. **References:** [1] Y.L. Tian, Y.S. Wang, J.Y. Wang, et al., Int. J. Mass Spectrom. 408, 28–32 (2016). [2] Jun-Ying Wang, Yu-Lin Tian, Yong-Sheng Wang, et al., Nucl. Instrum. Meth. B, (2019). e-mail: yswang629@impcas.ac.cn 10:00 MIRACLS: A Multi Ion Reflection Apparatus for Collinear Laser Spectroscopy - Simon Sels (CERN) MIRACLS: A Multi Ion Reflection Apparatus for Collinear Laser Spectroscopy Simon Sels (CERN) Room: Ball Room Laser spectroscopy is a well-established technique for studying nuclear ground-state properties in a model-independent way. By observing the isotope shifts and hyperfine structures of the atoms' spectral lines, the technique provides access to the charge radii and electromagnetic moments of the nuclear ground and isomeric states [1, 2]. While in-source laser spectroscopy in a hot cavity is a very sensitive method that is able to measure rare isotopes with production rates below one particle per second at ISOL facilities [3], the spectral resolution of this method is limited by Doppler broadening to ~5 GHz. Collinear laser spectroscopy (CLS), on the other hand, provides an excellent spectral resolution of ~10 MHz [1], which is of the order of the natural line widths of allowed optical dipole transitions. However, CLS requires yields of more than 100 or even 10,000 ions/s depending on the specific case and spectroscopic transition [4]. The MIRACLS project at CERN aims to develop a laser spectroscopy technique that combines the high spectral resolution of conventional fluorescence CLS with an enhanced sensitivity, by a factor of 20-600 depending on the mass and lifetime of the studied nuclide. The sensitivity increase is derived from an extended observation time provided by trapping ion bunches in a Multi-Reflection Time-of-Flight device where they can be probed several thousand times [5]. A proof-of-principle apparatus, operating at 2 keV beam energy, has been assembled at CERN ISOLDE with the goal of demonstrating the MIRACLS concept, benchmarking the simulations [6] that will be employed to design a future device operating at 30 keV, and guiding further technological developments. Recently, first measurements have been performed with the proof-of-principle apparatus using stable magnesium isotopes as a first test case. Laser spectroscopy has been performed on $^{24,26}$Mg$^+$ ions trapped for more than 5000 revolutions in the MR-ToF. Line widths close to the Doppler limit in this 2-keV machine have been achieved. Furthermore, collinear-anticollinear spectroscopy, among other measurements, has been performed on $^{40}$Ca$^+$ ions. An extensive characterization study of the device is ongoing. This talk will introduce the MIRACLS concept, present the first results and current status of the project, as well as an outlook towards further developments. [1] K. Blaum, et al., Phys. Scr. T152, 014017 (2013) [2] P. Campbell et al., Prog. Part. and Nucl. Phys. 86, 127-180 (2016) [3] B. Marsh et al., Nature Physics 14, 1163-1167 (2018) [4] R. Neugart, J. Phys. G: Nucl. Part. Phys. 44 (2017) [5] S. Sels et al., Nucl. Instr. Meth. B, In press (2019) DOI: 10.1016/j.nimb.2019.04.076 [6] F. Maier, et al. Hyperfine Interact (2019) 240:54 11:00 Status of St. Benedict at the Nuclear Science Laboratory - Daniel Burdette (University of Notre Dame) Status of St.
Benedict at the Nuclear Science Laboratory Daniel Burdette (University of Notre Dame) Room: Ball Room St. Benedict, the Superallowed Transition Beta-Neutrino Decay-Ion-Coincidence Trap, is in development at the University of Notre Dame's Nuclear Science Laboratory. This ion trapping system will be composed of three main components. The first component will be a large-volume gas cell which will thermalize ions through collisions with a buffer gas, coupled with an RF-funnel-based ion guide system followed by a sextupole ion guide (SPIG) for extraction. Then, a radiofrequency quadrupole (RFQ) will take the continuous beam from the gas catcher and produce a cooled, bunched beam for injection into a linear Paul trap. The Paul trap will hold the ions near rest until they decay, and surrounding detectors will be used to determine the kinematics of the decay particles. The $\beta$-decay spectrum can be extracted from this information and used to determine the $\beta$-$\nu$ angular correlation coefficient, $a_{\beta\nu}$. This will allow for the determination of the Fermi to Gamow-Teller mixing ratio, $\rho$, for members of the ensemble of T=1/2 superallowed $\beta$ decays which have not yet had this quantity measured experimentally. The determination of $\rho$ for these decays will allow for the calculation of a precision $\text{V}_{\text{ud}}$ value complementary to the current precision limit provided by superallowed $0^+ \rightarrow 0^+$ decays. The current status of the project will be presented. This work is funded by the National Science Foundation Major Research Instrumentation grant PHY-1725711. 11:20 MORA project and optimization of transparent ion trap geometry - Meriem BENALI (LPC Caen, France) MORA project and optimization of transparent ion trap geometry Meriem BENALI (LPC Caen, France) Room: Ball Room The MORA (Matter's Origin from the RadioActivity of trapped and oriented ions) project [1] is part of the research on CP violation that could explain the matter-antimatter asymmetry observed in the universe, through the measurement of the so-called D correlation. MORA uses an innovative in-trap orientation method which combines the high trapping efficiency of a transparent Paul trap with laser orientation techniques. The MORA setup will make it possible to reach a precision on D down to a few $10^{-5}$, which allows the Final State Interaction (FSI) effects to be probed for the first time. Within the framework of this project, a three-dimensional Paul trap (MORATrap) geometry has been optimized to broaden the quadrupolar region, where the contribution of higher-order harmonics is reduced. MORATrap is composed of three conic ring pairs with a mid-plane symmetry; its geometry is inspired by the existing transparent Paul trap, LPCTrap [2]. Our trap optimization was carried out by minimizing high-order harmonics and maximizing the quadrupolar term in the spherical-harmonics expansion of the potential generated at the trap center. Our simulation is based on solving Laplace's equation with the AXIELECTROBEM software developed at LPC Caen, coupled to a $\chi^2$ minimization. [1] P. Delahaye et al., arXiv:1812.02970, proceedings of the TCP 2018 conference, to appear in Hyp. Int. [2] P. Delahaye et al., arXiv:1810.09246 [physics.ins-det], submitted to EPJA. 11:40 Afternoon Free Afternoon Free 09:10 Recent results from the FRS Ion Catcher - Ivan Miskun (1 II.
Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392, Gießen, Germany 2 GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291, Darmstadt, Germany) Recent results from the FRS Ion Catcher Ivan Miskun (1 II. Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392, Gießen, Germany 2 GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291, Darmstadt, Germany) Room: Ball Room The FRS Ion Catcher setup [1] is used for thermalization and high-resolution measurements of exotic nuclei produced at relativistic energies of up to 1 GeV/u at the fragment separator (FRS) at GSI. It consists of a cryogenic gas-filled stopping cell (CSC), an RFQ beamline and a multiple-reflection time-of-flight mass spectrometer (MR-TOF-MS), which can be used for mass measurements with mass accuracies down to $6\cdot10^{-8}$ [2] and for the production of isobarically and isomerically clean beams. Over the last few years, several technical improvements and upgrades have been implemented in the setup. New techniques for enhancing the selectivity of ion transport, based on ion mobility and on the dissociation of molecular contaminants, were developed. The RFQ beamline was expanded and upgraded with improved differential pumping, a mass filter and a laser ablation carbon cluster ion source. The areal density of the CSC was increased to 10 mg/cm$^2$. A novel method for half-life and branching-ratio measurements [3], using the CSC as an ion trap for controlled storage of ions, was developed and demonstrated. In addition, the progress on the technical design of the CSC for the Low-Energy Branch of the Super-FRS at FAIR will be reported. References: [1] W. R. Plass et al., Nucl. Instrum. Methods B, 317 (2013) [2] S. Ayet et al., accepted to Phys. Rev. C, arXiv:1901.11278 (2019) [3] I. Miskun et al., submitted to Eur. Phys. Journal A, arXiv:1902.11195 (2019) e-mail: Ivan.Miskun@physik.uni-giessen.de 09:30 The Advanced Cryogenic Gas Stopper at NSCL – Progress towards Operations - Kasey Lund (The National Superconducting Cyclotron Laboratory) The Advanced Cryogenic Gas Stopper at NSCL – Progress towards Operations Kasey Lund (The National Superconducting Cyclotron Laboratory) Room: Ball Room The Advanced Cryogenic Gas Stopper (ACGS) has successfully delivered its first rare isotope beam for experiments at the National Superconducting Cyclotron Laboratory (NSCL). The ACGS has demonstrated increased extraction efficiency, reduced transport time, reduced molecular contamination of the isotope of interest, and the ability to minimize space-charge effects. This is achieved by a novel 4-phase radio-frequency wire carpet, which generates a traveling electrical wave for fast and efficient ion transport; by cryogenic cooling of the helium gas chamber, which reduces unwanted molecular formation; and by a new planar geometry with the wire carpet in the mid-plane of the stopper, which alleviates space-charge effects. Offline testing of the ACGS has shown wire-carpet transport efficiencies greater than 95% and transport speeds up to 100 m/s. Operating at a temperature near 80 K, the ACGS delivered argon-44 to the ReA3 system reliably for over a week with a beam rate up to twice that advertised on the ReA3 Beam List. This presentation will show the most recent online and offline performance of the ACGS and discuss advancements made regarding extraction from the gas stopper.
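The "traveling electrical wave" produced by the 4-phase wire carpet described above can be pictured with a small toy model: four interleaved electrode groups driven 90° apart in phase add up to a potential pattern whose crests drift steadily along the carpet, carrying ions with them. The sketch below is purely illustrative: the amplitude, electrode pitch and drive frequency are made-up values, not the ACGS parameters.

```python
import numpy as np

# Toy 4-phase traveling-wave picture: electrode group n contributes a term
# cos(k*x - n*pi/2) * cos(w*t - n*pi/2); summing the four groups (spaced a
# quarter of the spatial period apart) gives a single wave moving along x.
amp = 1.0          # arbitrary amplitude
pitch = 1.0e-3     # spatial period of the electrode pattern (m), assumed
freq = 1.0e5       # drive frequency (Hz), assumed
k = 2 * np.pi / pitch
w = 2 * np.pi * freq

def wave(x, t):
    """Sum of the four phase-shifted contributions; equals amp*cos(k*x - w*t)."""
    return sum(amp * np.cos(k * x - n * np.pi / 2) * np.cos(w * t - n * np.pi / 2)
               for n in range(4)) / 2.0

x = np.linspace(0.0, pitch, 9)          # one spatial period, 9 sample points
for t in (0.0, 1.0 / (8 * freq)):       # snapshot now and 1/8 RF period later
    print([round(v, 2) for v in wave(x, t)])   # the crest shifts by one sample
```

Each printed row is a snapshot of the potential along one period of the carpet; the second row is the same pattern shifted along x, which is the steady drift that moves ions toward the extraction point.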
10:00 Particle-in-Cell Simulations for Studies of Space Charge Effects in Ion Trap and Ion Transport Devices - Ryan Ringle (NSCL/FRIB) Particle-in-Cell Simulations for Studies of Space Charge Effects in Ion Trap and Ion Transport Devices Ryan Ringle (NSCL/FRIB) Room: Ball Room One of the least intuitive phenomena in ion trap or ion transport devices is the effect of large numbers of charged particles, also known as space charge, on the performance of the device. Space charge can shield applied DC and RF fields, leading to poor transport efficiencies and increased spatial and energy distributions. Robust simulation methods must be employed in order to mitigate these effects and to gain a better understanding of the device in the presence of space charge. However, standard ion optics software, such as SIMION [1], have limited ability to handle space charge, or are not optimized to efficiently study the system of interest. Therefore, other, more specialized, techniques must be used. The particle-in-cell (PIC) method has been used to study plasmas and gravitational systems for decades, typically employing 2D or 3D coordinate systems. Thorough treatments of the subject can be found in [2, 3]. Modern desktop computing hardware make 3D PIC simulations with millions of super particles possible in a reasonable amount of time without requiring a high-performance computing cluster. The 3DCylPIC package [4] was developed to study devices at FRIB/NSCL, such as RF carpets, gas cells, radiofrequency quadrupole cooler/bunchers, MR-TOFs, etc., that need to operate effectively in the presence of large amounts of space charge. In this talk I will describe how 3DCylPIC operates and present the results of simulations of devices that are currently in use, making comparisons to measurements where possible. [1] D. A. Dahl, Int. J. Mass Spectrom. 200 (2000) 3–25. [2] C. K. Birdsall, A. B. Langdon, Plasma physics via computer simulation, McGraw-Hill, New York, 1985. [3] R. W. Hockney, J. W. Eastwood, Computer simulation using particles, A. Hilger, Bristol [England] ; Philadelphia, 1988. [4] R. Ringle, Int. J. Mass Spectrom. 303 (2011) 42-50. 11:00 Efficient Ion Thermalization and Mass Spectrometry of (Super-)Heavy Elements at SHIPTRAP - Oliver Kaleja (MPIK Heidelberg, JGU Mainz, GSI Darmstadt) Efficient Ion Thermalization and Mass Spectrometry of (Super-)Heavy Elements at SHIPTRAP Oliver Kaleja (MPIK Heidelberg, JGU Mainz, GSI Darmstadt) Room: Ball Room The quest for the *island of stability*, a region of nuclides with enhanced stability around proton and neutron numbers $Z\approx 114-126$ and $N\approx 184$, respectively, is at the forefront of nuclear physics. The survival of superheavy elements is intimately linked to nuclear shell effects, which can be experimentally probed by mass measurements. Experiments around this region are hampered by extremely low production rates of down to few ions per month. Nonetheless, the Penning-trap mass spectrometer SHIPTRAP, located at the GSI in Darmstadt, Germany, has shown that direct high-precision measurements of atomic masses of $_{102}$No and $_{103}$Lr isotopes around the deformed shell closure $N=152$ are feasible and provide indispensable knowledge on binding energies, shell effects and yield important anchor-points on $\alpha$-decay chains, affecting absolute mass values up to the heaviest elements. 
To continue this groundbreaking program and to proceed towards heavier and more exotic nuclides, the drop in production rate has to be accommodated by several improvements. The Penning-trap system was recently relocated, allowing the integration of a second-generation gas-stopping cell operating at cryogenic temperatures. Its stopping efficiency was optimized using the SRIM simulation software, and its purity was recently investigated using recoil-ion sources. In addition, the Phase-Imaging Ion-Cyclotron-Resonance (PI-ICR) technique was developed, increasing the sensitivity of mass measurements. Fully exploiting its enhanced mass resolving power required improving the temporal stability of the electric and magnetic fields. Furthermore, its applicability in low-rate measurements, accumulating only a few ions in total, had yet to be proven. In the SHIPTRAP experimental campaign in summer 2018, we extended direct high-precision Penning-trap mass spectrometry into the region of the heaviest elements using the PI-ICR technique. For the first time, direct mass measurements of $^{251}$No, $^{254}$Lr and the superheavy nuclide $^{257}$Rf were performed with rates down to one detected ion per day. Despite these very low rates, the PI-ICR technique made it possible to resolve the isomeric states $^{251m,254m}$No and $^{254m,255m}$Lr from their respective ground states with mass resolving powers of up to 10,000,000 and to accurately determine their excitation energies, which had previously been derived only indirectly via decay spectroscopy. In this contribution an overview of the technical developments and the recent results will be given. 11:30 Addressing the systematics in phase-imaging ion-cyclotron-resonance measurements at the Canadian Penning Trap mass spectrometer - Dwaipayan Ray (University of Manitoba) Addressing the systematics in phase-imaging ion-cyclotron-resonance measurements at the Canadian Penning Trap mass spectrometer Dwaipayan Ray (University of Manitoba) Room: Ball Room Phase-imaging ion-cyclotron-resonance (PI-ICR) is a novel technique for determining the cyclotron frequency ($\nu_{c}$) of an ion trapped in a Penning trap. First developed by the SHIPTRAP group at GSI [1], this technique relies on measuring the radial phase a trapped ion accumulates over a period of time. PI-ICR is currently employed at the Canadian Penning Trap mass spectrometer (CPT) at Argonne National Laboratory (ANL) [2,3]. The measurement campaigns and extensive tests over the last few years have revealed a number of systematics relating to the alignment between the magnetic field and the ejection optics, the stability of the Penning trap electric field, and the initial magnetron motion of the ions [4]. These systematics and the efforts to address them will be presented. This work is supported by the Natural Sciences and Engineering Research Council (NSERC, Canada) under Application Number SAPPJ-2018-00028, the U.S. Department of Energy (DOE), Office of Nuclear Physics, under Contract Number DE-AC02-06CH11357 (ANL), and a Facility for Rare Isotope Beams - China Scholarship Council (FRIB-CSC) Fellowship under Grant Number 201704910964. [1] S. Eliseev *et al.*, Appl. Phys. B 114 (2014) 107. [2] R. Orford, N. Vassh *et al.*, Phys. Rev. Lett. 120 (2018) 262702. [3] D.J. Hartley, F.G. Kondev, R. Orford *et al.*, Phys. Rev. Lett. 120 (2018) 182502. [4] R. Orford, PhD thesis, McGill University, Canada (2018).
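For orientation on why measuring an accumulated phase is so powerful: the cyclotron frequency of an ion in a Penning trap is $\nu_c = qB/(2\pi m)$, and after a phase-accumulation time $t$ an uncertainty $\delta\phi$ in the measured phase corresponds to a frequency uncertainty $\delta\nu \approx \delta\phi/(2\pi t)$. The numbers in the sketch below (field strength, mass number, accumulation time, phase resolution) are generic illustrative assumptions, not the SHIPTRAP or CPT operating parameters.

```python
import math

# Illustrative PI-ICR scaling; every input value below is an assumption.
Q = 1.602e-19        # C, charge of a singly charged ion
AMU = 1.6605e-27     # kg per atomic mass unit

B = 6.0              # T, magnetic field (assumed)
A = 250              # mass number (assumed, heavy-element region)
t_acc = 0.5          # s, phase accumulation time (assumed)
dphi_deg = 5.0       # deg, phase resolution (assumed)

m = A * AMU
nu_c = Q * B / (2 * math.pi * m)                      # cyclotron frequency
dnu = math.radians(dphi_deg) / (2 * math.pi * t_acc)  # frequency uncertainty

print(f"nu_c          : {nu_c / 1e3:.1f} kHz")
print(f"delta nu      : {dnu * 1e3:.1f} mHz")
print(f"delta nu/nu_c : {dnu / nu_c:.1e}")            # relative precision ~1e-7
```

The relative precision improves linearly with the accumulation time, which is what makes the technique attractive for very low-rate species, provided the trap fields stay stable over that time; those stability requirements are exactly the systematics discussed in the two contributions above.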
14:00 Characterization of supersonic jets for in-gas-jet laser ionization spectroscopy at the IGLIS laboratory and of gas flow inside the ion guide at the IGISOL-4 facility - ALEXANDRA ZADVORNAYA (University of Jyväskylä) Characterization of supersonic jets for in-gas-jet laser ionization spectroscopy at the IGLIS laboratory and of gas flow inside the ion guide at the IGISOL-4 facility ALEXANDRA ZADVORNAYA (University of Jyväskylä) Room: Ball Room Noble gases such as argon and helium are utilized within the In-Gas Laser Ionization and Spectroscopy (IGLIS) [1] and Ion Guide Isotope Separation On-Line (IGISOL) [2] techniques to thermalize and transport nuclear reaction products, which often have short lifetimes and small production yields. To facilitate the spectroscopic studies of the properties of nuclear reaction products, thorough understanding and characterization of utilized gas flows are essential. Characterization was performed experimentally at both the IGLIS and IGISOL-4 laboratories and numerically using the Computational Fluid Dynamics (CFD) Module of COMSOL Multiphysics. With the in-gas-jet method, an extension of the IGLIS technique, the spectral resolution is improved by more than one order of magnitude in comparison to in-gas-cell laser ionization spectroscopy [3], while maintaining a high efficiency. This allows the determination of nuclear properties with higher precision. The flow parameters of such supersonic gas jets were characterized at the IGLIS laboratory at KU Leuven using Planar Laser Induced Fluorescence (PLIF) and will be discussed in the first part of this talk. The projected temperature associated (Doppler) broadening, which can be attained with an upgraded in-gas-jet method, was estimated to be about 140 MHz for the No isotopes. Moreover, the numerical calculations were performed to obtain temperature, velocity and Mach number profiles of supersonic jets formed by a de Laval nozzle. The experimental and numerical in-gas-jet results agreed reasonably well for a range of coordinates after the nozzle's exit [4]. Extraction efficiencies and delay times of subsonic helium and argon flows inside a fission ion guide are being characterized at the IGISOL-4 facility at the University of Jyvaskyla using a radioactive 223Ra α-recoil source (T1/2=11.4 d). The status of these measurements will be discussed in the second part of this talk. This characterization defines lower limits of production yields and lifetimes of the nuclear reaction products to be studied using gas cells. [1] Yu. Kudryavtsev et al., Beams of short lived nuclei produced by selective laser ionization in a gas cell, Nucl. Instrum. Meth. Phys. Res. B, 114, 350 (1996) [2] I. D. Moore, P. Dendooven, and J. Ärje, The IGISOL technique—three decades of developments. In: Äystö J., Eronen T., Jokinen A., Kankainen A., Moore I.D., Penttilä H., Three decades of research using IGISOL technique at the University of Jyväskylä. Springer, Dordrecht (2013) [3] R. Ferrer et al., Towards high-resolution laser ionization spectroscopy of the heaviest elements in supersonic gas jet expansion, Nat. Commun. 8, 14520 (2017) [4] A. Zadvornaya et al., Characterization of Supersonic Gas Jets for High Resolution Laser Ionization Spectroscopy of Heavy Elements, Phys. Rev. 
X, 8, 041008 (2018) 14:20 Development of a New Laser Ablation Ion Source - Tim Ratajczyk (TU Darmstadt, Institut für Kernphysik, Darmstadt, Germany) Development of a New Laser Ablation Ion Source Tim Ratajczyk (TU Darmstadt, Institut für Kernphysik, Darmstadt, Germany) Room: Ball Room A new laser ablation ion source is under development at the Institute for Nuclear Physics, TU Darmstadt for high-precision collinear laser spectroscopy. The design will combine the versatility of laser ablation ion production and the non-conservative cooling in Helium buffer gas, to produce a low emittance ion beam of a wide range of elements. It is based on the original idea of an RF-only ion funnel [1] using only the gas jet to transport the ablated ions, which are radially confined by RF electrodes. Additionally, this design will contain a new feature that will allow to further cool and bunch the ion beam. For this purpose, an additional RF electrode stack is placed in the next pumping stage superimposed by a DC gradient towards the exit [2]. The last electrode can be connected to a positive voltage to create a potential barrier and stop the ions to produce a narrow ion bunch. Detailed computer simulations have shown that this ion source [3] will allow us to produce various high-quality continuous and pulsed ion beams, with low transverse and longitudinal emittance. We will present the current status and first results of this project development. [1] Victor Varentsov, A new Approach to the Extraction System Design, SHIPTRAP Collaboration Meeting, 19 March, 2001, DOI: https://doi.org10.13140/RG.2.2.30119.55200 [2] Victor Varentsov, Proposal for a new Laser ablation ion source for LaSpec and MATS testing, NUSTAR Collaboration Meeting, 1 March, 2016, DOI: https://doi.org10.13140/RG.2.2.10904.39686 [3] T. Ratajczyk, V. Varentsov and W. Nörtershäuser, Status of a new laser ablation ion beam source for LASPEC, GSI-FAIR SCIENTIFIC REPORT 2017, DOI: https://doi.org10.15120/GR-2018-1 14:40 Improvement of a dc-to-pulse conversion efficiency of FRAC - So Sato (Department of Physics, Rikkyo University, Toshima, Tokyo 171-8501, Japan) Improvement of a dc-to-pulse conversion efficiency of FRAC So Sato (Department of Physics, Rikkyo University, Toshima, Tokyo 171-8501, Japan) Room: Ball Room At the SCRIT electron scattering facility at RIKEN [1,2], we aim at realizing the world's first electron scattering experiment of unstable nuclei, after succeeding in principle verification experiment using stable nuclei $^{132}$Xe [3]. In order to perform electron scattering with unstable nuclei with small production rate, it is important to accumulate and inject ions efficiently into the SCRIT device. For this purpose, it is necessary to convert a continuous ion beam from the ISOL type ion separator ERIS [4] to a pulsed beam with the pulse duration of 300~500 μs. We developed a dc-to-pulse converter, called FRAC [5], based on RFQ linear ion trap and have attained the dc-to-pulse conversion efficiency of 5.6%. We modified the FRAC to further improve the efficiency, and enabled cooling of the trapped ions by Xe gas of ~10$^{-3}$ Pa. Then an electric field gradient was applied in the longitudinal direction of FRAC. As a result, the conversion efficiency was improved by more than 10 times compared to that before modification. Details of the modification and its latest performance will be presented. **References:** [1] M. Wakasugi et al., Nucl. Instrum. Meth. **B317**, 668 (2013). [2] T. 
Ohnishi et al., Physica Scripta **T166**, 014071 (2015). [3] K. Tsukada et al., Phys. Rev. Lett. **118**, 262501 (2017). [4] T. Ohnishi et al., Nucl. Instrum. Meth. **B317**, 357 (2013). [5] M. Wakasugi et al., Rev. Sci. Instrum. **89**, 095107 (2018). 15:30 Status of the radiofrequency quadrupole cooler/buncher at TRIUMF-CANREB - Brad Schultz (TRIUMF) Status of the radiofrequency quadrupole cooler/buncher at TRIUMF-CANREB Brad Schultz (TRIUMF) Room: Ball Room The Canadian Rare-isotope facility with Electron Beam ion source (CANREB) is currently being commissioned at TRIUMF in Vancouver, Canada. CANREB will accept rare isotope beams from the Isotope Separator and Accelerator (ISAC) or Advanced Rare Isotope Laboratory (ARIEL) facilities. The ions will be charge bred using an electron beam ion source (EBIS) to 3 ≤ m/q ≤ 7 for post-acceleration to medium- and high-energy experiments. For injection into the EBIS, continuous ion beams from the source will be cooled and bunched using a radiofrequency quadrupole (RFQ) cooler/buncher. Results from initial RFQ commissioning tests, as well as an overall status of CANREB, will be presented. 15:50 SIMULATION VS. PERFORMANCE OF THE TRIUMF CANREB RFQ COOLER-BUNCHER - Chris Charles (TRIUMF) SIMULATION VS. PERFORMANCE OF THE TRIUMF CANREB RFQ COOLER-BUNCHER Chris Charles (TRIUMF) Room: Ball Room The CANadian Rare-isotope laboratory with Electron Beam ion source (CANREB) project at TRIUMF [1] produces a large variety of rare radioactive and stable isotope beams for fundamental research. Essential to CANREB is a new radiofrequency quadrupole (RFQ) cooler-buncher [2] operating in grade 5.0 helium gas at 3 MHz, 1.2 kV$_{pp}$ (q $\sim$ 0.2) with 60-70 W input RF power. The RFQ is designed to (A) accept beams with <100 pA currents at <60 keV energies, and (B) deliver cooled and bunched beams of <10$^6$ ions/bunch at 100 Hz with >90$\%$ efficiency, <10 eV energy spread, and a short, <1 µs time spread. Commissioning tests with picoamp beams of 30 keV $^{133}$Cs$^{+1}$ (r $\sim$ 5 mm, angular spread $\sim$ 10 mrad) in $\sim$ 5 mtorr helium yield >90$\%$ transmission through the RFQ with >80$\%$ bunching efficiency. Simulations agree with the measured $^{133}$Cs$^{+1}$ performance characteristics. Here we compare simulations of beam properties in the RFQ, obtained with SIMION, to the actual performance for $^{133}$Cs$^{+1}$, $^{85}$Rb$^{+1}$ and other isotopes of interest, over a range of energies. Preliminary results indicate that q-values for RFQ operation with >90$\%$ transmission occur for 60 keV: $^{133}$Cs$^{+1}$ = 0.10-0.25, $^{85}$Rb$^{+1}$ = 0.09, and $^{133}$Cs$^{+1}$ (18.5 keV) = 0.14-0.30, $^{85}$Rb$^{+1}$ (29 keV) = 0.12-0.16. References: [1] The CANREB project for charge state breeding at TRIUMF. F. Ames, R. Baartman, B. Barquest, C. Barquest, M. Blessenohl, J. R. Crespo López-Urrutia, J. Dilling, S. Dobrodey, L. Graham, R. Kanungo, M. Marchetto, M. R. Pearson, and S. Saminathan. Proceedings of the "17th International Conference on Ion Sources", Oct. 15-20, 2017, Geneva, Switzerland, AIP Conf. Proc. 2011, 070010-1–070010-3; (2018). [2] B.R. Barquest, J.C. Bale, J. Dilling, G. Gwinner, R. Kanungo, R. Krucken, M.R. Pearson. Development of a new RFQ beam cooler and buncher for the CANREB project at TRIUMF. NIMB 376 (2016), 207-210.
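The q-values quoted above are the standard Mathieu stability parameter of a linear RFQ, which in the usual zero-to-peak convention reads $q = 4eV_\mathrm{RF}/(m r_0^2 \Omega^2)$, with $V_\mathrm{RF}$ the RF amplitude, $r_0$ the field radius and $\Omega$ the angular drive frequency. The sketch below evaluates it for $^{133}$Cs$^+$ at the quoted 3 MHz and 1.2 kV$_{pp}$ (i.e. 600 V zero-to-peak); the 5 mm field radius is my own assumption, not a value stated in the abstract, chosen only to show how a q near 0.2 comes about.

```python
import math

E = 1.602e-19        # C, elementary charge
AMU = 1.6605e-27     # kg per atomic mass unit

def mathieu_q(mass_amu, v_zero_to_peak, r0_m, f_rf_hz):
    """Linear-RFQ stability parameter q = 4 e V / (m r0^2 Omega^2)."""
    omega = 2 * math.pi * f_rf_hz
    return 4 * E * v_zero_to_peak / (mass_amu * AMU * r0_m**2 * omega**2)

# 1.2 kV peak-to-peak at 3 MHz -> 600 V zero-to-peak.
# r0 = 5 mm is an assumed field radius, NOT a value taken from the abstract.
q_cs = mathieu_q(133, 600.0, 5e-3, 3e6)
print(f"q(133Cs+) ~ {q_cs:.2f}")   # ~0.2, consistent with the quoted q ~ 0.2
```

Because q scales inversely with mass, lighter species such as $^{85}$Rb$^+$ sit at a higher q for the same drive settings, which is why the operating RF amplitude generally has to be adjusted per isotope to stay in a comfortable part of the stability region.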
e-mail: ccharles@triumf.ca 18:30 Conference Dinner 09:10 Recent experimental results of KEK Isotope Separation System (KISS) - Yutaka Watanabe (KEK WNSC) Recent experimental results of KEK Isotope Separation System (KISS) Yutaka Watanabe (KEK WNSC) Room: Ball Room KEK Isotope Separation System (KISS) is a laser ion source with an argon gas cell, we have been developing at RIKEN RIBF facility [1,2]. The KISS project is motivated by the systematic nuclear spectroscopy of neutron-rich nuclei at the north-east part of the nuclear chart, that is from around neutron-magic number 126 to the trans-uranium region. The systematic studies of lifetimes, masses, beta-gamma spectroscopy and laser spectroscopy of those nuclei will provide information of nuclear structures, which is crucial inputs to the theoretical predictions of nuclear parameters included in the the simulation of r-process nucleosynthesis, its astrophysical environments remain unrevealed yet. KISS has an argon gas cell which is optimized to efficiently collect and extract nuclear products in the multi nucleon transfer (MNT) reactions, which are considered to be appropriate mechanism to produce neutron-rich nuclei of interest [3,4]. The employment of a doughnut-shaped gas cell with high-vacuum condition of the primary beam line improved the extraction efficiency [5]. The laser resonance ionization technique is used to element-selectively ionize the element of interest. Those photo-ions are transported by RF ion guides through the differential pumping area and are finally accelerated by a high voltage to select one species of isotopes with a mass separator. In-gas-cell and in-gas-jet laser ionizations are utilized at KISS. In this presentation, we will report the present status, the recent experimental results and the future plan of KISS. [1] Y. Hirayama et al., Nucl. Instrum. and Methods B 353 (2015) 4. [2] Y. Hirayama et al., Nucl. Instrum. and Methods B 376 (2016) 52. [3] Y.H. Kim et al., EPJ Web of conferences 66 (2014) 03044. [4] Y.X. Watanabe et al., Phys. Rev. Lett. 115 (2015) 172503. [5] Y. Hirayama et al., Nucl. Instrum. and Methods B 412 (2017) 11. 09:30 Present status and future plans for slow and stopped beams in RIKEN - Peter Peter Schury Present status and future plans for slow and stopped beams in RIKEN Peter Peter Schury Room: Ball Room The accelerator complex at RIKEN's Nishina Center for Accelerator Based Science offers presently unparalleled intensity and variety of radioactive ion beams. The accelerator complex employs multiple facilities utilizing in-flight fission and fragmentation, fusion, and multi-nucleon transfer reactions to provide radioactive ion beams spanning the table of isotopes from $^{6}$He to $^{294}$Og. In order to make these beams viable for low-energy experimental techniques (e.g. ion traps) requires the use of high-pressure gas cells. Several such systems are in various states of readiness. The SHE-mass gas cell, located after the gas-filled recoil ion separator GARIS-II has been successfully operated since 2016. Recent modifications of the SHE-mass system will be discussed and select results presented. A medium-size gas cell is nearing construction for use in symbiotic measurements. It will be used as a beam dump for in-beam gamma-ray experiments and in conjunction with a multi-reflection time-of-flight mass spectrograph will enhance the in-beam gamma-ray experiments. The design of the system and its planned usage will be discussed. 
To provide access to neutron-rich heavy isotopes which are difficult to access via in-flight fission and fragmentation, the KEK Isotope Separation System (KISS) utilizes multi-nucleon transfer reactions. The transfer products are stopped and neutralized in an argon-filled gas cell. Atoms of a desired element can be selectively re-ionized using a two-color resonance laser ionization scheme. Ions of the selected element are accelerated to 30 keV and isobarically purified via a magnetic dipole prior to being delivered to a measurement station. A new "gas-cell cooler-buncher'' has recently been installed to efficiently convert the 30 keV beam to be compatible with ion traps. The system will be described and its performance reported. 10:00 The N=126 factory at Argonne National Laboratory - Adrian A. Valverde (Argonne National Laboratory) The N=126 factory at Argonne National Laboratory Adrian A. Valverde (Argonne National Laboratory) Room: Ball Room The properties of nuclei near the neutron $N=126$ shell are critical to the understanding of the production of elements via the astrophysical $r$-process pathway, particularly for the $A\sim195$ abundance peak [1]. Unfortunately traditional particle-fragmentation, target-fragmentation, or fission production techniques do not efficiently produce these nuclei. Multi-nucleon transfer (MNT) reactions between two heavy ions, however, can efficiently produce these nuclei [2]. The $N=126$ factory currently under construction at Argonne National Laboratory's ATLAS facility will make use of these reactions to allow for the study of these nuclei [3]. Because of the difficulty collecting MNT reaction products, this new facility will use a large-volume gas catcher, similar to the one currently in use at CARIBU, to convert these reaction products into a low energy beam that will initially be mass separated with a magnetic dipole of resolving power $R\sim10^3$. Subsequently, the beam will pass through an RFQ cooler-buncher and MR-TOF system to provide high mass resolving power ($R\sim10^5$) sufficient to suppress isobaric contaminants. The isotopically separated, bunched low-energy beams will then be available downstream for measurements such as mass measurements using the CPT mass spectrometer or decay studies. The status of the facility under construction will be presented, together with commissioning results of the component devices. This work was supported in part by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357; by NSERC (Canada), Application No. SAPPJ-2018-00028; by the National Science Foundation under Grant No. PHY-1713857; by the University of Notre Dame; and used resources of ANL's ATLAS facility, an Office of Science User Facility. [1] M. Arnould, S. Goriely, and K. Takahashi, Phys. Rep. **450**, 97 (2007) [2] V. Zagrebaev and W. Greiner, Phys. Rev. Lett. **101**, 122701 (2008) [3] G. Savard, M. Brodeur, J.A. Clark and A.A. Valverde, Nucl. Instr. Meth. Phys. Res. 
B Proceedings of EMIS-2018 (in press) 11:00 On the way to a world-competitive fission fragment facility at SARAF - Israel Mardor (Soreq Nuclear Research Center) On the way to a world-competitive fission fragment facility at SARAF Israel Mardor (Soreq Nuclear Research Center) Room: Ball Room Combining an Ion Catcher, which is based on the cryogenic stopping cell that is being designed for the Low Energy Branch at the Super-FRS at FAIR [1], with the high-power accelerator SARAF II, currently under construction at Soreq NRC [2], and a liquid lithium target [3] will enable creating a research facility for neutron-rich exotic isotopes based on high-energy neutrons induced fission. I will outline a conceptual design and possible implementation of the Ion Catcher at SARAF, along with rate estimations, which indicate that such a facility will be potent in a world competitive manner, with neutron-rich isotope production rates higher than much larger future facilities such as FRIB. References: [1] T. Dickel et al., "Conceptional design of a novel next-generation cryogenic stopping cell for the Low-Energy Branch of the Super-FRS", Nucl. Instr. and Meth. B 376 216-220 (2016) [2] I. Mardor et al., "The Soreq Applied Research Accelerator Facility (SARAF): Overview, research programs and future plans", Eur. Phys. J. A (2018) 54: 91 [3] S. Halfon et al., "Note: Proton irradiation at kilowatt-power and neutron production from a free-surface liquid-lithium target", Rev. Sci. Inst. 85, 056105 (2014) email: mardor@tauex.tau.ac.il 11:30 Overview of progress at SMI-2019 - Iain Moore (University of Jyväskylä) Overview of progress at SMI-2019 Iain Moore (University of Jyväskylä) Room: Ball Room This talk will provide an overview of the field of Stopping and Manipulation of Ions and related topics based on the recent progress presented by the different contributions within SMI-2019. A final focus will aim to look towards the future and the puzzles and possibilities we may face in the coming years which will set the scene for the next conference. 12:00 Concluding Remarks 12:10 Conference Ends Conference Ends
CommonCrawl
What exactly is a virtual displacement in classical mechanics? I'm reading Goldstein's Classical Mechanics and he says the following: A virtual (infinitesimal) displacement of a system refers to a change in the configuration of the system as the result of any arbitrary infinitesimal change of the coordinates $\delta \mathbf{r}_i$, consistent with the forces and constraints imposed on the system at the given instant $t$. The displacement is called virtual to distinguish it from an actual displacement of the system occurring in a time interval $dt$, during which the forces and constraints may be changing. Then he discusses virtual work and so on. Now, I can't grasp what this "virtual" really is. According to this text there's a difference between an infinitesimal change and a virtual change, and I really don't get what the virtual one is. Also, this is based on infinitesimals. How can this be expressed rigorously without referring to infinitesimals? I tried looking in Spivak's Physics for Mathematicians, where he considers these virtual displacements as tangent vectors to a certain manifold, but I'm not sure this is the most "standard" way to do it rigorously. classical-mechanics lagrangian-formalism constrained-dynamics More on virtual displacement. Concerning infinitesimals, see physics.stackexchange.com/q/70376/2451 , physics.stackexchange.com/q/92925/2451 and links therein. – Qmechanic♦ Aug 6 '14 at 18:28 I'll leave the work of a full answer to someone more comfortable in the field, but I quote Arnold: "In mechanics, tangent vectors to the configuration manifold are called virtual variations." So I'd say that's a pretty standard way of looking at things. – user10851 Aug 6 '14 at 18:40 Let $Q$ denote the set of all possible configurations of the system (the configuration manifold). Consider a point $q_0\in Q$. For the sake of conceptual clarity, and to make contact with physics notation, let's work in some local coordinate patch around $q_0$. Suppose that $q_0$ represents the position of the system under consideration at time $t_0$. At a given time $t$ later, the system will be at some position, say $q(t)$, that is determined by the evolution equations (the Euler-Lagrange equations if we are doing Lagrangian mechanics), and the quantity \begin{align} q(t) - q(t_0) = q(t) - q_0 \end{align} would be the displacement of the system after a time $t-t_0$. Suppose, instead, we consider some other curve $\gamma(s)$ in the configuration space which passes through $q_0$ at parameter value $s_0$; \begin{align} \gamma(s_0) = q_0, \end{align} and suppose that we compute the displacement \begin{align} \gamma(s) - \gamma(s_0) = \gamma(s) - q_0 \end{align} that would result from moving along this other curve of our choosing. We call this displacement the virtual displacement after a "time" $s-s_0$ corresponding to moving along the curve $\gamma$. It's called virtual because it is the displacement in the position of the system that would occur if the system were to move along the curve $\gamma$ of our choosing -- a "virtual" curve as opposed to the "real" curve along which the system travels according to the Lagrangian evolution of the system. Note. As Qmechanic suggested in the comments, I used the parameter $s$ for the virtual curve $\gamma$ instead of $t$ to emphasize that moving along that curve does not correspond to time evolution, but rather to moving along any curve of our choosing. Now what about virtual "infinitesimal" displacements?
Well, recall that the term "infinitesimal" in physics essentially always refers to "first order" approximations; see, e.g., this SE post: Rigorous underpinnings of infinitesimals in physics. So when we are discussing a virtual infinitesimal displacement, what we have in mind is taking the virtual displacement $\gamma(s) - q_0$, Taylor expanding it to first order in $s$, and extracting only the first order term. Let's do this: \begin{align} \gamma(s) - q_0 = \gamma(s_0) + \dot\gamma(s_0) (s-s_0) + O((s-s_0)^2) - q_0 \end{align} Using the fact that $\gamma(s_0) = q_0$, we see that the Taylor expansion of the virtual displacement is \begin{align} \gamma(s) - q_0 = \dot\gamma (s_0) (s-s_0) + O((s-s_0)^2), \end{align} and now we notice that to first order in $s$, the size of the virtual displacement is controlled by the coefficient of $s-s_0$, namely $\dot\gamma(s_0)$. In other words, virtual infinitesimal displacements (meaning we just keep the first order contribution in $s-s_0$) are determined by the velocity vector of the chosen "virtual curve" at $s_0$. But if you've taken a differential geometry course, then you know that velocities of curves on a manifold are simply tangent vectors to that manifold! So virtual infinitesimal displacements can be associated with tangent vectors to the configuration manifold. The intuition to keep in mind here is that a virtual displacement just tells us how far we would get away from a certain point on the manifold if we were to travel on a certain curve of our choosing that may not coincide with the actual motion of the system determined by time evolution. The "infinitesimal" part, and identifying this part with tangent vectors, comes simply from considering what happens only to first order. joshphysics Thanks for your answer, it is much clearer now where the "virtual" name comes from, but I have one doubt. Goldstein says that these virtual displacements should be consistent with the forces and constraints. Wouldn't that make the curve $\gamma$ exactly the solution of the evolution equations? What does he really mean by that, then? – user1620696 Aug 6 '14 at 22:37 @user1620696 Imagine that you have a particle constrained to move on the surface of a sphere but such that the particle is otherwise free. If the particle is sitting at some point and you give it some initial velocity, then it will travel along a particular great circle (the one whose tangent is in the same direction as the initial velocity). However, even though the initial conditions tell us that the particle will move in a particular direction, we could have considered sending it in any direction along some curve that lies on the sphere; this would still be consistent with the constraints. – joshphysics Aug 6 '14 at 22:42 Suggestion to the answer (v1): When discussing virtual displacements, call the curve parameter something other than $t$, e.g. $s$ or $u$ (as the reader may confuse $t$ with time). Recall that a virtual displacement takes place at a frozen instant of time. – Qmechanic♦ Aug 6 '14 at 22:42 @Qmechanic Yeah, I was on the fence as to whether to do that or not, but I think you're right; it might be confusing as written for that reason. I'll change the notation. Thanks for the suggestion. – joshphysics Aug 6 '14 at 22:43 (+1) Really nice explanation.
But it would be much nicer if you could provide some simple example to help understand the abstract reasoning. :) – H. R. Sep 30 '16 at 18:54 In short: virtual displacement is "pretend you are moving, but don't really move". In other words - you move by such a small amount that you don't change the state of the system - but it gives you insight (through work done etc.) into what would happen if you did move. In other words - if the system is really moving, you can look at an interval $dt$ to see how little it moved in that time. That's an "infinitesimal" motion. With virtual motion, you pretend you moved by $dx$ - but not because the system is in motion; you just imagine that you made the tiniest motion (in finite time - so there is no velocity, $\frac{dx}{dt}=0$). Floris Regarding "pretend you are moving, but don't really move ... you move by such a small amount" - if we pretend to move a larger distance, would it not be a virtual displacement then? Because the Wikipedia article says "all possible virtual paths"... – Shashaank May 12 '17 at 13:51 You may look into this; it works out in detail what virtual displacement is: https://www.researchgate.net/publication/2174249_On_Virtual_Displacement_and_Virtual_Work_in_Lagrangian_Dynamics Sahil Permalink: dx.doi.org/10.1088/0143-0807/27/2/014 – Qmechanic♦ Sep 25 '19 at 7:33 As I understand it, it must be a displacement in generalized coordinates; if they are orthogonal space coordinates, they are not virtual.
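To make the tangent-vector picture from the accepted answer concrete, here is a small numerical sketch (my own illustration, not part of any of the answers above): a point constrained to the unit circle, a virtual curve $\gamma(s) = (\cos s, \sin s)$ through $q_0 = (1, 0)$, and a check that the first-order virtual displacement $\dot\gamma(s_0)(s - s_0)$ differs from the exact one only at second order and is tangent to the constraint.

```python
import numpy as np

# Particle constrained to the unit circle; q0 = gamma(s0) with s0 = 0.
def gamma(s):
    return np.array([np.cos(s), np.sin(s)])

s0, ds = 0.0, 1e-3
q0 = gamma(s0)

exact = gamma(s0 + ds) - q0                                # full virtual displacement
first_order = np.array([-np.sin(s0), np.cos(s0)]) * ds     # gamma'(s0) * ds

print(np.linalg.norm(exact - first_order))   # ~5e-7, i.e. O(ds^2)
print(np.dot(first_order, q0))               # 0.0: tangent to the circle at q0
```

The second print shows that the infinitesimal virtual displacement is perpendicular to the radius, i.e. it lies in the tangent space of the constraint manifold at $q_0$, exactly as in the answer's Taylor-expansion argument.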
CommonCrawl
August 2011, 30(3): 699-708. doi: 10.3934/dcds.2011.30.699 Equilibrium states of the pressure function for products of matrices De-Jun Feng 1 and Antti Käenmäki 2; 1. Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong; 2. Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014, University of Jyväskylä, Finland. Received April 2010; Revised October 2010; Published March 2011. Let $\{M_i\}_{i=1}^l$ be a non-trivial family of $d\times d$ complex matrices, in the sense that for any $n\in \mathbb{N}$, there exists $i_1\cdots i_n\in \{1,\ldots, l\}^n$ such that $M_{i_1}\cdots M_{i_n}\ne 0$. Let $P \colon (0,\infty)\to \mathbb{R}$ be the pressure function of $\{M_i\}_{i=1}^l$. We show that for each $q>0$, there are at most $d$ ergodic $q$-equilibrium states of $P$, and each of them satisfies a certain Gibbs property. Keywords: Thermodynamical formalism, products of matrices, equilibrium states. Mathematics Subject Classification: Primary: 37D35; Secondary: 34D2. Citation: De-Jun Feng, Antti Käenmäki. Equilibrium states of the pressure function for products of matrices. Discrete & Continuous Dynamical Systems, 2011, 30 (3): 699-708. doi: 10.3934/dcds.2011.30.699
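For readers meeting the terminology for the first time: the pressure function of a family of matrices referred to in the abstract is usually defined through the norms of the matrix products, namely (a standard formulation in this literature; the paper itself should be consulted for the precise setup used there) $$P(q) \;=\; \lim_{n\to\infty} \frac{1}{n}\, \log \sum_{i_1\cdots i_n \in \{1,\ldots,l\}^n} \bigl\| M_{i_1}\cdots M_{i_n} \bigr\|^{q}, \qquad q>0,$$ where the limit exists by sub-additivity, and a $q$-equilibrium state is an invariant measure on the full shift over $l$ symbols attaining the supremum in the corresponding sub-additive variational principle for $P(q)$.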
CommonCrawl
Modern Curve Tracer The Department of Urban Planning and Municipalities presented a paper to the Workshop covering key principles, objectives of the building indicator. The Briot Attitude Patternless Edging System raises the bar for what is possible in your in-house finishing lab. 41) fitting the literature data of Xe isotope mass fractionation in ancient samples [see the study of Avice et al. You will find the best deals on Hewlett Packard 30 Db Apc7 Rf Microwave Attenuator P N 8492a and other equipment here. Measuring electronic parameters as a function of time through. After the tracer has been selected and its radiosynthesis is reliable, and the model has been developed, and validated, the experimental design needs to be optimized, based on previous studies and simulations. Physics is the study of an enormous span of natural phenomena ranging from the large scale involvement of galaxies to the the submicroscopic motion of atoms and nuclei. All modern production tubes are kind of inferior to the original tubes made in "the western world". As he calls it "the Ferrari" among the tube testers. A standard curve tracer The has a CRT screen that can show the behavior of the current as the voltage is varied. with transistors, a 570 can do the same with tubes. The Tektronix 5CT1N is a curve tracer plug-in for the 5000 series oscilloscopes, C30 and C32 should be replaced with modern equivalents rated above 35 V. There are 108 curve tracers suppliers, mainly located in Asia. So, the actual HTML is a bit uglier than desired and there is no table of contents but it's not much larger than necessary. 3/23/2016 Textbooks and Materials: CS385DLS1A2016 Modern Developments in Advanced Networking 1/4 Textbooks and Materials Required textbooks No textbook is required. Keysight B1505A Power Device Analyzer/Curve Tracer can meet the requirements of modern power devices evaluations. However, Keysight scopes are also capable of displaying voltage-versus-voltage by using their "X-Y mode". RTI provides curve tracers with powered and unpowered test capabilities, and relevant peripherals and accessories. This processtechnology uses an arrangement where the (usually "enhancement-mode") p-channel MOSFET and n-channel MOSFET are connected in series such that when one is on, the other is off. Introduction to Power MOSFETs and their Applications -- F airchild Semiconductor. Exclusive Daily Sales! #tracer-recliner-by-palliser-furniture #_Custom-Furniture #Custom-Recliners The Tracer collection contains comfortable pieces that emit tranquillity and a laid-back style. The left most column specifies the voltage that will be provided to the circuit via the power supply. Modern curve tracers often contain mechanical shields and interlocks that make it more difficult for the operator to. I used to know that beast very well. Met deze Jeulin Monoscope 10MHz heb ik een transistor curve tracer gemaakt. Best Air Quality Monitor with VOC and Particle Counter Foobot Air Quality Monitor with Wi-Fi. Sourcing for Candy, Fabric Increase Ahead of Halloween Retail Surge. 5V, -2V, -2. RTI provides curve tracers with powered and unpowered test capabilities, and relevant peripherals and accessories. As well as from electronic. The best selection of Royalty Free Curve Alphabet Vector Art, Graphics and Stock Illustrations. (a) Map showing the modern Yarlung Tsangpo–Brahmaputra drainage network in the Eastern Himalaya superimposed on a Google Earth Landsat image. 
Tracer is the core application developed by ReRa that will help you characterize your solar cells and compare the results; in Tracer you will find an all-in-one solution for the measurement and elaboration of IV-curve measurements. The solar module curve tracer, a hand-portable, battery-powered instrument, is used to measure the output I-V characteristics of a solar module in a solar cell array. An I-V curve tracer uses a lot of resistors and switches to produce a smooth curve. I have only the adaptors for diodes and small-signal transistors. Another interesting video on Western Electric tubes. Readout, 576 curve tracer, post by maurissor, May 27th, 2014, 10:26 pm: I have a nice 576 curve tracer with a readout system failure, missing only the letters K, m and micro on the Beta measurement in the lower part. Reed W2CQH suggested looking into obtaining a curve tracer for tube testing. One good starting point is to take an existing curve tracer, get its list of functions and use that to define the necessary blocks. The method described here is crude, but the only way to do better would be to spend money for a scope/curve tracer. Dawn of the Parametric Curve Tracer: I especially like the storage capability, so you can verify old "temperature compensated zeners". Sure, this might not be a true curve tracer any more, but the curve tracer has simply been replaced by more complex test setups. I've used a curve tracer (expensive) before to match output transistors in power amps, and have been told that you can set up a less expensive scope and such to do it without the curve tracer per se, but I'm not an expert in that. This led me to design and build my analogue curve tracer, which I used successfully for many years until I built my uTracer, which was a great innovation in curve tracing. Create a curve tracer with your oscilloscope! Curve tracers / power device analyzers have wide voltage and current coverage options, ranging from 3 kV / 20 A to 10 kV / 1500 A, and other features that make them capable of handling all types of power devices. A curve tracer applies a swept signal to a semiconductor device and displays the response. The RTD curve for the tanks-in-series (T-I-S) model will be analyzed from a tracer pulse injected into the first reactor of three equally sized CSTRs in series; plot the E(t) and F(t) curves (a numerical sketch follows below).
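A small numerical sketch of that exercise, assuming synthetic outlet-concentration data for the tanks-in-series response rather than real tracer measurements:

import math
import numpy as np

# Synthetic pulse-tracer response at the outlet of three CSTRs in series
# (placeholder data; in practice C(t) comes from the tracer experiment).
t = np.linspace(0.0, 30.0, 301)     # time, min
tau = 2.0                           # assumed mean residence time per tank, min
n = 3                               # number of tanks in series
C = (t ** (n - 1) / (math.factorial(n - 1) * tau ** n)) * np.exp(-t / tau)

area = np.trapz(C, t)               # total area under the tracer curve
E = C / area                        # E(t): exit-age distribution
# F(t): cumulative distribution, integrated with the trapezoidal rule
F = np.concatenate(([0.0], np.cumsum(0.5 * (E[1:] + E[:-1]) * np.diff(t))))

t_mean = np.trapz(t * E, t)         # should come out near n * tau
print(f"mean residence time ~ {t_mean:.2f} min (expected {n * tau:.2f})")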
Whether testing your power supply or monitoring a heartbeat, if you have an oscilloscope, Dash DAQ will help you control and read your instrument with user-friendly GUIs. Well, since I learn fastest by doing (not exactly unique, I guess), time to start my next trick. With a voltage swing of (340 V - 97 V), that gives (340 V - 97 V) x 122 mA / 2, roughly 14.8 W. Power-supply packages will influence the performance, cost, and size of the power supply used in the power-management subsystem. Instead I matched NPN transistor pairs on my Tektronix 577 curve tracer and bonded them together with a 3300 ppm/°C 330 Ω tempco resistor (R12 on card 6 and R26 on card 7). The curve consists of three distinct regions: early life, useful life, and wear-out. (4) The fourth school of thought is "total distortion": output tubes are truly matched when the total distortion of your amp is lowest vis-a-vis another set of tubes. [Jason Jones] has always wanted a curve tracer for his home shop. Guitar amps nearly always have very simple power supplies, free from modern refinements like electronic regulators, which makes them easy to design. Sweeping the pins of a reworked device with a curve tracer can help you determine whether these diodes are present, implying a good solder connection. To avoid personal injury, do not perform any servicing unless you are qualified to do so; curve tracers are capable of generating lethal voltages and currents and so pose an electrocution hazard for the operator. The main problem in curve-tracing vacuum tubes is the lack of affordable tracers. A classic curve tracer continuously sweeps the test signal. Unlike a multimeter test, the curve tracer exercises the device under test over a range of voltages and currents, so it's a much more informative measurement. Reconditioned curve tracers may still be obtained from various suppliers of used test equipment. However, in the course of wiring it up I found that some 6AU6s just refused to bias where they should.
When he was starting out in electronics he fell in love with a machine called a Huntron Tracker 2000. Hi gents, as there is a lot of interest in valve testers, try this circuit for size. It measures from 328 V to 60 kV with 15 V resolution, and from 9.4 pA to 4 mA with 100 fA minimum resolution. After recalibration the 577 shows very good results. Operation of the tubes was also not disturbed when the tubes were dropped repeatedly onto a metal block from a height of several feet. It is as close to a reference guide as we will probably ever get, with an entire chapter devoted to tube curve tracers. 100% means triode connection, 0% means tetrode connection, and somewhere in between is ultra-linear connection (see picture). Modern curve tracers are modular in design to accommodate (expensive) functions only some users may desire; such an approach may be used here also. 370B Programmable Curve Tracer, 070-A838-51. Warning: the servicing instructions are for use by qualified personnel only. But be careful and check the feedback first. In most cases, you can analyze your data and get results immediately, with no offline post-processing. I did quite a bit of web searching before buying this kit in 2014. Picture of two DB3 diacs from two different manufacturers. With modern solid-state analog design, A/D and D/A converters and cheap computing, a decent vacuum tube curve tracer that can display the curves on a computer screen is quite feasible. Another comparison is with the uTracer or E-tracer curve tracer.
The power supply is the most important part of the amplifier because, ultimately, it is the power supply that dictates the limitations of the amplifier as a whole. Entitled 'Tube Testers and Classic Electronic Test Gear', it is available from Antique Electronic Supply. Unless, of course, the price is just too irresistible. I have more than once had occasion to work with older-model test equipment. Power electronics is an enabling technology found in most renewable energy generation systems. A curve tracer is test equipment that displays the voltage-to-current relationship of a component. The following curve tracer photo shows grid conduction for a 3A5 triode from the 1950s. Simply select the desired curve on the menu and fill in the blanks. A line graph is good when trying to find the point where both sets of data intersect. Full details can be provided if anyone wishes to build it (and has a good stock of uniselectors). Ed. Curve tracers that fully characterize semiconductors are available at low cost on the surplus market. Testing modern power semiconductor devices requires a modern curve tracer; the Keysight curve tracers / power device analyzers are the best solutions for power device evaluation. The 2N4392 JFET is designed to be operated as a switch, and its transfer characteristic is far from ideal. It can be installed in either a vertical or a horizontal compartment; a front-panel switch must be set accordingly. > What is the model number of the Heathkit curve tracer? I don't have a curve tracer, but I do have a high-voltage variable power supply that I can manually sweep to see if there are any negative resistance regions.
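As a companion to that manual-sweep idea, a short sketch that flags negative-resistance regions (where current falls as voltage rises) in a table of readings; the data points are made up for illustration.

import numpy as np

# Hypothetical manually swept I-V data (volts, amps); replace with your own readings.
v = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
i = np.array([0.0, 0.002, 0.010, 0.008, 0.006, 0.009, 0.015])  # dip = negative resistance

di_dv = np.gradient(i, v)          # incremental conductance along the sweep
neg = di_dv < 0                    # True wherever dI/dV is negative

for vk, ik, flag in zip(v, i, neg):
    mark = "  <-- negative-resistance region" if flag else ""
    print(f"{vk:6.1f} V  {1e3 * ik:7.3f} mA{mark}")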
Tracer intends to support most instruments available on the market for use as an IV-curve measurement system; this is done by the Tracer Configurator. It is everything you have ever wanted in a tube tester and more. According to this circuit I arranged a curve tracer test: a 5881 tube with a Tamura 2012 PP output transformer in ultra-linear connection, and the VR is 5K/4 = 1.25K. I already had an analogue scope, and I was thinking of giving this one away to some deserving newbie, but it was in pretty poor shape. This paper discusses a compact high-voltage curve tracer for high-voltage semiconductor device characterization. The polarity "fixes" that resulted when the demarked transistors were removed and analyzed on a Tektronix curve tracer may be seen on this first draft. Refer to all safety summaries prior to performing service. The curve tracer allows us to visualize the i-v characteristics of the MOSFET on the face of the oscilloscope so that we can better understand the MOSFET's properties and operation and measure its parameters such as K and VTR. And this is how a PNP transistor trace looks. Based on an oscilloscope, the device also contains voltage and current sources that can be used to stimulate the device under test (DUT); in this way, the proper operation of the DUT can be proven or faults in the device can be traced. Current readings are taken at each step and the entire set of curves is produced in one capacitor discharge. I agree that you need a curve tracer. The Tektronix model 575 curve tracer shown in the gallery was a typical early instrument. The Oscilloscope Watch has all the features of a modern watch (time, calendar, alarm, etc.) combined with all the features of the popular Xprotolab (oscilloscope, waveform generator, logic analyzer, protocol sniffer, frequency counter). Plot and analyze the shapes of the E(Θ) and F(Θ) curves. The circuit uses a programmable current source to force increasing discrete current values and samples the voltage at the I OUT terminal at each step, as sketched below.
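A minimal sketch of such a forced-current sweep, using an ideal Shockley diode as a stand-in for the device; the saturation current and ideality factor are assumed example values.

import numpy as np

# Forced-current sweep: step the current, read back the voltage the DUT develops.
# Here the "device" is an ideal Shockley diode with assumed parameters.
I_s = 1e-12      # saturation current (A), assumed
n   = 1.8        # ideality factor, assumed
V_t = 0.02585    # thermal voltage at about 300 K (V)

i_forced = np.logspace(-6, -1, 11)                 # 1 uA .. 100 mA test points
v_meas = n * V_t * np.log(i_forced / I_s + 1.0)    # voltage at each forced current

for i_step, v_step in zip(i_forced, v_meas):
    print(f"I = {i_step:9.3e} A   V = {v_step:6.3f} V")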
Tracer IV-curve software is the all-in-one solution for the measurement and elaboration of IV-curves for solar cells and modules. The results you'll get from the bare-bones method are close enough to select pairs that will work well in the Fuzz Face circuit with no trimming; we tested it with the same voltages it will see in a real application circuit. This knob was used on two different Telequipment models: the CT-71 curve tracer that I am repairing, and the D75 oscilloscope. Use of electronic test equipment is essential to any serious work on electronic systems. Out of the box, they are powerful analytical tools. You can configure a B1505A system from a wide variety of measurement resources that best meets your needs today and also permits expansion in the future. Designing new devices to meet evolving needs: Trace Mode supports interactive testing of a device. I wanted to build a tube stereo amp that I could be proud of. A full set of characteristic curves for vacuum tubes, and later for semiconductor devices, could be displayed on an oscilloscope screen by use of a plug-in adapter, or on a dedicated curve tracer. The two-diode model is used for dark I-V curve fitting (a sketch of the model follows below).
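A minimal sketch of that two-diode dark I-V model, with the series resistance neglected (Rs = 0 assumed) and placeholder parameter values; a real fit would adjust these against measured data.

import numpy as np

def dark_iv_two_diode(v, i01, i02, r_sh, v_t=0.02585):
    """Two-diode dark I-V model with series resistance neglected (Rs = 0 assumed).

    i01, i02 : saturation currents of the n=1 and n=2 diodes (A)
    r_sh     : shunt resistance (ohm)
    """
    return (i01 * (np.exp(v / v_t) - 1.0)
            + i02 * (np.exp(v / (2.0 * v_t)) - 1.0)
            + v / r_sh)

# Placeholder parameters; a real fit would use e.g. scipy.optimize.curve_fit
v = np.linspace(0.0, 0.7, 71)
i = dark_iv_two_diode(v, i01=1e-12, i02=1e-8, r_sh=1e4)
print(f"dark current at 0.6 V ~ {np.interp(0.6, v, i):.3e} A")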
Every few years someone asks this community for advice on oscilloscopes. PSPICE tips: basics, equivalent resistance, dependent sources, network theorems, transient simulation, inductors and capacitors, waveforms, second-order circuits, sinusoidal inputs, frequency response. Power DUTs can become dangerously hot during testing. An inexpensive, easy-to-build DIY valve/tube tester: for quite some time I have been looking at developing a design for a DIY valve/tube tester. The high grid quality of the EML tubes allows for the much higher values, which you find under "maximum specifications". Recently, however, test instrument manufacturers have introduced systems that combine the best of old-fashioned curve tracers with modern parametric characterization instruments. With all-new high-accuracy DC parametrics, you'll enjoy the most powerful curve tracer you have ever used. I hit upon a seller from France who accompanies each vacuum tube with a photo of its "characteristic curve", which he makes with an old Tektronix 570 Characteristic Curve Tracer. I live in the rainforest jungle of Panama and so our 110 volt supply is from an inverter. This is a very low, inconvenient value. Note 4) The historical 50 tube was specified with a max 10k grid resistor. Still, I'd think in working order it's at least $500+. The OP's point is that since much new audio gear uses vacuum tubes, why are there no new tube testers to test the new tubes with? I can tell you the advantages, disadvantages, features, and accuracy and ease of use, but the main limiting factor will be the price you are willing to pay. Figure 4: Measurements performed on a curve tracer showing the forward-bias characteristics of an MPS diode (blue trace) and a non-MPS diode (red trace). Real-time, high-speed drawing: the GS series offers high-speed communication and sweeping. Learn how to create a curve tracer to test tubes and semiconductors using a Tektronix Semiconductor Curve Tracer and an add-on board. Let's take a simple one, a VR tube like the 5651, the grandfather of the Zener. The spreadsheet is set up in the following manner. The Heathkit IT-3121 is almost a copy of the typical (traditional) Tektronix curve tracers, which are, in my opinion, the de-facto standard in curve tracers, the 576 in particular. CURVE TRACER MATCHED!!! WITH THE BOGEY @ 200 VOLTS AND 4.3!!! THIS IS A REAL WORLD TEST OF THE TUBE. Automotive: understanding power testing applications for today's automobiles; get advice for your application.
Most familiar may be the polar planimeter (see Figure 1), for which a nice geometrical explanation is given in [1] and a direct constructive proof using Green's Theorem. The improved curve tracer proposed there has two main operating modes: the first is the fixed-resistor (FR) mode and the second is the modulated-resistor (MR) mode. Characterization equipment listed includes an Agilent B1505A curve tracer with a Signatone 1160 manual prober, a Delcom non-contact sheet resistance meter, an Ecopia HMS-5000 Hall effect measurement system, an Elite 300 semi-automatic prober with a Keithley 4200-SCS parameter analyzer, an IV-5 solar cell I-V measurement system, a LakeShore 7607 Hall measurement system, and a LAMBDA 1050 UV/Vis/NIR spectrometer. Here is how it looks when hooked on to the 575. Modern technological jobs require an increasing amount of theoretical knowledge, and therefore many engineering colleges have been eliminating laboratory courses in order to leave time for the teaching of more theory. "This is the schematic, traced from the pedal that was brought, by a 'Jeff Beck' crew member, to Tycobrahe Sound for repair, which became the genesis of the Tycobrahe guitar pedal line." We need a current-limit resistor and a current-sense resistor. To run the same test today you can substitute more modern equipment, as long as the newer equipment has equal or better specs. Oscilloscope tube curve tracer plug-in. For a gate voltage of $-1\,\mathrm{V}$, find the transconductance directly by differentiating the transfer characteristic curve, and check that your value agrees with the value automatically calculated by the curve tracer (a numerical sketch follows below).
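A numerical version of that exercise, using a synthetic square-law JFET transfer curve (assumed Idss and pinch-off voltage) in place of points read from the curve tracer:

import numpy as np

# Synthetic JFET transfer characteristic (square-law model, assumed Idss and Vp),
# standing in for points read off the curve tracer's display.
I_dss, V_p = 10e-3, -4.0
v_gs = np.linspace(-4.0, 0.0, 401)
i_d = I_dss * (1.0 - v_gs / V_p) ** 2

g_m = np.gradient(i_d, v_gs)                # numerical d(I_D)/d(V_GS)
gm_at_minus1 = np.interp(-1.0, v_gs, g_m)   # transconductance at V_GS = -1 V
print(f"gm(-1 V) ~ {1e3 * gm_at_minus1:.2f} mS")   # analytic value here is 3.75 mS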
If you go back and look at the FET characteristic curves from earlier, you can see what this looks like. It doesn't do that with the semiconductor versions, either. An incremental optical curve tracer with sequential logic is disposed to guide a precision machining operation and/or produce a drawing in which one-, two- or three-dimensional curves, lines or combinations thereof are converted stepwise into a corresponding linear or rectilinear equivalent. Audiomatica (Italy) made a good computer-controlled curve tracer, but it cost $4000 and is no longer available. Mini-CT, a Curve Tracer for the Shack, QST In Depth, Veikko Kanto. @Phadreus: I am one of the engineers originally from India who CAN solder, but I am NOT an EE graduate; my degrees have been in the mechanics/mechanical area. Is the shape of the curve such that the curve, or parts of the curve, can be fit by an ideal reactor model? Does the curve have a long tail suggesting a stagnant zone? Does the curve have an early spike indicating bypassing? Thousands of tube socket connections and test settings. This is a legendary and famously expensive instrument. Highlighted: Jim Ellington "packaged" his uTracer in an old Fender harmonica case! Visit my Testimonial Page to view this and many other beautiful uTracer implementations. He had improved the power rating of the unit, but unless he's done far more (I have not looked for over a year now) it will top out around a 6L6-size tube, and maybe not that "big". Make sure to set the base terminal to "open". Now, don't expect it to light up a sign that says H-K SHORT. A transistor amplifier is a current-control device. This app uses graphic elements of Dash DAQ to create an interface for an IV curve tracer using a Keithley 2400 SourceMeter.
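As a rough companion to that app, here is a bare I-V sweep of a Keithley 2400 over SCPI with PyVISA; the VISA address is a placeholder and the command strings follow common 2400-series syntax, so verify them against your instrument's manual before use.

import numpy as np
import pyvisa

# Minimal I-V sweep with a Keithley 2400 SourceMeter over SCPI / PyVISA.
rm = pyvisa.ResourceManager()
smu = rm.open_resource("GPIB0::24::INSTR")   # placeholder address

smu.write("*RST")
smu.write(":SOUR:FUNC VOLT")                 # source voltage ...
smu.write(':SENS:FUNC "CURR"')               # ... and measure current
smu.write(":SENS:CURR:PROT 0.01")            # 10 mA compliance to protect the DUT

voltages = np.linspace(0.0, 0.7, 15)
currents = []

smu.write(":OUTP ON")
for v in voltages:
    smu.write(f":SOUR:VOLT {v:.4f}")
    reading = smu.query(":READ?")            # default reply: volt, curr, res, time, status
    currents.append(float(reading.split(",")[1]))
smu.write(":OUTP OFF")

for v, i in zip(voltages, currents):
    print(f"{v:5.2f} V  {1e3 * i:8.4f} mA")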
Our JJs give good overall performance when sorted and matched on a modern curve tracer (not a 50-year-old tube tester with a single meter), and they are non-microphonic and have good heater/cathode alignment for low noise. Just like a semiconductor curve tracer (Tek 575, 576, 577, etc.). The group is active in solar energy research and has a fully equipped solar cell lab, with a solar simulator for recreating sunlight conditions in the lab and an I-V curve tracer for characterising the efficiency of solar cells. In contrast to the AT1000, the software of the uTracer and E-tracer is built with a lot of pride and is regularly updated. Below are some general guidelines for characterizing DC parameters using a Tektronix 576 curve tracer, a Keithley 238 parametric analyzer, or a TESEC 881-. For a while, I have been thinking about building a modern vacuum tube curve tracer, an affordable one, that is. Automation helps prevent damage to the curve tracer and DUT while the equipment is running. Now turn on the power supply and open the Excel spreadsheet labeled "I-V Curve Tracer". Type 570 Vacuum-Tube Characteristic-Curve Tracer. But they were often adapted to the specialized needs of the user. However, here too you have to judge for yourself whether you like the test results or not. The Tektronix 7CT1N is a curve tracer plug-in for 7000-series scopes.
CommonCrawl