# Number Theory: If $p\equiv 3\pmod{4}$, then $-r$ has order $(p-1)/2$ modulo $p$.
I know that someone has already posted this problem, but I wanted to see if my proof is also valid. Here is the problem:
Let $r$ be a primitive root of the odd prime $p$. Prove that if $p\equiv 3\pmod4$, then $-r$ has order $(p-1)/2$ modulo $p$.
Proof
Note that since $r$ is a primitive root of $p$, we have $r^{(p-1)/2}\equiv -1\pmod{p}$.
Suppose $o_p(-r)=m$ for some $0<m<\frac{p-1}{2}$.
First suppose $m$ is even. Then $(-r)^m=r^m$, so $(-r)^m\equiv 1\pmod{p}\implies r^m\equiv 1\pmod{p}$, which contradicts the fact that $r$ is a primitive root of $p$ (its order is $p-1>m$).
Now suppose $m$ is odd. Then $(-r)^m=-r^m$, so $(-r)^m\equiv 1\pmod{p}\implies -r^m\equiv 1\pmod{p}\implies r^m\equiv -1\pmod{p}$.
But since $r$ is a primitive root of $p$, we have $r^{(p-1)/2}\equiv -1\pmod{p}$ (we had shown this in a previous problem). So $r^m\equiv -1\pmod{p}\implies m=\frac{p-1}{2}$, contradicting $m<\frac{p-1}{2}$.
Therefore, $o_p(-r)$ is not less than $\frac{p-1}{2}$.
I'm not sure where to go from here...
• I've solved it; if anything is unclear, comment and I'll edit. Nov 3, 2015 at 23:19
• Thank you so much, I hadn't thought about doing it that way! Nov 9, 2015 at 9:08
Try this: since $p \equiv 3 \pmod 4$, $-1$ is not a square mod $p$, i.e. $(\frac{-1}{p}) = -1$, where $(\frac{\cdot}{p})$ denotes the Legendre symbol. By Euler's criterion, $(-r)^{\frac{p-1}{2}} \equiv (\frac{-1}{p})(\frac{r}{p}) \pmod p.$ Since the order of $r$ is $p-1$ and not $(p-1)/2$, $r$ is not a square either, so $(\frac{r}{p}) = -1$ as well, and therefore $(-r)^{\frac{p-1}{2}} \equiv (-1)(-1) = 1 \pmod p$.
Now suppose the order $m$ of $-r$ is less than $(p-1)/2$. Since $(-r)^{\frac{p-1}{2}}\equiv 1\pmod p$, $m$ divides $(p-1)/2$. Because $p\equiv 3\pmod 4$, $(p-1)/2$ is odd, so $m$ is odd, and your own argument finishes the job: $(-r)^m=-r^m\equiv 1\pmod p$ forces $r^m\equiv -1\pmod p$, hence $m=(p-1)/2$, a contradiction. So the order of $-r$ is exactly $(p-1)/2$.
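If it helps, the claim is easy to sanity-check numerically. A brute-force Python sketch (the helpers `order` and `primitive_root` are mine, purely illustrative):

```python
def order(a, p):
    """Multiplicative order of a modulo the prime p (a not divisible by p)."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

def primitive_root(p):
    """Smallest primitive root of the odd prime p, by brute force."""
    return next(g for g in range(2, p) if order(g, p) == p - 1)

# for primes p ≡ 3 (mod 4), -r should have order (p-1)/2
for p in [7, 11, 19, 23, 31, 43]:
    r = primitive_root(p)
    assert order(-r % p, p) == (p - 1) // 2
```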
How do I remove “jitter” when detecting frequency of Sine Wave
Input: Sine wave of any frequency centred at 1.66V
Output: Square wave with a frequency equal to input
I have been working on a common DSP problem -- detecting the frequency of a sine wave. I used a handful of posts from Stack Exchange to get started, and found this answer best suited to my problem. However, after implementing the solution and measuring both the input signal (sine wave) and the output signal (square wave) with an oscilloscope, it appears I am getting some "jitter" on my output signal.
Does anyone know what could be causing this? Or is this just as accurate a detection as I can get?
Here is the code I am using to detect the frequency of the sine wave:
volatile int period;       // number of samples taken between rising zero crossings
volatile float frequency;  // the calculated frequency of the sine wave
int numSamplesTaken = 0;   // samples counted since the last rising zero crossing
int currValue = 0;         // the current sampled value of the sine-wave input
int prevValue = 0;         // the previous sampled value of the sine-wave input
int zero_crossing = 32767; // ADC range is 0 V - 3.3 V, so the 1.65 V midpoint of the sine wave reads as 65535 / 2 = 32767
int threshold = 500;       // hysteresis band around the midpoint
bool isPositive = false;   // whether the sine wave is in its positive half-cycle
// interrupt occurring at 8000 Hz
void sampleSignal() {
    currValue = input.read_u16(); // convert the analog voltage input (sine wave) to a 16-bit number
    if (currValue >= (zero_crossing + threshold) && prevValue < (zero_crossing + threshold) && isPositive) {
        output.write(0); // write the digital output pin LOW
        isPositive = false;
    } else if (currValue <= (zero_crossing - threshold) && prevValue > (zero_crossing - threshold) && !isPositive) {
        output.write(1); // write the digital output pin HIGH
        period = numSamplesTaken;     // how many samples occurred between rising zero crossings
        frequency = 8000.0f / period; // sample rate divided by the period of the input signal (8000 / period would truncate to an integer)
        numSamplesTaken = 0;          // reset the sample count
        isPositive = true;
    }
    prevValue = currValue;
    numSamplesTaken++;
}
• For a single tone signal, I don't think you can get better and as efficient results than with this technique: dsprelated.com/showarticle/1284.php – Cedron Dawg Jun 21 '20 at 14:46
• if you're doing zero crossing, try doing a moving average of the measured period. a simple way to do that is to count the sample periods between the first and last of 20 zero crossings and divide by 10. or the first and last of 200 zero crossings and divide by 100. – robert bristow-johnson Jun 24 '20 at 4:38
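The averaging idea in the comment above can be prototyped offline before touching the ISR. A rough Python sketch (the synthetic signal and helper names are mine, not from the question):

```python
import math

FS = 8000  # sample rate, Hz (matches the 8 kHz interrupt)

def rising_crossings(samples, mid=32767, hyst=500):
    """Indices where the signal rises through mid+hyst after having
    dropped below mid-hyst (simple hysteresis, mirroring the ISR logic)."""
    idx, armed = [], True
    for i, s in enumerate(samples):
        if armed and s >= mid + hyst:
            idx.append(i)
            armed = False
        elif not armed and s <= mid - hyst:
            armed = True
    return idx

def averaged_frequency(samples, n_cross=21):
    """Count the samples between the first and last of n_cross rising
    crossings and divide once, averaging the period over n_cross-1 cycles."""
    c = rising_crossings(samples)[:n_cross]
    cycles = len(c) - 1
    return FS * cycles / (c[-1] - c[0])

# synthetic 100 Hz sine centred at the ADC midpoint, one second of samples
sig = [int(32767 + 20000 * math.sin(2 * math.pi * 100 * t / FS))
       for t in range(FS)]
print(averaged_frequency(sig))  # prints 100.0
```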
Your interrupt seems to occur at 8 kHz. There are two obvious jitter causes that I can see.
First of all, if $$\frac{f_s}{f} \neq N$$ where $N$ is an integer, $$f_s$$ is the sampling frequency and $$f$$ the signal frequency, you will get jitter, since the sampling instants will not be "periodic" relative to the signal. Try setting $f$ to 80 Hz so that you have exactly 100 samples per period, and check whether that fixes your jitter.
Secondly, there could be latency caused by your ADC (is it integrated into your microcontroller? SPI? I2C?), latency caused by the ISR, etc., which could also cause jitter.
• I am having a hard time understanding your formula - what is f supposed to represent? Is N the period? As for latency, I can't imagine there being much, since the ADC is internal to the MCU and the interrupt routine is the only task I have given the MCU 🤷♂️ – scottc11 Jun 21 '20 at 15:35
• f : frequency of the signal fs : sampling frequency – Ben Jun 21 '20 at 16:04
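A related software-side mitigation for the first cause: estimate where the crossing fell between two samples by linear interpolation, instead of snapping it to a whole sample index. A hedged Python sketch (names and the synthetic signal are illustrative, not from the original code):

```python
import math

FS = 8000.0   # sampling rate, Hz
MID = 32767   # ADC midpoint (1.65 V on a 0-3.3 V, 16-bit scale)

def crossing_times(samples):
    """Fractional sample indices of upward crossings of MID."""
    times = []
    for i in range(1, len(samples)):
        p, c = samples[i - 1], samples[i]
        if p < MID <= c:
            # the crossing fell between i-1 and i; interpolate linearly
            times.append((i - 1) + (MID - p) / (c - p))
    return times

def est_frequency(samples):
    t = crossing_times(samples)
    return FS * (len(t) - 1) / (t[-1] - t[0])

# 83 Hz does not divide 8000, so whole-sample detection would wander,
# while the interpolated estimate stays close to 83 Hz
sig = [int(32767 + 20000 * math.sin(2 * math.pi * 83 * i / FS))
       for i in range(8000)]
```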
Interrupt driven sampling can be surprisingly jittery all by itself. It depends on a great many things how much time elapses between the desired moment of time and the sample time.
The easiest solution would be use an ADC that can be synchronized to a simple clock. One edge tells the ADC to start the sample, then the other edge can drive your interrupt to read the result. Even if it is a built-in ADC, your chip may be able to set up for that.
If that isn't possible here, a tougher solution would be to set up a high-res timer, and then time-stamp each sample, and use the time info to adjust your measurements accordingly.
# Combinatorial identity with sum of binomial coefficients
How does one attack these kinds of problems? I am hoping there is some kind of shortcut for calculating this.
$$\sum_{k=0}^{38\,204\,629\,939\,869} \frac{\binom{38\,204\,629\,939\,869}{k}}{\binom{76\,409\,259\,879\,737}{k}}\,.$$
EDIT:
As I see it, the numerator is $n \choose k$ and the denominator is ${2n-1} \choose k$, where $n =38\,204\,629\,939\,869$, i.e. $$\sum_{k=0}^n {\frac {n \choose k} {{2n-1} \choose k}} = 2.$$
• Have you noticed that this is $\sum\limits_{k=0}^n \dfrac{\binom nk}{\binom {2n-1}k}$ for $n=38204629939869$? – ajotatxe Nov 26 '14 at 8:01
• Yes, some time after I posted the problem. :) – arindam mitra Nov 26 '14 at 8:02
• To begin with, you can take the $\frac{n!}{(2n-1)!}$ outside the $\sum$. – barak manos Nov 26 '14 at 8:03
• Indeed, after trying some examples with wolfram alpha it seems that this is a general equality $\sum_{k=0}^n \frac{\binom{n}{k}}{\binom{2n-1}{k}} = 2$ – GenericNickname Nov 26 '14 at 8:09
• We have $\displaystyle \frac{\binom{n}{k}}{\binom{2n}{k}} - \frac{\binom{n}{k+1}}{\binom{2n}{k+1}} = \frac{1}{2}\frac{\binom{n}{k}}{\binom{2n-1}{k}}$, thus the sum telescopes to $2$ .. – r9m Dec 6 '14 at 10:33
(New Answer, posted 28 Nov 2016)
Just had a quick look at this again, and noticed that there is a much shorter solution!
Using the subset-of-a-subset identity $\displaystyle\binom ab\binom bc=\binom ac\binom {a-c}{b-c}$, note that \begin{align}\binom {2n-1}n\binom nk&=\binom {2n-1}k\binom {2n-1-k}{n-k}\\ &=\binom {2n-1}k\binom {2n-1-k}{n-1}\end{align} Cross-dividing and summing, \begin{align} \sum_{k=0}^n \frac {\displaystyle\binom nk}{\displaystyle\binom {2n-1}{k}} &=\sum_{k=0}^n\frac{\displaystyle\binom {2n-1-k}{n-1}}{\displaystyle\binom {2n-1}n}\\ &=\frac 1{\displaystyle\binom {2n-1}n}\sum_{r=0}^n\binom {n-1+r}{n-1}\qquad\qquad(r=n-k)\\ &=\frac{\displaystyle\binom {2n}n}{\displaystyle\binom {2n-1}n} \color{lightgray}{=\frac{(2n)(2n-1)(2n-2)\cdots (n+1)}{\qquad\; (2n-1)(2n-2)\cdots (n+1)n}}\\ &=\color{red}2 \end{align} NB - No binomial coefficient expansion, no factorials!
(Original Answer, posted 26 Nov 2014)
\begin{align} \sum_{k=0}^{n}\frac{\Large\binom nk}{\Large\binom{2n-1}k} &=\sum_{k=0}^{n}\frac{n!}{k!(n-k)!}\cdot \frac{k!(2n-1-k)!}{(2n-1)!}\\ &=\frac{n!}{(2n-1)!}\cdot \color{green}{(n-1)!}\sum_{k=0}^{n}\frac{(2n-1-k)!}{\color{green}{(n-1)!}(n-k)!}\\[10pt] &=\frac{n!(n-1)!}{(2n-1)!}\sum_{k=0}^{n} {\binom {2n-1-k}{n-1}}\\[10pt] &={\binom{2n-1}n}^{-1}\sum_{r=0}^{n} \binom {n-1+r}{n-1}\qquad \small\text{(putting r=n-k)}\\[10pt] &={\binom{2n-1}n}^{-1}\binom{2n}{n}\\[10pt] &=\frac{(2n)^\underline{n}}{(2n-1)^\underline{n}} \color{gray}{=\frac{(2n)(2n-1)(2n-2)\cdots (n+1)}{\qquad\;\;\; (2n-1)(2n-2)\cdots (n+1)n}}\\[10pt] &=\frac{2n}{n}\\[10pt] &=2\qquad\blacksquare \end{align}
NB: Thanks for reading the answer above and for your upvotes! Please also see my other solution in a separate post below, which uses a different and possibly more direct approach.
• Thank you, @r9m! And other upvoters too. It was fun deriving the proof:) – hypergeometric Nov 26 '14 at 9:10
• @hypergeometric On your line two inside the sum , did you mean $(n-k)!$ instead of $(2n-1)!$ ? – Display name Nov 26 '14 at 13:47
• @Why - Yes that's right! Thanks for pointing it out. Have edited it accordingly. – hypergeometric Nov 26 '14 at 14:34
The identity $$\sum_{k=0}^n\frac{\binom nk}{\binom{2n-1}k}=2$$ holds for every positive integer $n$. The case $n=1$ is trivial. Here is a probabilistic proof for $n\ge2$.
Consider the following random experiment. An urn initially contains $n$ black balls and $n-1$ white balls. Balls are drawn one by one, without replacement, until a white ball is drawn. The random variable $X$ is the number of draws; its range of values is $\{1,2,\dots,n,n+1\}$. We will compute the expected value $E(X)$ in two different ways.
I. Clearly we have $$X=\sum_{k=0}^nX_k$$ where $$X_k=\begin{cases} 1\text{ if }X\gt k,\\ 0\text{ if }X\le k; \end{cases}$$ in other words, $X_k=1$ if there is no white ball in the first $k$ draws, meaning that a $(k+1)^{\text{st}}$ draw is needed. Thus we have $$E(X)=E(\sum_{k=0}^n X_k)=\sum_{k=0}^n E(X_k)=\sum_{k=0}^n P(X_k=1)=\sum_{k=0}^n\frac{\binom nk}{\binom{2n-1}k}.$$
II. Call the black balls $B_1,B_2,\dots,B_n$. Let $Y_i$ be the indicator variable which takes the value $1$ if the ball $B_i$ is drawn before any white ball is drawn, $0$ otherwise. Clearly the variable $X$ is equal to $1$ plus the number of black balls drawn, that is, $$X=1+\sum_{i=1}^nY_i$$ and so $$E(X)=1+\sum_{i=1}^nE(Y_i)=1+\sum_{i=1}^nP(Y_i=1)=1+\sum_{i=1}^n\frac1n=2.$$
More generally, for any integers $m,n\ge0$, the same argument (with $n$ black and $m$ white balls) shows that $$\sum_{k=0}^n\frac{\binom nk}{\binom{m+n}k}=1+\frac n{m+1}.$$
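The general identity is easy to verify exactly for small parameters with rational arithmetic; a quick Python check (illustrative only, not part of the proof):

```python
from fractions import Fraction
from math import comb

def lhs(n, m):
    """Exact value of sum_{k=0}^n C(n,k)/C(m+n,k)."""
    return sum(Fraction(comb(n, k), comb(m + n, k)) for k in range(n + 1))

for n in range(1, 9):
    for m in range(0, 9):
        assert lhs(n, m) == 1 + Fraction(n, m + 1)

# the question's sum is the case m = n - 1, where the value is 2
assert lhs(6, 5) == 2
```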
• Can you please explain why $P(Y_i = 1) = \frac{1}{n}$? – shardulc says Reinstate Monica Apr 4 '16 at 7:19
• For the event $Y_i=1$ the only balls that matter are $B_i$ and the $n-1$ white balls. Each of these $n$ balls has the same probability of being drawn first, namely $1/n.$ In particular, the probability that $B_i$ is drawn before any of the white balls is $1/n.$ – bof Apr 4 '16 at 7:38
• A beautiful proof! – 6005 Jul 23 '16 at 21:03
It is easy to see that:
$$\sum\limits_{k=0}^1\frac{\binom 1k}{\binom 1k}= \frac{\binom 10}{\binom 10}+\frac{\binom 11}{\binom 11}=2$$
Now, if you can show that:
$$\sum_{k=0}^n \frac{\dbinom{n}{k}}{\dbinom{2n-1}{k}}=2\implies \sum_{k=0}^{n+1} \frac{\dbinom{n+1}{k}}{\dbinom{2n+1}{k}}=2$$
Then you have a proof by induction.
I wouldn't usually post another answer, but the approach here is quite different so I hope this will be excused.
\begin{align} &\large\sum_{k=0}^n\frac{\Large\binom nk}{\Large\binom{2n-1}k}\\[10pt] &=\large\sum_{k=0}^n\frac{n^\underline{k}}{k!}\cdot \frac{k!}{(2n-1)^\underline{k}}\\[10pt] &=\large{\sum_{k=0}^n}\frac{n^\underline{k}}{(2n-1)^\underline{k}}\\[10pt] &=1+\frac n{2n-1}+\frac{n(n-1)}{(2n-1)(2n-2)}+\cdots+\frac{n(n-1)\cdots3\cdot 2\cdot 1}{(2n-1)(2n-2)\cdots (n+2)(n+1)n}\\[10pt] &=1+\frac n{2n-1}\left(1+\frac{n-1}{2n-2}\left(1+\cdots \left(1+\frac 3{n+2}\left(1+\frac 2{n+1}\color{blue}{\left(1+\frac 1n\right)} \right)\right)\right)\right)\\[10pt] &=1+\frac n{2n-1}\left(1+\frac{n-1}{2n-2}\left(1+\cdots \left(1+\frac 3{n+2}\color{blue}{\left(1+\frac 2{n}\right)} \right)\right)\right)\\[10pt] &=1+\frac n{2n-1}\left(1+\frac{n-1}{2n-2}\left(1+\cdots \color{blue}{\left(1+\frac 3n\right)}\right)\right)\\[10pt] &=\cdots\\[10pt] &=1+\frac n{2n-1}\color{blue}{\left(1+\frac{n-1}{n}\right)}\\[10pt] &=\color{blue}{1+\frac n{n}}\\[10pt] &=2\qquad\qquad \blacksquare\\[10pt] \end{align}
According to Gosper's algorithm (the Maxima command
AntiDifference(binomial(n,k)/binomial(2*n-1,k),k),
also implemented in Mathematica and Maple): $${\frac {n \choose k} {{2n-1} \choose k}} = {{\left((k+1)-2n\right){{n}\choose{k+1}}}\over{n{{2n-1}\choose{k+1}}}} -{{\left(k-2n\right){{n}\choose{k}}}\over{n{{2n-1}\choose{k}}}}$$ and the sum telescopes: $$\sum_{k=0}^n{\frac{n \choose k}{{2n-1} \choose k}} = \sum_{k=0}^n\left[{{\left((k+1)-2n\right){{n}\choose{k+1}}}\over{n{{2n-1}\choose{k+1}}}} -{{\left(k-2n\right){{n}\choose{k}}}\over{n{{2n-1}\choose{k}}}}\right]= {{\left(1-n\right){{n}\choose{n+1}}}\over{n{{2n-1}\choose{n+1}}}}- {{\left(-2n\right){{n}\choose{0}}}\over{n{{2n-1}\choose{0}}}}=0-(-2)=2$$
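The certificate returned by Gosper's algorithm can itself be checked term by term with exact arithmetic; a short Python sketch (illustrative):

```python
from fractions import Fraction
from math import comb

def f(k, n):
    """The antidifference f(k) = (k - 2n) C(n,k) / (n C(2n-1,k))."""
    return Fraction((k - 2 * n) * comb(n, k), n * comb(2 * n - 1, k))

for n in range(2, 9):
    for k in range(n + 1):
        # f(k+1) - f(k) should equal the summand C(n,k)/C(2n-1,k)
        assert f(k + 1, n) - f(k, n) == Fraction(comb(n, k), comb(2 * n - 1, k))
    # so the sum telescopes to f(n+1) - f(0) = 0 - (-2) = 2
    assert f(n + 1, n) - f(0, n) == 2
```

Note that Python's `math.comb(n, k)` returns 0 for k > n, which handles the vanishing top term C(n, n+1) automatically.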
QUESTION
A ketone has the molecular formula C5H10O. Write the structural formulae of the isomers to show positional isomerism?
A ketone has one double-bonded oxygen atom, but not at the end of a (sub)chain (or it would be an aldehyde).
Step 1: Make a straight chain of five C's. You can place the =O at the second or third C (positions 1 and 5 are not allowed, and position 4 would give the same compound as position 2). Of course, you fill the remaining valencies with H atoms. Step 2: Make a chain of four C's and put a branch CH_3 at C-3. C-3 then has only one valence left, so the =O must go on C-2 (again, the other positions are end C's, so they are not allowed).
So you are left with 3 isomers altogether:
CH_3-CO-CH_2-CH_2-CH_3 pentan-2-one (or 2-pentanone)
CH_3-CH_2-CO-CH_2-CH_3 pentan-3-one (or 3-pentanone)
CH_3-CO-CH(CH_3)-CH_3 3-methylbutan-2-one. Since the positions of the =O group and the CH_3- branch are fixed relative to each other (i.e. there is only one possibility), you may leave out the locants: methylbutanone.
# Perpendicular Circles
Geometry Level 3
Three different circles are all touching each other but not overlapping.
• One circle has center $$O$$ and radius 4,
• and another circle has center $$A$$ and radius 2,
• and another circle has center $$P$$ and radius $$r$$
Given that $$OA$$ is perpendicular to $$AP$$, find $$r$$.
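Under the natural reading that the circles are pairwise externally tangent, the tangency conditions give $$OA = 4+2 = 6$$, $$AP = 2+r$$, $$OP = 4+r$$, and the right angle at $$A$$ gives $$OP^2 = OA^2 + AP^2$$. A minimal numeric check of that setup (one possible interpretation, not an official solution):

```python
def solve_r():
    # (4+r)^2 = 6^2 + (2+r)^2  =>  16 + 8r = 40 + 4r  =>  4r = 24
    return (36 + 4 - 16) / 4

r = solve_r()
assert (4 + r) ** 2 == 6 ** 2 + (2 + r) ** 2  # the right triangle checks out
print(r)  # prints 6.0
```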
# Help Center > Badges > Steward
Complete at least 1,000 review tasks. This badge is awarded once per review type.
Awarded 40 times.
Awarded Dec 10 '19 at 2:10 to
for reviewing Suggested Edits
Awarded Dec 9 '19 at 3:15 to
for reviewing Low Quality Posts
Awarded Nov 10 '19 at 14:40 to
for reviewing Close Votes
Awarded Jul 5 '19 at 22:00 to
for reviewing First Posts
Awarded Jun 24 '19 at 6:30 to
for reviewing Close Votes
Awarded Dec 12 '18 at 15:00 to
for reviewing First Posts
Awarded Nov 29 '18 at 2:20 to
for reviewing Suggested Edits
Awarded Sep 24 '18 at 11:51 to
for reviewing First Posts
Awarded Sep 14 '18 at 7:55 to
for reviewing First Posts
Awarded Sep 13 '18 at 17:00 to
for reviewing Close Votes
Awarded Jun 26 '18 at 11:09 to
for reviewing Close Votes
Awarded Jun 12 '18 at 9:27 to
for reviewing First Posts
Awarded May 22 '18 at 18:12 to
for reviewing Close Votes
Awarded Jan 19 '18 at 5:36 to
for reviewing Close Votes
Awarded Dec 14 '17 at 3:20 to
for reviewing First Posts
Awarded Dec 12 '17 at 17:22 to
for reviewing Close Votes
Awarded Oct 25 '17 at 12:05 to
for reviewing Close Votes
Awarded Oct 25 '17 at 8:34 to
for reviewing First Posts
Awarded Sep 27 '17 at 15:12 to
for reviewing Close Votes
Awarded Jul 16 '17 at 21:09 to
for reviewing First Posts
Awarded May 12 '17 at 17:06 to
for reviewing Close Votes
Awarded Apr 26 '17 at 10:21 to
for reviewing Close Votes
Awarded Apr 8 '17 at 20:15 to
for reviewing First Posts
Awarded Mar 7 '17 at 20:10 to
for reviewing Close Votes
Awarded Jan 5 '17 at 18:27 to
for reviewing First Posts
Awarded Dec 9 '16 at 4:18 to
for reviewing First Posts
Awarded Oct 26 '16 at 10:52 to
for reviewing Suggested Edits
Awarded Oct 19 '16 at 11:56 to
for reviewing First Posts
Awarded Jun 29 '16 at 13:32 to
for reviewing First Posts
Awarded Mar 20 '16 at 9:53 to
for reviewing First Posts
Awarded Feb 23 '16 at 10:07 to
for reviewing First Posts
Awarded Sep 20 '15 at 14:02 to
for reviewing Close Votes
Awarded Sep 7 '15 at 16:51 to
for reviewing First Posts
Awarded Aug 17 '15 at 17:50 to
for reviewing First Posts
Awarded May 21 '15 at 9:48 to
for reviewing Suggested Edits
Awarded Apr 1 '15 at 5:25 to
for reviewing First Posts
Awarded Jul 29 '14 at 10:49 to
for reviewing Close Votes
Awarded Apr 27 '14 at 14:49 to
for reviewing First Posts
Awarded Sep 24 '13 at 19:15 to
for reviewing First Posts
Awarded May 23 '13 at 9:37 to
for reviewing First Posts
1 Mar 2011 02:52
### Re: Issue with Rotated Tables and PS2PDF
On 28.2.2011 8:00, Swarnendu Biswas wrote:
> Hello,
>
> I want to create a large table in LaTeX sideways, i.e., vertically. This can be done using the rotating package and sidewaystables environment to create rotated tables. We can also use the lscape package to get the same effect. Both these options are working fine when I am generating a dvi and then creating the pdf from it.
>
> However, I use special symbols in my Xfig figures, and use psfrag for it. Therefore, I first generate the dvi file, then the ps file and then the pdf file. But with this approach, the rotated table is not being properly generated. The output is fine in the intermediate dvi and ps files. But when I am generating the pdf using ps2pdf the page orientation of the whole concerned page in the pdf file gets distorted to landscape.
>
> It would be nice if someone can help.
>
> Regards,
> Swarnendu Biswas.
> ------------------------------------------------------------------------------
> Free Software Download: Index, Search& Analyze Logs and other IT data in
> Real-Time with Splunk. Collect, index and harness all the fast moving IT data
> generated by your applications, servers and devices whether physical, virtual
> or in the cloud. Deliver compliance at lower cost and gain new business
> insights. http://p.sf.net/sfu/splunk-dev2dev
> _______________________________________________
> MiKTeX-Users mailing list
> MiKTeX-Users <at> lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/miktex-users
1 Mar 2011 04:37
### Re: Issue with Rotated Tables and PS2PDF
Hello,
Yes it is working fine with pdflatex also.
Thanks and regards,
Swarnendu Biswas.
----- Original Message -----
From: Paul Thompson <Paul.Thompson <at> SanfordHealth.org>
To: miktex-users <at> lists.sourceforge.net
Sent: Tue, 1 Mar 2011 01:44:34 +0530 (IST)
Subject: Re: [MiKTeX] Issue with Rotated Tables and PS2PDF
Can you do this with pdflatex?
Paul A. Thompson, Ph.D.
Additional contact numbers:
Cell: 618-974-0473
Fax: 605-312-6071
-----Original Message-----
From: Swarnendu Biswas [mailto:swarna_cse <at> indiatimes.com]
Sent: Monday, February 28, 2011 12:01 AM
To: miktex-users <at> lists.sourceforge.net
Subject: [MiKTeX] Issue with Rotated Tables and PS2PDF
Hello,
I want to create a large table in LaTeX sideways, i.e., vertically. This can be done using the rotating
1 Mar 2011 09:42
### Re: Issue with Rotated Tables and PS2PDF
On Mon, 28 Feb 2011 11:30:40 +0530 (IST), Swarnendu Biswas wrote:
> Hello,
>
> I want to create a large table in LaTeX sideways, i.e., vertically. This can be done using the rotating package and sidewaystables environment to create rotated tables. We can also use the lscape package to get the same effect. Both these options are working fine when I am generating a dvi and then creating the pdf from it.
>
> However, I use special symbols in my Xfig figures, and use psfrag for it. Therefore, I first generate the dvi file, then the ps file and then the pdf file. But with this approach, the rotated table is not being properly generated. The output is fine in the intermediate dvi and ps files. But when I am generating the pdf using ps2pdf the page orientation of the whole concerned page in the pdf file gets distorted to landscape.
>
> It would be nice if someone can help.
Without some code that demonstrates your problem, it is difficult to know what you mean. If you want to avoid the auto-rotation of the second page, as in the following example, try
ps2pdf -dAutoRotatePages#/None test.ps test.pdf
\documentclass{article}
\usepackage{lscape,lipsum}
\begin{document}
\lipsum[1]
1 Mar 2011 10:50
### Re: What causes "rotate 90" warnings
> I think this thread may answer your question - http://www.mail-archive.com/xetex <at> tug.org/msg01528.html
Yep, thanks, that's it.
I just hope there is a way around this.
Doesn't look like it.
But, to tell the truth, I haven't seen any glitches in the PDF output so far. So I am not too worried.
Thanks,
Marko
1 Mar 2011 10:54
### Re: What causes "rotate 90" warnings
Hi Mike,
yes, you are right, I have to apologize for my minimalistic approach to this issue. I invested too little effort in analyzing the issue myself beforehand; I am aware of that. I just hoped I'd find someone who had seen the same warnings popping up and who knew what to do.
Anyway, since it looks like it doesn't do any harm now - as pointed out in my response to David's post a few seconds ago - I'll just ignore the warning.
Thanks,
Marko
1 Mar 2011 16:31
### why should I refresh FNDB every time?
I wonder why I should refresh the FNDB every time I
i) add a figure to a tex file (because MiKTeX can't find the file...)
ii) change the name of a tex file, because I get:
! I can't find file `nnnn.aux'.
<to be read again>
\relax
l.353 \end{document}
Please type another input file name:
As far as I remember, when I used versions 2.4-2.6, MiKTeX had no problem finding the graphics files if they were in the same folder as the main tex file. Nowadays, I have to update the database every time I add a figure to my text.
Am I doing anything wrong, or is this the way MiKTeX works today?
Best regards,
Agustin
1 Mar 2011 18:36
### recent update breaks 2.9 in multiple ways...
OK, so did the update this morning. Now,
1\ relative paths to anything (e.g., image files) are broken
Simple minimal example -- first with figure in same directory as .tex file
\documentclass[11pt,letterpaper]{article}
\usepackage{graphicx}
\begin{document}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=3,keepaspectratio=true]{fig1.eps}
\end{center}
\end{figure}
\end{document}
Works fine. Now, move the figure to the parent directory, and compilation fails -- it can't find fig1.eps:
\documentclass[11pt,letterpaper]{article}
\usepackage{graphicx}
\begin{document}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=3,keepaspectratio=true]{../fig1.eps}
\end{center}
\end{figure}
\end{document}
2\ yap no longer auto renders, and whatever rendering it now uses seems
1 Mar 2011 18:17
### 2.9 | upgrade breaks /includegraphics
Did a recent upgrade this morning -- including the MiKTeX .bin. It seems as if something has changed (yet again) in how paths are handled for graphics, such that things which compiled an hour ago (before the update) no longer do -- lots of errors about not being able to find graphics.
Sigh.
So, I have a skeleton file for \including lots of chapters. In each
chapter I have something like
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.8,keepaspectratio=true]{../chapter0/figs/ch0_fig2.eps}
\end{center}
\end{figure}
If someone tells me I need to use absolute path names now (or something like that), I'm going to say bad words. I'd have to change ~1500 images in 28 chapters, for just a single one of my books.
So, what has changed, and what flip do I need to switch to make it work
the way it used to? I already have the env variable set to let me be
'unsafe'. What else?
1 Mar 2011 23:34
### recent 2.9 update breaks relative paths
Pre-update, the following compiled perfectly. Doing the latest 2.9 upgrade to MiKTeX seems to break several things -- most critical of which is that support for relative paths seems to be completely broken. For example,
\documentclass[11pt,letterpaper]{article}
\usepackage{graphicx}
\begin{document}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=3,keepaspectratio=true]{../fig1.eps}
\end{center}
\end{figure}
\end{document}
Immediately post-update (replicated on 2 different machines), it can no longer find the image file. I've confirmed that *anything* that uses relative paths is now broken after the update.
And yes, I've already set the env variable to allow such 'unsafe things'
as relative paths.
This is really a critical change for me. I have hundreds to thousands of images for my books that I generally include using relative paths. The recent update completely breaks this.
Is there a workaround? And please don't tell me the change is 'for security'. If I wanted someone to change something (or prohibit something) they thought 'unsafe', I'd get a Mac (and let 'Steve' tell me what is and is not acceptable).
2 Mar 2011 09:11
### Re: why should I refresh FNDB every time?
On Tue, 1 Mar 2011 12:31:23 -0300, Agustin E. Bolzán wrote:
> I wonder why should I refresh the FNDB every time I
>
> i) add a figure to a tex file (because MiKTeX can't find the file...)
>
> ii) every time I change the name of a tex file, because I get:
>
> ! I can't find file `nnnn.aux'.
> <to be read again>
> \relax
> l.353 \end{document}
> Please type another input file name:
>
> As far as I remember, when I used to use version 2.4-2.6, MikTeX had
> no problem to find the graphics files if they were in the same folder
> as the main tex file. Nowadays, I have to update the database every
> time I add a figure to my text.
>
> Am I doing anything wrong or is this the way MikTeX works today?
Probably you have stored your document in a texmf tree. Don't do that. Move it to a folder where you would also store, e.g., a Word document.
--
Ulrike Fischer
# Discount and Percentage
In this article, you will practice problems about discounts and percentages. We cite real-life scenarios where you can apply this mathematics topic when you shop or sell small goods.
In keeping with the primary goal of this site, get a pen and paper, solve each problem, and check whether your answer is correct. That way, you're using a self-teaching technique that is essential to developing your math skills.
Worked Problem 1:
The fifth grade students are planning a field trip. Of 150 fifth grade students, 68% decided to go. How many students is this?
Solution:
Let N be the number of students comprising 68% of the total number.
$N=150\times 0.68$
$N=102$
Worked Problem 2:
What is 10% of 20% of 6% of 5000?
Solution:
This problem may look intimidating, but the solution is very easy. The word "of" in math denotes multiplication. Let N be the required number.
$N=0.1\times 0.2\times 0.06\times 5000$
$N=6$
Worked Problem 3:
In a shopping mall, you want to buy a top with a marked price of 750 pesos and a discount rate of 12% off. How much is the price of the top?
Solution:
Since the price is 12% less, the discounted price must be 100% - 12% = 88% of the 750 pesos. Thus, you should pay only $750\times 0.88=660.00$ pesos.
Worked Problem 4:
A pair of shoes marked 950 pesos was bought for 760 pesos. What was the percent discount?
Solution:
To find the discount, subtract 760 from 950 to get 190; then divide 190 by the original price, 950, to get the discount rate, 0.2, which is 20%.
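The computations in Problems 3 and 4 can be wrapped up as two tiny helpers (a sketch; the function names are mine, not from the article):

```python
def discounted_price(marked, rate):
    """Price after a fractional discount, e.g. rate=0.12 for 12% off."""
    return marked * (1 - rate)

def percent_discount(marked, paid):
    """Fractional discount given the marked price and the price paid."""
    return (marked - paid) / marked

print(discounted_price(750, 0.12))  # Problem 3: 88% of 750 pesos
print(percent_discount(950, 760))   # Problem 4: 190 / 950
```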
### Dan
Blogger and a math enthusiast. He had no interest in mathematics until the MMC came along. Aside from doing math, he also loves to travel and watch movies.
# American Institute of Mathematical Sciences
## Positive solution branches of two-species competition model in open advective environments
1 School of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi 710119, China 2 School of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi 710119, China
* Corresponding author
Received August 2020; Revised December 2020; Published December 2020
Fund Project: The work is supported by the National Natural Science Foundation of China (12071270, 61907030), the Fundamental Research Funds for the Central Universities (GK201903088)
The effect of competition is an important topic in spatial ecology. This paper deals with a general two-species competition system in open advective and inhomogeneous environments. First, critical values of the interspecific competition coefficients are established, which determine the stability of the semi-trivial steady states. Second, by analyzing the nonexistence of coexistence steady states and using the theory of monotone dynamical systems, we find that the competitive exclusion principle holds if one of the interspecific competition coefficients is large and the other lies in a certain range. Third, in terms of these critical values, the structure and direction of the branches of positive equilibria bifurcating from the two semi-trivial steady states are given by means of bifurcation theory and stability analysis. Finally, we show that multiple coexistence occurs under certain regimes.
Citation: Yan'e Wang, Nana Tian, Hua Nie. Positive solution branches of two-species competition model in open advective environments. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021006
The schematic diagrams of the curves $\mathcal{S}^+ = \{(\|u\|_\infty, b): (u,v,b)\in \mathcal{B}^+\}$ with $\mathcal{B}^+$ defined in Section 4.3. Here $b_0<b^*$ in (ⅰ)-(ⅱ), and $b_0>b^*$ in (ⅲ)-(ⅳ)
The schematic diagrams of the curves $\mathcal{S}^+ = \{(\|u\|_\infty, b): (u,v,b)\in \mathcal{B}^+\}$ with $\mathcal{B}^+$ defined in Section 4.3
The graphs of $b_0-b^*$ vs. $d_1$ in (i) and vs. $d_2$ in (ii) with other parameters fixed as (5.1)
The graphs of $b_0-b^*$ vs. $d_1$ in (i) and vs. $d_2$ in (ii) with $q_1 = q_2 = 0$ and other parameters fixed as (5.1)
Schematic diagram of the global dynamics on system (1.3) in $b-c$ plane
# CSASC 2013
9-13 June 2013
Koper, Slovenia
# On some bilinear problems on weighted Hardy-Sobolev spaces
Presented by Prof. Carme CASCANTE
Type: Oral presentation
Track: Several Complex Variables
## Content
If $w$ is a weight in ${\bf S}^n$, the weighted Hardy-Sobolev space $H_s^p(w)$, $0\leq s$, $0<p<+\infty$, consists of functions $f$ holomorphic in $\mathbb{B}^n$ such that if ${\displaystyle f(z)=\sum_k f_k(z)}$ is its homogeneous polynomial expansion, and ${\displaystyle (I+R)^s f(z):=\sum_k (1+k)^s f_k(z)},$ we have that $${\displaystyle \|f\|_{H_s^p(w)}^p:= \sup_{r<1}\int_{{\bf S}^n}|(I+R)^s f(r\zeta)|^p w(\zeta)\,d\sigma(\zeta)<+\infty}.$$ For fixed $0<s,t<n$ and $w$ a weight in the Muckenhoupt class ${\mathcal A}_p$, we study the positive Borel measures $\mu$ on the unit sphere ${\bf S}^n$ of $\mathbb{C}^n$ for which the following bilinear problem holds: there exists $C>0$ such that for any $f\in H_s^2(w)$, $g\in H_t^2(w)$, \begin{equation*} \sup_{\rho<1}\left|\int_{{\bf S}^n}f(\rho\zeta) \overline{g(\rho\zeta)}\,d\mu(\zeta)\right|\leq C\|f\|_{H_s^2(w)}\|g\|_{H_t^2(w)}. \end{equation*} We give characterizations of this bilinear problem in two situations: for $s,t$ not necessarily equal, under some restrictions on $s,t$ and the weight $w$; and when $s=t$, in a more general situation. (Joint work with Joaquín M. Ortega.)
## Place
Location: Koper, Slovenia
# zbMATH — the first resource for mathematics
A parallel Newton multigrid framework for monolithic fluid-structure interactions. (English) Zbl 07161485
Summary: We present a monolithic parallel Newton-multigrid solver for nonlinear, nonstationary, three-dimensional fluid-structure interactions in arbitrary Lagrangian Eulerian (ALE) formulation. We start with a finite element discretization of the coupled problem, based on a remapping of the Navier-Stokes equations onto a fixed reference framework. The strongly coupled fluid-structure interaction problem is discretized with finite elements in space and finite differences in time. The resulting nonlinear and linear systems of equations are large and have a very high condition number. We present a novel Newton approach that is based on two essential ideas: first, a condensation of the solid deformation by exploiting the discretized velocity-deformation relation $$d_t \mathbf{u}=\mathbf{v}$$; second, a simplification of the Jacobian of the fluid-structure interaction system in which all derivatives with respect to the ALE deformation are neglected, an approximation that has been shown to have little impact. The resulting system of equations decouples into a joint momentum equation and into two separate equations for the deformation fields in solid and fluid. Besides a reduction of the problem sizes, the approximation has a positive effect on the conditioning of the systems, such that multigrid solvers with simple smoothers like a parallel Vanka iteration can be applied. We demonstrate the efficiency of the resulting solver infrastructure on a well-studied 2d test-case and also introduce a challenging 3d problem.
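The effect of such a simplified Jacobian can be illustrated on a toy coupled system: dropping a weak coupling term from the Jacobian, in the spirit of the neglected ALE-deformation derivatives above, trades Newton's quadratic rate for a fast linear one but keeps the iteration convergent. A minimal sketch (the system, names, and numbers are illustrative only, not from the paper):

```python
import numpy as np

def quasi_newton(F, J_approx, x0, tol=1e-10, max_iter=50):
    """Newton iteration driven by an approximate (simplified) Jacobian.

    With part of the coupling neglected in J_approx, the method still
    converges as long as the neglected block is weak; only the quadratic
    convergence rate degrades to a linear one.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(J_approx(x), r)
    return x

# Toy coupled system F(x, y) = (x^2 + 0.1*y - 1, y - 0.2*x).
F = lambda v: np.array([v[0]**2 + 0.1 * v[1] - 1.0, v[1] - 0.2 * v[0]])
# Simplified Jacobian: the weak off-diagonal coupling dF1/dy = 0.1 is dropped.
J = lambda v: np.array([[2.0 * v[0], 0.0], [-0.2, 1.0]])

root = quasi_newton(F, J, [1.0, 0.0])
```

Here the contraction factor of the resulting fixed-point iteration is governed by the size of the neglected block, so a weak coupling gives convergence in a handful of steps.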
##### MSC:
- 76M Basic methods in fluid mechanics
- 74F Coupling of solid mechanics with other effects
- 76D Incompressible viscous fluids
- 65M Numerical methods for partial differential equations, initial value and time-dependent initial-boundary value problems
##### Software:
Eigen; FaCSI; GASCOIGNE; UMFPACK
# Bayesian inference for network meta-regression using multivariate random effects with applications to cholesterol lowering drugs
## Summary

Low-density lipoprotein cholesterol (LDL-C) has been identified as a causative factor for atherosclerosis and related coronary heart disease, and as the main target for cholesterol- and lipid-lowering therapy. Statin drugs inhibit cholesterol synthesis in the liver and are typically the first line of therapy to lower elevated levels of LDL-C. On the other hand, a different drug, Ezetimibe, inhibits the absorption of cholesterol by the small intestine and provides a different mechanism of action. Many clinical trials have been carried out on safety and efficacy evaluation of cholesterol lowering drugs. To synthesize the results from different clinical trials, we examine treatment-level (aggregate) network meta-data from 29 double-blind, randomized, active, or placebo-controlled statins +/- Ezetimibe clinical trials on adult treatment-naïve patients with primary hypercholesterolemia. In this article, we propose a new approach to carry out Bayesian inference for arm-based network meta-regression. Specifically, we develop a new strategy of grouping the variances of random effects, in which we first formulate possible sets of the groups of the treatments based on their clinical mechanisms of action and then use Bayesian model comparison criteria to select the best set of groups. The proposed approach is especially useful when some treatment arms are involved in only a single trial. In addition, a Markov chain Monte Carlo sampling algorithm is developed to carry out the posterior computations. In particular, the correlation matrix is generated from its full conditional distribution via partial correlations. The proposed methodology is further applied to analyze the network meta-data from 29 trials with 11 treatment arms.

## 1. Introduction

According to the National Center for Health Statistics, high cholesterol is a risk factor for heart disease, which is the leading cause of death for both men and women. Nearly 600 000 people die of heart disease in the United States every year—that's one in every four deaths. Every year about 715 000 Americans have a heart attack, and coronary heart disease alone costs the US over \$100 billion annually, which includes the cost of health care services, medications, and lost productivity (NCHStats, 2013). High cholesterol is well known to contribute to heart disease and other cardiovascular diseases. The Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (2001) has issued treatment guidelines identifying low-density lipoprotein cholesterol (LDL-C, “bad” cholesterol) as a causative factor for coronary heart disease and as the main target for cholesterol-lowering and lipid-lowering therapy. Cholesterol-lowering medicines called “statins” work mainly in the liver to decrease the production of cholesterol and reduce cholesterol in the bloodstream. Many clinical trials since the introduction of statin drugs in 1987 have shown statins to drive down the rates of heart attack and stroke. Statin drugs are typically the first choice to drive down elevated levels of LDL-C. An estimated 38.7 million Americans were on a statin in 2011–2012, an increase from 24 million in 2003–2004, and from 12.5 million in 1990–2000 (Adedinsewo and others, 2016). Several effective brands are available on generic labels and thus are quite inexpensive. There are several classes of cholesterol-lowering drugs, but statins have become the drug class of choice because of their demonstrated efficacy and safety. This class of drugs includes: atorvastatin (Lipitor), simvastatin (Zocor), lovastatin (Mevacor), rosuvastatin (Crestor), pravastatin (Pravachol), and fluvastatin (Lescol).
Between 2000 and 2014, the fraction of Americans with elevated blood levels of cholesterol declined from 18.3% to 11%, at least partly due to an increase in the use of cholesterol-lowering medications. The use of any lipid-lowering agent was 20% in 2004 and the data from 2012 showed that 28% of Americans over the age of 40 are taking such a medication (Ross, 2015). But a high blood level of LDL-C remains a major risk factor for heart disease and stroke in the US. In general, statins positively affect the lipid profile by decreasing LDL-C and triglycerides (TG), and increasing high-density lipoprotein cholesterol (HDL-C, ‘good’ cholesterol), although the effect on HDL-C is relatively smaller. Clinical studies have shown that statins significantly reduce the risk of heart attack and death in patients with coronary artery disease and can also reduce cardiac events in patients with high cholesterol levels. Even though statins are the first-line treatment in most patients, lipid goals are frequently not achieved because of inadequate response to therapy, poor compliance, or concerns regarding the increased potential for side effects at higher doses. On the other hand, a drug called Ezetimibe (Zetia) works in the digestive tract. It is unique in the way it helps block absorption of cholesterol that comes from food. Ezetimibe (EZE) can complement statins in targeting both sources of cholesterol. EZE can be given as monotherapy to lower cholesterol levels in patients who are intolerant to statin or in whom treatment with statin is not appropriate. EZE can also be used in combination with a statin in patients whose cholesterol levels remain elevated despite treatment with statin alone. It can be either co-administered with the statin dose or given as a fixed-dose combination tablet (known as Vytorin) containing simvastatin and EZE. 
A combination tablet of atorvastatin and EZE (known as Liptruzet) and composite packs of rosuvastatin and EZE (known as Rosuzet) are also available around the world. Data directly comparing the effectiveness of these combination therapies are very limited: clinical trials with head-to-head comparisons are scarce, prompting indirect comparisons using network meta-analysis. This is the motivation and objective of our work in this article. The effects of statins in combination with EZE on LDL-C could vary due to possible drug interactions and need to be studied; this is a subject of this article. The effects of statins themselves have been well studied and are known in the literature. We estimate these effects through both direct and indirect comparisons here in a network meta-analysis framework. Advantages of using combination therapy include greater efficacy through differing mechanisms of action, lower doses of individual drugs, and potential amelioration of side effects experienced with high doses of single agents (Morrone and others, 2012). In network meta-analysis, more than two treatments are compared in different randomized pairwise or multi-arm trials (Lu and Ades, 2006). Approximately a quarter of randomized trials include more than two arms (Chan and Altman, 2005), while the presence of multi-arm trials adds complexity to the analysis. Moreover, in the evidence synthesis process, it is not rare to encounter treatments which are involved in very few trials or even only in one trial (Lu and Ades, 2006; Hong and others, 2013, 2016; Gwon and others, 2016). In the case where some treatments are involved in only one trial, excluding such treatments can sometimes have important effects on the network meta-analysis results (Mills and others, 2013). Bayesian approaches are becoming more popular due to their flexibility and interpretability (Higgins and Whitehead, 1996; Lu and Ades, 2004, 2006).
Meanwhile, random effects models are increasingly popular as a useful tool for network meta-analysis (Lu and Ades, 2006; Hong and others, 2013). Compared with fixed effects models, one of the advantages of random effects models is to allow for borrowing of strength from different trials. If all treatments are involved in multiple trials, it is desirable to allow the between-trial variances of the random effects to vary across treatments (arm-based model) or treatment comparisons (contrast-based model) (Lumley, 2002; Hong and others, 2016). It is a common practice to assume homogeneity of between-trial variations for all arms. The performance of homogeneous and heterogeneous variance models have been examined in Lu and Ades (2004, 2006, 2009). When the number of treatments is large and some treatments are involved only in a single trial, the variance parameters of the random effects in heterogeneous variance models cannot be estimated. In addition, an equal-correlation structure is often assumed for the correlation matrix of the random effects, and a value of half is usually assumed for the correlation (Lu and Ades, 2004, 2006). However, the equal-correlation structure is quite restrictive, since it does not allow for different correlations as well as arbitrary negative correlations among the random effects. In this article, we consider a network meta-data consisting of 29 trials, 11 treatment arms (10 active treatments plus placebo), and 10 aggregate covariates. We first develop arm-based meta-regression models with multivariate random effects in order to compare these treatments while adjusting for aggregate covariates. A challenge that arises in dealing with 11-dimensional random effects is that some variances of the random effects cannot be estimated due to the fact that five treatments are involved only in a single trial. 
To circumvent this issue, we develop a general framework to formulate possible sets of the groups of treatments according to their clinical mechanisms of action, and further use the deviance information criterion (DIC) and the logarithm of the pseudo marginal likelihood (LPML) to select the best group. Unlike assuming known correlations, we specify a non-informative uniform prior for the correlation matrix and then develop a new localized Metropolis algorithm to generate the correlation matrix from its full conditional distribution via partial correlations. Finally, we carry out a detailed analysis of the network meta-data using the proposed methodology, and our results in the treatment comparisons are generally consistent with direct comparisons reported in the literature. The rest of this article is organized as follows. In Section 2, we provide a detailed description of the LDL-C network meta-data from 29 clinical trials, which motivates the proposed methodology. In Section 3, we fully develop the network meta-regression model, specify a multivariate normal distribution for random effects, and propose a grouping methodology for the covariance matrix. The complete-data likelihood and the observed-data likelihood are also given in Section 3. We present priors and posteriors in Section 4.1, develop a Markov chain Monte Carlo (MCMC) sampling algorithm, including collapsed Gibbs sampling and localized Metropolis sampling, in Section 4.2, and derive Bayesian model comparison criteria in Section 4.3. Section 5 gives a detailed analysis of the LDL-C network meta-data discussed in Section 2. We conclude the article with some discussion in Section 6.

## 2. The LDL-C network meta-data

A systematic search for randomized clinical trials using statins was conducted online using Google Scholar. The search yielded 78 trials: 32 Merck Sharp and Dohme (MSD) sponsored and 46 non-MSD sponsored trials. From these trials, all second-line studies (i.e.
studies with patients on statin at study entry) were excluded. From the remaining first-line trials (i.e. studies with patients who were drug-naïve or rendered drug-naïve by wash-out at study entry), those with missing information on the response variable (LDL-C mean percent change) or study covariates (listed below) were excluded. The inclusion-exclusion flow diagram of studies is given in Figure 1 (a). This left us with 29 double-blind, randomized, active, or placebo-controlled clinical trials on adult treatment-naïve patients with primary hypercholesterolemia (15 MSD-sponsored and 14 non-MSD sponsored). These trials were conducted between 2002 and 2010 and study durations ranged from 5 to 12 weeks. Some trials had longer durations with titration of doses but only the data prior to the first titration were used in the analysis. The primary goal of these clinical trials was to evaluate the LDL-C lowering effects of different statins or ezetimibe plus statins. The treatments used in these trials were placebo (PBO), simvastatin (S), atorvastatin (A), lovastatin (L), rosuvastatin (R), pravastatin (P), Ezetimibe (E), and the combinations of S and E (SE), A and E (AE), L and E (LE) and P and E (PE). Ezetimibe is available at only one dose of 10 mg while the statins are available at multiple doses. In this article, the LDL-C lowering effects of different doses of each statin are combined to form the treatment group.

Fig. 1. (a) Flow diagram for trials included in the network meta-analysis. (b) LDL-C network metadata diagram. Each node represents a treatment in the network metadata. The number associated with the treatment gives the total number of patients across all trials. Each edge represents the direct evidence comparing the treatments it connects, and the involved trial IDs are given for each head-to-head comparison.

A network diagram (for LDL-C mean percent change) of the included treatments based on these 29 trials is presented in Figure 1 (b). Taking SE as an example, we can see from the diagram that (i) SE was included in trials 1, 3, 4, 5, 6, 7, 8, 9, 12, 13, 14, and 15, and 6596 patients were treated with SE in all these trials; (ii) SE was compared head-to-head with PBO, A, R, S, and E, while it is not compared head-to-head with AE, L, LE, PE, and P; and (iii) SE and PBO were compared head-to-head in trials 1, 3, and 6. Note that the size of the node is proportional to the total sample size of the corresponding treatment, and the width of the edge is related to the number of trials which include the direct comparisons. The covariates considered here are baseline LDL-C (bl_ldlc) mean, baseline HDL-C (bl_hdlc) mean, baseline TG (bl_trig) mean, age mean, race proportion (of white), gender proportion (of male), body mass index (BMI) mean, proportion of medium statin potency, proportion of high statin potency, and trial duration. We consider mean percent change from baseline in LDL-C as the outcome variable. A summary of the covariates and outcome variables is given in Tables S1a and S1b of the supplementary materials available at Biostatistics online. Tables S2a and S2b of supplementary material available at Biostatistics online provide the title, treatment groups, treatment duration, and citation for the published primary manuscript for each trial. The patient entry criteria for these studies included in this meta-analysis are given in Tables S3a and S3b of supplementary material available at Biostatistics online.
All head-to-head comparisons from these trials can be easily seen through Table S4 of the supplementary material available at Biostatistics online.

## 3. Network meta-regression models

Suppose we consider $$K$$ randomized trials and a set of treatments $${\mathscr T}=\{0,1,\dots,T\}$$ from all $$K$$ trials. The $$k$$th trial has $$T^{(k)}$$ treatments, which are denoted by $$\mathscr{T}^{(k)}=\{t^{(k)}_{1},\ldots,t^{(k)}_{T^{(k)}};t^{(k)}_{\ell}\in \mathscr{T},\;\ell=1,\ldots,T^{(k)}\}$$. Let $$y^{(k)}_{ t^{(k)}_{\ell}}$$ denote the aggregate response, which generally represents the sample mean, and $$S^{2(k)}_{t^{(k)}_{\ell}}$$ denote the sample variance for $$\ell=1,2,\ldots,T^{(k)}$$ and $$k=1,2,\ldots,K$$. For the LDL-C network meta-data discussed in Section 2, we have $$K=29$$ trials and $$T=10$$ active treatment arms. We use 0 to denote PBO and 1–10 to denote S, A, L, R, P, E, SE, AE, LE, and PE, respectively. The first trial $$k=1$$ includes treatments PBO, S, E, and SE, thus $$T^{(1)}=4$$ and $$\mathscr{T}^{(1)}=\{t^{(1)}_{1}=0, t^{(1)}_{2}=1,t^{(1)}_{3}=6,t^{(1)}_{4}=7\}$$, which is a subset of $$\mathscr{T}$$. Following Yao and others (2011), Yao and others (2015), and Hong and others (2017), we propose the following random effects model for the network meta-analysis $$y^{(k)}_{t^{(k)}_{\ell}} = ({\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}})^T {\boldsymbol \beta} + \gamma^{(k)}_{t^{(k)}_{\ell}} + \epsilon^{(k)}_{t^{(k)}_{\ell}}, \quad \epsilon^{(k)}_{t^{(k)}_{\ell}} \sim N\big(0,\frac{\sigma^{2(k)}_{ t^{(k)}_{\ell}}}{n^{(k)}_{t^{(k)}_{\ell}}}\big), \label{ymodel}$$ (3.1) and $$\frac{(n^{(k)}_{t^{(k)}_{\ell}}-1)S^{2(k)}_{t^{(k)}_{\ell}}} {\sigma^{2(k)}_{t^{(k)}_{\ell}}} \sim \chi^2_{(n^{(k)}_{t^{(k)}_{\ell}}-1)}, \label{smodel}$$ (3.2) where $$y^{(k)}_{t^{(k)}_{\ell}}$$ and $$S^{2(k)}_{t^{(k)}_{\ell}}$$ are independent.
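Model (3.1)-(3.2) is easy to simulate, which is a useful sanity check of the distributional assumptions: the arm-level mean and sample variance are drawn independently, the former normal and the latter scaled chi-square. A minimal sketch; the covariates and parameter values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arm(x, beta, gamma, sigma2, n, rng):
    """Draw one arm-level observation (y, S^2) from (3.1)-(3.2).

    y   : sample mean,     y = x'beta + gamma + eps,  eps ~ N(0, sigma2 / n)
    S^2 : sample variance, (n-1) S^2 / sigma2 ~ chi^2_{n-1}, independent of y
    """
    y = x @ beta + gamma + rng.normal(0.0, np.sqrt(sigma2 / n))
    s2 = sigma2 * rng.chisquare(n - 1) / (n - 1)
    return y, s2

beta = np.array([-20.0, 0.1])   # hypothetical regression coefficients
x = np.array([1.0, 180.0])      # intercept + a baseline covariate (made up)
y, s2 = simulate_arm(x, beta, gamma=-1.5, sigma2=100.0, n=200, rng=rng)
```

Averaging many such draws recovers $x'\beta + \gamma$ for the mean and $\sigma^2$ for the expected sample variance, as the model prescribes.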
The $$p$$-dimensional vector $${\boldsymbol x}^{(k)}_{ t^{(k)}_{\ell}}$$ represents the aggregate (arm-level) covariates and $${\boldsymbol \beta}$$ is a $$p$$-dimensional vector of regression coefficients corresponding to the aggregate covariate vector $${\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}}$$. The random effect of the $$t^{(k)}_{\ell}$$th treatment within the $$k$$th trial, $$\gamma^{(k)}_{t^{(k)}_{\ell}}$$, is assumed to be independent of $$\epsilon^{(k)}_{t^{(k)}_{\ell}}$$. It captures the dependence of the $$y^{(k)}_{t^{(k)}_{\ell}}$$’s within the trial as well as the heterogeneity across trials. Let $${\boldsymbol \gamma}=(\gamma_0,\gamma_1,\ldots,\gamma_T)'$$ represent the $$(T+1)$$-dimensional vector of the overall treatment effects and $$\Omega$$ denote the $$(T+1)\times(T+1)$$ unknown covariance matrix. We define a collection of unit vectors $$E^{(k)}=(e_{t^{(k)}_{1}},e_{t^{(k)}_{2}},\ldots,e_{t^{(k)}_{T^{(k)}}})$$, where $$e_{t^{(k)}_{\ell}}=(0,\ldots,1,\ldots,0)',\ell=1,\ldots,T^{(k)}$$ with $$t^{(k)}_{\ell}$$th element equal to $$1$$ and $$0$$ otherwise. Thus, $$E^{(k)}$$ is a $$(T+1)\times T^{(k)}$$ matrix. Also, let $$(E^{(k)})^C$$ be a $$(T+1)\times (T+1-T^{(k)})$$ matrix, which consists of the columns of the $$(T+1)\times (T+1)$$ identity matrix $$I_{T+1}$$ that are not included in $$E^{(k)}$$. We let $${\boldsymbol \gamma}^{(k)}=( \gamma^{(k)}_{0}, \gamma^{(k)}_{1}, \ldots,\gamma^{(k)}_{T})'$$ denote the vector of the $$(T+1)$$-dimensional random effects, which would be observed in the $$k$$th trial. Then, $${\boldsymbol \gamma}^{(k)}_{o}= (E^{(k)})^T {\boldsymbol \gamma}^{(k)}$$ is the vector of the $$T^{(k)}$$-dimensional random effects of the treatments that are actually observed in the $$k$$th trial while $${\boldsymbol \gamma}^{(k)}_{m}= ((E^{(k)})^C)^T {\boldsymbol \gamma}^{(k)}$$ is the vector of the $$(T+1-T^{(k)})$$-dimensional random effects of the treatments that are not included in the $$k$$th trial. 
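These selection matrices are straightforward to build programmatically. A small numpy sketch for trial $$k=1$$ (treatments 0, 1, 6, 7, as defined above) recovers the observed and unobserved components of the random-effects vector; the numeric values of $${\boldsymbol \gamma}^{(1)}$$ are stand-ins for illustration:

```python
import numpy as np

def selection_matrix(arms, T):
    """E^{(k)}: a (T+1) x T^{(k)} matrix whose l-th column is the unit
    vector e_{t_l} for the l-th treatment observed in trial k."""
    E = np.zeros((T + 1, len(arms)))
    for col, t in enumerate(arms):
        E[t, col] = 1.0
    return E

T = 10                                   # treatments 0..10: PBO, S, ..., PE
E1 = selection_matrix([0, 1, 6, 7], T)   # trial 1 observes PBO, S, E, SE
gamma1 = np.arange(T + 1, dtype=float)   # stand-in values for gamma^{(1)}
gamma_obs = E1.T @ gamma1                # -> (gamma_0, gamma_1, gamma_6, gamma_7)
# (E^{(1)})^C: the identity columns NOT in E^{(1)}, picking the missing arms.
complement = np.eye(T + 1)[:, [t for t in range(T + 1) if t not in (0, 1, 6, 7)]]
gamma_mis = complement.T @ gamma1        # the 7 unobserved random effects
```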
As an illustration, $(E^{(1)})^T= \left(\begin{array}{@{}ccccccccccc@{}} 1&0&0&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&0&1&0&0&0\\ \end{array}\right)$ and ${\boldsymbol \gamma}^{(1)}_{o}= (E^{(1)})^T {\boldsymbol \gamma}^{(1)}=\left(\begin{array}{@{}cccc@{}} \gamma^{(1)}_{0} & \gamma^{(1)}_{1} & \gamma^{(1)}_{6} & \gamma^{(1)}_{7} \end{array}\right)^T$. A detailed summary of the key notations is provided in Tables S4a and S4b of supplementary material available at Biostatistics online. For the random effects, we assume $${\boldsymbol \gamma}^{(k)} \sim N_{T+1}({\boldsymbol \gamma}, \Omega)$$, which is a multivariate normal distribution with a $$(T+1)$$-dimensional vector of overall effects $${\boldsymbol \gamma}$$ and a $$(T+1) \times (T+1)$$ positive definite covariance matrix $$\Omega$$. Further, let $$({\boldsymbol \gamma}^{(k)})^R={\boldsymbol \gamma}^{(k)} -{\boldsymbol \gamma}$$, so that $$({\boldsymbol \gamma}^{(k)})^R \sim N_{T+1}(\mathbf{0}, \Omega)$$. It is easy to show that $$({\boldsymbol \gamma}^{(k)}_{o})^R \sim N_{T^{(k)}}\big(\mathbf{0}, (E^{(k)})^T\Omega E^{(k)}\big)$$ and $$({\boldsymbol \gamma}^{(k)}_{m})^R \sim N_{T+1-T^{(k)}}\big( \mathbf{0}, ((E^{(k)})^C)^T\Omega (E^{(k)})^C\big)$$, where $$({\boldsymbol \gamma}^{(k)}_{o})^R=(E^{(k)})^T ({\boldsymbol \gamma}^{(k)} - {\boldsymbol \gamma})$$ and $$({\boldsymbol \gamma}^{(k)}_{m})^R=((E^{(k)})^C)^T ({\boldsymbol \gamma}^{(k)}-{\boldsymbol \gamma})$$, $$k=1,2,\dots,K$$. Now, let $${\boldsymbol \epsilon}^{(k)}=\left(\epsilon^{(k)}_{t^{(k)}_{1}}, \epsilon^{(k)}_{ t^{(k)}_{2}},\ldots,\epsilon^{(k)}_{t^{(k)}_{T^{(k)}}}\right)^{'}$$ denote the vector of random errors for the $$k$$th trial.
Then $${\boldsymbol \epsilon}^{(k)} \sim N_{T^{(k)}}(0,\Sigma^{(k)})$$, where $$\Sigma^{(k)}=\mbox {diag}\left(\frac{\sigma^{2(k)}_{ t^{(k)}_{1}}}{n^{(k)}_{ t^{(k)}_{1}}},\frac{\sigma^{2(k)}_{t^{(k)}_{2}}} {n^{(k)}_{t^{(k)}_{2}}},\ldots,\frac{\sigma^{2(k)}_{t^{(k)}_{T^{(k)}}}}{n^{(k)}_{ t^{(k)}_{T^{(k)}}}}\right)$$ is a $$T^{(k)} \times T^{(k)}$$ diagonal matrix. Let $${\boldsymbol X}^{(k)}=\left({\boldsymbol x}^{(k)}_{t^{(k)}_{1}} \; {\boldsymbol x}^{(k)}_{t^{(k)}_{2}}\;\ldots\; {\boldsymbol x}^{(k)}_{t^{(k)}_{T^{(k)}}}\right)^{'}$$ denote the $$T^{(k)} \times p$$ covariate matrix for the $$k$$th trial. Write $$W^{(k)}=({\boldsymbol X}^{(k)} \; (E^{(k)})^T)$$ and $${\boldsymbol \beta}^*=({\boldsymbol \beta}',{\boldsymbol \gamma}')'$$. Then the vector form of (3.1) is given by $${\boldsymbol y}^{(k)} =W^{(k)} {\boldsymbol \beta}^* + ({\boldsymbol \gamma}^{(k)}_{o})^R + {\boldsymbol \epsilon}^{(k)}, \label{ymodelvec}$$ (3.3) where $${\boldsymbol y}^{(k)}=\left(y^{(k)}_{t^{(k)}_{1}},y^{(k)}_{t^{(k)}_{2}},\ldots, y^{(k)}_{t^{(k)}_{T^{(k)}}}\right)^{'}$$. Let $$D_{oy}=\left\{ \left(y^{(k)}_{t^{(k)}_{\ell}},n^{(k)}_{t^{(k)}_{\ell}}, {\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}}\right), \; \ell=1,2,\ldots,T^{(k)},k=1,2,\ldots,K \right\}$$ and $$D_{os}=\left\{ S^{2(k)}_{t^{(k)}_{\ell}}, n^{(k)}_{t^{(k)}_{\ell}}\right.$$, $$\left. \ell=1,2,\ldots,T^{(k)},k=1,2,\ldots,K \right\}$$. Then $$D_o=D_{oy}\cup D_{os}$$ denotes the observed data. Further, let $$D_c=D_o \cup \left\{\left(\gamma^{(k)}_{t^{(k)}_{\ell}}\right)^R, \ell=1,2,\ldots,T^{(k)},k=1,2,\ldots,K \right\}$$ denote the complete data. Let $${\boldsymbol \theta}=( {\boldsymbol \beta}^*,\Omega,\Sigma^*)$$ denote the collection of all model parameters, where $$\Sigma^*= \left(\sigma^{2(1)}_{t^{(1)}_{1}},\dots,\sigma^{2(1)}_{t^{(1)}_{T^{(1)}}},\dots, \sigma^{2(K)}_{t^{(K)}_{1}},\dots,\sigma^{2(K)}_{t^{(K)}_{T^{(K)}}}\right)$$. 
Using (3.2) and (3.3) and the independence of $$y^{(k)}_{t^{(k)}_{\ell}}$$ and $$S^{2(k)}_{t^{(k)}_{\ell}}$$, the complete data likelihood function can be written as \begin{align} &L( {\boldsymbol \theta} \mid D_c) \notag\\ &\quad = \prod_{k=1}^{K} \left( \vphantom{\frac{\left((n^{(k)}_{ t^{(k)}_{\ell}}-1)S^{2(k)}_{t^{(k)}_{\ell}}\right)^{\frac{n^{(k)}_{ t^{(k)}_{\ell}}-1}{2}-1}} {\left(2\sigma^{2(k)}_{t^{(k)}_{\ell}}\right)^\frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2}\Gamma\left(\frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2}\right)}} (2\pi)^{-\frac{T^{(k)}}{2}}|\Sigma^{(k)}|^{-\frac{1}{2}} \exp \left\{-\frac{\left({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*-({\boldsymbol \gamma}^{(k)}_{o})^R\right)^T(\Sigma^{(k)})^{-1}\left({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*-({\boldsymbol \gamma}^{(k)}_{o})^R\right)}{2}\right\} \right.\nonumber \\ &\qquad\left. \times \prod_{l=1}^{T^{(k)}} \left[ \frac{ \left((n^{(k)}_{ t^{(k)}_{\ell}}-1)S^{2(k)}_{t^{(k)}_{\ell}}\right)^{ \frac{n^{(k)}_{ t^{(k)}_{\ell}}-1}{2}-1}} {\left(2\sigma^{2(k)}_{t^{(k)}_{\ell}}\right)^\frac{ n^{(k)}_{t^{(k)}_{\ell}}-1}{2}\Gamma\left( \frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2}\right)} \exp\left\{-\frac{ (n^{(k)}_{t^{(k)}_{\ell}}-1)S^{2(k)}_{t^{(k)}_{\ell}}}{ 2\sigma^{2(k)}_{t^{(k)}_{\ell}}}\right\}\right] \times f(({\boldsymbol \gamma}^{(k)})^R \mid \Omega) \right), \label{completelikelihood} \end{align} (3.4) where $$f(({\boldsymbol \gamma}^{(k)})^R \mid \Omega)$$ is the probability density function corresponding to a $$N_{T+1}(\mathbf{0}, \Omega)$$ distribution. The random effects $$({\boldsymbol \gamma}^{(k)})^R$$ can be directly integrated out from (3.4) because they follow a multivariate normal distribution. In equation 3.3, the random effects $$({\boldsymbol \gamma}^{(k)}_{o})^R$$ are distributed as multivariate normal, and are independent of $${\boldsymbol \epsilon}^{(k)}$$. 
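Because the random effects are normal and enter additively, integrating them out leaves $${\boldsymbol y}^{(k)}$$ normal with covariance $$(E^{(k)})^T \Omega E^{(k)} + \Sigma^{(k)}$$. One trial's contribution to the observed-data log-likelihood can therefore be sketched as follows; the toy dimensions and numbers are illustrative only:

```python
import numpy as np

def marginal_loglik(y, W, beta_star, E, Omega, Sigma):
    """Log-density of y^{(k)} ~ N(W beta*, E' Omega E + Sigma): the trial's
    contribution to the observed-data likelihood after the normal random
    effects have been integrated out analytically."""
    mu = W @ beta_star
    V = E.T @ Omega @ E + Sigma          # marginal covariance of y^{(k)}
    r = y - mu
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (len(y) * np.log(2.0 * np.pi) + logdet + r @ np.linalg.solve(V, r))

# Toy trial observing arms 0 and 2 out of T+1 = 3 treatments (numbers made up).
E = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
Omega = np.array([[1.0, 0.3, 0.2], [0.3, 1.5, 0.4], [0.2, 0.4, 2.0]])
Sigma = np.diag([0.5, 0.8])              # sigma^{2(k)}_t / n^{(k)}_t per arm
W = np.array([[1.0, 0.0], [1.0, 1.0]])
ll = marginal_loglik(np.array([0.2, -0.1]), W, np.zeros(2), E, Omega, Sigma)
```

Summing such terms over trials, plus the chi-square factor for the sample variances, reproduces the observed-data likelihood in (3.5).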
Hence, we have $${\boldsymbol y}^{(k)} \mid {\boldsymbol \theta} \sim N\big(W^{(k)} {\boldsymbol \beta}^*, \; (E^{(k)})' \Omega E^{(k)} + \Sigma^{(k)} \big)$$. Therefore, the observed data likelihood function is given by $$L( {\boldsymbol \theta} \mid D_o)=L( {\boldsymbol \theta} \mid D_{oy})L( \Sigma^* \mid D_{os}), \label{obslike}$$ (3.5) where $$L( {\boldsymbol \theta} \mid D_{oy}) = \prod_{k=1}^{K} \bigg[ \;(2\pi)^{-\frac{T^{(k)}}{2}} \big|(E^{(k)})^T \Omega E^{(k)} +\Sigma^{(k)}\big|^{-\frac{1}{2}} \exp\Big \{ -\frac{1}{2} ({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*)^T \big((E^{(k)})^T \Omega E^{(k)} +\Sigma^{(k)}\big)^{-1} ({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*) \Big \} \bigg]$$ and $$L( \Sigma^* \mid D_{os})=\prod_{k=1}^{K} \prod_{\ell=1}^{T^{(k)}} \left[ \frac{\left((n^{(k)}_{t^{(k)}_{\ell}}-1)S^{2(k)}_{ t^{(k)}_{\ell}}\right)^{\frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2}-1} } {(2\sigma^{2(k)}_{t^{(k)}_{\ell}})^\frac{n^{(k)}_{ t^{(k)}_{\ell}}-1}{2}\Gamma(\frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2})} \exp\left\{-\frac{(n^{(k)}_{t^{(k)}_{\ell}}-1)S^{2(k)}_{ t^{(k)}_{\ell}}}{2\sigma^{2(k)}_{t^{(k)}_{\ell}}}\right\}\right]$$. One of the major challenges for the proposed network meta-regression model is that only part of the covariance matrix $$\Omega$$ can be estimated, because some of the treatments are included in only a single trial. For example, for the LDL-C network meta-data, the treatments P, PE, L, LE, and AE were included only in trials 2, 10, and 11. This makes estimation of the variances of the random effects corresponding to these treatments impossible. To overcome this problem, we assume that (i) these $$T+1$$ treatments can be divided into $$G$$ groups; and (ii) the random effects for those treatments within the same group share the same variance.
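Two ingredients of this setup are easy to make concrete numerically: assembling $$\Omega$$ under the grouping assumption and evaluating the marginal log-likelihood of one trial once the random effects are integrated out. The sketch below is illustrative only — the function names, group encoding, and array shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def build_omega(tau2, groups, rho):
    """Omega = V^{1/2} rho V^{1/2} with Omega_jj = tau2[g] for j in group g.
    `groups[j]` gives the variance group of treatment j = 0, ..., T
    (illustrative encoding); `rho` is a (T+1) x (T+1) correlation matrix."""
    v = np.array([tau2[g] for g in groups], dtype=float)  # diagonal of V
    half = np.sqrt(v)
    return half[:, None] * rho * half[None, :]

def trial_loglik(y, W, beta_star, E, Omega, Sigma):
    """Observed-data log-likelihood of one trial:
    y^(k) | theta ~ N(W^(k) beta*, (E^(k))' Omega E^(k) + Sigma^(k))."""
    V = E.T @ Omega @ E + Sigma          # marginal covariance of y^(k)
    r = y - W @ beta_star                # residual vector
    _, logdet = np.linalg.slogdet(V)
    quad = r @ np.linalg.solve(V, r)     # r' V^{-1} r without inverting V
    return -0.5 * (len(y) * np.log(2.0 * np.pi) + logdet + quad)
```

With all treatments in a single variance group and $${\boldsymbol \rho} = I$$, `build_omega` reduces to $$\tau^2_1 I$$, the structure of the one-group model considered later.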
Mathematically, we assume $${\mathscr T} = \bigcup_{g=1}^G {\mathscr G}_g$$ with $${\mathscr G}_g \bigcap {\mathscr G}_{g'} = \emptyset$$ for all $$g \ne g'$$, and let $$\{ \tau^2_g, \; g=1,2,\dots, G\}$$ denote the $$G$$ distinct variances of the random effects. Then, we assume $$\Omega_{jj} = \tau^2_g \; \mbox{ for } j \in {\mathscr G}_g, \label{grouping}$$ (3.6) for $$g=1,2,\dots, G$$. We write the covariance matrix as $$\Omega=V^{\frac{1}{2}}\;{\boldsymbol \rho} \;V^{\frac{1}{2}}$$, where $$V=\text{diag}(\Omega_{00}, \Omega_{11},\dots,\Omega_{TT})$$ is the matrix of the diagonal elements of $$\Omega$$ and $${\boldsymbol \rho}$$ is the corresponding correlation matrix. Thus, the grouping of variances does not imply any grouping of correlations. The determination of $$G$$ and the $${\mathscr G}_g$$ is not an easy task. Here, we propose a two-step grouping strategy: (i) formulate candidate sets of treatment groups based on the treatments' clinical mechanisms of action and (ii) use Bayesian model comparison criteria to select the best set of groups. For the LDL-C network meta-data, the random effect corresponding to the placebo arm is expected to have a smaller variance than the random effects corresponding to the active treatment arms, since the placebo effect should be very similar across trials. Therefore, the placebo arm should stand alone as a group. In addition, statins (which inhibit cholesterol synthesis in the liver) have a very different mechanism of action from EZE (which inhibits the absorption of cholesterol by the small intestine). Therefore, the treatments with statins alone should not be classified into the same group as those with EZE arms. Following this strategy, we first formulate eight sets of $$G$$ and $${\mathscr G}_g$$ and then determine the best set according to Bayesian model comparison criteria.

4. Bayesian inference

4.1.
Priors and posteriors

We assume that $${\boldsymbol \beta}^*$$, $$\Omega$$, and $$\Sigma^{(k)}$$, $$k=1,2,\dots,K$$, are independent a priori. We further assume $${\boldsymbol \beta}^* \sim N_{p+T+1}(0,c_{01} I_{p+T+1})$$. For $$\Sigma^{(k)}=\mbox {diag}\left(\frac{\sigma^{2(k)}_{t^{(k)}_{1}}}{n^{(k)}_{ t^{(k)}_{1}}},\frac{\sigma^{2(k)}_{t^{(k)}_{2}}}{n^{(k)}_{ t^{(k)}_{2}}},\ldots,\frac{\sigma^{2(k)}_{t^{(k)}_{T^{(k)}}}}{n^{(k)}_{ t^{(k)}_{T^{(k)}}}}\right)$$, we assume that $$\sigma^{2(k)}_{t^{(k)}_{\ell}} \sim \text{IG}(a_{00},b_{00})$$, $$\ell=1,2,\dots,T^{(k)}$$, $$k=1,2,\dots,K$$, that is, $$p\left(\sigma^{2(k)}_{t^{(k)}_{\ell}}|a_{00},b_{00}\right)\propto \left(\sigma^{2(k)}_{ t^{(k)}_{\ell}}\right)^{-a_{00}-1}\exp\left\{-\frac{b_{00}}{\sigma^{2(k)}_{t^{(k)}_{\ell}}}\right\}$$. Write $${\boldsymbol \rho} = (\rho_{ij})_{0\le i,j \le T}$$ and assume that $$\pi\big((\rho_{01},\rho_{12},\rho_{02},\rho_{23},\rho_{13},...,\rho_{0T} )\big) \propto 1$$ subject to $${\boldsymbol \rho}$$ being positive definite. We further assume $$\Omega_{jj}=\tau^2_g \sim \text{IG}(a_{0g},b_{0g})$$ for $$j \in {\mathscr G}_g$$, $$g=1,2,\dots, G$$. Note that $$c_{01}$$, $$a_{00}$$, $$b_{00}$$, $$a_{0g}$$, and $$b_{0g}$$ are prespecified hyperparameters. In this article, we use $$c_{01}=100,000$$, $$a_{00}=0.0001$$, $$b_{00}=0.0001$$, $$a_{0g}=0.01$$, and $$b_{0g}=0.01$$, $$g=1,2,\dots, G$$. We use the sampling algorithm based on partial correlations in Yao and others (2015) to sample $${\boldsymbol \rho}$$, which ensures that $${\boldsymbol \rho}$$ is a positive definite correlation matrix. Let $${\boldsymbol \gamma}^R_o=\big( (({\boldsymbol \gamma}^{(1)}_{o})^R)^T, (({\boldsymbol \gamma}^{(2)}_{o})^R)^T,\dots, (({\boldsymbol \gamma}^{(K)}_{o})^R)^T \big)^T$$.
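The $$\text{IG}(a,b)$$ kernel displayed above can be evaluated and sampled directly; the sketch below uses the standard reciprocal-gamma representation. The helper names `log_ig_kernel` and `sample_ig` are illustrative, not part of the authors' code.

```python
import numpy as np

def log_ig_kernel(sigma2, a, b):
    """Unnormalized log density of IG(a, b): (-a - 1) log(sigma2) - b / sigma2,
    matching the kernel displayed for p(sigma^2 | a00, b00)."""
    return (-a - 1.0) * np.log(sigma2) - b / sigma2

def sample_ig(a, b, size, rng):
    """Draw from IG(a, b): if X ~ Gamma(a, scale=1), then b / X ~ IG(a, b)."""
    return b / rng.gamma(a, 1.0, size)
```

With a vague choice such as $$a_{00}=b_{00}=0.0001$$ the prior is nearly flat on $$\log \sigma^2$$; the prior mean $$b/(a-1)$$ exists only for $$a>1$$.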
From the previous discussion and (3.4), the augmented posterior distribution is given by \begin{align} &\; \pi({\boldsymbol \beta}^*,\Omega,\Sigma^*,{\boldsymbol \gamma}^R_o \mid D_o) \nonumber \\ &\quad \propto \prod_{k=1}^{K} |\Sigma^{(k)}|^{-\frac{1}{2}}\exp \left\{-\frac{\left({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*-({\boldsymbol \gamma}^{(k)}_{o})^R\right)'\left(\Sigma^{(k)}\right)^{-1}\left({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*-({\boldsymbol \gamma}^{(k)}_{o})^R\right)}{2}\right\} L( \Sigma^* \mid D_{os}) \nonumber \\ &\qquad\times \prod_{k=1}^{K} |(E^{(k)})' \Omega E^{(k)}|^{-\frac{1}{2}}\exp\left\{-\frac{(({\boldsymbol \gamma}^{(k)}_{o})^R)^T((E^{(k)})' \Omega E^{(k)})^{-1} ({\boldsymbol \gamma}^{(k)}_{o})^R}{2} \right\} \exp\left\{-\frac{({\boldsymbol \beta}^*)'{\boldsymbol \beta}^*}{2c_{01}}\right\} \nonumber \\ &\qquad \times \prod_{k=1}^{K} \prod_{\ell=1}^{T^{(k)}} (\sigma^{2(k)}_{ t^{(k)}_{\ell}})^{-a_{00}-1}\exp\left\{-\frac{b_{00}}{\sigma^{2(k)}_{ t^{(k)}_{\ell}}}\right\} \prod_{g=1}^{G} (\tau^2_g)^{-a_{0g}-1}\exp\left\{-\frac{b_{0g}}{\tau^2_g}\right\}, \label{posterior} \end{align} (4.1) where $$L( \Sigma^* \mid D_{os})$$ is defined in (3.5).

4.2. Computational development

The posterior distribution of $${\boldsymbol \theta}=({\boldsymbol \beta}^*,\Omega,\Sigma^*)$$ in (4.1) is not analytically tractable. However, we can develop an MCMC sampling algorithm to sample from (4.1). The algorithm samples the following blocks of parameters in turn from their respective full conditional distributions: (i) $$[\Sigma^* \mid {\boldsymbol \beta}^*,{\boldsymbol \gamma}^R_o,\Omega,D_o]$$ and (ii) $$[{\boldsymbol \beta}^*,\Omega, {\boldsymbol \gamma}^R_o \mid \Sigma^* ,D_o]$$.
For (ii), we use the modified collapsed Gibbs sampling technique in Chen and others (2000) via the identity $$[{\boldsymbol \beta}^*,\Omega,{\boldsymbol \gamma}^R_o \mid \Sigma^*,D_o]=[{\boldsymbol \beta}^*,\Omega \mid \Sigma^*,D_o][{\boldsymbol \gamma}^R_o \mid \Sigma^*,{\boldsymbol \beta}^*,\Omega,D_o]$$, where $$[{\boldsymbol \beta}^*,\Omega \mid \Sigma^*,D_o]$$ is sampled in turn from the following full conditional distributions: (iia) $$[{\boldsymbol \beta}^* \mid \Sigma^*,\Omega,D_o]$$ and (iib) $$[\Omega \mid \Sigma^*,{\boldsymbol \beta}^*, D_o]$$. For (iib), the sampling scheme is not straightforward. Following Section 4.1, $$\Omega$$ can be written as $$V^{\frac{1}{2}}{\boldsymbol \rho} V^{\frac{1}{2}}$$, so the sampling is divided into two parts, $$[V \mid \Sigma^*, {\boldsymbol \beta}^*,{\boldsymbol \rho} , D_o]$$ and $$[{\boldsymbol \rho} \mid \Sigma^*, {\boldsymbol \beta}^*, V,D_o]$$. Finally, the modified collapsed Gibbs algorithm requires sampling from (iic) $$[{\boldsymbol \gamma}^R_o \mid \Sigma^*,{\boldsymbol \beta}^*,\Omega,D_o]$$. The full conditional distributions for (i), (iia), and (iic), which are given in Appendix A of supplementary material available at Biostatistics online, are either inverse gamma or multivariate normal distributions. Thus, sampling from $$[\Sigma^* \mid {\boldsymbol \beta}^*,{\boldsymbol \gamma}^R_o,\Omega,D_o]$$, $$[{\boldsymbol \beta}^* \mid \Sigma^*,\Omega,D_o]$$, and $$[{\boldsymbol \gamma}^R_o \mid \Sigma^*,{\boldsymbol \beta}^*,\Omega,D_o]$$ is straightforward. The full conditional distributions $$[V \mid \Sigma^*, {\boldsymbol \beta}^*,{\boldsymbol \rho} , D_o]$$ and $$[{\boldsymbol \rho} \mid \Sigma^*, {\boldsymbol \beta}^*, V,D_o]$$ are also given in Appendix A of supplementary material available at Biostatistics online; sampling from these two conditional distributions is not trivial.
In Appendix A of supplementary material available at Biostatistics online, we develop a modified localized Metropolis algorithm to sample from each of these two conditional distributions. The localized Metropolis algorithms used previously in Chen and others (2000) and Yao and others (2015) require the first and second derivatives of the logarithms of the full conditional densities. Unfortunately, computing the first and second derivatives of the log full conditional densities for $$[V \mid \Sigma^*, {\boldsymbol \beta}^*,{\boldsymbol \rho} , D_o]$$ and $$[{\boldsymbol \rho} \mid \Sigma^*, {\boldsymbol \beta}^*, V,D_o]$$ is prohibitive. The modified localized Metropolis algorithm avoids the direct computation of the second derivatives of these log full conditional densities.

4.3. Bayesian model comparison

Recall that in Section 3 we introduced the grouping approach to solve the partial estimability problem of the covariance matrix $$\Omega$$. The grouping of the $$T+1$$ variances into $$G$$ groups motivates the use of model comparison to select an appropriate grouping. We carry out model comparison using the DIC (Spiegelhalter and others, 2002) and the LPML (Ibrahim and others, 2001). Using the previous notation, the collection of all model parameters is denoted by $${\boldsymbol \theta}=( {\boldsymbol \beta}^*,\Omega,\Sigma^*)$$. Let $$D^{(k)}_{oy}=\left\{ (y^{(k)}_{t^{(k)}_{\ell}},n^{(k)}_{t^{(k)}_{\ell}}, {\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}}), \; \ell=1,\ldots,T^{(k)}\right\}$$ denote the response variables, sample sizes, and covariates for the $$k$$th trial.
We define the trial-based deviance function $$\text{Dev}^{(k)}({\boldsymbol \theta})$$ based on the observed-data likelihood corresponding to the response variables $${\boldsymbol y}^{(k)}$$, that is, $$\text{Dev}^{(k)}({\boldsymbol \theta}) = -2\log f( {\boldsymbol \theta} \mid D^{(k)}_{oy})$$, where $$f( {\boldsymbol \theta} \mid D^{(k)}_{oy})$$ is the density function of a $$N(W^{(k)}{\boldsymbol \beta}^*, \; (E^{(k)})'\Omega E^{(k)}+\Sigma^{(k)})$$ distribution. The trial-based $$\text{DIC}^{(k)}$$ is given by $$\text{DIC}^{(k)}= \text{Dev}^{(k)}(\bar{{\boldsymbol \theta}}) + 2p^{(k)}_D$$, where $$\bar{{\boldsymbol \theta}}=E[{\boldsymbol \theta} \mid D_o]$$ is the posterior mean of $${\boldsymbol \theta}$$, $$p_D^{(k)}=\overline{\text{Dev}^{(k)}({\boldsymbol \theta})} - \text{Dev}^{(k)}(\bar{{\boldsymbol \theta}})$$, and $$\overline{\text{Dev}^{(k)}({\boldsymbol \theta})} = -2 E_{{\boldsymbol \theta}} [\log f( {\boldsymbol \theta} \mid D^{(k)}_{oy})]$$ is the posterior mean deviance. The overall deviance function $$\text{Dev}({\boldsymbol \theta})$$ is thus given by $$\text{Dev}({\boldsymbol \theta}) = -2\log L( {\boldsymbol \theta} \mid D_{oy}) = \sum_{k=1}^{K} \text{Dev}^{(k)}({\boldsymbol \theta})$$, and the DIC is $$\text{DIC}= \text{Dev}(\bar{{\boldsymbol \theta}}) + 2p_D = \sum_{k=1}^{K} \text{DIC}^{(k)}, \label{DIC}$$ (4.2) where $$\text{Dev}(\bar{{\boldsymbol \theta}})=\sum_{k=1}^{K}\text{Dev}^{(k)}(\bar{{\boldsymbol \theta}})$$ is a measure of goodness of fit, and $$p_D=\overline{\text{Dev}({\boldsymbol \theta})} - \text{Dev}(\bar{{\boldsymbol \theta}}) = \sum_{k=1}^{K}p_D^{(k)}$$ is the effective number of model parameters. To define the LPML, let $$D^{(-k)}_{oy}=\left\{ (y^{(j)}_{t^{(j)}_{\ell}}, n^{(j)}_{t^{(j)}_{\ell}}, {\boldsymbol x}^{(j)}_{t^{(j)}_{\ell}}), \; \ell=1,\ldots,T^{(j)},j=1,\ldots,k-1,k+1,\ldots,K\right\}$$ denote the response variables with the $$k$$th trial deleted.
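Given MCMC output, the DIC bookkeeping above — and the CPO-based LPML defined next — reduce to a few lines. The sketch below assumes the deviance draws and a per-trial log-likelihood array as inputs; names and shapes are illustrative, not the authors' code.

```python
import numpy as np

def dic(dev_draws, dev_at_mean):
    """DIC = Dev(theta_bar) + 2 p_D, with p_D = mean Dev - Dev(theta_bar)
    (Spiegelhalter and others, 2002).  `dev_draws` holds Dev(theta^(b)) over
    the MCMC iterates; `dev_at_mean` is Dev evaluated at the posterior mean."""
    p_d = np.mean(dev_draws) - dev_at_mean
    return dev_at_mean + 2.0 * p_d, p_d

def lpml(loglik_draws):
    """LPML = sum_k log CPO^(k), where CPO^(k) is the harmonic mean of
    f(y^(k) | theta^(b)) over b = 1, ..., B (Dey and others, 1995).
    `loglik_draws` is a (B, K) array of log f(y^(k) | theta^(b)); the
    harmonic mean is computed on the log scale (log-sum-exp) for stability."""
    B = loglik_draws.shape[0]
    neg = -loglik_draws                       # log of 1 / f(y^(k) | theta^(b))
    m = neg.max(axis=0)
    log_cpo = np.log(B) - (m + np.log(np.exp(neg - m).sum(axis=0)))
    return log_cpo.sum(), log_cpo
```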
The conditional predictive ordinate $$\text{CPO}^{(k)}$$ describes how strongly the response variables of the $$k$$th trial support the model and can be written as $$\text{CPO}^{(k)} = \int f({\boldsymbol y}^{(k)}\mid {\boldsymbol \theta}) \pi({\boldsymbol \theta} \mid D^{(-k)}_{oy}, D_{os}) {\rm d}{\boldsymbol \theta} = \frac{1}{ \int \frac{1}{f({\boldsymbol y}^{(k)}\mid {\boldsymbol \theta})} \pi({\boldsymbol \theta}\mid D_o) {\rm d}{\boldsymbol \theta}}$$, where $$\pi({\boldsymbol \theta}\mid D^{(-k)}_{oy}, D_{os})$$ is the posterior distribution of $${\boldsymbol \theta}$$ with data $$(D^{(-k)}_{oy}, D_{os})$$, and $$f({\boldsymbol y}^{(k)}\mid {\boldsymbol \theta})$$ is the density function of a $$N(W^{(k)}{\boldsymbol \beta}^*, \; (E^{(k)})'\Omega E^{(k)}+\Sigma^{(k)})$$ distribution. The LPML can be used as a criterion-based measure for model selection, which is given by \begin{align} \text{LPML} = \sum_{k=1}^K \log \big(\text{CPO}^{(k)}\big). \label{lpml} \end{align} (4.3) Since closed forms of the $$\text{CPO}^{(k)}$$ are not available, we approximate them using the Gibbs iterates $$\{{\boldsymbol \theta}^{(b)}, b=1,...,B\}$$. Following Dey and others (1995), $$\text{CPO}^{(k)}$$ can be approximated by the Monte Carlo estimate $$\widehat{\text{CPO}^{(k)}} = B\Big[\sum_{b=1}^{B}\big\{ f({\boldsymbol y}^{(k)}\mid {\boldsymbol \theta}^{(b)}) \big\}^{-1}\Big]^{-1}$$.

5. Analysis of the LDL-C network meta-data

Let $$y^{(k)}_{t^{(k)}_{\ell}}$$ be the mean percent change in LDL-C from the baseline value for the $$t^{(k)}_{\ell}$$th treatment arm in the $$k$$th trial.
The vector of covariates is $${\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}}=\big($$1, $$\text{(bl_ldlc)}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{(bl_hdlc)}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{(bl_tg)}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{age}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{white}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{male}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{BMI}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{potency_med}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{potency_high}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{duration}^{(k)}_{t^{(k)}_{\ell}}\big)^T$$, and the corresponding regression coefficient vector is $${\boldsymbol \beta}$$. We fit the meta-regression model defined in (3.1) and (3.2) to the LDL-C network meta-data using the sampling algorithm proposed in Section 4.2. Let $$\mathcal{M}_1,...,\mathcal{M}_8$$ represent eight different groupings, whose definitions are given in Table 1. Among the eight groupings, model $$\mathcal{M}_1$$ is the simplest and therefore serves as the “benchmark” model. We computed the DICs defined in (4.2) and the LPMLs defined in (4.3) under models $$\mathcal{M}_1$$ - $$\mathcal{M}_8$$ and report the results in Table 1. Since some treatments in trials 2, 10, and 11 are not included in any other trials, the calculation of the LPML excluded those trials. We see from Table 1 that (i) the DIC value under the one group model $$\mathcal{M}_1$$ (381.33) is larger than the DIC value under the four group model $$\mathcal{M}_2$$ (377.84), which implies that separating the variances of statins, EZE, and statins with EZE from the variance of PBO is necessary for a better model fit; (ii) among the five group models $$\mathcal{M}_3$$ - $$\mathcal{M}_5$$, separating the variance of R from the all-statin group (i.e. $$\mathcal{M}_4$$) gives the smallest DIC value (371.50), which is also smaller than the DIC value under model $$\mathcal{M}_2$$; and (iii) among the six group models $$\mathcal{M}_6$$ - $$\mathcal{M}_8$$, separating the variances of A and R from the all-statin group (i.e.
$$\mathcal{M}_8$$) has the smallest DIC value (368.33), which is also smaller than the DIC value under model $$\mathcal{M}_4$$. The DIC values under models $$\mathcal{M}_1$$, $$\mathcal{M}_2$$, $$\mathcal{M}_4$$, and $$\mathcal{M}_8$$ indicate that the smallest DIC for each fixed $$G$$ is a decreasing function of $$G$$ and the “best” DIC value is attained at $$G=6$$. The LPML values under $$\mathcal{M}_1$$ - $$\mathcal{M}_8$$ behave similarly to the DIC values, and the LPML value under model $$\mathcal{M}_8$$ is the largest ($$-$$160.50), which is consistent with the smallest DIC value under model $$\mathcal{M}_8$$.

Table 1. Description and comparison of models $$\mathcal{M}_1$$ - $$\mathcal{M}_8$$

Model | $$G$$ | $$\Omega$$ | Description | DIC | LPML
$$\mathcal{M}_1$$ | 1 | $$\Omega_{jj}=\tau^2_1, j=0,...,10$$ | The random effects for all 11 arms have the same variance | 381.33 | $$-$$165.30
$$\mathcal{M}_2$$ | 4 | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{jj}=\tau^2_2, j=1,...,5$$; $$\Omega_{66}=\tau^2_3$$; $$\Omega_{jj}=\tau^2_4, j=7,...,10$$ | PBO alone; S/A/L/R/P; EZE alone; SE/AE/LE/PE | 377.84 | $$-$$164.68
$$\mathcal{M}_3$$ | 5 | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{11}=\tau^2_2$$; $$\Omega_{jj}=\tau^2_3, j=2,...,5$$; $$\Omega_{66}=\tau^2_4$$; $$\Omega_{jj}=\tau^2_5, j=7,...,10$$ | PBO alone; S alone; A/L/R/P; EZE alone; SE/AE/LE/PE | 373.24 | $$-$$161.93
$$\mathcal{M}_4$$ | 5 | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{44}=\tau^2_2$$; $$\Omega_{jj}=\tau^2_3, j=1,2,3,5$$; $$\Omega_{66}=\tau^2_4$$; $$\Omega_{jj}=\tau^2_5, j=7,...,10$$ | PBO alone; R alone; S/A/L/P; EZE alone; SE/AE/LE/PE | 371.50 | $$-$$161.87
$$\mathcal{M}_5$$ | 5 | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{22}=\tau^2_2$$; $$\Omega_{jj}=\tau^2_3, j=1,3,4,5$$; $$\Omega_{66}=\tau^2_4$$; $$\Omega_{jj}=\tau^2_5, j=7,...,10$$ | PBO alone; A alone; S/L/R/P; EZE alone; SE/AE/LE/PE | 377.85 | $$-$$164.90
$$\mathcal{M}_6$$ | 6 | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{11}=\tau^2_2$$; $$\Omega_{44}=\tau^2_3$$; $$\Omega_{jj}=\tau^2_4, j=2,3,5$$; $$\Omega_{66}=\tau^2_5$$; $$\Omega_{jj}=\tau^2_6, j=7,...,10$$ | PBO alone; S alone; R alone; A/L/P; EZE alone; SE/AE/LE/PE | 369.72 | $$-$$161.19
$$\mathcal{M}_7$$ | 6 | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{11}=\tau^2_2$$; $$\Omega_{22}=\tau^2_3$$; $$\Omega_{jj}=\tau^2_4, j=3,4,5$$; $$\Omega_{66}=\tau^2_5$$; $$\Omega_{jj}=\tau^2_6, j=7,...,10$$ | PBO alone; S alone; A alone; L/R/P; EZE alone; SE/AE/LE/PE | 371.84 | $$-$$161.64
$$\mathcal{M}_8$$ | 6 | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{22}=\tau^2_2$$; $$\Omega_{44}=\tau^2_3$$; $$\Omega_{jj}=\tau^2_4, j=1,3,5$$; $$\Omega_{66}=\tau^2_5$$; $$\Omega_{jj}=\tau^2_6, j=7,...,10$$ | PBO alone; A alone; R alone; S/L/P; EZE alone; SE/AE/LE/PE | 368.33 | $$-$$160.50

Model $$\mathcal{M}_1$$ represents that all treatment arms share the same variance; model $$\mathcal{M}_2$$ has 4 groups, in which PBO alone is in the first group, all statins (S, A, L, R, P) are in the second group, EZE is in the third group, and all statins with EZE (SE, AE, LE, PE) are in the fourth group; models $$\mathcal{M}_3$$ - $$\mathcal{M}_5$$ are similar to model $$\mathcal{M}_2$$ but separate one statin (S, A, or R), each of which is involved in multiple trials, from the other statins; models $$\mathcal{M}_6$$ - $$\mathcal{M}_8$$ are also similar to model $$\mathcal{M}_2$$ but separate two statins (S and R, S and A, or A and R) from the other statins.

The trial-based $$\text{DIC}^{(k)}$$’s and $$\text{CPO}^{(k)}$$’s defined in Section 4.3 are also computed. Plots of the $$\text{DIC}^{(k)}$$’s and $$\text{log(CPO}^{(k)})$$’s under model $$\mathcal{M}_8$$ versus model $$\mathcal{M}_1$$ are shown in Figure 2. In the plot of the $$\text{DIC}^{(k)}$$’s in Figure 2, 21 dots are red and 8 dots are blue, which implies that 21 out of 29 trials favor model $$\mathcal{M}_8$$ over model $$\mathcal{M}_1$$ in terms of individual DIC. In particular, the values of $$\text{DIC}_1$$, $$\text{DIC}_3$$, $$\text{DIC}_6$$, $$\text{DIC}_{10}$$, and $$\text{DIC}_{21}$$ under model $$\mathcal{M}_8$$ are much smaller than those under model $$\mathcal{M}_1$$. In the plot of the $$\text{log(CPO}^{(k)})$$’s in Figure 2, 22 out of 26 trials favor model $$\mathcal{M}_8$$. The value of $$\text{log(CPO}_{21})$$ under model $$\mathcal{M}_8$$ is substantially larger than that under model $$\mathcal{M}_1$$. These results show that individual trials may favor different models; however, compared with the “benchmark” model $$\mathcal{M}_1$$, more trials favor model $$\mathcal{M}_8$$. This is consistent with the results in Table 1. Fig. 2.
Individual DIC and log(CPO) plots of model $$\mathcal{M}_1$$ versus model $$\mathcal{M}_8$$. The plot of the $$\text{DIC}^{(k)}$$’s is on the left and the plot of the $$\text{log(CPO}^{(k)})$$’s is on the right. The filled circle points represent the trials that favor the x-axis model over the y-axis model, and the empty triangle points represent the reverse. Differences of $$\text{DIC}^{(k)}$$’s or $$\text{log(CPO}^{(k)})$$’s between the two models larger than one are labeled with trial IDs.

The posterior estimates, including posterior means, posterior standard deviations (SDs), and $$95\%$$ highest posterior density (HPD) intervals of the parameters under models $$\mathcal{M}_8$$ and $$\mathcal{M}_1$$, are reported in Table 2 and Table S6 of supplementary material available at Biostatistics online. We see from these two tables that the posterior estimates of the overall treatment effects given the covariates were similar under the two models. Except for PBO, patients on all other treatments had substantial percent changes from baseline in LDL-C (i.e. the $$95\%$$ HPD intervals did not contain 0). The estimate for $$\gamma_8$$ (i.e. AE) was the lowest ($$-$$51.93), which indicates that treatment AE had the largest percent reduction from baseline in LDL-C. Among the ten covariates, only the baseline HDL-C regression coefficient had an HPD interval not containing zero under model $$\mathcal{M}_8$$.
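The two posterior summaries reported in Table 2 — the $$95\%$$ HPD interval and $$P(\gamma_j > 0)$$ — are simple to compute from MCMC draws. The sketch below uses the standard shortest-interval HPD estimate from sorted draws; the function names are illustrative, not the authors' code.

```python
import numpy as np

def hpd_interval(draws, prob=0.95):
    """Shortest interval containing a `prob` fraction of the sorted draws:
    the empirical highest posterior density interval (for a unimodal
    posterior)."""
    x = np.sort(np.asarray(draws, dtype=float))
    n = len(x)
    m = max(1, int(np.floor(prob * n)))   # number of draws spanned
    widths = x[m:] - x[:n - m]            # widths of all candidate intervals
    i = int(np.argmin(widths))
    return x[i], x[i + m]

def prob_positive(gamma_draws):
    """P(gamma_j > 0) estimated as the fraction of MCMC draws above zero --
    the analog of a P-value used in the last column of Table 2."""
    return (np.asarray(gamma_draws) > 0).mean(axis=0)
```

For a right-skewed set of draws the HPD interval is shorter than the equal-tailed interval, which is why it is the preferred summary here.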
The posterior mean (SD) and $$95\%$$ HPD interval for $$\beta_2$$ (i.e. baseHDLC) were $$-1.49$$ $$(0.74)$$ and $$(-2.93, -0.02)$$, respectively. This result indicates a larger percent reduction in LDL-C from baseline for higher baseline values of HDL-C. In contrast, no covariate coefficients were significant under model $$\mathcal{M}_1$$. The posterior mean (SD) of $$\tau^2_1$$ (i.e. the variance shared by the random effects for all 11 arms) under model $$\mathcal{M}_1$$ was 8.07 (1.83). Under the six group model $$\mathcal{M}_8$$, the posterior means (SDs) of $$\tau^2_1$$-$$\tau^2_6$$ were 2.14 (3.05), 4.90 (2.91), 14.65 (7.15), 2.43 (3.82), 2.21 (3.58), and 15.24 (8.57), respectively. Note that under $$\mathcal{M}_8$$, the posterior mean of $$\tau^2_1$$ (i.e. the variance of the random effect for PBO) is much smaller than the posterior mean of $$\tau^2_1$$ under model $$\mathcal{M}_1$$, where the random effect for PBO was assumed to share the same variance with the random effects for the other treatments. Moreover, the posterior estimates of the variances varied substantially across groups under model $$\mathcal{M}_8$$. Therefore, the one group model $$\mathcal{M}_1$$ cannot fit the data well because the variances of the random effects differ among these treatments, which further confirms the DIC and LPML results in Table 1. The last column of Table 2 reports the posterior probability that each treatment effect is positive. Since the goal is a reduction in LDL-C, the posterior probability $$P(\gamma_j > 0)$$ can be viewed as an analog of a P-value for each treatment effect. A small $$P(\gamma_j > 0)$$ indicates strong evidence against the statement that the treatment is not effective. Table 2.
Posterior estimates of the parameters under model $$\mathcal{M}_8$$

Variables | Parameter | Posterior Mean | Posterior SD | $$95\%$$ HPD Interval | $$P(\gamma_j > 0)$$
baseLDLC | $$\beta_1$$ | $$-$$0.10 | 0.63 | ($$-$$1.30, 1.18) |
baseHDLC | $$\beta_2$$ | $$-$$1.49 | 0.74 | ($$-$$2.93, $$-$$0.02) |
baseTG | $$\beta_3$$ | 0.52 | 0.60 | ($$-$$0.63, 1.72) |
age | $$\beta_4$$ | $$-$$0.74 | 0.54 | ($$-$$1.79, 0.35) |
white | $$\beta_5$$ | $$-$$0.79 | 0.53 | ($$-$$1.87, 0.20) |
male | $$\beta_6$$ | $$-$$1.21 | 0.75 | ($$-$$2.64, 0.29) |
BMI | $$\beta_7$$ | 0.41 | 0.60 | ($$-$$0.76, 1.58) |
potency_med | $$\beta_8$$ | 3.40 | 3.06 | ($$-$$2.38, 9.65) |
potency_high | $$\beta_9$$ | $$-$$1.69 | 3.40 | ($$-$$8.19, 5.19) |
duration | $$\beta_{10}$$ | 1.09 | 0.63 | ($$-$$0.19, 2.29) |
PBO | $$\gamma_{0}$$ | 1.20 | 5.47 | ($$-$$9.69, 11.87) | 0.59
S | $$\gamma_{1}$$ | $$-$$40.43 | 1.11 | ($$-$$42.63, $$-$$38.26) | 0.00
A | $$\gamma_{2}$$ | $$-$$44.60 | 1.64 | ($$-$$47.81, $$-$$41.36) | 0.00
L | $$\gamma_{3}$$ | $$-$$27.84 | 3.89 | ($$-$$35.35, $$-$$20.01) | $$5\times 10^{-5}$$
R | $$\gamma_{4}$$ | $$-$$42.22 | 2.14 | ($$-$$46.48, $$-$$38.08) | 0.00
P | $$\gamma_{5}$$ | $$-$$27.81 | 3.91 | ($$-$$35.30, $$-$$19.96) | $$5\times 10^{-5}$$
E | $$\gamma_{6}$$ | $$-$$18.99 | 5.41 | ($$-$$29.49, $$-$$8.25) | $$1.35\times 10^{-3}$$
SE | $$\gamma_{7}$$ | $$-$$47.48 | 2.19 | ($$-$$51.72, $$-$$43.10) | 0.00
AE | $$\gamma_{8}$$ | $$-$$51.93 | 4.52 | ($$-$$60.67, $$-$$42.74) | 0.00
LE | $$\gamma_{9}$$ | $$-$$47.84 | 4.55 | ($$-$$56.81, $$-$$38.85) | 0.00
PE | $$\gamma_{10}$$ | $$-$$47.02 | 4.66 | ($$-$$56.07, $$-$$37.65) | 0.00
Variance of random effects | $$\tau^2_1$$ | 2.14 | 3.05 | (0.07, 6.95) |
 | $$\tau^2_2$$ | 4.90 | 2.91 | (1.06, 10.73) |
 | $$\tau^2_3$$ | 14.65 | 7.15 | (4.62, 28.43) |
 | $$\tau^2_4$$ | 2.43 | 3.82 | (0.07, 7.61) |
 | $$\tau^2_5$$ | 2.21 | 3.58 | (0.07, 7.33) |
 | $$\tau^2_6$$ | 15.24 | 8.57 | (4.19, 31.14) |
0.20) male $$\beta_6$$ $$-$$1.21 0.75 ($$-$$2.64, 0.29) BMI $$\beta_7$$ 0.41 0.60 ($$-$$0.76, 1.58) potency_med $$\beta_8$$ 3.40 3.06 ($$-$$2.38, 9.65) potency_high $$\beta_9$$ $$-$$1.69 3.40 ($$-$$8.19, 5.19) duration $$\beta_{10}$$ 1.09 0.63 ($$-$$0.19, 2.29) PBO $$\gamma_{0}$$ 1.20 5.47 (-9.69, 11.87) 0.59 S $$\gamma_{1}$$ $$-$$40.43 1.11 ($$-$$42.63, $$-$$38.26) 0.00 A $$\gamma_{2}$$ $$-$$44.60 1.64 ($$-$$47.81, $$-$$41.36) 0.00 L $$\gamma_{3}$$ $$-$$27.84 3.89 ($$-$$35.35, $$-$$20.01) $$5\times 10^{-5}$$ R $$\gamma_{4}$$ $$-$$42.22 2.14 ($$-$$46.48, $$-$$38.08) 0.00 P $$\gamma_{5}$$ $$-$$27.81 3.91 ($$-$$35.30, $$-$$19.96) $$5\times 10^{-5}$$ E $$\gamma_{6}$$ $$-$$18.99 5.41 ($$-$$29.49, $$-$$8.25) $$1.35\times 10^{-3}$$ SE $$\gamma_{7}$$ $$-$$47.48 2.19 ($$-$$51.72, $$-$$43.10) 0.00 AE $$\gamma_{8}$$ $$-$$51.93 4.52 ($$-$$60.67, $$-$$42.74) 0.00 LE $$\gamma_{9}$$ $$-$$47.84 4.55 ($$-$$56.81, $$-$$38.85) 0.00 PE $$\gamma_{10}$$ $$-$$47.02 4.66 ($$-$$56.07, $$-$$37.65) 0.00 $$\tau^2_1$$ 2.14 3.05 (0.07, 6.95) $$\tau^2_2$$ 4.90 2.91 (1.06, 10.73) Variance of $$\tau^2_3$$ 14.65 7.15 (4.62, 28.43) random effects $$\tau^2_4$$ 2.43 3.82 (0.07, 7.61) $$\tau^2_5$$ 2.21 3.58 (0.07, 7.33) $$\tau^2_6$$ 15.24 8.57 (4.19, 31.14) Table S7 in Appendix B of supplementary material available at Biostatistics online presented the posterior means, posterior SDs, and $$95\%$$ HPD intervals for the pairwise differences in treatments means (the mean percent reductions in LDL-C from the baseline) after adjusting for the aggregate covariates under model $$\mathcal{M}_8$$. The following observations are noteworthy. First, as expected, all statins, EZE and statins with EZE combinations provided a substantially higher reduction in LDL-C than PBO. 
Second, among the 10 pairwise comparisons between the five statins, (a) S, A, and R provided a significantly higher LDL-C reduction than L and P, (b) A provided a significantly higher LDL-C reduction than S, and (c) the remaining three pairwise comparisons were not significant. Third, statins provided a significantly higher LDL-C reduction than EZE. Fourth, considering the statin-with-EZE combination therapies, (a) the mean LDL-C reductions for these four combination therapies were not significantly different from each other, and (b) all four combination therapies provided a significantly higher LDL-C reduction than the corresponding statin mono-therapy and also than E mono-therapy, except for AE, in which case the $$95\%$$ HPD interval for the difference in LDL-C reductions between AE and A was ($$-$$16.11, 1.41). Our fitted model $$\mathcal{M}_8$$ did yield a numerically higher LDL-C reduction for AE than for A. In the clinical literature, this difference in LDL-C reductions between AE and A is known to be highly significant through direct comparisons (Ballantyne and others, 2003). For comparison purposes, we also fitted model $$\mathcal{M}_8$$ without covariates. Table S8 in Appendix B of supplementary material available at Biostatistics online presented the pairwise comparisons under model $$\mathcal{M}_8$$ without adjustment for the aggregate covariates. The difference in LDL-C reductions due to AE and A became highly significant: the posterior mean (SD) of the difference was $$-$$15.97 (3.62), yielding ($$-$$23.07, $$-$$8.74) and ($$-$$32.53, $$-$$0.15) as the $$95\%$$ and $$99.9\%$$ HPD intervals, respectively. Likewise, the differences in LDL-C reductions between R and S and between R and A became significant under model $$\mathcal{M}_8$$ without covariates ($$95\%$$ HPD intervals [$$-$$13.04, $$-$$6.00] and [$$-$$8.35, $$-$$4.38], respectively). The opposite was the case for the LDL-C reduction difference between A and S ($$95\%$$ HPD interval [$$-$$6.78, 0.43]).
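The $$95\%$$ HPD intervals quoted here are computed from the MCMC output using the Chen and Shao (1999) Monte Carlo method: for a unimodal posterior, this amounts to taking the shortest interval among all intervals that contain $$95\%$$ of the sorted draws. The following is a minimal sketch of that construction; the simulated normal draws (matching the posterior mean $$-$$15.97 and SD 3.62 of the AE vs. A difference quoted above) are a hypothetical stand-in for the actual sampler output, not the authors' FORTRAN implementation.

```python
import numpy as np

def hpd_interval(samples, level=0.95):
    """Chen-Shao Monte Carlo HPD estimate: among all intervals that
    contain a fraction `level` of the sorted draws, return the shortest."""
    x = np.sort(np.asarray(samples))
    n = x.size
    m = int(np.floor(level * n))
    widths = x[m:] - x[: n - m]   # widths of all candidate intervals of mass `level`
    j = int(np.argmin(widths))    # index of the shortest one
    return x[j], x[j + m]

# Hypothetical draws standing in for sampler output for the AE - A difference
rng = np.random.default_rng(1)
draws = rng.normal(-15.97, 3.62, size=20_000)
lo, hi = hpd_interval(draws, level=0.95)
```

For a roughly symmetric posterior like this one, the HPD interval nearly coincides with the equal-tailed interval (about mean ± 1.96 SD); the HPD construction matters most for skewed quantities such as the variance components $$\tau^2_g$$.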
All three of these network meta-analysis results (for R vs. S, R vs. A, and A vs. S) under model $$\mathcal{M}_8$$ without covariates were consistent with the direct comparison results of Insull and others (2007). In general, most estimates of the treatment differences from the network meta-analysis under model $$\mathcal{M}_8$$ were consistent with the direct comparisons (where available). Some of the treatment differences were in the right direction numerically but did not reach significance, mainly due to the inclusion of covariates. Finally, the posterior estimates of the correlation parameters under model $$\mathcal{M}_8$$ are reported in Table S9 in Appendix B of supplementary material available at Biostatistics online. From this table, we see that none of the correlation parameters are significant, since all of the 95% HPD intervals contain 0. The absolute and cumulative probabilities under model $$\mathcal{M}_8$$ for all 11 treatments taking each possible rank are plotted in Figures 3 and S1 in Appendix B of supplementary material available at Biostatistics online. The probabilities in Figure 3 were adjusted by including the covariates, while the probabilities in Figure S1 were not. Also reported was the surface under the cumulative ranking curve (SUCRA) (Salanti and others, 2011), which summarizes the ranking probabilities for all treatments in order to obtain a treatment hierarchy. SUCRA can also be used to calculate the normalized mean rank: the mean rank for a treatment is $$1+(1-p)(T-1)$$, where $$p$$ is the SUCRA for the treatment and $$T$$ is the number of treatments. The order of treatments (in descending order) suggested by Figure 3 was AE, SE, LE, PE, A, R, S, L, P, E, and PBO. This suggested ranking order was consistent with the magnitudes of the estimated LDL-C percent reductions in Table 2. The order of treatments according to the SUCRAs shown in Figure S1 was AE, SE, R, A, LE, PE, S, P, L, E, and PBO.
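Ranking probabilities, SUCRAs, and the mean-rank formula above can all be computed directly from the posterior draws of the treatment effects: in each retained MCMC iteration the treatments are ranked, and the rank frequencies are accumulated. A sketch under simulated draws for three hypothetical treatments (more negative means a larger LDL-C reduction, i.e. better); the real analysis would feed in the sampler's draws of the $$\gamma_j$$'s for all 11 arms.

```python
import numpy as np

def ranking_probs(effect_draws):
    """effect_draws: (n_samples, T) posterior draws of the treatment
    effects, where more negative means a larger LDL-C reduction (better).
    Returns a (T, T) matrix whose (j, k) entry is P(treatment j has rank k+1)."""
    ranks = effect_draws.argsort(axis=1).argsort(axis=1)  # rank 0 = best
    T = effect_draws.shape[1]
    return np.stack([(ranks == k).mean(axis=0) for k in range(T)], axis=1)

def sucra(probs):
    """Surface under the cumulative ranking curve for each treatment:
    1 for a treatment certain to be best, 0 for one certain to be worst."""
    T = probs.shape[0]
    cum = np.cumsum(probs, axis=1)[:, :-1]  # cumulative probs over ranks 1..T-1
    return cum.sum(axis=1) / (T - 1)

# Toy example with three hypothetical treatments: clearly best, middle, worst
rng = np.random.default_rng(2)
draws = np.column_stack([rng.normal(-50, 2, 5_000),
                         rng.normal(-30, 2, 5_000),
                         rng.normal(0, 2, 5_000)])
p = ranking_probs(draws)
s = sucra(p)
mean_rank = 1 + (1 - s) * (draws.shape[1] - 1)  # the formula quoted above
```

With well-separated effects, the best treatment gets SUCRA near 1 and mean rank near 1, and the worst gets SUCRA near 0 and mean rank near $$T$$.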
The ranking of treatments thus changed when the aggregate covariates were not included.

Fig. 3. Plots of ranking probabilities for all treatment arms. The dashed line represents the absolute probability and the solid line represents the cumulative probability. SUCRA is the percentage of efficacy of a treatment on the outcome, which would be one when a treatment is certain to be the best and zero when a treatment is certain to be the worst.

In all of the Bayesian computations, we used 20 000 MCMC samples, taken from every fifth iteration after a burn-in of 5000 iterations for each model, to compute all posterior estimates, including posterior means, posterior SDs, $$95\%$$ HPD intervals, DICs and LPMLs, and cumulative ranking curves. The convergence of the MCMC sampling algorithm was checked using several diagnostic procedures discussed in Chen and others (2000). The HPD intervals were computed via the Monte Carlo method developed by Chen and Shao (1999).

## 6. Discussion

Our methodology and analysis were motivated by the fact that head-to-head clinical trials directly comparing all the treatments of interest are not possible in clinical research due to time, resource, and other practical constraints. We have used real data from clinical trials on lipid-lowering therapies to motivate our research and illustrate the network meta-analysis methodology developed here.
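All of the posterior summaries above (posterior means, SDs, and the tail probabilities $$P(\gamma_j > 0)$$ in Table 2) are simple empirical averages over the retained MCMC draws. A minimal sketch, with simulated normal draws as a hypothetical stand-in for the 20 000 samples produced by the actual Gibbs/Metropolis sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 20 000 retained draws of one treatment
# effect gamma_j (here matching the S arm: mean -40.43, SD 1.11);
# in the real analysis these come from the sampler, not a normal generator.
draws = rng.normal(loc=-40.43, scale=1.11, size=20_000)

post_mean = draws.mean()         # posterior mean
post_sd = draws.std(ddof=1)      # posterior SD
p_positive = (draws > 0).mean()  # estimate of P(gamma_j > 0 | data)
```

A small `p_positive` is the Bayesian analogue of a small one-sided P-value: strong evidence that the treatment does lower LDL-C.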
Although we do not know the ground truth, the results obtained for comparisons among LDL-C lowering therapies turn out to be fairly consistent with what is known in the clinical literature, and hence are supportive of the methodology and assumptions used here. In Appendix A of supplementary material available at Biostatistics online, we use the modified localized Metropolis algorithm to generate a positive definite correlation matrix via the partial correlations (Joe, 2006). We also implement the modified localized Metropolis algorithm by reparameterizing the correlation matrix using a spherical co-ordinate system (Lu and Ades, 2009). Both reparameterization methods yield similar posterior estimates, as shown, for example, in Table S10 of Appendix B of supplementary material available at Biostatistics online. There are two fundamental approaches to network meta-analysis: arm-based models and contrast-based models. The arguments for using a contrast-based model are strong when there is a natural base treatment in each study on which to build the model. In our network, we do not have a natural base treatment for some of the studies, and thus picking an arbitrary base treatment for some of the studies in the network may lead to biased assessments of the treatment effect and inappropriate inference. We believe that arm-based methods are most useful in settings in which there is no natural base treatment for some studies, as is the case with our network. Thus, although we are aware of the debate (Dias and Ades, 2016; Hong and others, 2016) and are aware that contrast-based methods can handle confounding when there is a natural base treatment, arm-based methods may be more suitable in settings where no natural base treatment exists for many of the studies.
It is for this reason that we have decided to take an arm-based modeling approach, and we believe that our arm-based approach is useful in our particular network meta-analysis setting. As we have seen from this network meta-data, the dimension of the covariance matrix of the random effects is high and some treatments are included in very few trials or even in a single trial. Therefore, many correlation coefficients among the random effects cannot be estimated. Unlike the variances of the random effects, the correlation coefficients are bounded. Therefore, we simply assume a uniform prior for the correlation matrix $${\boldsymbol \rho}$$ in our analysis. Due to the positive definite constraint on $${\boldsymbol \rho}$$, we have developed a Metropolis-within-Gibbs sampling algorithm to generate $${\boldsymbol \rho}$$ via partial correlations. To allow for borrowing of strength across different pairs of correlation coefficients, a potential extension of the grouping approach for the variances of the random effects is to assume that some pairs of treatments share the same correlations. However, determining the number of groups and selecting group membership for correlation coefficients is much more challenging than for the variances of the random effects. From Table S1a, we see that the $$S^{2(k)}_{t^{(k)}_{\ell}}$$’s are generally large and the values of the $$S^{(k)}_{t^{(k)}_{\ell}}$$’s range from 10.50 to 18.04. Similar to the meta-regression model assumed for the mean response, a potential extension is to assume a log-linear meta-regression model for $$\sigma^{2(k)}_{t^{(k)}_{\ell}}$$ in (3.1) and (3.2). These extensions are currently under investigation. As an extension of the proposed grouping approach, both the total number of groups and the group membership allocation can be assumed random in the model, and a reversible jump MCMC algorithm can then be developed to sample $$G$$ and the membership allocation.
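The partial-correlation device that keeps every Metropolis proposal for $${\boldsymbol \rho}$$ positive definite can be sketched as follows. Each partial correlation may vary freely in $$(-1, 1)$$, and the recursion of Joe (2006) maps any such set back to a valid correlation matrix. This is a schematic reimplementation for illustration, not the authors' FORTRAN code; here `P[i, j]` (for $$i < j$$) denotes the partial correlation of components $$i$$ and $$j$$ given components $$0, \ldots, i-1$$.

```python
import numpy as np

def corr_from_partials(P):
    """Map partial correlations to a full correlation matrix (Joe, 2006).
    P[i, j] (i < j) is the partial correlation of components i and j given
    components 0, ..., i-1. Any values in (-1, 1) yield a positive definite
    R, so a Metropolis step can update the partials one at a time without
    ever leaving the valid region."""
    d = P.shape[0]
    R = np.eye(d)
    for i in range(d - 1):
        for j in range(i + 1, d):
            rho = P[i, j]
            for k in range(i - 1, -1, -1):  # un-condition on k, innermost first
                rho = (rho * np.sqrt((1 - P[k, i] ** 2) * (1 - P[k, j] ** 2))
                       + P[k, i] * P[k, j])
            R[i, j] = R[j, i] = rho
    return R

# Example: arbitrary partials in (-0.9, 0.9) still give a valid matrix
rng = np.random.default_rng(3)
d = 5
P = np.zeros((d, d))
iu = np.triu_indices(d, 1)
P[iu] = rng.uniform(-0.9, 0.9, size=iu[0].size)
R = corr_from_partials(P)
```

The payoff of this parameterization is exactly the property exploited in the Metropolis-within-Gibbs step: no proposal ever needs to be rejected for violating positive definiteness.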
This alternative reversible jump approach may provide an empirical justification of the grouping based on clinical relevance, which is certainly another promising future research project.

## 7. Software

Computer code was written for the FORTRAN 95 compiler. We built an R package which includes the LDL-C network meta-data and serves as an interface calling the FORTRAN code within R. The R package with the built-in data used in this article is available at https://github.com/epochholly/Network-Meta-Analysis-Bayesian-Inference-Multivariate-Random-Effects.

## Supplementary material

Supplementary material is available at http://biostatistics.oxfordjournals.org.

## Acknowledgments

We would like to thank the Editor, an Associate Editor, and two referees for their very helpful comments and suggestions, which have led to a much improved version of the article. Conflict of Interest: None declared.

## Funding

National Institutes of Health (GM70335 and P01CA142538 to M.-H.C. and J.G.I.); Intramural Research Program of the National Institutes of Health and National Cancer Institute (S.K.).

## References

- Adedinsewo D., Taka N., Agasthi P., Sachdeva R., Rust G. and Onwuanyi A. (2016). Prevalence and factors associated with statin use among a nationally representative sample of US adults: National Health and Nutrition Examination Survey, 2011–2012. Clinical Cardiology 9, 491–496.
- Ballantyne C. M., Houri J., Notarbartolo A., Melani L., Lipka L. J., Suresh R., Sun S., LeBeaut A. P., Sager P. T. and Veltri E. P. (2003). Effect of ezetimibe coadministered with atorvastatin in 628 patients with primary hypercholesterolemia. Circulation 19, 2409–2415.
- Chan A. W. and Altman D. G. (2005). Epidemiology and reporting of randomised trials published in PubMed journals. The Lancet 9465, 1159–1162.
- Chen M.-H. and Shao Q. M. (1999). Monte Carlo estimation of Bayesian credible and HPD intervals. Journal of Computational and Graphical Statistics 1, 69–92.
- Chen M.-H., Shao Q. M. and Ibrahim J. G. (2000). Monte Carlo Methods in Bayesian Computation. New York: Springer.
- Dey D. K., Kuo L. and Sahu S. K. (1995). A Bayesian predictive approach to determining the number of components in a mixture distribution. Statistics and Computing 4, 297–305.
- Dias S. and Ades A. E. (2016). Absolute or relative effects? Arm-based synthesis of trial data. Research Synthesis Methods 7, 23–28.
- Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults. (2001). Executive summary of the Third Report of the National Cholesterol Education Program (NCEP) expert panel on detection, evaluation, and treatment of high blood cholesterol in adults (Adult Treatment Panel III). Journal of the American Medical Association 19, 2486–2497.
- Gwon Y., Mo M., Chen M.-H., Li J., Xia H. A. and Ibrahim J. G. (2016). Network meta-regression for ordinal outcomes: applications in comparing Crohn's disease treatments. Technical Report 16-28, Department of Statistics, University of Connecticut.
- Higgins J. and Whitehead A. (1996). Borrowing strength from external trials in a meta-analysis. Statistics in Medicine 24, 2733–2749.
- Hong H., Carlin B. P., Shamliyan T. A., Wyman J. F., Ramakrishnan R., Sainfort F. and Kane R. L. (2013). Comparing Bayesian and frequentist approaches for multiple outcome mixed treatment comparisons. Medical Decision Making 5, 702–714.
- Hong H., Chu H., Zhang J. and Carlin B. P. (2016). A Bayesian missing data framework for generalized multiple outcome mixed treatment comparisons. Research Synthesis Methods 7, 6–22.
- Hong H., Price K. L., Fu H. and Carlin B. P. (2017). Bayesian network meta-analysis for multiple endpoints. In: Gatsonis C. and Morton C. S. (editors), Methods in Comparative Effectiveness Research, Chapter 12. Taylor & Francis Group, pp. 385–407.
- Ibrahim J. G., Chen M.-H. and Sinha D. (2001). Bayesian Survival Analysis. New York: Springer.
- Insull W., Ghali J. K., Hassman D. R., Ycas J. W., Gandhi S. K. and Miller E. (2007). Achieving low-density lipoprotein cholesterol goals in high-risk patients in managed care: comparison of rosuvastatin, atorvastatin, and simvastatin in the SOLAR trial. Mayo Clinic Proceedings 5, 543–550.
- Joe H. (2006). Generating random correlation matrices based on partial correlations. Journal of Multivariate Analysis 10, 2177–2189.
- Lu G. and Ades A. E. (2004). Combination of direct and indirect evidence in mixed treatment comparisons. Statistics in Medicine 20, 3105–3124.
- Lu G. and Ades A. E. (2006). Assessing evidence inconsistency in mixed treatment comparisons. Journal of the American Statistical Association 474, 447–459.
- Lu G. and Ades A. E. (2009). Modeling between-trial variance structure in mixed treatment comparisons. Biostatistics 10, 792–805.
- Lumley T. (2002). Network meta-analysis for indirect treatment comparisons. Statistics in Medicine 16, 2313–2324.
- Mills E. J., Kanters S., Thorlund K., Chaimani A., Veroniki A. A. and Ioannidis J. P. (2013). The effects of excluding treatments from network meta-analyses: survey. British Medical Journal 347, f5195.
- Morrone D., Weintraub W. S., Toth P. P., Hanson M. E., Lowe R. S., Lin J., Shah A. K. and Tershakovec A. M. (2012). Lipid-altering efficacy of ezetimibe plus statin and statin monotherapy and identification of factors associated with treatment response: a pooled analysis of over 21,000 subjects from 27 clinical trials. Atherosclerosis 223, 251–261.
- NCHStats. (2013). A Blog of the National Center for Health Statistics. https://nchstats.com/2013/11/14/statistics-on-statin-use/.
- Ross G. (2015). Too Few Americans Take Statins, CDC Study Reveals. http://acsh.org/news/2015/12/04/cdc-study-reveals-that-too-few-americans-are-on-statins.
- Salanti G., Ades A. E. and Ioannidis J. P. (2011). Graphical methods and numerical summaries for presenting results from multiple-treatment meta-analysis: an overview and tutorial. Journal of Clinical Epidemiology 2, 163–171.
- Spiegelhalter D. J., Best N. G., Carlin B. P. and Van Der Linde A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 4, 583–639.
- Yao H., Chen M.-H. and Qiu C. (2011). Bayesian modeling and inference for meta data with applications in efficacy evaluation of an allergic rhinitis drug. Journal of Biopharmaceutical Statistics 5, 992–1005.
- Yao H., Kim S., Chen M.-H., Ibrahim J. G., Shah A. K. and Lin J. (2015). Bayesian inference for multivariate meta-regression with partially observed within-study sample covariance matrix. Journal of the American Statistical Association 510, 528–544.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
# Bayesian inference for network meta-regression using multivariate random effects with applications to cholesterol lowering drugs
Biostatistics, Advance Article, Apr 18, 2018, 18 pages
Publisher: Oxford University Press
ISSN: 1465-4644; eISSN: 1468-4357
DOI: 10.1093/biostatistics/kxy014
### Abstract
Summary: Low-density lipoprotein cholesterol (LDL-C) has been identified as a causative factor for atherosclerosis and related coronary heart disease, and as the main target for cholesterol- and lipid-lowering therapy. Statin drugs inhibit cholesterol synthesis in the liver and are typically the first line of therapy to lower elevated levels of LDL-C. On the other hand, a different drug, Ezetimibe, inhibits the absorption of cholesterol by the small intestine and provides a different mechanism of action. Many clinical trials have been carried out on safety and efficacy evaluation of cholesterol-lowering drugs. To synthesize the results from different clinical trials, we examine treatment-level (aggregate) network meta-data from 29 double-blind, randomized, active- or placebo-controlled statins +/− Ezetimibe clinical trials on adult treatment-naïve patients with primary hypercholesterolemia. In this article, we propose a new approach to carry out Bayesian inference for arm-based network meta-regression. Specifically, we develop a new strategy of grouping the variances of random effects, in which we first formulate possible sets of the groups of the treatments based on their clinical mechanisms of action and then use Bayesian model comparison criteria to select the best set of groups. The proposed approach is especially useful when some treatment arms are involved in only a single trial. In addition, a Markov chain Monte Carlo sampling algorithm is developed to carry out the posterior computations. In particular, the correlation matrix is generated from its full conditional distribution via partial correlations. The proposed methodology is further applied to analyze the network meta-data from 29 trials with 11 treatment arms.

## 1. Introduction

According to the National Center for Health Statistics, high cholesterol is a risk factor for heart disease, which is the leading cause of death for both men and women.
Nearly 600 000 people die of heart disease in the United States every year, that is, one in every four deaths. Every year about 715 000 Americans have a heart attack, and coronary heart disease alone costs the US over $100 billion annually, which includes the cost of health care services, medications, and lost productivity (NCHStats, 2013). High cholesterol is well known to contribute to heart disease and other cardiovascular diseases. The Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (2001) has issued treatment guidelines identifying low-density lipoprotein cholesterol (LDL-C, "bad" cholesterol) as a causative factor for coronary heart disease and as the main target for cholesterol-lowering and lipid-lowering therapy. Cholesterol-lowering medicines called "statins" work mainly in the liver to decrease the production of cholesterol and reduce cholesterol in the bloodstream. Many clinical trials since the introduction of statin drugs in 1987 have shown statins to drive down the rates of heart attack and stroke. Statin drugs are typically the first choice to drive down elevated levels of LDL-C. An estimated 38.7 million Americans were on a statin in 2011–2012, an increase from 24 million in 2003–2004, and from 12.5 million in 1990–2000 (Adedinsewo and others, 2016). Several effective brands are available on generic labels and are thus quite inexpensive. There are several classes of cholesterol-lowering drugs, but statins have become the drug class of choice because of their demonstrated efficacy and safety. This class of drugs includes: atorvastatin (Lipitor), simvastatin (Zocor), lovastatin (Mevacor), rosuvastatin (Crestor), pravastatin (Pravachol), and fluvastatin (Lescol). Between 2000 and 2014, the fraction of Americans with elevated blood levels of cholesterol declined from 18.3% to 11%, at least partly due to an increase in the use of cholesterol-lowering medications.
The use of any lipid-lowering agent was 20% in 2004 and the data from 2012 showed that 28% of Americans over the age of 40 are taking such a medication (Ross, 2015). But a high blood level of LDL-C remains a major risk factor for heart disease and stroke in the US. In general, statins positively affect the lipid profile by decreasing LDL-C and triglycerides (TG), and increasing high-density lipoprotein cholesterol (HDL-C, ‘good’ cholesterol), although the effect on HDL-C is relatively smaller. Clinical studies have shown that statins significantly reduce the risk of heart attack and death in patients with coronary artery disease and can also reduce cardiac events in patients with high cholesterol levels. Even though statins are the first-line treatment in most patients, lipid goals are frequently not achieved because of inadequate response to therapy, poor compliance, or concerns regarding the increased potential for side effects at higher doses. On the other hand, a drug called Ezetimibe (Zetia) works in the digestive tract. It is unique in the way it helps block absorption of cholesterol that comes from food. Ezetimibe (EZE) can complement statins in targeting both sources of cholesterol. EZE can be given as monotherapy to lower cholesterol levels in patients who are intolerant to statin or in whom treatment with statin is not appropriate. EZE can also be used in combination with a statin in patients whose cholesterol levels remain elevated despite treatment with statin alone. It can be either co-administered with the statin dose or given as a fixed-dose combination tablet (known as Vytorin) containing simvastatin and EZE. A combination tablet of atorvastatin and EZE (known as Liptruzet) and composite packs of rosuvastatin and EZE (known as Rosuzet) are also available around the world. The data for direct comparisons of these combination therapies for their effectiveness are very limited. 
The clinical trials for head-to-head comparisons of these combination therapies are very limited, prompting indirect comparisons using network meta-analysis. This is the motivation and objective of our work in this article. The effects of statins in combination with EZE on LDL-C could vary due to possible drug interactions; they need to be studied and are a subject of this article. The effects of statins have been well studied and are known in the literature. We estimate these effects through both direct and indirect comparisons here in a network meta-analysis framework. Advantages of using combination therapy include greater efficacy through differing mechanisms of action, lower doses of individual drugs, and potential amelioration of side effects experienced with high doses of single agents (Morrone and others, 2012). In network meta-analysis, more than two treatments are compared in different randomized pairwise or multi-arm trials (Lu and Ades, 2006). Approximately a quarter of randomized trials include more than two arms (Chan and Altman, 2005), while the presence of multi-arm trials adds complexity to the analysis. Moreover, in the evidence synthesis process, it is not rare to encounter treatments which are involved in very few trials or even only in one trial (Lu and Ades, 2006; Hong and others, 2013, 2016; Gwon and others, 2016). In the case where some treatments are involved in only one trial, excluding such treatments can sometimes have important effects on the network meta-analysis results (Mills and others, 2013). Bayesian approaches are becoming more popular due to their flexibility and interpretability (Higgins and Whitehead, 1996; Lu and Ades, 2004, 2006). Meanwhile, random effects models are increasingly popular as a useful tool for network meta-analysis (Lu and Ades, 2006; Hong and others, 2013). Compared with fixed effects models, one of the advantages of random effects models is that they allow for borrowing of strength from different trials.
If all treatments are involved in multiple trials, it is desirable to allow the between-trial variances of the random effects to vary across treatments (arm-based model) or treatment comparisons (contrast-based model) (Lumley, 2002; Hong and others, 2016). In practice, however, it is common to assume homogeneity of between-trial variation for all arms. The performance of homogeneous and heterogeneous variance models has been examined in Lu and Ades (2004, 2006, 2009). When the number of treatments is large and some treatments are involved only in a single trial, the variance parameters of the random effects in heterogeneous variance models cannot all be estimated. In addition, an equal-correlation structure is often assumed for the correlation matrix of the random effects, and a value of one half is usually assumed for the correlation (Lu and Ades, 2004, 2006). However, the equal-correlation structure is quite restrictive, since it allows for neither different correlations nor arbitrary negative correlations among the random effects. In this article, we consider network meta-data consisting of 29 trials, 11 treatment arms (10 active treatments plus placebo), and 10 aggregate covariates. We first develop arm-based meta-regression models with multivariate random effects in order to compare these treatments while adjusting for aggregate covariates. A challenge that arises in dealing with 11-dimensional random effects is that some variances of the random effects cannot be estimated, because five treatments are involved only in a single trial. To circumvent this issue, we develop a general framework to formulate possible sets of groups of treatments according to their clinical mechanisms of action, and further use the deviance information criterion (DIC) and the logarithm of the pseudo marginal likelihood (LPML) to select the best group.
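Both model comparison criteria can be computed from pointwise log-likelihoods saved during MCMC: DIC penalizes the posterior-mean deviance by the effective number of parameters, while LPML sums the log conditional predictive ordinates (CPO), each estimated by the harmonic-mean identity. A hedged sketch, illustrated on a toy normal-mean model rather than the full meta-regression:

```python
import numpy as np

def _logsumexp(a, axis=0):
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def dic_lpml(loglik_draws, loglik_at_post_mean):
    """loglik_draws: (M, n) pointwise log-likelihoods over M MCMC draws.
    loglik_at_post_mean: (n,) pointwise log-likelihoods at the posterior
    means of the parameters. Smaller DIC / larger LPML indicate better fit."""
    dev = -2.0 * loglik_draws.sum(axis=1)
    d_bar = dev.mean()                        # posterior mean deviance
    d_hat = -2.0 * loglik_at_post_mean.sum()  # deviance at posterior mean
    p_d = d_bar - d_hat                       # effective number of parameters
    dic = d_bar + p_d
    # log CPO_i = log M - logsumexp_m(-loglik[m, i])  (harmonic mean of L_i)
    m_draws = loglik_draws.shape[0]
    log_cpo = np.log(m_draws) - _logsumexp(-loglik_draws, axis=0)
    return dic, log_cpo.sum(), p_d

# Toy check: normal data with unknown mean, known variance 1 (flat prior)
rng = np.random.default_rng(4)
y = rng.normal(0.0, 1.0, size=50)
mu = rng.normal(y.mean(), np.sqrt(1.0 / y.size), size=4_000)  # posterior draws
ll = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu[:, None]) ** 2
ll_hat = -0.5 * np.log(2 * np.pi) - 0.5 * (y - mu.mean()) ** 2
dic, lpml, p_d = dic_lpml(ll, ll_hat)
# For this one-parameter model, p_d should come out close to 1.
```

In the group selection, the same machinery is applied once per candidate grouping $$\mathcal{M}_1, \ldots, \mathcal{M}_8$$, and the grouping with the smallest DIC (or largest LPML) is retained.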
Rather than assuming known correlations, we specify a non-informative uniform prior for the correlation matrix and then develop a new localized Metropolis algorithm to generate the correlation matrix from its full conditional distribution via partial correlations. Finally, we carry out a detailed analysis of the network meta-data using the proposed methodology, and our results in the treatment comparisons are generally consistent with direct comparisons reported in the literature. The rest of this article is organized as follows. In Section 2, we provide a detailed description of the LDL-C network meta-data from 29 clinical trials, which motivates the proposed methodology. In Section 3, we fully develop the network meta-regression model, specify a multivariate normal distribution for the random effects, and propose a grouping methodology for the covariance matrix. The complete-data likelihood and the observed-data likelihood are also given in Section 3. We present priors and posteriors in Section 4.1, develop a Markov chain Monte Carlo (MCMC) sampling algorithm, including collapsed Gibbs sampling and localized Metropolis sampling, in Section 4.2, and derive Bayesian model comparison criteria in Section 4.3. Section 5 gives a detailed analysis of the LDL-C network meta-data discussed in Section 2. We conclude the article with some discussion in Section 6. 2. The LDL-C network meta-data A systematic search for randomized clinical trials using statins was conducted online using Google Scholar. The search yielded 78 trials: 32 Merck Sharp and Dohme (MSD) sponsored and 46 non-MSD sponsored trials. From these trials, all second-line studies (i.e. studies with patients on statin at study entry) were excluded. From the remaining first-line trials (i.e. studies with patients who were drug-naïve or rendered drug-naïve by wash-out at study entry), those with missing information on the response variable (LDL-C mean percent change) or study covariates (listed below) were excluded.
The inclusion-exclusion flow diagram of studies is given in Figure 1 (a). This left us with 29 double-blind, randomized, active, or placebo-controlled clinical trials on adult treatment-naïve patients with primary hypercholesterolemia (15 MSD-sponsored and 14 non-MSD sponsored). These trials were conducted between 2002 and 2010 and study durations ranged from 5 to 12 weeks. Some trials had longer durations with titration of doses but only the data prior to the first titration were used in the analysis. The primary goal of these clinical trials was to evaluate the LDL-C lowering effects of different statins or ezetimibe plus statins. The treatments used in these trials were placebo (PBO), simvastatin (S), atorvastatin (A), lovastatin (L), rosuvastatin (R), pravastatin (P), ezetimibe (E), and the combinations of S and E (SE), A and E (AE), L and E (LE), and P and E (PE). Ezetimibe is available at only one dose of 10 mg while the statins are available at multiple doses. In this article, the LDL-C lowering effects of different doses of each statin are combined to form the treatment group. Fig. 1. (a) Flow diagram for trials included in the network meta-analysis. (b) LDL-C network meta-data diagram. Each node represents a treatment in the network meta-data. The number associated with the treatment gives the total number of patients across all trials. Each edge represents the direct evidence comparing the treatments it connects, and the involved trial IDs are given for each head-to-head comparison.
A network diagram (for LDL-C mean percent change) of the included treatments based on these 29 trials is presented in Figure 1 (b). Taking SE as an example, we can see from the diagram that (i) SE was included in trials 1, 3, 4, 5, 6, 7, 8, 9, 12, 13, 14, and 15, and 6596 patients were treated with SE in all these trials; (ii) SE was compared head-to-head with PBO, A, R, S, and E, while it was not compared head-to-head with AE, L, LE, PE, and P; and (iii) SE and PBO were compared head-to-head in trials 1, 3, and 6. Note that the size of each node is proportional to the total sample size of the corresponding treatment, and the width of each edge is related to the number of trials which include the direct comparison. The covariates considered here are baseline LDL-C (bl_ldlc) mean, baseline HDL-C (bl_hdlc) mean, baseline TG (bl_trig) mean, age mean, race proportion (of white), gender proportion (of male), body mass index (BMI) mean, proportion of medium statin potency, proportion of high statin potency, and trial duration. We consider mean percent change from baseline in LDL-C as the outcome variable. A summary of the covariates and outcome variables is given in Tables S1a and S1b of the supplementary materials available at Biostatistics online. Tables S2a and S2b of the supplementary material available at Biostatistics online provide the title, treatment groups, treatment duration, and citation for the published primary manuscript for each trial. The patient entry criteria for the studies included in this meta-analysis are given in Tables S3a and S3b of the supplementary material available at Biostatistics online. All head-to-head comparisons from these trials can be easily seen through Table S4 of the supplementary material available at Biostatistics online. 3. Network meta-regression models Suppose we consider $$K$$ randomized trials and a set of treatments $${\mathscr T}=\{0,1,\dots,T\}$$ from all $$K$$ trials.
The $$k$$th trial has $$T^{(k)}$$ treatments, which are denoted by $$\mathscr{T}^{(k)}=\{t^{(k)}_{1},\ldots,t^{(k)}_{T^{(k)}};t^{(k)}_{\ell}\in \mathscr{T},\;\ell=1,\ldots,T^{(k)}\}$$. Let $$y^{(k)}_{ t^{(k)}_{\ell}}$$ denote the aggregate response, which generally represents the sample mean, and $$S^{2(k)}_{t^{(k)}_{\ell}}$$ denote the sample variance for $$\ell=1,2,\ldots,T^{(k)}$$ and $$k=1,2,\ldots,K$$. For the LDL-C network meta-data discussed in Section 2, we have $$K=29$$ trials and $$T=10$$ active treatment arms. We use 0 to denote PBO and 1–10 to denote S, A, L, R, P, E, SE, AE, LE, and PE, respectively. The first trial $$k=1$$ includes treatments PBO, S, E, and SE, thus $$T^{(1)}=4$$ and $$\mathscr{T}^{(1)}=\{t^{(1)}_{1}=0, t^{(1)}_{2}=1,t^{(1)}_{3}=6,t^{(1)}_{4}=7\}$$, which is a subset of $$\mathscr{T}$$. Following Yao and others (2011), Yao and others (2015), and Hong and others (2017), we propose the following random effects model for the network meta-analysis $$y^{(k)}_{t^{(k)}_{\ell}} = ({\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}})^T {\boldsymbol \beta} + \gamma^{(k)}_{t^{(k)}_{\ell}} + \epsilon^{(k)}_{t^{(k)}_{\ell}}, \quad \epsilon^{(k)}_{t^{(k)}_{\ell}} \sim N\big(0,\frac{\sigma^{2(k)}_{ t^{(k)}_{\ell}}}{n^{(k)}_{t^{(k)}_{\ell}}}\big), \label{ymodel}$$ (3.1) and $$\frac{(n^{(k)}_{t^{(k)}_{\ell}}-1)S^{2(k)}_{t^{(k)}_{\ell}}} {\sigma^{2(k)}_{t^{(k)}_{\ell}}} \sim \chi^2_{(n^{(k)}_{t^{(k)}_{\ell}}-1)}, \label{smodel}$$ (3.2) where $$y^{(k)}_{t^{(k)}_{\ell}}$$ and $$S^{2(k)}_{t^{(k)}_{\ell}}$$ are independent. The $$p$$-dimensional vector $${\boldsymbol x}^{(k)}_{ t^{(k)}_{\ell}}$$ represents the aggregate (arm-level) covariates and $${\boldsymbol \beta}$$ is a $$p$$-dimensional vector of regression coefficients corresponding to the aggregate covariate vector $${\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}}$$. 
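To make the data-generating mechanism in (3.1) and (3.2) concrete, the following sketch simulates the aggregate summaries $$(y, S^2)$$ for one hypothetical trial. All function names and numerical values here are illustrative, not taken from the LDL-C data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(X, beta, gamma_trial, sigma2, n, rng):
    """Draw (y, S2) for one trial under models (3.1) and (3.2).

    X           : (T_k, p) aggregate covariate matrix
    beta        : (p,) regression coefficients
    gamma_trial : (T_k,) random effects for the arms in this trial
    sigma2      : (T_k,) within-arm variances sigma^2
    n           : (T_k,) arm sample sizes
    """
    mean = X @ beta + gamma_trial
    # y_t ~ N(x' beta + gamma, sigma^2 / n): sampling error of the arm mean
    y = rng.normal(mean, np.sqrt(sigma2 / n))
    # (n - 1) S^2 / sigma^2 ~ chi^2_{n-1}, drawn independently of y
    S2 = sigma2 * rng.chisquare(n - 1) / (n - 1)
    return y, S2

# toy example: one three-arm trial with p = 2 covariates
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
beta = np.array([-2.0, -10.0])
gamma = rng.normal(0.0, 1.0, size=3)
sigma2 = np.array([15.0, 15.0, 15.0]) ** 2
n = np.array([100, 100, 100])
y, S2 = simulate_trial(X, beta, gamma, sigma2, n, rng)
```

The chi-squared draw for $$S^2$$ mirrors the independence of the arm mean and arm variance assumed in the model.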
The random effect of the $$t^{(k)}_{\ell}$$th treatment within the $$k$$th trial, $$\gamma^{(k)}_{t^{(k)}_{\ell}}$$, is assumed to be independent of $$\epsilon^{(k)}_{t^{(k)}_{\ell}}$$. It captures the dependence of the $$y^{(k)}_{t^{(k)}_{\ell}}$$’s within the trial as well as the heterogeneity across trials. Let $${\boldsymbol \gamma}=(\gamma_0,\gamma_1,\ldots,\gamma_T)'$$ represent the $$(T+1)$$-dimensional vector of the overall treatment effects and $$\Omega$$ denote the $$(T+1)\times(T+1)$$ unknown covariance matrix. We define a collection of unit vectors $$E^{(k)}=(e_{t^{(k)}_{1}},e_{t^{(k)}_{2}},\ldots,e_{t^{(k)}_{T^{(k)}}})$$, where $$e_{t^{(k)}_{\ell}}=(0,\ldots,1,\ldots,0)',\ell=1,\ldots,T^{(k)}$$ with $$t^{(k)}_{\ell}$$th element equal to $$1$$ and $$0$$ otherwise. Thus, $$E^{(k)}$$ is a $$(T+1)\times T^{(k)}$$ matrix. Also, let $$(E^{(k)})^C$$ be a $$(T+1)\times (T+1-T^{(k)})$$ matrix, which consists of the columns of the $$(T+1)\times (T+1)$$ identity matrix $$I_{T+1}$$ that are not included in $$E^{(k)}$$. We let $${\boldsymbol \gamma}^{(k)}=( \gamma^{(k)}_{0}, \gamma^{(k)}_{1}, \ldots,\gamma^{(k)}_{T})'$$ denote the vector of the $$(T+1)$$-dimensional random effects, which would be observed in the $$k$$th trial. Then, $${\boldsymbol \gamma}^{(k)}_{o}= (E^{(k)})^T {\boldsymbol \gamma}^{(k)}$$ is the vector of the $$T^{(k)}$$-dimensional random effects of the treatments that are actually observed in the $$k$$th trial while $${\boldsymbol \gamma}^{(k)}_{m}= ((E^{(k)})^C)^T {\boldsymbol \gamma}^{(k)}$$ is the vector of the $$(T+1-T^{(k)})$$-dimensional random effects of the treatments that are not included in the $$k$$th trial. 
As an illustration, $(E^{(1)})^T= \left(\begin{array}{@{}ccccccccccc@{}} 1&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&1&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&1&0&0&0\\ \end{array}\right)$ and ${\boldsymbol \gamma}^{(1)}_{o}= (E^{(1)})^T {\boldsymbol \gamma}^{(1)}=\left(\begin{array}{@{}cccc@{}} \gamma^{(1)}_{0} & \gamma^{(1)}_{1} & \gamma^{(1)}_{6} & \gamma^{(1)}_{7} \end{array}\right)^T$. A detailed summary of the key notations is provided in Tables S4a and S4b of supplementary material available at Biostatistics online. For the random effects, we assume $${\boldsymbol \gamma}^{(k)} \sim N_{T+1}({\boldsymbol \gamma}, \Omega)$$, which is a multivariate normal distribution with a $$(T+1)$$-dimensional vector of overall effects $${\boldsymbol \gamma}$$ and a $$(T+1) \times (T+1)$$ positive definite covariance matrix $$\Omega$$. Further, let $$({\boldsymbol \gamma}^{(k)})^R={\boldsymbol \gamma}^{(k)} -{\boldsymbol \gamma}$$, so that $$({\boldsymbol \gamma}^{(k)})^R \sim N_{T+1}(\mathbf{0}, \Omega)$$. It is easy to show that $$({\boldsymbol \gamma}^{(k)}_{o})^R \sim N_{T^{(k)}}\big(\mathbf{0}, (E^{(k)})^T\Omega E^{(k)}\big)$$ and $$({\boldsymbol \gamma}^{(k)}_{m})^R \sim N_{T+1-T^{(k)}}\big( \mathbf{0}, ((E^{(k)})^C)^T\Omega (E^{(k)})^C\big)$$, where $$({\boldsymbol \gamma}^{(k)}_{o})^R=(E^{(k)})^T ({\boldsymbol \gamma}^{(k)} - {\boldsymbol \gamma})$$ and $$({\boldsymbol \gamma}^{(k)}_{m})^R=((E^{(k)})^C)^T ({\boldsymbol \gamma}^{(k)}-{\boldsymbol \gamma})$$, $$k=1,2,\dots,K$$. Now, let $${\boldsymbol \epsilon}^{(k)}=\left(\epsilon^{(k)}_{t^{(k)}_{1}}, \epsilon^{(k)}_{ t^{(k)}_{2}},\ldots,\epsilon^{(k)}_{t^{(k)}_{T^{(k)}}}\right)^{'}$$ denote the vector of random errors for the $$k$$th trial. 
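The bookkeeping with $$E^{(k)}$$ and $$(E^{(k)})^C$$ is easy to mechanize. A small numpy sketch reproducing the $$E^{(1)}$$ illustration above (arm indices follow the coding 0 = PBO, 1 = S, 6 = E, 7 = SE):

```python
import numpy as np

T = 10  # treatments are coded 0 (PBO) through 10 (PE), so T + 1 = 11 arms

def selection_matrix(trial_arms, T):
    """E^{(k)}: (T+1) x T^{(k)} matrix whose columns are the unit vectors e_t."""
    E = np.zeros((T + 1, len(trial_arms)))
    for col, t in enumerate(trial_arms):
        E[t, col] = 1.0
    return E

def complement_matrix(trial_arms, T):
    """(E^{(k)})^C: columns of I_{T+1} for the arms NOT included in trial k."""
    missing = [t for t in range(T + 1) if t not in trial_arms]
    return selection_matrix(missing, T)

# trial k = 1 contains PBO, S, E, and SE
arms1 = [0, 1, 6, 7]
E1 = selection_matrix(arms1, T)
E1C = complement_matrix(arms1, T)

gamma_k = np.arange(T + 1, dtype=float)   # stand-in values for gamma^{(k)}
gamma_obs = E1.T @ gamma_k                # picks out (gamma_0, gamma_1, gamma_6, gamma_7)
```

Together, the columns of $$E^{(k)}$$ and $$(E^{(k)})^C$$ form a permutation of the columns of $$I_{T+1}$$, which is what makes the observed/missing split of the random effects exact.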
Then $${\boldsymbol \epsilon}^{(k)} \sim N_{T^{(k)}}(0,\Sigma^{(k)})$$, where $$\Sigma^{(k)}=\mbox {diag}\left(\frac{\sigma^{2(k)}_{ t^{(k)}_{1}}}{n^{(k)}_{ t^{(k)}_{1}}},\frac{\sigma^{2(k)}_{t^{(k)}_{2}}} {n^{(k)}_{t^{(k)}_{2}}},\ldots,\frac{\sigma^{2(k)}_{t^{(k)}_{T^{(k)}}}}{n^{(k)}_{ t^{(k)}_{T^{(k)}}}}\right)$$ is a $$T^{(k)} \times T^{(k)}$$ diagonal matrix. Let $${\boldsymbol X}^{(k)}=\left({\boldsymbol x}^{(k)}_{t^{(k)}_{1}} \; {\boldsymbol x}^{(k)}_{t^{(k)}_{2}}\;\ldots\; {\boldsymbol x}^{(k)}_{t^{(k)}_{T^{(k)}}}\right)^{'}$$ denote the $$T^{(k)} \times p$$ covariate matrix for the $$k$$th trial. Write $$W^{(k)}=({\boldsymbol X}^{(k)} \; (E^{(k)})^T)$$ and $${\boldsymbol \beta}^*=({\boldsymbol \beta}',{\boldsymbol \gamma}')'$$. Then the vector form of (3.1) is given by $${\boldsymbol y}^{(k)} =W^{(k)} {\boldsymbol \beta}^* + ({\boldsymbol \gamma}^{(k)}_{o})^R + {\boldsymbol \epsilon}^{(k)}, \label{ymodelvec}$$ (3.3) where $${\boldsymbol y}^{(k)}=\left(y^{(k)}_{t^{(k)}_{1}},y^{(k)}_{t^{(k)}_{2}},\ldots, y^{(k)}_{t^{(k)}_{T^{(k)}}}\right)^{'}$$. Let $$D_{oy}=\left\{ \left(y^{(k)}_{t^{(k)}_{\ell}},n^{(k)}_{t^{(k)}_{\ell}}, {\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}}\right), \; \ell=1,2,\ldots,T^{(k)},k=1,2,\ldots,K \right\}$$ and $$D_{os}=\left\{ S^{2(k)}_{t^{(k)}_{\ell}}, n^{(k)}_{t^{(k)}_{\ell}}\right.$$, $$\left. \ell=1,2,\ldots,T^{(k)},k=1,2,\ldots,K \right\}$$. Then $$D_o=D_{oy}\cup D_{os}$$ denotes the observed data. Further, let $$D_c=D_o \cup \left\{\left(\gamma^{(k)}_{t^{(k)}_{\ell}}\right)^R, \ell=1,2,\ldots,T^{(k)},k=1,2,\ldots,K \right\}$$ denote the complete data. Let $${\boldsymbol \theta}=( {\boldsymbol \beta}^*,\Omega,\Sigma^*)$$ denote the collection of all model parameters, where $$\Sigma^*= \left(\sigma^{2(1)}_{t^{(1)}_{1}},\dots,\sigma^{2(1)}_{t^{(1)}_{T^{(1)}}},\dots, \sigma^{2(K)}_{t^{(K)}_{1}},\dots,\sigma^{2(K)}_{t^{(K)}_{T^{(K)}}}\right)$$. 
Using (3.2) and (3.3) and the independence of $$y^{(k)}_{t^{(k)}_{\ell}}$$ and $$S^{2(k)}_{t^{(k)}_{\ell}}$$, the complete data likelihood function can be written as \begin{align} &L( {\boldsymbol \theta} \mid D_c) \notag\\ &\quad = \prod_{k=1}^{K} \left( \vphantom{\frac{\left((n^{(k)}_{ t^{(k)}_{\ell}}-1)S^{2(k)}_{t^{(k)}_{\ell}}\right)^{\frac{n^{(k)}_{ t^{(k)}_{\ell}}-1}{2}-1}} {\left(2\sigma^{2(k)}_{t^{(k)}_{\ell}}\right)^\frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2}\Gamma\left(\frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2}\right)}} (2\pi)^{-\frac{T^{(k)}}{2}}|\Sigma^{(k)}|^{-\frac{1}{2}} \exp \left\{-\frac{\left({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*-({\boldsymbol \gamma}^{(k)}_{o})^R\right)^T(\Sigma^{(k)})^{-1}\left({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*-({\boldsymbol \gamma}^{(k)}_{o})^R\right)}{2}\right\} \right.\nonumber \\ &\qquad\left. \times \prod_{l=1}^{T^{(k)}} \left[ \frac{ \left((n^{(k)}_{ t^{(k)}_{\ell}}-1)S^{2(k)}_{t^{(k)}_{\ell}}\right)^{ \frac{n^{(k)}_{ t^{(k)}_{\ell}}-1}{2}-1}} {\left(2\sigma^{2(k)}_{t^{(k)}_{\ell}}\right)^\frac{ n^{(k)}_{t^{(k)}_{\ell}}-1}{2}\Gamma\left( \frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2}\right)} \exp\left\{-\frac{ (n^{(k)}_{t^{(k)}_{\ell}}-1)S^{2(k)}_{t^{(k)}_{\ell}}}{ 2\sigma^{2(k)}_{t^{(k)}_{\ell}}}\right\}\right] \times f(({\boldsymbol \gamma}^{(k)})^R \mid \Omega) \right), \label{completelikelihood} \end{align} (3.4) where $$f(({\boldsymbol \gamma}^{(k)})^R \mid \Omega)$$ is the probability density function corresponding to a $$N_{T+1}(\mathbf{0}, \Omega)$$ distribution. The random effects $$({\boldsymbol \gamma}^{(k)})^R$$ can be directly integrated out from (3.4) because they follow a multivariate normal distribution. In equation 3.3, the random effects $$({\boldsymbol \gamma}^{(k)}_{o})^R$$ are distributed as multivariate normal, and are independent of $${\boldsymbol \epsilon}^{(k)}$$. 
Hence, we have $${\boldsymbol y}^{(k)} \mid {\boldsymbol \theta} \sim N\big(W^{(k)} {\boldsymbol \beta}^*, \; (E^{(k)})' \Omega E^{(k)} + \Sigma^{(k)} \big)$$. Therefore, the observed data likelihood function is given by $$L( {\boldsymbol \theta} \mid D_o)=L( {\boldsymbol \theta} \mid D_{oy})L( \Sigma^* \mid D_{os}), \label{obslike}$$ (3.5) where $$L( {\boldsymbol \theta} \mid D_{oy}) = \prod_{k=1}^{K} \bigg[ \;(2\pi)^{-\frac{T^{(k)}}{2}} \big|(E^{(k)})^T \Omega E^{(k)} +\Sigma^{(k)}\big|^{-\frac{1}{2}} \exp\Big \{ -\frac{1}{2} ({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*)^T \big((E^{(k)})^T \Omega E^{(k)} +\Sigma^{(k)}\big)^{-1} ({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*) \Big \} \bigg]$$ and $$L( \Sigma^* \mid D_{os})=\prod_{k=1}^{K} \prod_{l=1}^{T^{(k)}} \left[ \frac{\left((n^{(k)}_{t^{(k)}_{\ell}}-1)S^{2(k)}_{ t^{(k)}_{\ell}}\right)^{\frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2}-1} } {(2\sigma^{2(k)}_{t^{(k)}_{\ell}})^\frac{n^{(k)}_{ t^{(k)}_{\ell}}-1}{2}\Gamma(\frac{n^{(k)}_{t^{(k)}_{\ell}}-1}{2})} \exp\left\{-\frac{(n^{(k)}_{t^{(k)}_{\ell}}-1)S^{2(k)}_{ t^{(k)}_{\ell}}}{2\sigma^{2(k)}_{t^{(k)}_{\ell}}}\right\}\right]$$. One of the major challenges for the proposed network meta-regression model is that only part of the covariance matrix $$\Omega$$ can be estimated, because some of the treatments are included in only a single trial. For example, in the LDL-C network meta-data, treatments P, PE, L, LE, and AE appeared only in trials 2, 10, and 11. This makes estimation of the variances of the random effects corresponding to these treatments impossible. To overcome this problem, we assume that (i) these $$T+1$$ treatments can be divided into $$G$$ groups; and (ii) the random effects for those treatments within the same group share the same variance.
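This variance-grouping assumption is straightforward to encode: assemble $$\Omega$$ from group-shared variances and a correlation matrix that is left unrestricted. A numpy sketch, with a hypothetical six-group assignment and illustrative variance values (not estimates from the data):

```python
import numpy as np

def build_omega(group_of_arm, tau2, rho):
    """Covariance matrix with variances shared within groups of arms.

    group_of_arm : length-(T+1) list, group label g for each arm j
    tau2         : length-G array of group variances tau^2_g
    rho          : (T+1) x (T+1) correlation matrix (left unrestricted)
    """
    sd = np.sqrt(np.asarray(tau2)[group_of_arm])  # arm-level standard deviations
    return rho * np.outer(sd, sd)

# illustrative 6-group assignment for the 11 arms
# arms:           PBO  S  A  L  R  P  E  SE AE LE PE
group_of_arm = [0,   3, 1, 3, 2, 3, 4, 5, 5, 5, 5]
tau2 = np.array([0.5, 2.0, 3.0, 1.5, 1.0, 2.5])   # made-up group variances
rho = np.eye(11)                                   # any valid correlation matrix works
Omega = build_omega(group_of_arm, tau2, rho)
```

Note that only the diagonal of $$\Omega$$ is tied to the grouping; the correlations remain free, which is the point of separating variances from correlations.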
Mathematically, we assume $${\mathscr T} = \bigcup_{g=1}^G {\mathscr G}_g$$ such that $${\mathscr G}_g \bigcap {\mathscr G}_{g'} = \emptyset$$ for all $$g \ne g'$$ and let $$\{ \tau^2_g, \; g=1,2,\dots, G\}$$ denote the $$G$$ distinct variances of the random effects. Then, we assume $$\Omega_{jj} = \tau^2_g \quad \mbox{for } j \in {\mathscr G}_g, \label{grouping}$$ (3.6) for $$g=1,2,\dots, G$$. We write the covariance matrix as $$\Omega=V^{\frac{1}{2}}\;{\boldsymbol \rho} \;V^{\frac{1}{2}}$$, where $$V=\text{diag}(\Omega_{00}, \Omega_{11},\dots,\Omega_{TT})$$ is the matrix of the diagonal elements of $$\Omega$$ and $${\boldsymbol \rho}$$ is the corresponding correlation matrix. Thus, the grouping of variances does not imply any grouping of correlations. The determination of $$G$$ and $${\mathscr G}_g$$ is not an easy task. Here, we propose a two-step grouping strategy: (i) formulating possible sets of the groups of treatments based on their clinical mechanisms of action and (ii) using Bayesian model comparison criteria to select the best set of groups. For the LDL-C network meta-data, the random effect corresponding to the placebo arm is expected to have a smaller variance than the random effects corresponding to the active treatment arms, since the placebo effect should be very similar across trials. Therefore, the placebo arm should stand alone as a group. In addition, statins (which inhibit cholesterol synthesis in the liver) have a very different mechanism of action from EZE (which inhibits the absorption of cholesterol by the small intestine). Therefore, the treatments with statins alone should not be classified into the same group as those containing EZE. According to this strategy, we first formulate eight sets of $$G$$ and $${\mathscr G}_g$$ and then determine the best set of $$G$$ and $${\mathscr G}_g$$ according to Bayesian model comparison criteria. 4. Bayesian inference 4.1.
Priors and posteriors We assume that $${\boldsymbol \beta}^*$$, $$\Omega,$$ and $$\Sigma^{(k)}$$, $$k=1,2,\dots,K$$, are independent a priori. We further assume $${\boldsymbol \beta}^* \sim N_{p+T+1}(0,c_{01} I_{p+T+1})$$. For $$\Sigma^{(k)}=\mbox {diag}\left(\frac{\sigma^{2(k)}_{t^{(k)}_{1}}}{n^{(k)}_{ t^{(k)}_{1}}},\frac{\sigma^{2(k)}_{t^{(k)}_{2}}}{n^{(k)}_{ t^{(k)}_{2}}},\ldots,\frac{\sigma^{2(k)}_{t^{(k)}_{T^{(k)}}}}{n^{(k)}_{ t^{(k)}_{T^{(k)}}}}\right)$$, we assume that $$\sigma^{2(k)}_{t^{(k)}_{\ell}} \sim \text{IG}(a_{00},b_{00})$$, $$\ell=1,2,\dots,T^{(k)}$$, $$k=1,2,\dots,K$$, that is, $$p\left(\sigma^{2(k)}_{t^{(k)}_{\ell}}|a_{00},b_{00}\right)\propto \left(\sigma^{2(k)}_{ t^{(k)}_{\ell}}\right)^{-a_{00}-1}\exp\left\{-\frac{b_{00}}{\sigma^{2(k)}_{t^{(k)}_{\ell}}}\right\}$$. Write $${\boldsymbol \rho} = (\rho_{ij})_{0\le i,j \le T}$$ and assume that $$\pi\big((\rho_{01},\rho_{12},\rho_{02},\rho_{23},\rho_{13},...,\rho_{0T} )\big) \propto 1$$ such that $${\boldsymbol \rho}$$ is positive definite. We further assume $$\Omega_{jj}=\tau^2_g \sim \text{IG}(a_{0g},b_{0g})$$, for $$j \in {\mathscr G}_g$$, $$g=1,2,\dots, G$$. Note that $$c_{01}$$, $$a_{00}$$, $$b_{00}$$, $$a_{0g}$$, and $$b_{0g}$$ are prespecified hyperparameters. In this article, we use $$c_{01}=100,000$$, $$a_{00}=0.0001$$, $$b_{00}=0.0001$$, $$a_{0g}=0.01,$$ and $$b_{0g}=0.01$$, $$g=1,2,\dots, G$$. We use the sampling algorithm based on partial correlations in Yao and others (2015) to sample $${\boldsymbol \rho}$$ to ensure that $${\boldsymbol \rho}$$ is a positive definite correlation matrix. Let $${\boldsymbol \gamma}^R_o=\big( (({\boldsymbol \gamma}^{(1)}_{o})^R)^T, (({\boldsymbol \gamma}^{(2)}_{o})^R)^T,\dots, (({\boldsymbol \gamma}^{(K)}_{o})^R)^T \big)^T$$.
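Sampling $${\boldsymbol \rho}$$ through partial correlations works because any set of partial correlations in $$(-1,1)$$ maps to a valid positive definite correlation matrix. The sketch below implements the standard D-vine recursion of Joe (2006) for that map; this is our reading of the construction underlying such samplers, not code from the paper or from Yao and others (2015):

```python
import numpy as np

def corr_from_partials(P):
    """Map partial correlations P[i, j] in (-1, 1) (upper triangle used)
    to a positive definite correlation matrix via the D-vine recursion."""
    d = P.shape[0]
    R = np.eye(d)
    for i in range(d - 1):                 # adjacent pairs: r_{i,i+1} = p_{i,i+1}
        R[i, i + 1] = R[i + 1, i] = P[i, i + 1]
    for k in range(2, d):                  # fill entries in order of increasing lag
        for i in range(d - k):
            j = i + k
            idx = list(range(i + 1, j))    # conditioning set {i+1, ..., j-1}
            R3 = R[np.ix_(idx, idx)]
            r1, r3 = R[i, idx], R[j, idx]
            a = np.linalg.solve(R3, r3)
            D = np.sqrt((1.0 - r1 @ np.linalg.solve(R3, r1)) * (1.0 - r3 @ a))
            R[i, j] = R[j, i] = r1 @ a + P[i, j] * D
    return R

rng = np.random.default_rng(0)
P = rng.uniform(-0.9, 0.9, size=(6, 6))    # arbitrary partials in (-1, 1)
R = corr_from_partials(P)
```

Because each partial correlation ranges freely over $$(-1,1)$$, a Metropolis step on the partials never proposes an invalid correlation matrix.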
From previous discussion and (3.4), the augmented posterior distribution is given by \begin{align} &\; \pi({\boldsymbol \beta}^*,\Omega,\Sigma^*,{\boldsymbol \gamma}^R_o \mid D_o) \nonumber \\ &\quad \propto \prod_{k=1}^{K} |\Sigma^{(k)}|^{-\frac{1}{2}}\exp \left\{-\frac{\left({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*-({\boldsymbol \gamma}^{(k)}_{o})^R\right)'\left(\Sigma^{(k)}\right)^{-1}\left({\boldsymbol y}^{(k)} -W^{(k)} {\boldsymbol \beta}^*-({\boldsymbol \gamma}^{(k)}_{o})^R\right)}{2}\right\} L( \Sigma^* \mid D_{os}) \nonumber \\ &\qquad\times \prod_{k=1}^{K} |(E^{(k)})' \Omega E^{(k)}|^{-\frac{1}{2}}\exp\left\{-\frac{(({\boldsymbol \gamma}^{(k)}_{o})^R)^T((E^{(k)})' \Omega E^{(k)})^{-1} ({\boldsymbol \gamma}^{(k)}_{o})^R}{2} \right\} \exp\left\{-\frac{({\boldsymbol \beta}^*)'{\boldsymbol \beta}^*}{2c_{01}}\right\} \nonumber \\ &\qquad \times \prod_{k=1}^{K} \prod_{l=1}^{T^{(k)}} (\sigma^{2(k)}_{ t^{(k)}_{\ell}})^{-a_{00}-1}\exp\left\{-\frac{b_{00}}{\sigma^{2(k)}_{ t^{(k)}_{\ell}}}\right\} \prod_{g=1}^{G} (\tau^2_g)^{-a_{0g}-1}\exp\left\{-\frac{b_{0g}}{\tau^2_g}\right\}, \label{posterior} \end{align} (4.1) where $$L( \Sigma^* \mid D_{os})$$ is defined in (3.5). 4.2. Computational development The analytical evaluation of the posterior distribution of $${\boldsymbol \theta}=({\boldsymbol \beta}^*,\Omega,\Sigma^*)$$ given in (4.1) is not available. However, we can develop a MCMC sampling algorithm to sample from (4.1). The algorithm requires sampling the following parameters in turn from their respective full conditional distributions: (i) $$[\Sigma^* \mid {\boldsymbol \beta}^*,{\boldsymbol \gamma}^R_o,\Omega,D_o]$$ and (ii) $$[{\boldsymbol \beta}^*,\Omega, {\boldsymbol \gamma}^R_o \mid \Sigma^* ,D_o]$$. 
For (ii), we use the modified collapsed Gibbs sampling technique in Chen and others (2000) via the identity $$[{\boldsymbol \beta}^*,\Omega,{\boldsymbol \gamma}^R_o \mid \Sigma^*,D_o]=[{\boldsymbol \beta}^*,\Omega \mid \Sigma^*,D_o][{\boldsymbol \gamma}^R_o \mid \Sigma^*,{\boldsymbol \beta}^*,\Omega,D_o]$$, and further $$[{\boldsymbol \beta}^*,\Omega \mid \Sigma^*,D_o]$$ is sampled in turn from the following full conditional distributions: (iia) $$[{\boldsymbol \beta}^* \mid \Sigma^*,\Omega,D_o]$$; (iib) $$[\Omega \mid \Sigma^*,{\boldsymbol \beta}^*, D_o]$$. For (iib), the sampling scheme is not straightforward. Following Section 4.1, $$\Omega$$ can be written as $$V^{\frac{1}{2}}{\boldsymbol \rho} V^{\frac{1}{2}}$$. The sampling process is thus divided into two parts, $$V \mid \Sigma^*, {\boldsymbol \beta}^*,{\boldsymbol \rho} , D_o$$ and $${\boldsymbol \rho} \mid \Sigma^*, {\boldsymbol \beta}^*, V,D_o$$. For (ii), the modified collapsed Gibbs algorithm further requires sampling from (iic) $$[{\boldsymbol \gamma}^R_o \mid \Sigma^*,{\boldsymbol \beta}^*,\Omega,D_o]$$. The full conditional distributions for (i), (iia), and (iic), which are given in Appendix A of supplementary material available at Biostatistics online, are either an inverse gamma distribution or a multivariate normal distribution. Thus, sampling from $$[\Sigma^* \mid {\boldsymbol \beta}^*,{\boldsymbol \gamma}^R_o,\Omega,D_o]$$, $$[{\boldsymbol \beta}^* \mid \Sigma^*,\Omega,D_o]$$, and $$[{\boldsymbol \gamma}^R_o \mid \Sigma^*,{\boldsymbol \beta}^*,\Omega,D_o]$$ is straightforward. The full conditional distributions $$[V \mid \Sigma^*, {\boldsymbol \beta}^*,{\boldsymbol \rho} , D_o]$$ and $$[{\boldsymbol \rho} \mid \Sigma^*, {\boldsymbol \beta}^*, V,D_o]$$ are also given in Appendix A of supplementary material available at Biostatistics online. Sampling from these two conditional distributions is not trivial. 
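While the $$V$$ and $${\boldsymbol \rho}$$ blocks need Metropolis steps, step (iia) is a standard conjugate draw: given $$\Sigma^*$$ and $$\Omega$$, the marginal likelihood $${\boldsymbol y}^{(k)} \sim N\big(W^{(k)}{\boldsymbol \beta}^*, (E^{(k)})^T\Omega E^{(k)}+\Sigma^{(k)}\big)$$ combined with the $$N(0, c_{01}I)$$ prior yields a multivariate normal full conditional. A sketch of the generic conjugate update (the paper's exact form is in its Appendix A, which we do not reproduce; the toy inputs are illustrative):

```python
import numpy as np

def draw_beta_star(trials, Omega, c01, rng):
    """One draw of beta* from its full conditional with the random
    effects integrated out: a Bayesian linear regression update.

    trials : list of (y, W, E, Sigma_k), one tuple per trial
    """
    q = trials[0][1].shape[1]
    prec = np.eye(q) / c01             # prior precision from beta* ~ N(0, c01 I)
    rhs = np.zeros(q)
    for y, W, E, Sigma_k in trials:
        V = E.T @ Omega @ E + Sigma_k  # marginal covariance of y^{(k)}
        VinvW = np.linalg.solve(V, W)
        prec += W.T @ VinvW
        rhs += VinvW.T @ y
    cov = np.linalg.inv(prec)
    mean = cov @ rhs
    return rng.multivariate_normal(mean, cov), mean

# toy check: one two-arm trial (arms 0 and 2 of 3), identity design, vague prior
rng = np.random.default_rng(3)
E = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
trials = [(np.array([1.0, -1.0]), np.eye(2), E, np.eye(2))]
draw, mean = draw_beta_star(trials, np.zeros((3, 3)), 1e6, rng)
```

With a vague prior and $$\Omega=0$$, the conditional mean collapses to the observed arm means, as expected.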
In Appendix A of supplementary material available at Biostatistics online, we develop a modified localized Metropolis algorithm to sample from each of these two conditional distributions. The previous localized Metropolis algorithms used in Chen and others (2000) and Yao and others (2015) require the first and second derivatives of the logarithms of the full conditional densities. Unfortunately, computing the first and second derivatives of the log full conditional densities for $$[V \mid \Sigma^*, {\boldsymbol \beta}^*,{\boldsymbol \rho} , D_o]$$ and $$[{\boldsymbol \rho} \mid \Sigma^*, {\boldsymbol \beta}^*, V,D_o]$$ is prohibitive. The modified localized Metropolis algorithm avoids the direct computation of the second derivatives of these log full conditional densities. 4.3. Bayesian model comparison Notice that in Section 3, we introduced the grouping approach to solve the partial estimability problem of the covariance matrix $$\Omega$$. The grouping of the $$T+1$$ variances into $$G$$ groups motivates the idea of model comparison to select the appropriate grouping. We carry out model comparison using the DIC (Spiegelhalter and others, 2002) and the LPML (Ibrahim and others, 2001). Using the previous notation, the collection of all model parameters is denoted by $${\boldsymbol \theta}=( {\boldsymbol \beta}^*,\Omega,\Sigma^*)$$. Let $$D^{(k)}_{oy}=\left\{ (y^{(k)}_{t^{(k)}_{\ell}},n^{(k)}_{t^{(k)}_{\ell}}, {\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}}), \; \ell=1,\ldots,T^{(k)}\right\}$$ denote the response variables, sample sizes, and covariates for the $$k$$th trial.
We define the trial-based deviance function $$\text{Dev}^{(k)}({\boldsymbol \theta})$$ based on the observed-data likelihood corresponding to the response variables $${\boldsymbol y}^{(k)}$$, that is, $$\text{Dev}^{(k)}({\boldsymbol \theta}) = -2\log f( {\boldsymbol \theta} \mid D^{(k)}_{oy})$$, where $$f( {\boldsymbol \theta} |D^{(k)}_{oy})$$ is the density function of a $$N(W^{(k)}{\boldsymbol \beta}^*, \; (E^{(k)})'\Omega E^{(k)}+\Sigma^{(k)})$$ distribution. The trial-based $$\text{DIC}^{(k)}$$ is given by $$\text{DIC}^{(k)}= \text{Dev}^{(k)}(\bar{{\boldsymbol \theta}}) + 2p^{(k)}_D$$, where $$\bar{{\boldsymbol \theta}}=E[{\boldsymbol \theta} \mid D_o]$$ is the posterior mean of $${\boldsymbol \theta}$$, $$p_D^{(k)}=\overline{\text{Dev}^{(k)}({\boldsymbol \theta})} - \text{Dev}^{(k)}(\bar{{\boldsymbol \theta}})$$, and $$\overline{\text{Dev}^{(k)}({\boldsymbol \theta})} = -2 E_{{\boldsymbol \theta}} [\log f( {\boldsymbol \theta} \mid D^{(k)}_{oy})]$$ is the posterior mean deviance. The overall deviance function $$\text{Dev}({\boldsymbol \theta})$$ is thus given by $$\text{Dev}({\boldsymbol \theta}) = -2\log L( {\boldsymbol \theta} \mid D_{oy}) = \sum_{k=1}^{K} \text{Dev}^{(k)}({\boldsymbol \theta})$$, and the DIC is $$\text{DIC}= \text{Dev}(\bar{{\boldsymbol \theta}}) + 2p_D = \sum_{k=1}^{K} \text{DIC}^{(k)}, \label{DIC}$$ (4.2) where $$\text{Dev}(\bar{{\boldsymbol \theta}})=\sum_{k=1}^{K}\text{Dev}^{(k)}(\bar{{\boldsymbol \theta}})$$ is a measure of goodness of fit, and $$p_D=\overline{\text{Dev}({\boldsymbol \theta})} - \text{Dev}(\bar{{\boldsymbol \theta}}) = \sum_{k=1}^{K}p_D^{(k)}$$ is the effective number of model parameters. To define the LPML, let $$D^{(-k)}_{oy}=\left\{ (y^{(j)}_{t^{(j)}_{\ell}}, n^{(j)}_{t^{(j)}_{\ell}}, {\boldsymbol x}^{(j)}_{t^{(j)}_{\ell}}), \; \ell=1,\ldots,T^{(j)},j=1,\ldots,k-1,k+1,\ldots,K\right\}$$ denote the response variables with the $$k$$th trial deleted.
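Given posterior draws, the DIC above is simple post-processing: evaluate the deviance at each draw and once at the posterior mean. A minimal sketch with toy deviance values (not from the analysis):

```python
import numpy as np

def dic_from_deviances(dev_draws, dev_at_mean):
    """DIC = Dev(theta_bar) + 2 p_D, with p_D = mean_b Dev(theta^{(b)}) - Dev(theta_bar)."""
    dev_bar = float(np.mean(dev_draws))   # posterior mean deviance
    p_D = dev_bar - dev_at_mean           # effective number of parameters
    return dev_at_mean + 2.0 * p_D, p_D

# toy numbers: deviance at the posterior mean is 99; per-draw deviances average 102
dic_value, p_D = dic_from_deviances(np.array([100.0, 104.0, 102.0]), 99.0)
```

The per-trial $$\text{DIC}^{(k)}$$ follows the same recipe with $$\text{Dev}^{(k)}$$ in place of the overall deviance.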
The conditional predictive ordinate $$\text{CPO}^{(k)}$$ describes how much the response variables of the $$k$$th trial support the model and can be written as $$\text{CPO}^{(k)} = \int f({\boldsymbol y}^{(k)}\mid {\boldsymbol \theta}) \pi({\boldsymbol \theta} \mid D^{(-k)}_{oy}, D_{os}) {\rm d}{\boldsymbol \theta} = \frac{1}{ \int \frac{1}{f({\boldsymbol y}^{(k)}\mid {\boldsymbol \theta})} \pi({\boldsymbol \theta}\mid D_o) {\rm d}{\boldsymbol \theta}}$$, where $$\pi({\boldsymbol \theta}\mid D^{(-k)}_{oy}, D_{os})$$ is the posterior distribution of $${\boldsymbol \theta}$$ with data $$(D^{(-k)}_{oy}, D_{os})$$, and $$f({\boldsymbol y}^{(k)}\mid {\boldsymbol \theta})$$ is the density function of a $$N(W^{(k)}{\boldsymbol \beta}^*, \; (E^{(k)})'\Omega E^{(k)}+\Sigma^{(k)})$$ distribution. The LPML can be used as a criterion-based measure for model selection, which is given by \begin{align} \text{LPML} = \sum_{k=1}^K \log \big(\text{CPO}^{(k)}\big). \label{lpml} \end{align} (4.3) Since closed forms of $$\text{CPO}^{(k)}$$ are not available, we approximate them using Gibbs iterates $$\{{\boldsymbol \theta}^{(b)}, b=1,...,B\}$$. From Dey and others (1995), $$\text{CPO}^{(k)}$$ can be approximated using the Monte Carlo estimate $$\widehat{\text{CPO}^{(k)}} = B\Big[\sum_{b=1}^{B}\big\{ f({\boldsymbol y}^{(k)}\mid {\boldsymbol \theta}^{(b)}) \big\}^{-1}\Big]^{-1}$$. 5. Analysis of the LDL-C network meta-data Let $$y^{(k)}_{t^{(k)}_{\ell}}$$ be the mean percent change in LDL-C from the baseline value for the $$t^{(k)}_{\ell}$$th treatment arm in the $$k$$th trial.
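The harmonic-mean CPO estimate of Section 4.3 is prone to overflow if coded naively; working with log-likelihood draws and a log-sum-exp keeps it stable. A sketch (array shapes and values are illustrative):

```python
import numpy as np

def logsumexp(a, axis=0):
    """Numerically stable log(sum(exp(a))) along an axis."""
    m = np.max(a, axis=axis)
    return m + np.log(np.sum(np.exp(a - np.expand_dims(m, axis)), axis=axis))

def cpo_lpml(loglik):
    """loglik: (B, K) array of log f(y^{(k)} | theta^{(b)}).

    Returns the log of each harmonic-mean CPO estimate and the LPML:
    CPO_k is estimated by B / sum_b exp(-loglik[b, k]).
    """
    B = loglik.shape[0]
    log_cpo = np.log(B) - logsumexp(-loglik, axis=0)
    return log_cpo, float(log_cpo.sum())

# degenerate toy case: every draw gives log-likelihood -2 for each of 3 trials,
# so each CPO estimate is exp(-2) and the LPML is -6
log_cpo, lpml = cpo_lpml(np.full((10, 3), -2.0))
```

Summing the log-CPO values over trials gives the LPML directly, matching (4.3).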
The vector of covariates is $${\boldsymbol x}^{(k)}_{t^{(k)}_{\ell}}=\big($$1, $$\text{(bl_ldlc)}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{(bl_hdlc)}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{(bl_tg)}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{age}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{white}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{male}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{BMI}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{potency_med}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{potency_high}^{(k)}_{t^{(k)}_{\ell}}$$, $$\text{duration}^{(k)}_{t^{(k)}_{\ell}}\big)^T$$, and the corresponding regression coefficient vector is $${\boldsymbol \beta}$$. We fit the meta-regression model defined in (3.1) and (3.2) to the LDL-C network meta-data using the sampling algorithm proposed in Section 4.2. Let $$\mathcal{M}_1,...,\mathcal{M}_8$$ represent eight different groupings whose definitions are given in Table 1. Among the eight groupings, model $$\mathcal{M}_1$$ is the simplest and therefore serves as the "benchmark" model. We computed the DICs defined in (4.2) and the LPMLs defined in (4.3) under models $$\mathcal{M}_1$$ - $$\mathcal{M}_8$$ and report the results in Table 1. Since some treatments in trials 2, 10, and 11 are not included in any other trials, the calculation of the LPML excluded those trials. We see from Table 1 that (i) the DIC value under the one-group model $$\mathcal{M}_1$$ (381.33) is larger than the DIC value under the four-group model $$\mathcal{M}_2$$ (377.84), which implies that separating the variances of statins, EZE, and statins with EZE from the variance of PBO is necessary for a better model fit; (ii) among the five-group models $$\mathcal{M}_3$$ - $$\mathcal{M}_5$$, separating the variance of R from the all-statin group (i.e. $$\mathcal{M}_4$$) has the smallest DIC value (371.50), which is also smaller than the DIC value under model $$\mathcal{M}_2$$; and (iii) among the six-group models $$\mathcal{M}_6$$ - $$\mathcal{M}_8$$, separating the variances of A and R from the all-statin group (i.e.
$$\mathcal{M}_8$$) has the smallest DIC value (368.33), which is also smaller than the DIC value under model $$\mathcal{M}_4$$. The DIC values under models $$\mathcal{M}_1$$, $$\mathcal{M}_2$$, $$\mathcal{M}_4$$, and $$\mathcal{M}_8$$ indicate that the smallest DIC for each fixed $$G$$ is a decreasing function of $$G$$ and the “best” DIC value is attained at $$G=6$$. The LPML values under $$\mathcal{M}_1$$ - $$\mathcal{M}_8$$ behave similarly to the DIC values, and the LPML value under model $$\mathcal{M}_8$$ is the largest ($$-$$160.50), which is consistent with model $$\mathcal{M}_8$$ having the smallest DIC value.

Table 1. Description and comparison of models $$\mathcal{M}_1$$ - $$\mathcal{M}_8$$

| Model | $$G$$ | $$\Omega$$ | Description | DIC | LPML |
| --- | --- | --- | --- | --- | --- |
| $$\mathcal{M}_1$$ | $$G=1$$ | $$\Omega_{jj}=\tau^2_1, j=0,...,10$$ | The random effects for all 11 arms have the same variance | 381.33 | $$-$$165.30 |
| $$\mathcal{M}_2$$ | $$G=4$$ | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{jj}=\tau^2_2, j=1,...,5$$; $$\Omega_{66}=\tau^2_3$$; $$\Omega_{jj}=\tau^2_4, j=7,...,10$$ | PBO alone; S/A/L/R/P; EZE alone; SE/AE/LE/PE | 377.84 | $$-$$164.68 |
| $$\mathcal{M}_3$$ | $$G=5$$ | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{11}=\tau^2_2$$; $$\Omega_{jj}=\tau^2_3, j=2,...,5$$; $$\Omega_{66}=\tau^2_4$$; $$\Omega_{jj}=\tau^2_5, j=7,...,10$$ | PBO alone; S alone; A/L/R/P; EZE alone; SE/AE/LE/PE | 373.24 | $$-$$161.93 |
| $$\mathcal{M}_4$$ | $$G=5$$ | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{44}=\tau^2_2$$; $$\Omega_{jj}=\tau^2_3, j=1,2,3,5$$; $$\Omega_{66}=\tau^2_4$$; $$\Omega_{jj}=\tau^2_5, j=7,...,10$$ | PBO alone; R alone; S/A/L/P; EZE alone; SE/AE/LE/PE | 371.50 | $$-$$161.87 |
| $$\mathcal{M}_5$$ | $$G=5$$ | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{22}=\tau^2_2$$; $$\Omega_{jj}=\tau^2_3, j=1,3,4,5$$; $$\Omega_{66}=\tau^2_4$$; $$\Omega_{jj}=\tau^2_5, j=7,...,10$$ | PBO alone; A alone; S/L/R/P; EZE alone; SE/AE/LE/PE | 377.85 | $$-$$164.90 |
| $$\mathcal{M}_6$$ | $$G=6$$ | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{11}=\tau^2_2$$; $$\Omega_{44}=\tau^2_3$$; $$\Omega_{jj}=\tau^2_4, j=2,3,5$$; $$\Omega_{66}=\tau^2_5$$; $$\Omega_{jj}=\tau^2_6, j=7,...,10$$ | PBO alone; S alone; R alone; A/L/P; EZE alone; SE/AE/LE/PE | 369.72 | $$-$$161.19 |
| $$\mathcal{M}_7$$ | $$G=6$$ | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{11}=\tau^2_2$$; $$\Omega_{22}=\tau^2_3$$; $$\Omega_{jj}=\tau^2_4, j=3,4,5$$; $$\Omega_{66}=\tau^2_5$$; $$\Omega_{jj}=\tau^2_6, j=7,...,10$$ | PBO alone; S alone; A alone; L/R/P; EZE alone; SE/AE/LE/PE | 371.84 | $$-$$161.64 |
| $$\mathcal{M}_8$$ | $$G=6$$ | $$\Omega_{00}=\tau^2_1$$; $$\Omega_{22}=\tau^2_2$$; $$\Omega_{44}=\tau^2_3$$; $$\Omega_{jj}=\tau^2_4, j=1,3,5$$; $$\Omega_{66}=\tau^2_5$$; $$\Omega_{jj}=\tau^2_6, j=7,...,10$$ | PBO alone; A alone; R alone; S/L/P; EZE alone; SE/AE/LE/PE | 368.33 | $$-$$160.50 |

Model $$\mathcal{M}_1$$ represents that all treatment arms share the same variance; model $$\mathcal{M}_2$$ has 4 groups, in which PBO alone is in the first group, all statins (S, A, L, R, P) are in the second group, EZE is in the third group, and all statins with EZE (SE, AE, LE, PE) are in the fourth group; models $$\mathcal{M}_3$$ - $$\mathcal{M}_5$$ are similar to model $$\mathcal{M}_2$$ but separate one statin (S, A, or R), each of which is involved in multiple trials, from the other statins; models $$\mathcal{M}_6$$ - $$\mathcal{M}_8$$ are also similar to model $$\mathcal{M}_2$$ but separate two statins (S and R, S and A, or A and R) from the other statins.

The trial-based $$\text{DIC}^{(k)}$$’s and $$\text{CPO}^{(k)}$$’s defined in Section 4.3 are also computed. Plots of the $$\text{DIC}^{(k)}$$’s and $$\text{log(CPO}^{(k)})$$’s under model $$\mathcal{M}_8$$ versus model $$\mathcal{M}_1$$ are shown in Figure 2. In the $$\text{DIC}^{(k)}$$ plot in Figure 2, 21 dots are red and 8 dots are blue. This implies that 21 out of 29 trials favor model $$\mathcal{M}_8$$ over model $$\mathcal{M}_1$$ in terms of individual DIC. Specifically, the values of $$\text{DIC}_1$$, $$\text{DIC}_3$$, $$\text{DIC}_6$$, $$\text{DIC}_{10}$$, and $$\text{DIC}_{21}$$ under model $$\mathcal{M}_8$$ are much smaller than those under model $$\mathcal{M}_1$$. In the $$\text{log(CPO}^{(k)})$$ plot in Figure 2, 22 out of 26 trials favor model $$\mathcal{M}_8$$. The value of $$\text{log(CPO}_{21})$$ under model $$\mathcal{M}_8$$ is substantially larger than that under model $$\mathcal{M}_1$$. These results suggest that individual trials may favor different models; however, compared with the “benchmark” model $$\mathcal{M}_1$$, more trials favor model $$\mathcal{M}_8$$. This is consistent with the results in Table 1. Fig. 2.
Individual DIC and log(CPO) plots of model $$\mathcal{M}_1$$ versus model $$\mathcal{M}_8$$. The plot of $$\text{DIC}^{(k)}$$’s is on the left and the plot of $$\text{log(CPO}^{(k)})$$’s is on the right. The filled circle points represent the trials that favor the x-axis model compared to the y-axis model and the empty triangle points are vice versa. Differences of $$\text{DIC}^{(k)}$$’s or $$\text{log(CPO}^{(k)})$$’s between two models larger than one are labeled with trial IDs.

The posterior estimates, including posterior means, posterior standard deviations (SDs), and $$95\%$$ highest posterior density (HPD) intervals of the parameters under models $$\mathcal{M}_8$$ and $$\mathcal{M}_1$$, are reported in Table 2 and Table S6 of supplementary material available at Biostatistics online. We see from these two tables that the posterior estimates for the overall treatment effects given the covariates were similar under the two models. Except for PBO, patients on all other treatments had substantial percent changes from baseline in LDL-C (i.e. the $$95\%$$ HPD intervals did not contain 0). The estimate for $$\gamma_8$$ (i.e. AE) was the lowest ($$-$$51.93), which indicates that the treatment AE had the highest percent change from baseline in LDL-C. Among the ten covariates, the baseline HDL-C regression coefficient had an HPD interval not containing zero under model $$\mathcal{M}_8$$.
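As background for the trial-level diagnostics discussed above, both $$\text{DIC}^{(k)}$$ and $$\text{log(CPO}^{(k)})$$ are simple functions of MCMC output. The sketch below is our illustration (not the authors' FORTRAN code): it assumes a matrix `loglik_draws` of per-trial log-likelihood values, one row per retained draw, and uses the standard harmonic-mean Monte Carlo estimator of CPO.

```python
import numpy as np

def trial_dic(loglik_draws, loglik_at_postmean):
    """Per-trial DIC = Dbar + pD = 2*Dbar - Dhat.
    loglik_draws: (M, K) log-likelihoods of the K trials at M draws;
    loglik_at_postmean: (K,) log-likelihoods at the posterior means."""
    dbar = -2.0 * loglik_draws.mean(axis=0)       # posterior mean deviance
    dhat = -2.0 * np.asarray(loglik_at_postmean)  # deviance at posterior mean
    return 2.0 * dbar - dhat

def trial_log_cpo(loglik_draws):
    """log CPO_k via the harmonic-mean identity
    CPO_k = [ (1/M) * sum_m exp(-loglik_mk) ]^{-1},
    computed with a log-sum-exp for numerical stability."""
    M = loglik_draws.shape[0]
    neg = -loglik_draws
    c = neg.max(axis=0)
    log_mean_inv_lik = c + np.log(np.exp(neg - c).sum(axis=0)) - np.log(M)
    return -log_mean_inv_lik

# LPML is then the sum of log CPO_k over the trials retained for the comparison.
```

Plotting `trial_dic` (or `trial_log_cpo`) under one model against another reproduces the kind of trial-by-trial comparison shown in Figure 2.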
The posterior mean (SD) and $$95\%$$ HPD interval for $$\beta_2$$ (i.e. baseHDLC) were $$-1.49$$ $$(0.74)$$ and $$(-2.93, -0.02)$$. This result indicates that there was a substantial improvement in LDL-C from baseline for higher baseline values of HDL-C. In contrast, no covariate coefficients were significant under model $$\mathcal{M}_1$$. The posterior mean (SD) of $$\tau^2_1$$ (i.e. the variance shared by the random effects for all 11 arms) under model $$\mathcal{M}_1$$ was 8.07 (1.83). Under the 6 group model $$\mathcal{M}_8$$, the posterior means (SDs) of $$\tau^2_1$$-$$\tau^2_6$$ were 2.14 (3.05), 4.90 (2.91), 14.65 (7.15), 2.43 (3.82), 2.21 (3.58), and 15.24 (8.57), respectively. Note that under $$\mathcal{M}_8$$, the posterior mean of $$\tau^2_1$$ (i.e. the variance of the random effect for PBO) is much smaller than the posterior mean of $$\tau^2_1$$ under model $$\mathcal{M}_1$$, where the random effect for PBO was assumed to share the same variance with the random effects for the other treatments. Moreover, the posterior estimates of the variances varied considerably across groups under model $$\mathcal{M}_8$$. Therefore, the one-group model $$\mathcal{M}_1$$ cannot fit the data well because the variances of the random effects differed among these treatments, which further confirms the DIC and LPML results in Table 1. The last column of Table 2 reports the posterior probability that each treatment effect was positive. Since the goal was to see a reduction in LDL-C, the posterior probability $$P(\gamma_j > 0)$$ can be viewed as an analogue of a P-value for each effect. A small $$P(\gamma_j > 0)$$ indicates strong evidence against the statement that the treatment was not effective. Table 2.
Posterior estimates of the parameters under model $$\mathcal{M}_8$$

| Variables | Parameter | Posterior Mean | Posterior SD | $$95\%$$ HPD Interval | $$P(\gamma_j > 0)$$ |
| --- | --- | --- | --- | --- | --- |
| baseLDLC | $$\beta_1$$ | $$-$$0.10 | 0.63 | ($$-$$1.30, 1.18) | |
| baseHDLC | $$\beta_2$$ | $$-$$1.49 | 0.74 | ($$-$$2.93, $$-$$0.02) | |
| baseTG | $$\beta_3$$ | 0.52 | 0.60 | ($$-$$0.63, 1.72) | |
| age | $$\beta_4$$ | $$-$$0.74 | 0.54 | ($$-$$1.79, 0.35) | |
| white | $$\beta_5$$ | $$-$$0.79 | 0.53 | ($$-$$1.87, 0.20) | |
| male | $$\beta_6$$ | $$-$$1.21 | 0.75 | ($$-$$2.64, 0.29) | |
| BMI | $$\beta_7$$ | 0.41 | 0.60 | ($$-$$0.76, 1.58) | |
| potency_med | $$\beta_8$$ | 3.40 | 3.06 | ($$-$$2.38, 9.65) | |
| potency_high | $$\beta_9$$ | $$-$$1.69 | 3.40 | ($$-$$8.19, 5.19) | |
| duration | $$\beta_{10}$$ | 1.09 | 0.63 | ($$-$$0.19, 2.29) | |
| PBO | $$\gamma_{0}$$ | 1.20 | 5.47 | ($$-$$9.69, 11.87) | 0.59 |
| S | $$\gamma_{1}$$ | $$-$$40.43 | 1.11 | ($$-$$42.63, $$-$$38.26) | 0.00 |
| A | $$\gamma_{2}$$ | $$-$$44.60 | 1.64 | ($$-$$47.81, $$-$$41.36) | 0.00 |
| L | $$\gamma_{3}$$ | $$-$$27.84 | 3.89 | ($$-$$35.35, $$-$$20.01) | $$5\times 10^{-5}$$ |
| R | $$\gamma_{4}$$ | $$-$$42.22 | 2.14 | ($$-$$46.48, $$-$$38.08) | 0.00 |
| P | $$\gamma_{5}$$ | $$-$$27.81 | 3.91 | ($$-$$35.30, $$-$$19.96) | $$5\times 10^{-5}$$ |
| E | $$\gamma_{6}$$ | $$-$$18.99 | 5.41 | ($$-$$29.49, $$-$$8.25) | $$1.35\times 10^{-3}$$ |
| SE | $$\gamma_{7}$$ | $$-$$47.48 | 2.19 | ($$-$$51.72, $$-$$43.10) | 0.00 |
| AE | $$\gamma_{8}$$ | $$-$$51.93 | 4.52 | ($$-$$60.67, $$-$$42.74) | 0.00 |
| LE | $$\gamma_{9}$$ | $$-$$47.84 | 4.55 | ($$-$$56.81, $$-$$38.85) | 0.00 |
| PE | $$\gamma_{10}$$ | $$-$$47.02 | 4.66 | ($$-$$56.07, $$-$$37.65) | 0.00 |
| Variance of random effects | $$\tau^2_1$$ | 2.14 | 3.05 | (0.07, 6.95) | |
| | $$\tau^2_2$$ | 4.90 | 2.91 | (1.06, 10.73) | |
| | $$\tau^2_3$$ | 14.65 | 7.15 | (4.62, 28.43) | |
| | $$\tau^2_4$$ | 2.43 | 3.82 | (0.07, 7.61) | |
| | $$\tau^2_5$$ | 2.21 | 3.58 | (0.07, 7.33) | |
| | $$\tau^2_6$$ | 15.24 | 8.57 | (4.19, 31.14) | |

Table S7 in Appendix B of supplementary material available at Biostatistics online presented the posterior means, posterior SDs, and $$95\%$$ HPD intervals for the pairwise differences in treatment means (the mean percent reductions in LDL-C from baseline) after adjusting for the aggregate covariates under model $$\mathcal{M}_8$$. The following observations are noteworthy. First, as expected, all statins, EZE, and statins with EZE combinations provided a substantially higher reduction in LDL-C than PBO.
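All of the posterior summaries reported here (means, SDs, $$95\%$$ HPD intervals, and $$P(\gamma_j>0)$$) can be computed directly from the retained draws of a parameter, or of a pairwise difference such as $$\gamma_8-\gamma_2$$. A minimal sketch using the shortest-interval Monte Carlo HPD method in the spirit of Chen and Shao (1999); the function names are ours:

```python
import numpy as np

def hpd_interval(draws, level=0.95):
    """Monte Carlo HPD interval: among all intervals whose endpoints are
    sorted draws and which contain `level` of the sample, take the shortest."""
    x = np.sort(np.asarray(draws))
    n = x.size
    m = int(np.floor(level * n))
    widths = x[m:] - x[: n - m]     # widths of all candidate intervals
    j = int(np.argmin(widths))
    return x[j], x[j + m]

def summarize(draws):
    """Posterior mean, SD, 95% HPD interval, and P(parameter > 0)."""
    draws = np.asarray(draws)
    lo, hi = hpd_interval(draws)
    return {"mean": draws.mean(), "sd": draws.std(ddof=1),
            "hpd": (lo, hi), "p_gt_0": (draws > 0).mean()}
```

For a pairwise comparison, `summarize` is simply applied to the draw-by-draw difference of the two arm effects.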
Second, among the 10 pairwise comparisons between the five statins, (a) S, A, and R provided a significantly higher LDL-C reduction than L and P, (b) A provided a significantly higher LDL-C reduction than S, and (c) the remaining three pairwise comparisons were not significant. Third, statins provided a significantly higher LDL-C reduction than EZE. Fourth, considering the statins with EZE combination therapies, (a) the mean LDL-C reductions for these four combination therapies were not significantly different from each other, and (b) all four combination therapies provided a significantly higher LDL-C reduction than the corresponding statin monotherapy and also than E monotherapy, except for AE, in which case the $$95\%$$ HPD interval for the difference in LDL-C reductions between AE and A was ($$-$$16.11, 1.41). Our fitted model $$\mathcal{M}_8$$ did yield a numerically higher LDL-C reduction for AE than for A. In the clinical literature, this difference in LDL-C reductions between AE and A is known to be highly significant through direct comparisons (Ballantyne and others, 2003). For comparison purposes, we also fitted model $$\mathcal{M}_8$$ without covariates. Table S8 in Appendix B of supplementary material available at Biostatistics online presented the pairwise comparisons that were not adjusted for aggregate covariates in model $$\mathcal{M}_8$$. The difference in LDL-C reductions between AE and A became highly significant. The posterior mean (SD) of the difference was $$-$$15.97 (3.62), yielding ($$-$$23.07, $$-$$8.74) and ($$-$$32.53, $$-$$0.15) as $$95\%$$ and $$99.9\%$$ HPD intervals, respectively. Likewise, the differences in LDL-C reductions between R and S and between R and A became significant under model $$\mathcal{M}_8$$ without covariates ($$95\%$$ HPD intervals [$$-$$13.04, $$-$$6.00] and [$$-$$8.35, $$-$$4.38], respectively). The opposite was the case for the LDL-C reduction difference between A and S ($$95\%$$ HPD interval [$$-$$6.78, 0.43]).
All three of these network meta-analysis results under model $$\mathcal{M}_8$$ without covariates were consistent with the direct comparison results of Insull and others (2007). In general, most estimates of the treatment differences from the network meta-analysis under model $$\mathcal{M}_8$$ were consistent with the direct comparisons (when available). Some of the treatment differences were in the right direction numerically but did not reach significance, mainly due to the inclusion of covariates. Finally, the posterior estimates of the correlation parameters under model $$\mathcal{M}_8$$ are reported in Table S9 in Appendix B of supplementary material available at Biostatistics online. From this table, we see that none of the correlation parameters is significant, since all of the 95% HPD intervals contain 0. The absolute and cumulative probabilities under model $$\mathcal{M}_8$$ of each of the 11 treatments taking each possible rank are plotted in Figure 3 and Figure S1 in Appendix B of supplementary material available at Biostatistics online. The probabilities in Figure 3 were adjusted by including the covariates, while those in Figure S1 were not. Also reported is the surface under the cumulative ranking curve (SUCRA) (Salanti and others, 2011), which summarizes the ranking probabilities for all treatments to obtain a treatment hierarchy. SUCRA can also be used to calculate the normalized mean rank: the mean rank for a treatment is $$1+(1-p)(T-1)$$, where $$p$$ is the SUCRA for the treatment and $$T$$ is the number of treatments. The order of treatments (in descending order of efficacy) suggested by Figure 3 was AE, SE, LE, PE, A, R, S, L, P, E, and PBO. This suggested ranking order was consistent with the magnitudes of the estimated LDL-C percent reductions in Table 2. The order of treatments according to the SUCRAs shown in Figure S1 was AE, SE, R, A, LE, PE, S, P, L, E, and PBO.
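The SUCRA and mean-rank computations just described can be sketched as follows (our illustration; `rank_probs[k]` is the posterior probability that a treatment takes rank $$k+1$$, best rank first):

```python
import numpy as np

def sucra(rank_probs):
    """SUCRA (Salanti and others, 2011): the average of the cumulative
    ranking probabilities over ranks 1, ..., T-1."""
    p = np.asarray(rank_probs, dtype=float)
    T = p.size
    return np.cumsum(p)[:-1].sum() / (T - 1)

def mean_rank(p, T):
    """Normalized mean rank 1 + (1 - p)(T - 1), where p is the SUCRA."""
    return 1 + (1 - p) * (T - 1)

# With T = 11 treatments, a certain-best treatment (SUCRA = 1) has mean
# rank 1, and a certain-worst treatment (SUCRA = 0) has mean rank 11.
```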
Thus, the ranking of treatments changed when the aggregate covariates were not included.

Fig. 3. Plots of ranking probabilities for all treatment arms. The dashed line represents the absolute probability and the solid line represents the cumulative probability. SUCRA is the percentage of efficacy of a treatment on the outcome, which would be one when a treatment is certain to be the best and zero when a treatment is certain to be the worst.

In all of the Bayesian computations, we used 20 000 MCMC samples, taken from every fifth iteration after a burn-in of 5000 iterations for each model, to compute all posterior estimates, including posterior means, posterior SDs, $$95\%$$ HPD intervals, DICs and LPMLs, and cumulative ranking curves. The convergence of the MCMC sampling algorithm was checked using several diagnostic procedures discussed in Chen and others (2000). The HPD intervals were computed via the Monte Carlo method developed by Chen and Shao (1999).

6. Discussion

Our methodology and analysis were motivated by the fact that head-to-head clinical trials directly comparing all the treatments of interest are not possible in clinical research due to time, resources, and other practical constraints. We have used real data from clinical trials on lipid-lowering therapies to motivate our research and illustrate the network meta-analysis methodology developed here.
Although we do not know the ground truth, the results obtained for comparisons among LDL-C lowering therapies turn out to be fairly consistent with what is known in the clinical literature, and hence are supportive of the methodology and assumptions used here. In Appendix A of supplementary material available at Biostatistics online, we use the modified localized Metropolis algorithm to generate a positive definite correlation matrix via the partial correlations (Joe, 2006). We also implement the modified localized Metropolis algorithm by reparameterizing the correlation matrix using a spherical co-ordinate system (Lu and Ades, 2009). Both reparameterizations yield similar posterior estimates, as shown, for example, in Table S10 of Appendix B of supplementary material available at Biostatistics online. There are two fundamental approaches to network meta-analysis: arm-based models and contrast-based models. The arguments for using a contrast-based model are strong when there is a natural base treatment in each study on which to build the model. In our network, we do not have a natural base treatment for some of the studies, and thus picking an arbitrary base treatment for some of the studies in the network may lead to biased assessments of the treatment effects and inappropriate inference. We believe that arm-based methods are most useful in settings in which there is no natural base treatment for some studies, as is the case with our network. Thus, although we are aware of the debate (Dias and Ades, 2016; Hong and others, 2016) and of the fact that contrast-based methods can handle confounding when there is a natural base treatment, arm-based methods may be more suitable in settings where no natural base treatment exists for many of the studies.
It is for this reason that we have decided to take an arm-based modeling approach, and we believe that our arm-based approach is useful in our particular network meta-analysis setting. As we have seen from this network meta-data, the dimension of the covariance matrix of the random effects is high and some treatments are included in very few trials, or even in a single trial. Therefore, many correlation coefficients among the random effects cannot be estimated. Unlike the variances of the random effects, the correlation coefficients are bounded; therefore, we simply assume a uniform prior for the correlation matrix $${\boldsymbol \rho}$$ in our analysis. Due to the positive-definiteness constraint on $${\boldsymbol \rho}$$, we have developed a Metropolis-within-Gibbs sampling algorithm to generate $${\boldsymbol \rho}$$ via partial correlations. To allow borrowing of strength across different pairs of correlation coefficients, a potential extension of the grouping approach for the variances of the random effects is to assume that some pairs of treatments share the same correlations. However, determining the number of groups and selecting group membership for correlation coefficients is much more challenging than for the variances of the random effects. From Table S1a, we see that the $$S^{2(k)}_{t^{(k)}_{\ell}}$$’s are generally large and the values of the $$S^{(k)}_{t^{(k)}_{\ell}}$$’s range from 10.50 to 18.04. Similar to the meta-regression model assumed for the mean response, a potential extension is to assume a log-linear meta-regression model for $$\sigma^{2(k)}_{t^{(k)}_{\ell}}$$ in (3.1) and (3.2). These extensions are currently under investigation. As a further extension of the proposed grouping approach, both the total number of groups and the group membership allocation could be treated as random in the model, with a reversible jump MCMC algorithm developed to sample $$G$$ and the membership allocation.
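The partial-correlation device works because any collection of partial correlations in $$(-1,1)$$ maps back to a positive definite correlation matrix (Joe, 2006). The vine-type recursion below is our sketch of that mapping, not the authors' FORTRAN implementation; `P[i, j]` (for $$i<j$$) holds the partial correlation of variables $$i$$ and $$j$$ given $$0,\ldots,i-1$$:

```python
import numpy as np

def partials_to_corr(P):
    """Map partial correlations P[i, j] (i < j, each in (-1, 1)) to a
    positive definite correlation matrix via the vine recursion (Joe, 2006)."""
    d = P.shape[0]
    R = np.eye(d)
    for i in range(d - 1):
        for j in range(i + 1, d):
            rho = P[i, j]
            # peel off the conditioning variables i-1, ..., 0
            for k in range(i - 1, -1, -1):
                rho = (rho * np.sqrt((1 - P[k, i] ** 2) * (1 - P[k, j] ** 2))
                       + P[k, i] * P[k, j])
            R[i, j] = R[j, i] = rho
    return R
```

In a Metropolis-within-Gibbs step, one can therefore propose an unconstrained update of a single `P[i, j]` inside $$(-1,1)$$ and rebuild the correlation matrix, so positive definiteness never has to be checked explicitly.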
This alternative approach may provide an empirical justification of the grouping based on clinical relevance, which is certainly another promising future research project.

7. Software

Computer code was written for the FORTRAN 95 compiler. We built an R package which includes the LDL-C network meta-data and serves as an interface calling the FORTRAN code within R. The R package with the built-in data used in this article is available at https://github.com/epochholly/Network-Meta-Analysis-Bayesian-Inference-Multivariate-Random-Effects.

Supplementary material

Supplementary material is available at http://biostatistics.oxfordjournals.org.

Acknowledgments

We would like to thank the Editor, an Associate Editor, and two referees for their very helpful comments and suggestions, which have led to a much improved version of the article. Conflict of Interest: None declared.

Funding

National Institutes of Health (GM70335 and P01CA142538 to M.-H.C. and J.G.I.). Intramural Research Program of National Institutes of Health and National Cancer Institute (S.K.).

References

Adedinsewo D., Taka N., Agasthi P., Sachdeva R., Rust G. and Onwuanyi A. (2016). Prevalence and factors associated with statin use among a nationally representative sample of US adults: National Health and Nutrition Examination Survey, 2011–2012. Clinical Cardiology 9, 491–496.

Ballantyne C. M., Houri J., Notarbartolo A., Melani L., Lipka L. J., Suresh R., Sun S., LeBeaut A. P., Sager P. T. and Veltri E. P. (2003). Effect of ezetimibe coadministered with atorvastatin in 628 patients with primary hypercholesterolemia. Circulation 19, 2409–2415.

Chan A. W. and Altman D. G. (2005). Epidemiology and reporting of randomised trials published in PubMed journals. The Lancet 9465, 1159–1162.

Chen M.-H. and Shao Q. M. (1999). Monte Carlo estimation of Bayesian credible and HPD intervals. Journal of Computational and Graphical Statistics 1, 69–92.

Chen M.-H., Shao Q. M. and Ibrahim J. G. (2000). Monte Carlo Methods in Bayesian Computation. New York: Springer.

Dey D. K., Kuo L. and Sahu S. K. (1995). A Bayesian predictive approach to determining the number of components in a mixture distribution. Statistics and Computing 4, 297–305.

Dias S. and Ades A. E. (2016). Absolute or relative effects? Arm-based synthesis of trial data. Research Synthesis Methods 7, 23–28.

Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults. (2001). Executive summary of the Third Report of the National Cholesterol Education Program (NCEP) expert panel on detection, evaluation, and treatment of high blood cholesterol in adults (Adult Treatment Panel III). Journal of American Medical Association 19, 2486–2497.

Gwon Y., Mo M., Chen M.-H., Li J., Xia H. A. and Ibrahim J. G. (2016). Network meta-regression for ordinal outcomes: applications in comparing Crohn's disease treatments. Technical Report 16–28, Department of Statistics, University of Connecticut.

Hong H., Carlin B. P., Shamliyan T. A., Wyman J. F., Ramakrishnan R., Sainfort F. and Kane R. L. (2013). Comparing Bayesian and frequentist approaches for multiple outcome mixed treatment comparisons. Medical Decision Making 5, 702–714.

Hong H., Chu H., Zhang J. and Carlin B. P. (2016). A Bayesian missing data framework for generalized multiple outcome mixed treatment comparisons. Research Synthesis Methods 7, 6–22.

Hong H., Price K. L., Fu H. and Carlin B. P. (2017). Bayesian network meta-analysis for multiple endpoints. In: Gatsonis C. and Morton C. S. (editors), Methods in Comparative Effectiveness Research, Chapter 12. Taylor & Francis Group, pp. 385–407.

Higgins J. and Whitehead A. (1996). Borrowing strength from external trials in a meta-analysis. Statistics in Medicine 24, 2733–2749.

Ibrahim J. G., Chen M.-H. and Sinha D. (2001). Bayesian Survival Analysis. New York: Springer.

Insull W., Ghali J. K., Hassman D. R., Ycas J. W., Gandhi S. K. and Miller E. (2007). Achieving low-density lipoprotein cholesterol goals in high-risk patients in managed care: comparison of rosuvastatin, atorvastatin, and simvastatin in the SOLAR trial. Mayo Clinic Proceedings 5, 543–550.

Joe H. (2006). Generating random correlation matrices based on partial correlations. Journal of Multivariate Analysis 10, 2177–2189.

Lu G. and Ades A. E. (2004). Combination of direct and indirect evidence in mixed treatment comparisons. Statistics in Medicine 20, 3105–3124.

Lu G. and Ades A. E. (2006). Assessing evidence inconsistency in mixed treatment comparisons. Journal of the American Statistical Association 474, 447–459.

Lu G. and Ades A. E. (2009). Modeling between-trial variance structure in mixed treatment comparisons. Biostatistics 10, 792–805.

Lumley T. (2002). Network meta-analysis for indirect treatment comparisons. Statistics in Medicine 16, 2313–2324.

Mills E. J., Kanters S., Thorlund K., Chaimani A., Veroniki A. A. and Ioannidis J. P. (2013). The effects of excluding treatments from network meta-analyses: survey. British Medical Journal 347, f5195.

Morrone D., Weintraub W. S., Toth P. P., Hanson M. E., Lowe R. S., Lin J., Shah A. K. and Tershakovec A. M. (2012). Lipid-altering efficacy of ezetimibe plus statin and statin monotherapy and identification of factors associated with treatment response: a pooled analysis of over 21,000 subjects from 27 clinical trials. Atherosclerosis 223, 251–261.

NCHStats. (2013). A Blog of the National Center for Health Statistics. https://nchstats.com/2013/11/14/statistics-on-statin-use/.

Ross G. (2015). Too Few Americans Take Statins, CDC Study Reveals. http://acsh.org/news/2015/12/04/cdc-study-reveals-that-too-few-americans-are-on-statins.

Salanti G., Ades A. E. and Ioannidis J. P. (2011). Graphical methods and numerical summaries for presenting results from multiple-treatment meta-analysis: an overview and tutorial. Journal of Clinical Epidemiology 2, 163–171.

Spiegelhalter D. J., Best N. G., Carlin B. P. and Van Der Linde A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 4, 583–639.

Yao H., Chen M.-H. and Qiu C. (2011). Bayesian modeling and inference for meta data with applications in efficacy evaluation of an allergic rhinitis drug. Journal of Biopharmaceutical Statistics 5, 992–1005.

Yao H., Kim S., Chen M.-H., Ibrahim J. G., Shah A. K. and Lin J. (2015). Bayesian inference for multivariate meta-regression with partially observed within-study sample covariance matrix. Journal of the American Statistical Association 510, 528–544.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices)
### Journal
Biostatistics, Oxford University Press
Published: Apr 18, 2018
|
|
# Is it useful to use sparse regression (e.g. Lasso) when the number of observations is significantly larger than the number of covariates?
by Ijies Last Updated May 16, 2019 04:19 AM
I'm learning about penalized/sparse regression, and I noticed that the examples used for penalized/sparse regression, e.g. the lasso, are usually cases where the number of observations is significantly smaller than the number of covariates/independent variables/predictors, $$n \ll p$$.
I was wondering whether it would still be useful to apply such methods to datasets where we also have a large number of predictors $$p$$, but where the number of observations $$n$$ is significantly larger, $$n \gg p$$.
For reference, we have a dataset that has $$n = 19,051$$ observations and each observation has $$p = 336$$ predictors.
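A quick empirical check (my own sketch, not part of the question): simulate data at roughly the stated dimensions with a sparse true coefficient vector, then compare held-out $R^2$ for OLS and a cross-validated lasso. Everything below — the sparsity level `k`, the noise scale, and the use of scikit-learn's `LassoCV` — is an illustrative assumption, not something given in the question.

```python
# Sketch (simulated data, scikit-learn): even with n >> p, a lasso can help
# when only a few of the p predictors truly matter. Dimensions mimic the
# question (n = 19,051, p = 336); the sparse true model (k = 10 active
# predictors, unit noise) is an assumption for illustration.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p, k = 19_051, 336, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = rng.uniform(1.0, 2.0, size=k)       # only k nonzero coefficients
y = X @ beta + rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ols = LinearRegression().fit(X_tr, y_tr)
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)  # penalty chosen by CV

print("OLS   test R^2:", round(ols.score(X_te, y_te), 4))
print("lasso test R^2:", round(lasso.score(X_te, y_te), 4))
print("nonzero lasso coefficients:", int(np.sum(lasso.coef_ != 0)))
```

With $n \gg p$ both fits predict well, since OLS is already stable; the lasso's main payoff in this regime is variable selection — it zeroes out most of the irrelevant columns. If all 336 predictors genuinely contribute, the penalty mainly adds bias, and plain (or ridge) regression may do as well or better.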
|
|
# Relation between entire function of exponential type and exponential polynomials
Is it true in general that the theory of entire functions of exponential type and that of exponential polynomials (with purely imaginary exponents) are analogous?
Can one derive results about entire functions of exponential type by using results about exponential polynomials?
For example, I am wondering whether it is possible to derive sampling theorems about band-limited functions by studying properties of exponential polynomials.
What about the distribution of zeros?
Exponential polynomials are very special among entire functions of exponential type. Their zeros accumulate along finitely many directions: there are finitely many numbers $\{\theta_1,\ldots,\theta_n\}$ such that the arguments of the zeros accumulate only at those $\theta_j$. General entire functions of exponential type can have the arguments of their zeros distributed arbitrarily, for example, uniformly.
Further, for exponential polynomials $\log M(r)$, where $M(r)=\max_{|z|\leq r}|f(z)|$, behaves like $(c+o(1))r$, while for a general entire function of exponential type the limit of $(\log M(r))/r$ might not exist.
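As a quick numeric illustration of the growth claim (my addition, not part of the answer): for the exponential polynomial $f(z)=\sin z$, which has exponential type $1$, the ratio $\log M(r)/r$ approaches $1$ as $r$ grows.

```python
# Illustration: for f(z) = sin(z), log M(r) / r -> 1 as r -> infinity,
# matching the (c + o(1)) r behaviour of exponential polynomials.
import numpy as np

def logM_over_r(r, n=20000):
    # By the maximum principle, M(r) is attained on the circle |z| = r.
    t = np.linspace(0.0, 2.0 * np.pi, n)
    z = r * np.exp(1j * t)
    return np.log(np.max(np.abs(np.sin(z)))) / r

for r in (5, 20, 80):
    print(r, logM_over_r(r))   # ratios increase toward 1
```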
|
|
# Factoring High School Level Olympiad Problem
Factor $x^2 - 3xy + 2y^2 + x - 8y - 6$
Attempt at a solution:
I have factored these and don't know how to continue...
$x^2-3xy +2y^2 = (x - y) (x-2y)$
$x^2 + x -6 = (x + 3) (x - 2)$
$2y^2 - 8y + 6 = 2 (y - 3)(y - 1)$
• are you sure that there isn't a typo? Feb 18, 2018 at 16:26
• Solve for $x$ or $y$ Feb 18, 2018 at 16:32
• It only says factor Feb 18, 2018 at 16:40
Look for a factorization of the form: $$x^2-3xy+2y^2+x-8y-6=(x+Ay+B)(x+Cy+D)$$ Plug in $y=0$: $$x^2+x-6=(x+B)(x+D) \Rightarrow B=3;\ D=-2.$$ Plug in $x=0$: $$2y^2-8y-6=(Ay+3)(Cy-2) \Rightarrow \begin{cases} AC=2 \\ -2A+3C=-8 \end{cases}$$ Can you finish?
Appendix: Note that the found parameters will not be suitable. So this method may not always work.
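A quick symbolic check (sympy; my addition, not part of the original answer) confirms the appendix: matching every coefficient, including the $xy$ term, yields an inconsistent system, so no factorization $(x+Ay+3)(x+Cy-2)$ exists.

```python
# Symbolic check of the appendix's caveat: with B = 3, D = -2 fixed by the
# y = 0 substitution, matching *all* remaining coefficients (including the
# xy term) gives an inconsistent system in A and C.
import sympy as sp

x, y, A, C = sp.symbols('x y A C')
target = x**2 - 3*x*y + 2*y**2 + x - 8*y - 6
candidate = sp.expand((x + A*y + 3) * (x + C*y - 2))

# Every coefficient of the difference must vanish for a valid factorization.
conditions = sp.Poly(candidate - target, x, y).coeffs()
print(sp.solve(conditions, [A, C]))   # [] -> no such factorization exists
```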
• Is this a formula or something? Feb 18, 2018 at 16:58
• I would call it a method... Feb 18, 2018 at 17:02
• @orion, yes, you are right, all but $xy$ term is not matching. So now the OP asker can turn to Dr.SonnhardGraubner's method of expressing as a quadratic equation and solving by discriminant. Thank you for pointing to the issue. I will leave my answer as an example of failed method. Feb 18, 2018 at 17:52
You can write your equation in the form $$y^2-y\left(4+\frac{3}{2}x\right)+\frac{x^2+x-6}{2}=0$$ and solve this for $y$; you will get $$\left(y-\frac{3}{4}x-2-\frac{1}{4}\sqrt{x^2+40x+112}\right)\left(y-\frac{3}{4}x-2+\frac{1}{4}\sqrt{x^2+40x+112}\right)=0$$
• What equation do you mean? Feb 18, 2018 at 16:41
• $$x^2-3xy+2y^2+x-8y-6=0$$ Feb 18, 2018 at 16:42
• But there is no equation in the question. Feb 18, 2018 at 16:43
• but i made this to an equation Feb 18, 2018 at 16:44
• and by the way there are three equations Feb 18, 2018 at 16:45
Since the polynomial is of degree $2$, we can use the well established "tool-set" for the study of Quadrics or Conic Sections.
So $$Q(x,y) = x^{2} - 3xy + 2y^{2} + x - 8y - 6 = \begin{pmatrix} x & y & 1 \end{pmatrix} \begin{pmatrix} 1 & -3/2 & 1/2 \\ -3/2 & 2 & -4 \\ 1/2 & -4 & -6 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}.$$ But the determinant of the matrix defining the conic, $${\bf A}_{Q} = \begin{pmatrix} 1 & -3/2 & 1/2 \\ -3/2 & 2 & -4 \\ 1/2 & -4 & -6 \end{pmatrix},$$ is not null (it equals $-9$; note that the lower-right entry is the constant term $-6$), which means that the conic is not degenerate and thus $Q(x,y)$ cannot be factored, not even over the complex field.
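The determinant claim is easy to check numerically; here is a small sketch (my addition) using numpy.

```python
# Numeric check of the nondegeneracy claim. The lower-right entry of A_Q
# is the constant term of Q, i.e. -6.
import numpy as np

A_Q = np.array([
    [ 1.0, -1.5,  0.5],
    [-1.5,  2.0, -4.0],
    [ 0.5, -4.0, -6.0],
])
print(np.linalg.det(A_Q))   # -9 (up to rounding): nonzero, so nondegenerate
```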
|
|
mathTeX (Click for: mathTeX homepage)
email: john@forkosh.com
$\parstyle\usepackage{color} \large\color{blue}\begin{center}\today\\\time\end{center}$
Contents

(1) Preliminaries: Character Set; Symbols and Commands; Command Parameters
(2) Basic Constructions: Sub/Superscripts & Limits; Delimiters; Accents; Functions; Matrices
(3) Additional Refinements: Spacing; Text; Font Sizes
(4) Examples: 1 2 3 4 5 6 7 8 9
(5) Symbol Sets: Math, roman, bbold, etc; Greek; Math Symbols
(6) References: Online resources; LaTeX books
LaTeX Practice Box

"If you're not making mistakes, then you're not doing anything." –– John Wooden

Enter any LaTeX math markup you like in the top box. Then press Submit to see it rendered. I've started you out with a little example already in the box. Or you can Click any example image in this tutorial to place its corresponding markup in the box.
First enter your own LaTeX expression, or Click any example... \Large x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}
Now click "Submit" to see your expression rendered below...
You should see $\small x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ if you submit the sample expression already in the box.
Or try clicking this image $\small \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{pmatrix}$ and see what happens.
"Computers are like Old Testament gods: lots of rules and no mercy."
–– Joseph Campbell, The Power of Myth (Doubleday 1988, page 18)
(1) Preliminaries
"Readers should not be discouraged if they find that they do not have the
prerequisites to read the prerequisites." –– Paul Halmos, in Measure Theory, 1950
[¶1] LaTeX is a word processing program (purists will tell you it's a typesetting system, but for our purposes that's a distinction without a difference) that lets you easily write documents containing complicated math. It's the math fragment of LaTeX's markup syntax that we'll briefly review here. Only the most basic features are discussed. And to keep the discussion short, examples sometimes illustrate features not explicitly explained in the text. Use the practice box above to try out anything you think I'm trying to tell you. The references below provide more complete information.
[¶2] This tutorial accompanies mathTeX (it also accompanies http://www.forkosh.com/mimetex.html for users who can't install LaTeX on their servers), a program that uses LaTeX to facilitate writing math on the web. You write familiar html markup to format your text, and you write <img> tags containing LaTeX markup in query strings to format your math. For example, <img src="/cgi-bin/mathtex.cgi?\sqrt{a^2+b^2}"> (align="absmiddle" typically gives the best vertical alignment) displays $\small\sqrt{a^2+b^2}$ wherever you put that tag.
LaTeX character set...
[¶3] Many ordinary keyboard characters are rendered just like you'd expect. For example, typing a...z,0...9,+-*/=()[] in LaTeX (or in a mathTeX query string) just renders $\small a...z,0...9,+-*/=()[]$ . But some characters have special LaTeX significance. For example, underscore b_i introduces a subscript, rendering $b_i$ , and caret a^n introduces a superscript, rendering $a^n$ . These and some other special LaTeX characters are discussed in more detail below.
[¶4] In particular, the backslash character \ always introduces special LaTeX symbols like \alpha,\beta,\gamma...\omega (which renders $\small\alpha,\beta,\gamma...\omega$ ), or introduces special LaTeX commands like \frac 12 \sqrt \pi (which renders $\small\frac12\sqrt\pi$ ). When you want to see a displayed backslash, type \backslash .
[¶5] LaTeX special symbols and commands always begin with a backslash \, almost always followed by one or more alphabetic characters a-z,A-Z. (There's one occasional exception to this rule: a few LaTeX symbols consist of a backslash followed by a single non-alphabetic character. Among other keyboard characters, $, %, & have special LaTeX significance, so to display them you have to type \$, \%, \& instead, which renders $, %, &.) The symbol or command is always terminated by a space or by any non-alphabetic character. For example, \frac2x can be typed without a space between the \frac and the 2, correctly rendering $\small\frac2x$ . And \frac\pi2 correctly renders $\small\frac\pi2$ . But \fracx2 is incorrectly interpreted as the non-existent command \fracx followed by 2, so a space is mandatory and you must type \frac x2 to render $\small\frac x2$ .

Command Parameters...

[¶6] As illustrated above, \frac takes two arguments, and \sqrt takes one (actually, \sqrt accepts an optional argument in the form, e.g., \sqrt[3]\pi, which renders the cube root of pi; several LaTeX commands accept one or more optional arguments, but that's beyond the scope of this tutorial). Similarly, subscripts b_i and superscripts a^n each take one argument. Some commands take no arguments. When needed, each LaTeX argument is always the next single character following the command. But any expression enclosed in curly braces is treated as a single character, and the curly braces are not displayed. For example, \frac1{\sqrt{a^2+b^2}} renders $\small\frac1{\sqrt{a^2+b^2}}$ . Curly braces must always appear in properly balanced pairs. Unnecessary pairs of curly braces usually do no harm, so use them when in doubt. For example, \sqrt{\frac12} renders $\small\sqrt{\frac12}$ . To display curly braces, type \lbrace...\rbrace which renders $\lbrace...\rbrace$ .
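Putting ¶5 and ¶6 together, here are a few practice-box lines (my own illustrative markup, not from the tutorial) showing how arguments get grouped:

```latex
\frac12           % two single-character arguments: renders one half
\frac{x+1}{2}     % braces group a multi-character numerator
\frac\pi2         % \pi is terminated by the digit 2, so no space is needed
\sqrt{\frac ab}   % an argument may itself be a command with its own arguments
```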
(2) Basic Constructions

"Where are we going? And why are we in this handbasket?"

Sub/Superscripts and Limits...

[¶1] Besides our preceding examples, note that symbols may be simultaneously subscripted and superscripted. For example, A_{u,v}^k renders $A_{u,v}^k$ . Also note that sub/superscripts may themselves be sub/superscripted to any level (though you may want to reconsider your notation after two levels). For example, A_{u_i,v_j}^{k_m^n} renders $A_{u_i,v_j}^{k_m^n}$ .

[¶2] Limits (lower and upper bounds) are written just like subscripts and superscripts. For example, \sum_{i=1}^n i = \frac{n(n+1)}2 may render $\small\sum\nolimits_{i=1}^n i = \frac{n(n+1)}2$ . To make sure limits are displayed directly below and above the operator, begin your expression with \displaystyle (a lengthier discussion involving the several LaTeX directives \displaystyle, \textstyle, \limits and \nolimits is beyond the scope of this tutorial). For example, \displaystyle\sum_{i=1}^n i = \frac{n(n+1)}2 renders $\small\displaystyle\sum_{i=1}^ni=\frac{n(n+1)}2$ .

Delimiters...

[¶3] Enclosing terms in ordinary parentheses doesn't always look good. For example, (\frac1{\sqrt2}x+y) (\frac1{\sqrt2}x-y) renders $\small(\frac1{\sqrt2}x+y)(\frac1{\sqrt2}x-y)$ .

[¶4] Instead, the LaTeX commands \left( ... \right) automatically size parentheses, and other delimiters, to match the enclosed term. For example, \left(\frac1{\sqrt2}x+y\right) \left(\frac1{\sqrt2}x-y\right) renders $\small\left(\frac1{\sqrt2}x+y\right)\left(\frac1{\sqrt2}x-y\right)$ .

[¶5] Ordinary parentheses (...) don't have to be balanced, but \left(...\right) must always appear in properly balanced pairs.
And any number of \middle| commands may appear between a \left(...\right) pair, to size contained delimiters similarly (see the third and fourth examples below). Besides \left(...\middle|...\right), the following delimiters can also be automatically sized:

    \left( ... \right)              e.g.  \left( \frac1{1-x^2} \right)^2
    \left[ ... \right]              e.g.  \left[ \frac1{\sqrt2}x-y \right]^2
    \left\{ ... \right\}            e.g.  \left\{ x \in \mathbb{R} \middle| x \geq \frac12 \right\}
    \left\langle ... \right\rangle  e.g.  \left\langle \varphi \middle| \hat{H} \middle| \phi \right\rangle
    \left| ... \right|              e.g.  \left| \begin{matrix} a_1 & a_2 \\ b_1 & b_2 \end{matrix} \right|
    \left\| ... \right\|            e.g.  \left\| x^2-y^2 \right\|
    \left\{ ... \right.             e.g.  y = \left\{ {\text{this}\atop \text{that}} \right.
    \left. ... \right\}             e.g.  \left. {\text{this}\atop \text{that}} \right\} = y

Note the last two examples. Any left delimiter can be balanced with a matching \right. and any right delimiter can be balanced with a preceding \left. The . delimiter balances its mate but displays nothing, which lets you format math expressions like the last two illustrated above.
The \text{ } and { \atop } commands from those examples are discussed below.

Accents...

[¶6] Use the LaTeX "math accent" \vec{ } to write vectors. For example, \vec{v} renders $\vec{v}$ . Some accents have a wide counterpart, used when its argument contains more than a single character. Among the accents LaTeX recognizes are the following:

    \vec{ }     e.g.  \vec{x}
    \hat{ }     e.g.  \widehat{ABC}
    \tilde{ }   e.g.  \widetilde{ABC}
    \dot{ }     e.g.  \dot{\omega}
    \ddot{ }    e.g.  \ddot{\omega}

Note that accents can be composed, e.g., \dot{\vec{x}} renders $\dot{\vec{x}}$ .

[¶7] While not strictly accents, the following one-argument commands are also very useful. They have no wide counterparts: \not{ } accepts only a single-character argument, whereas \cancel{ }, \sout{ }, \overline{ } and \underline{ } all accept one or more characters.

    \not{ }        e.g.  a \not= b,  a \not\in \mathbb{Q}
    \cancel{ }     e.g.  \cancel{ABC}
    \sout{ }       e.g.  \sout{$ABC$}
    \overline{ }   e.g.  \overline{ABC}
    \underline{ }  e.g.  \underline{ABC}

[¶8] While also not accents, the following two-argument commands are similarly useful. Note that \overbrace{ }^{ } requires a caret ^ between its two arguments, whereas \overset{ }{ } doesn't. Similarly, \underbrace{ }_{ } requires an underscore _, whereas \underset{ }{ } doesn't.

    \overbrace{ }^{ }    e.g.  \overbrace{a,...,a}^{\text{k a's}}
    \underbrace{ }_{ }   e.g.  \underbrace{b,...,b}_{\text{l b's}}
    \overset{ }{ }       e.g.  a \overset{\text{def}}{=} b
    \underset{ }{ }      e.g.  c \underset{\text{def}}{=} d

You can usefully combine \overbrace{ }^{ } and \underbrace{ }_{ } so that, for example, \underbrace{\overbrace{a...a}^{\text{k a's}}, \overbrace{b...b}^{\text{l b's}}}_{\text{k+l elements}} renders $\underbrace{\overbrace{a...a}^{\text{k a's}}, \overbrace{b...b}^{\text{l b's}}}_{\text{k+l elements}}$ .

Function names...

[¶9] Writing sin^2\theta+cos^2\theta renders $sin^2\theta+cos^2\theta$ , whereas \sin^2\theta+\cos^2\theta renders $\sin^2\theta+\cos^2\theta$ . Several dozen common function names are recognized by LaTeX as backslashed commands, and rendered in a roman font that's more typical mathematical notation. And some of these function name commands like \lim render subscripts more typically, too. For example, \lim_{x\to\infty}\frac1x=0 renders $\lim_{x\to\infty}\frac1x=0$ . The most common LaTeX function name commands are listed below. Those that treat subscripts like \lim are followed by an underscore, e.g., \lim_.

    \arccos \arcsin \arctan \arg \cos \cosh \cot \coth \csc \deg \det_ \dim
    \exp \gcd_ \hom \inf_ \ker \lg \lim_ \liminf_ \limsup_ \ln \log \max_
    \min_ \Pr_ \sec \sin \sinh \sup_ \tan \tanh

Matrices...

[¶10] The easiest way to write a matrix in LaTeX is \begin{matrix} a&b\\c&d \end{matrix} which renders $\begin{matrix} a & b \\ c & d \end{matrix}$ . Surround it with \left( ...
\right) to obtain $\left(\begin{matrix} a & b \\ c & d \end{matrix}\right)$ . Alternatively, \begin{pmatrix} ... \end{pmatrix} automatically surrounds the rendered matrix with parentheses.

[¶11] Matrices are written row-wise, with \\ at the end of each row. Within a row, columns are separated by &. A general m x n matrix is therefore written in the form

    \begin{matrix}
    a_{1,1} & a_{1,2} & ... & a_{1,n} \\
    a_{2,1} & a_{2,2} & ... & a_{2,n} \\
    ...     & ...     & ... & ...     \\
    a_{m,1} & a_{m,2} & ... & a_{m,n}
    \end{matrix}

[¶12] Each component a_{i,j} can contain any valid LaTeX markup whatsoever, even another matrix. If you write & &, the corresponding component is left blank. For example, \begin{pmatrix} a&\\&b \end{pmatrix} renders the diagonal 2 x 2 matrix $\left(\begin{matrix} a & \\ & b \end{matrix}\right)$ . You can terminate a row with \\ anytime, so the preceding a&\\&b could have been written a\\&b. And you can write (lower left) triangular matrices without strings of empty & & &'s.

[¶13] More general than \begin{matrix}......\end{matrix} illustrated above is the alternative \begin{array}{lcr}......\end{array} which requires that extra {lcr}-style argument specifying how to center elements in each column. The three one-letter choices stand for left or center or right, and you supply one letter for each column in your array. Thus, \begin{array}{lcr}......\end{array} specifies a three-column array, any number of rows, with elements in the first column left-justified, elements in the second centered, and elements in the third column right-justified. Note that \begin{matrix}......\end{matrix} is equivalent to \begin{array}{ccc...}......\end{array}, with enough c's to accommodate your array.

[¶14] Suppose you want to display four simultaneous equations. That can be done with a four-row, three-column {rcl} array whose second column always contains an = sign, first column the left-hand side of an equation, and third column the right-hand side. For example, \begin{array}{rcl} a + b + c + d & = & 4 \\ 2a - d & = & b + c \\ 2b & = & c + d - a \\ c - d & = & b - 2a \end{array} renders the four equations aligned on their = signs.

[¶15] Or, for another example, to illustrate some simple algebra you can leave the first column empty after the first row, rendering $\begin{array}{rcl}(a+b)^3&=&(a+b)^2(a+b)\\ &=&(a^2+2ab+b^2)(a+b)\\ &=&(a^3+2a^2b+ab^2)+(a^2b+2ab^2+b^3)\\ &=&a^3+3a^2b+3ab^2+b^3\end{array}$

[¶16] LaTeX provides various additional methods (for example, \begin{align}......\end{align} more readily aligns multi-line equations than \begin{array}{rcl}......\end{array} illustrated above) to display array-type data that you may find useful. But a lengthy discussion is beyond the scope of this tutorial.

(3) Additional Refinements

"Make every day your masterpiece." –– John Wooden

Spacing...

[¶1] LaTeX does its own spacing, and spaces that you type yourself are never displayed. For example, (abcd), (a bcd), (a b cd) and (a b c d) all render the same $(a b c d)$ . To explicitly embed displayed space, LaTeX provides several commands including \, \: \; \quad \qquad, and also a backslashed blank (i.e., a \ followed by a blank). These commands take no arguments. For example, (a\,b\:c\;d\ e\quad f\qquad g) renders $(a\,b\:c\;d\ e\quad f\qquad g)$ .
[¶2] For arbitrary embedded space, the command \hspace{ } takes a single numerical argument that specifies the width of the embedded space (in "points"). For example, (ab\hspace{9pt}cd\hspace{25pt}ef) renders $(ab\hspace{9pt}cd\hspace{25pt}ef)$ .

Text...

[¶3] Typing abcdef renders $abcdef$ in LaTeX's default math font. To intersperse explanatory text, \text{abc def} or \mbox{abc def} renders $\small\mbox{abc def}$ in non-mathematical roman font. Note that embedded spaces are respected. For example, y=\left\{ {x/2\mbox{ if $x$ even} \atop (x+1)/2\text{ if odd}} \right. renders $y=\left\{ {x/2\mbox{ if x even} \atop (x+1)/2\text{ if odd}} \right.$ .

Font Sizes...

[¶4] LaTeX commands that select font size are \tiny, \small, \normalsize (the usual default), \large, \Large, \LARGE, \huge and \Huge. But these commands are usually not permitted within math markup. MathTeX permits these font size commands by moving them outside the math markup. For example, each of the following renders \sqrt{a^2+b^2} at the corresponding size:

    \tiny\sqrt{a^2+b^2}
    \small\sqrt{a^2+b^2}
    \normalsize\sqrt{a^2+b^2}
    \large\sqrt{a^2+b^2}
    \Large\sqrt{a^2+b^2}
    \LARGE\sqrt{a^2+b^2}
    \huge\sqrt{a^2+b^2}
    \Huge\sqrt{a^2+b^2}

These size directives affect the entire expression. There's no easy way to render a single expression at several different sizes, e.g., \frac {\small a}{\large b} won't work.

(4) Examples

"You can observe a lot just by watching." –– Yogi Berra

[¶1] Here are several examples further demonstrating LaTeX features and usage.
Some of the illustrated features haven't been discussed in this tutorial, and you'll have to consult the references for an explanation. To see how the examples are done, Click any one of them to place its corresponding markup in the Practice Box above. Then press Submit to re-render it, or you can edit the markup first to suit your own purposes.

(1) e^x=\sum_{n=0}^\infty\frac{x^n}{n!} (shown in \small red, \normalsize green and \large blue via \usepackage{color}\color{...}), and e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n

(2) \varepsilon=\sum_{i=1}^{n-1} \frac1{\Delta x}\int\limits_{x_i}^{x_{i+1}}\left\{\frac1{\Delta x}\big[(x_{i+1}-x)y_i^\ast+(x-x_i)y_{i+1}^\ast\big]-f(x)\right\}^2dx

(3) x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} (solution for quadratic)

(4) f^\prime(x)\ = \lim_{\Delta x\to0}\frac{f(x+\Delta x)-f(x)}{\Delta x} (definition of derivative)

(5) f=b_o+\frac{a_1}{b_1+\frac{a_2}{b_2+\frac{a_3}{b_3+a_4}}} (continued fraction)

(6) \tilde y=\left\{ {\ddot x \mbox{ if $x$ odd}\atop\widehat{\bar x+1}\text{ if even}}\right. (illustrating \left\{...\right. and note all the accents)

(7) \overbrace{a,...,a}^{\text{k a's}}, \underbrace{b,...,b}_{\text{l b's}}\hspace{10pt} \underbrace{\overbrace{a...a}^{\text{k a's}}, \overbrace{b...b}^{\text{l b's}}}_{\text{k+l elements}} (\overbrace{}^{} and \underbrace{}_{}; TeXbook page 181, Exercise 18.41)

(8) A\ =\ \left( \begin{array}{c|ccc} & 1 & 2 & 3 \\ \hline 1&a_{11}&a_{12}&a_{13}\\2&a_{21}&a_{22}&a_{23}\\ 3&a_{31}&a_{32}&a_{33} \end{array} \right) (illustrating \begin{array}{c|ccc}...\end{array} with \hline)

(9) \begin{eqnarray*} x+y+z&=&3\\2y&=&x+z\\2x+y&=&z\end{eqnarray*} (using \begin{eqnarray} to align equations)

(5) LaTeX Symbol Sets

"Letters can be used to construct words, phrases and sentences that may be deemed offensive." –– Warning label on children's alphabet blocks

[¶1] You've already seen several LaTeX symbols like \alpha, \beta and \omega, and several LaTeX commands like \frac, \sqrt and \sum. The Comprehensive LaTeX Symbol List illustrates over 4,900 symbols supported by LaTeX. Some of the basic math symbols and operators are illustrated in the following tables.

Math, Roman, Calligraphic, Script and Blackboard Bold...

[¶2] Simply typing a-z,A-Z renders characters $a-z,A-Z$ in LaTeX's default math font, while typing \mathbf{a-z,A-Z} renders $\mathbf{a-z,A-Z}$ in bold math font. And typing \text{a-z,A-Z} renders characters $\text{a-z,A-Z}$ in roman font. Spaces inside \text{ } are rendered exactly as typed, for example, \text{this is a test} renders $\text{this is a test}$ .
[¶3] LaTeX supports many useful fonts besides math and roman. Among these are Blackboard bold: \mathbb{A-Z} renders $\mathbb{A-Z}$\mathbb{A-Z} Calligraphic: \mathcal{A-Z} $\mathcal{A-Z}$\mathcal{A-Z} Script: \mathscr{A-Z} $\usepackage{mathrsfs}\mathscr{A-Z}$\usepackage{mathrsfs}\mathscr{A-Z} [¶4] You'd typically type only one character at a time in these fonts. For example, x\in\mathbb{R}_+ renders $x\in\mathbb{R}_+$x\in\mathbb{R}_+ . But, just like \text{ }, any number of characters is permitted between the { }'s. $\usepackage{mathrsfs}\begin{array}{|c|c|c|c|c|c|c|c|c|} \hline \text{a-z} & \backslash\text{text} & \text{A-Z} & \small \backslash\text{text} & \backslash\text{mathbb} & \backslash\text{mathcal} & \backslash\text{mathscr} \\ \hline a&\text{a}& A&\text{A}&\mathbb{A}&\mathcal{A}&\mathscr{A}\\ b&\text{b}& B&\text{B}&\mathbb{B}&\mathcal{B}&\mathscr{B}\\ c&\text{c}& C&\text{C}&\mathbb{C}&\mathcal{C}&\mathscr{C}\\ d&\text{d}& D&\text{D}&\mathbb{D}&\mathcal{D}&\mathscr{D}\\ e&\text{e}& E&\text{E}&\mathbb{E}&\mathcal{E}&\mathscr{E}\\ f&\text{f}& F&\text{F}&\mathbb{F}&\mathcal{F}&\mathscr{F}\\ g&\text{g}& G&\text{G}&\mathbb{G}&\mathcal{G}&\mathscr{G}\\ h&\text{h}& H&\text{H}&\mathbb{H}&\mathcal{H}&\mathscr{H}\\ i&\text{i}& I&\text{I}&\mathbb{I}&\mathcal{I}&\mathscr{I}\\ j&\text{j}& J&\text{J}&\mathbb{J}&\mathcal{J}&\mathscr{J}\\ k&\text{k}& K&\text{K}&\mathbb{K}&\mathcal{K}&\mathscr{K}\\ l&\text{l}& L&\text{L}&\mathbb{L}&\mathcal{L}&\mathscr{L}\\ m&\text{m}& M&\text{M}&\mathbb{M}&\mathcal{M}&\mathscr{M}\\ n&\text{n}& N&\text{N}&\mathbb{N}&\mathcal{N}&\mathscr{N}\\ o&\text{o}& O&\text{O}&\mathbb{O}&\mathcal{O}&\mathscr{O}\\ p&\text{p}& P&\text{P}&\mathbb{P}&\mathcal{P}&\mathscr{P}\\ q&\text{q}& Q&\text{Q}&\mathbb{Q}&\mathcal{Q}&\mathscr{Q}\\ r&\text{r}& R&\text{R}&\mathbb{R}&\mathcal{R}&\mathscr{R}\\ s&\text{s}& S&\text{S}&\mathbb{S}&\mathcal{S}&\mathscr{S}\\ t&\text{t}& T&\text{T}&\mathbb{T}&\mathcal{T}&\mathscr{T}\\ u&\text{u}& 
U&\text{U}&\mathbb{U}&\mathcal{U}&\mathscr{U}\\ v&\text{v}& V&\text{V}&\mathbb{V}&\mathcal{V}&\mathscr{V}\\ w&\text{w}& W&\text{W}&\mathbb{W}&\mathcal{W}&\mathscr{W}\\ x&\text{x}& X&\text{X}&\mathbb{X}&\mathcal{X}&\mathscr{X}\\ y&\text{y}& Y&\text{Y}&\mathbb{Y}&\mathcal{Y}&\mathscr{Y}\\ z&\text{z}& Z&\text{Z}&\mathbb{Z}&\mathcal{Z}&\mathscr{Z}\\ \hline \end{array}$ Greek characters... [¶5] Characters from the Greek alphabet supported by LaTeX. $\begin{array}{|lcc|lcc|lcc|} \hline \hspace{10pt}\backslash\text{Gamma} & \Gamma & & \hspace{10pt}\backslash\text{Delta} & \Delta & & \hspace{10pt}\backslash\text{Theta} & \Theta & \\ \hspace{10pt}\backslash\text{Lambda} & \Lambda & & \hspace{10pt}\backslash\text{Xi} & \Xi & & \hspace{10pt}\backslash\text{Pi} & \Pi & \\ \hspace{10pt}\backslash\text{Sigma} & \Sigma & & \hspace{10pt}\backslash\text{Upsilon} & \Upsilon & & \hspace{10pt}\backslash\text{Phi} & \Phi & \\ \hspace{10pt}\backslash\text{Psi} & \Psi & & \hspace{10pt}\backslash\text{Omega} & \Omega & & & & \\ \hline \hspace{10pt}\backslash\text{alpha} & \alpha & & \hspace{10pt}\backslash\text{beta} & \beta & & \hspace{10pt}\backslash\text{gamma} & \gamma & \\ \hspace{10pt}\backslash\text{delta} & \delta & & \hspace{10pt}\backslash\text{epsilon} & \epsilon & & \hspace{10pt}\backslash\text{zeta} & \zeta & \\ \hspace{10pt}\backslash\text{eta} & \eta & & \hspace{10pt}\backslash\text{theta} & \theta & & \hspace{10pt}\backslash\text{iota} & \iota & \\ \hspace{10pt}\backslash\text{kappa} & \kappa & & \hspace{10pt}\backslash\text{lambda} & \lambda & & \hspace{10pt}\backslash\text{mu} & \mu & \\ \hspace{10pt}\backslash\text{nu} & \nu & & \hspace{10pt}\backslash\text{xi} & \xi & & \hspace{10pt}\backslash\text{pi} & \pi & \\ \hspace{10pt}\backslash\text{rho} & \rho & & \hspace{10pt}\backslash\text{sigma} & \sigma & & \hspace{10pt}\backslash\text{tau} & \tau & \\ \hspace{10pt}\backslash\text{upsilon} & \upsilon & & \hspace{10pt}\backslash\text{phi} & \phi & & 
\hspace{10pt}\backslash\text{chi} & \chi & \\ \hspace{10pt}\backslash\text{psi} & \psi & & \hspace{10pt}\backslash\text{omega} & \omega & & & & \\ \hline \hspace{10pt}\backslash\text{varepsilon} & \varepsilon & & \hspace{10pt}\backslash\text{vartheta} & \vartheta & & \hspace{10pt}\backslash\text{varpi} & \varpi & \\ \hspace{10pt}\backslash\text{varrho} & \varrho & & \hspace{10pt}\backslash\text{varsigma} & \varsigma & & \hspace{10pt}\backslash\text{varphi} & \varphi & \\ \hline \end{array}$ Math symbols and relations... [¶6] Some operatorsBesides \int illustrated here, similarly promoted operators include \oint, \iint, \sum, \prod, \coprod, \oplus, \otimes, \odot, \uplus, \wedge, \vee, \cup, \cap, and \sqcup. shown below are automatically "promoted" to a larger size in LaTeX's \displaystyle mode. For example, f(x)=\int_{-\infty}^x e^{-t^2}dt renders $\textstyle f(x)=\int_{-\infty}^x e^{-t^2}dt$\textstyle f(x)=\int_{-\infty}^x e^{-t^2}dt , whereas \displaystyle f(x)=\int_{-\infty}^x e^{-t^2}dt renders $\displaystyle f(x)=\int_{-\infty}^x e^{-t^2}dt$\displaystyle f(x)=\int_{-\infty}^x e^{-t^2}dt . [¶7] The arrangement of symbols in the tables will be improved in a future version of this tutorial. 
$\small \begin{array}{|lcc|lcc|lcc|} \hline \hspace{1pt}\backslash\text{cdot} & \cdot & & \hspace{1pt}\backslash\text{times} & \times & & \hspace{1pt}\backslash\text{ast} & \ast & \\ \hspace{1pt}\backslash\text{div} & \div & & \hspace{1pt}\backslash\text{diamond} & \diamond & & \hspace{1pt}\backslash\text{pm} & \pm & \\ \hspace{1pt}\backslash\text{mp} & \mp & & \hspace{1pt}\backslash\text{oplus} & \oplus & & \hspace{1pt}\backslash\text{ominus} & \ominus & \\ \hspace{1pt}\backslash\text{otimes} & \otimes & & \hspace{1pt}\backslash\text{oslash} & \oslash & & \hspace{1pt}\backslash\text{odot} & \odot & \\ \hspace{1pt}\backslash\text{bigcirc} & \bigcirc & & \hspace{1pt}\backslash\text{circ} & \circ & & \hspace{1pt}\backslash\text{bullet} & \bullet & \\ \hspace{1pt}\backslash\text{asymp} & \asymp & & \hspace{1pt}\backslash\text{equiv} & \equiv & & \hspace{1pt}\backslash\text{subseteq} & \subseteq & \\ \hspace{1pt}\backslash\text{supseteq} & \supseteq & & \hspace{1pt}\backslash\text{leq} & \leq & & \hspace{1pt}\backslash\text{geq} & \geq & \\ \hspace{1pt}\backslash\text{preceq} & \preceq & & \hspace{1pt}\backslash\text{succeq} & \succeq & & \hspace{1pt}\backslash\text{sim} & \sim & \\ \hspace{1pt}\backslash\text{approx} & \approx & & \hspace{1pt}\backslash\text{subset} & \subset & & \hspace{1pt}\backslash\text{supset} & \supset & \\ \hspace{1pt}\backslash\text{ll} & \ll & & \hspace{1pt}\backslash\text{gg} & \gg & & \hspace{1pt}\backslash\text{prec} & \prec & \\ \hspace{1pt}\backslash\text{succ} & \succ & & \hspace{1pt}\backslash\text{leftarrow} & \leftarrow & & \hspace{1pt}\backslash\text{rightarrow} & \rightarrow & \\ \hspace{1pt}\backslash\text{uparrow} & \uparrow & & \hspace{1pt}\backslash\text{downarrow} & \downarrow & & \hspace{1pt}\backslash\text{leftrightarrow}&\leftrightarrow&\\ \hspace{1pt}\backslash\text{nearrow} & \nearrow & & \hspace{1pt}\backslash\text{searrow} & \searrow & & \hspace{1pt}\backslash\text{simeq} & \simeq & \\ 
\hspace{1pt}\backslash\text{Leftarrow} & \Leftarrow & & \hspace{1pt}\backslash\text{Rightarrow} & \Rightarrow & & \hspace{1pt}\backslash\text{Uparrow} & \Uparrow & \\ \hspace{1pt}\backslash\text{Downarrow} & \Downarrow & & \hspace{1pt}\backslash\text{Leftrightarrow}&\Leftrightarrow&& \hspace{1pt}\backslash\text{nwarrow} & \nwarrow & \\ \hspace{1pt}\backslash\text{swarrow} & \swarrow & & \hspace{1pt}\backslash\text{propto} & \propto & & \hspace{1pt}\backslash\text{prime} & \prime & \\ \hspace{1pt}\backslash\text{infty} & \infty & & \hspace{1pt}\backslash\text{in} & \in & & \hspace{1pt}\backslash\text{ni} & \ni & \\ \hspace{1pt}\backslash\text{triangle} & \triangle & & \hspace{1pt}\backslash\text{bigtriangledown}&\bigtriangledown&& \hspace{1pt}\backslash^\prime & {}^\prime & \\ \hspace{1pt}\text{/} & / & & \hspace{1pt}\backslash\text{forall} & \forall & & \hspace{1pt}\backslash\text{exists} & \exists & \\ \hspace{1pt}\backslash\text{neg} & \neg & & \hspace{1pt}\backslash\text{emptyset} & \emptyset & & \hspace{1pt}\backslash\text{Re} & \Re & \\ \hspace{1pt}\backslash\text{Im} & \Im & & \hspace{1pt}\backslash\text{top} & \top & & \hspace{1pt}\backslash\text{bot} & \bot & \\ \hspace{1pt}\backslash\text{aleph} & \aleph & & \hspace{1pt}\backslash\text{mathcal}\lbrace\text{A}\rbrace&\;\mathcal{A}&& \hspace{1pt}\backslash\text{mathcal}\lbrace\text{Z}\rbrace&\;\mathcal{Z}&\\ \hline \end{array}$ $\small\begin{array}{|lcc|lcc|lcc|} \hline \hspace{1pt}\backslash\text{cup} & \cup & & \hspace{1pt}\backslash\text{cap} & \cap & & \hspace{1pt}\backslash\text{uplus} & \uplus & \\ \hspace{1pt}\backslash\text{wedge} & \wedge & & \hspace{1pt}\backslash\text{vee} & \vee & & \hspace{1pt}\backslash\text{vdash} & \vdash & \\ \hspace{1pt}\backslash\text{dashv} & \dashv & & \hspace{1pt}\backslash\text{lfloor} & \lfloor & & \hspace{1pt}\backslash\text{rfloor} & \rfloor & \\ \hspace{1pt}\backslash\text{lceil} & \lceil & & \hspace{1pt}\backslash\text{rceil} & \rceil & &
\hspace{1pt}\backslash\text{lbrace} & \lbrace & \\ \hspace{1pt}\backslash\text{rbrace} & \rbrace & & \hspace{1pt}\backslash\text{langle} & \langle & & \hspace{1pt}\backslash\text{rangle} & \rangle & \\ \hspace{1pt}\backslash\text{mid} & \mid & & \hspace{1pt}\backslash\text{parallel} & \parallel & & \hspace{1pt}\backslash\text{updownarrow}& \updownarrow & \\ \hspace{1pt}\backslash\text{Updownarrow}& \Updownarrow & & \hspace{1pt}\backslash\text{setminus} & \setminus & & \hspace{1pt}\backslash\text{wr} & \wr & \\ \hspace{1pt}\backslash\text{surd} & \surd & & \hspace{1pt}\backslash\text{amalg} & \amalg & & \hspace{1pt}\backslash\text{nabla} & \nabla & \\ \hspace{1pt}\backslash\text{int} & \int & & \hspace{1pt}\backslash\text{sqcup} & \sqcup & & \hspace{1pt}\backslash\text{sqcap} & \sqcap & \\ \hspace{1pt}\backslash\text{sqsubseteq} & \sqsubseteq & & \hspace{1pt}\backslash\text{sqsupseteq} & \sqsupseteq & & \hspace{1pt}\backslash\text{S} & \S & \\ \hspace{1pt}\backslash\text{dag} & \dag & & \hspace{1pt}\backslash\text{ddag} & \ddag & & \hspace{1pt}\backslash\text{P} & \P & \\ \hspace{1pt}\backslash\text{clubsuit} & \clubsuit & & \hspace{1pt}\backslash\text{Diamond} & \Diamond & & \hspace{1pt}\backslash\text{Heart} & \Heart & \\ \hspace{1pt}\backslash\text{spadesuit} & \spadesuit & & \hspace{55pt} & & & & & \\ \hspace{1pt}\backslash\text{oint} & \oint & & \hspace{1pt}\backslash\text{sum} & \sum & & \hspace{1pt}\backslash\text{prod} & \prod & \\ \hspace{1pt}\backslash\text{coprod} & \coprod & & \hspace{1pt}\backslash\text{star} & {\star} & & \hspace{1pt}\backslash\text{partial} & \partial & \\ \hspace{1pt}\backslash\text{flat} & \flat & & \hspace{1pt}\backslash\text{natural} & \natural & & \hspace{1pt}\backslash\text{sharp} & \sharp & \\ \hspace{1pt}\backslash\text{smile} & \smile & & \hspace{1pt}\backslash\text{frown} & \frown & & \hspace{1pt}\backslash\text{ell} & \ell & \\ \hspace{1pt}\backslash\text{imath} & \imath & & \hspace{1pt}\backslash\text{jmath} & \jmath & & 
\hspace{1pt}\backslash\text{wp} & \wp & \\ \hspace{1pt}\backslash\text{ss} & \ss & & \hspace{1pt}\backslash\text{ae} & \ae & & \hspace{1pt}\backslash\text{oe} & \oe & \\ \hspace{1pt}\backslash\text{AE} & \AE & & \hspace{1pt}\backslash\text{OE} & \OE & & & & \\ \hspace{1pt}\backslash\text{AA} & \AA & & \hspace{1pt}\backslash\text{aa} & \aa & & \hspace{1pt}\backslash\text{hbar} & \hbar & \\ \hspace{1pt}\backslash\text{ldots} & \ldots & & \hspace{1pt}\backslash\text{cdots} & \cdots & & \hspace{1pt}\backslash\text{vdots} & \vdots & \\ \hspace{1pt}\backslash\text{ddots} & \ddots & & \hspace{1pt}\backslash\text{angle} & \angle & & \hspace{1pt}\backslash\text{iint} & \iint & \\ \hline \end{array}$ (6) LaTeX References "Have nothing in your house that you do not know to be useful, or believe to be beautiful" –– William Morris [¶1] This tutorial discusses LaTeX's mathematical markup capabilities, ignoring the preparation of documents containing formatted text. Most references explain LaTeX's mathematical markup, but emphasize the preparation of complete documents. The TeX Users Group provides lists of online resources and book resources that you may find useful. Online LaTeX sources... [¶2] Among the online resources, the two pages Latex Math I and Latex Math II comprise another short LaTeX math tutorial. And The not so Short Introduction to LaTeX is a good general online LaTeX reference. LaTeX books... [¶3] If you're only interested in LaTeX for online mathematical markup (and not for document preparation), then you may not want to invest the time and money that books entail. But in case you do want a book, try to browse as many as you can, and then choose the one or two you personally like best. Among the book resources, here are a few you might want to look at: [¶4] Intermediate: A Guide to LaTeX2e, 4th ed. is authoritative and easy to read, with a good math discussion. 
[¶5] Advanced: at 1100 pages it's not a suitable tutorial or first book, but the most comprehensive LaTeX reference available is The LaTeX Companion, 2nd ed., Frank Mittelbach and Michel Goossens, Addison-Wesley 2004, ISBN 0-201-36299-6 (59.99).
[¶6] Math: Math into LaTeX, 3rd ed., George Gratzer, Birkhauser Boston 2000, ISBN 0-8176-4131-9, emphasizes math. Portions of the first edition are available online.
"My grandfather once told me there are two kinds of people:
Those who do the work and those who take the credit.
He told me to try to be in the first group; there was much less competition."
–– Indira Gandhi, the late Prime Minister of India
[¶1] This page's copyright is registered by me with the US Copyright Office, and I hereby license it to you under the terms and conditions of the GPL. There is no official support of any kind whatsoever, and you use this page entirely at your own risk, with no guarantee of any kind, in particular with no warranty of merchantability.
[¶2] By using this page, you warrant that you have read, understood and agreed to these terms and conditions, and that you possess the legal right and ability to enter into this agreement and to use this page in accordance with it.
[¶3] Hopefully, the law and ethics regarding web material will evolve to make this kind of obnoxious banter unnecessary. In the meantime, please forgive me my paranoia.
[¶4] To protect your own intellectual property, I recommend (both are pdf) Copyright Basics from The Library of Congress, in particular Circular 61, Copyright Registration for Computer Programs. Very briefly, download Form TX and follow the included instructions. In principle, you automatically own the copyright to anything you write the moment it's on paper. In practice, if the matter comes under dispute, the courts look _very_ favorably on you for demonstrating your intent by registering the copyright.
Copyright © 2002-2012, John Forkosh Associates, Inc. email: john@forkosh.com
|
|
Video Transcript
All right, so we have a wire here carrying charge, extending from the origin out to L along the y-axis, and we are interested in computing the components of the electric field at point P on the x-axis. For that, we consider a small section dy along the wire that carries charge dq. First let's find the field dE created by this small cross section: dE = k dq / r², and the charge dq contained in dy is λ dy, so dE = k λ dy / (x² + y²), since the distance from the section to P is r = √(x² + y²). Now dE has two components, one along the x-axis and one along the y-axis. The x component is dEx = dE cos θ, and from the triangle in the figure, cos θ is base over hypotenuse, where the base is x and the hypotenuse is √(x² + y²). Combining these expressions, dEx = k λ x dy / (x² + y²)^(3/2). Similarly, the component along y uses sin θ = y / √(x² + y²), so dEy = −k λ y dy / (x² + y²)^(3/2); the minus sign is there because dEy points downward.
The total field along the x-axis, Ex, is the integration of dEx over the whole length of the wire. Since k λ x is independent of y, we can take it outside the integral and integrate dy / (x² + y²)^(3/2) from y = 0 to L. The antiderivative of this is y / (x² √(x² + y²)); plugging in L and zero, and writing k = 1/(4πε₀), this becomes Ex = (1/(4πε₀)) λ L / (x √(x² + L²)). We are done with Ex, so let's give this a box. Now, similarly, we calculate Ey by integrating dEy from 0 to L: Ey = −k λ ∫₀ᴸ y dy / (x² + y²)^(3/2) = −k λ [−1/√(x² + y²)] evaluated from 0 to L, which works out to Ey = (λ/(4πε₀)) (x − √(x² + L²)) / (x √(x² + L²)). So this is the value for Ey, and we are done with part (a) of the question.
Now let's move on to part (b). To show that the field makes an angle of 45 degrees, I'll just find tan θ, because tan θ = Ey / Ex. The denominators x √(x² + L²) are common, so they cancel, and we are left with tan θ = (x − √(x² + L²)) / L = x/L − √(1 + x²/L²). Now let L go to infinity: x/L goes to zero and the square root goes to one, so tan θ = −1. This means θ = −45 degrees; the field points downward, at 45 degrees below the x-axis. So we are done with part (b) of the problem as well. Thank you for watching.
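The closed-form answers above can be sanity-checked numerically. Below is a short Python sketch (not from the original video) that integrates the field of the rod by a midpoint rule and compares it with the transcript's formulas; k and λ are set to 1, since only the direction matters for part (b).

```python
import math

def field_numeric(x, L, n=100_000):
    """Midpoint-rule integration of the field of a rod on the y-axis
    (from y = 0 to L) at the point (x, 0); k and lambda set to 1."""
    dy = L / n
    Ex = Ey = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        r2 = x * x + y * y
        dE = dy / r2
        r = math.sqrt(r2)
        Ex += dE * (x / r)   # cos(theta) component
        Ey -= dE * (y / r)   # sin(theta) component (points downward)
    return Ex, Ey

def field_closed_form(x, L):
    """Closed-form result derived in the transcript (k = lambda = 1)."""
    root = math.sqrt(x * x + L * L)
    return L / (x * root), (x - root) / (x * root)
```

For x = 1 and L = 2 the two agree to several decimal places, and for very large L the ratio Ey/Ex approaches −1, i.e. the 45-degree angle below the x-axis found in part (b).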
|
|
# Ordered Sequence Detection and Robust Design for Pulse Interval Modulation
Shuaishuai Guo, Kihong Park, Mohamed-Slim Alouini
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
## Abstract
This paper proposes an ordered sequence detection (OSD) scheme for digital pulse interval modulation (DPIM) applied in optical wireless communications (OWC). To detect a packet consisting of $L$ chips, the computational complexity of OSD is of the order $\mathcal{O}(L\log_2L)$. Moreover, this paper also proposes a robust pulse interval modulation (RPIM) scheme based on OSD. In RPIM, the last of every $K$ symbols is transmitted with more power, both to carry information and to provide a built-in barrier signal. In this way, error propagation is bounded within a slot of $K$ symbols. Together with an interleaver and forward error correction (FEC) codes, the bit error rate (BER) can be greatly reduced. We derive the approximate uncoded BER performance of conventional DPIM with OSD and of the newly proposed RPIM with OSD based on order-statistic theory. Simulations are conducted to corroborate the theoretical analysis and show that RPIM with OSD considerably outperforms existing DPIM with optimal threshold detection in both uncoded and coded systems over various channels.
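The paper's exact algorithm is not reproduced here, but the stated $\mathcal{O}(L\log_2L)$ complexity suggests a sort-based detector. A minimal sketch, assuming the receiver knows how many pulses the $L$-chip packet contains (the function name and interface are hypothetical): sort the received chip amplitudes and declare the largest ones to be pulses.

```python
def osd_detect(r, num_pulses):
    """Sketch of ordered sequence detection for one DPIM packet.

    r          : list of received chip amplitudes (length L)
    num_pulses : number of pulses assumed present in the packet

    Sorting the L amplitudes costs O(L log L); the num_pulses
    largest samples are declared "on" chips.
    """
    # chip indices ordered by received amplitude, largest first
    order = sorted(range(len(r)), key=lambda i: r[i], reverse=True)
    pulse_positions = set(order[:num_pulses])
    # hard chip decisions: 1 at detected pulse positions, 0 elsewhere
    return [1 if i in pulse_positions else 0 for i in range(len(r))]
```

Compared with per-chip threshold detection, this kind of ordered decision uses the relative ranking of all L samples, which is why the paper can analyze it with order-statistic theory.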
Original language: English (US)
Title of host publication: 2018 IEEE Globecom Workshops (GC Wkshps)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISBN: 9781538649206
DOI: https://doi.org/10.1109/GLOCOMW.2018.8644163
Publication status: Published - Mar 18 2019
|
|
## Topological Methods in Nonlinear Analysis
### The jumping nonlinearity problem revisited: an abstract approach
#### Abstract
We consider a class of nonlinear problems of the form $$Lu+g(x,u)=f,$$ where $L$ is an unbounded self-adjoint operator on a Hilbert space $H$ of $L^{2}(\Omega)$-functions, $\Omega\subset\mathbb{R}^{N}$ an arbitrary domain, and $g\colon \Omega\times\mathbb{R}\rightarrow\mathbb{R}$ is a "jumping nonlinearity" in the sense that the limits $$\lim_{s\rightarrow-\infty} \frac{g(x,s)}{s}=a \quad\text{and}\quad \lim_{s\rightarrow\infty}\frac{g(x,s)}{s}=b$$ exist and "jump" over an eigenvalue of the operator $-L$. Under rather general conditions on the operator $L$ and for suitable $a< b$, we show that a solution to our problem exists for any $f\in H$. Applications are given to the beam equation, the wave equation, and elliptic equations in the whole space $\mathbb{R}^{N}$.
#### Article information
Source
Topol. Methods Nonlinear Anal., Volume 21, Number 2 (2003), 249-272.
Dates
First available in Project Euclid: 30 September 2016
https://projecteuclid.org/euclid.tmna/1475266298
Mathematical Reviews number (MathSciNet)
MR1998429
Zentralblatt MATH identifier
1112.35006
#### Citation
Costa, David G.; Tehrani, Hossein. The jumping nonlinearity problem revisited: an abstract approach. Topol. Methods Nonlinear Anal. 21 (2003), no. 2, 249--272. https://projecteuclid.org/euclid.tmna/1475266298
|
|
# Glossary of Statistical Terms
Chapter:
Study Design and Statistics
PRINTED FROM AMA MANUAL OF STYLE ONLINE (www.amamanualofstyle.com). © American Medical Association, 2009. All Rights Reserved. Under the terms of the license agreement, an individual user may print out a PDF of a single chapter of a title in AMA Manual of Style Online for personal use (for details see Privacy Policy).
UPDATE: The terms multivariable and multivariate are not synonymous, as the entries in the Glossary of Statistical Terms suggest (Chapter 20.9, page 881 in the print). To be accurate, multivariable refers to multiple predictors (independent variables) for a single outcome (dependent variable). Multivariate refers to 1 or more independent variables for multiple outcomes. This update was implemented June 1, 2014.
UPDATE: We will discontinue using quotation marks to identify parts of an article, but retain the capitalization; eg, This is discussed in the Methods section (not the “Methods” section). This change was made February 14, 2013.
UPDATE: Although our style manual recommends (Section 20.9, page 888 in the print) that "[expressing] P to more than 3 significant digits does not add useful information to P<.001," in certain types of studies (particularly GWAS [genome-wide association studies] and other studies in which there are adjustments for multiple comparisons, such as Bonferroni correction, and the definition of level of significance is substantially less than P<.05) it may be important to express P values to more significant digits. For example, if the threshold of significance is P<.0004, then by definition the P value must be expressed to at least 4 digits to indicate whether a result is statistically significant. GWAS express P values to very small numbers, using scientific notation. If a manuscript you are editing defines statistical significance as a P value substantially less than .05, possibly even using scientific notation to express P values to very small numbers, it is best to retain the values as the author presents them. This change was made August 16, 2011.
In the glossary that follows, terms defined elsewhere in the glossary are printed in this font. An arrowhead (→) indicates points to consider in addition to the definition. For detailed discussion of these terms, the referenced texts and the resource list at the end of the chapter are useful sources.
Eponymous names for statistical procedures often differ from one text to another (eg, the Newman-Keuls and Student-Newman-Keuls test). The names provided in this glossary follow the Dictionary of Statistical Terms38 published for the International Statistical Institute. Although statistical texts use the possessive form for most eponyms, the possessive form for eponyms is not used in JAMA and the Archives Journals (see 16.0, Eponyms).
Most statistical tests are applicable only under specific circumstances, which are generally dictated by the scale properties of both the independent variable and the dependent variable. Table 3 presents a guide to selection of commonly used statistical techniques. This table is not meant to be exhaustive but rather to indicate the appropriate applications of commonly used statistical techniques.
Table 3. Selection of Commonly Used Statistical Techniques^a (columns give the scale of measurement)

| | Interval^b | Ordinal | Nominal^c |
|---|---|---|---|
| 2 treatment groups | Unpaired t test | Mann-Whitney rank sum test | χ² analysis-of-contingency table; Fisher exact test if ≤6 in any cell |
| ≥3 treatment groups | Analysis of variance | Kruskal-Wallis statistic | χ² analysis-of-contingency table; Fisher exact test if ≤6 in any cell |
| Before and after 1 treatment in same individual | Paired t test | Wilcoxon signed rank test | McNemar test |
| Multiple treatments in same individual | Repeated-measures analysis of variance | Friedman statistic | Cochran Q |
| Association between 2 variables | Linear regression and Pearson product moment correlation | Spearman rank correlation | Contingency coefficients |
a Adapted with permission from Glantz, Primer of Biostatistics.39 © The McGraw-Hill Companies, Inc.
b Assumes normally distributed data. If data are not normally distributed, then rank the observations and use the methods for data measured on an ordinal scale.
c For a nominal dependent variable that is time dependent (such as mortality over time), use life-table analysis for nominal independent variables and Cox regression for continuous and/or nominal independent variables.
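Since Table 3 is literally a lookup table, it can be sketched as one. The helper below is hypothetical (not part of the AMA manual); the technique names come from the table, keyed by study design and scale of measurement of the dependent variable.

```python
# Hypothetical lookup paraphrasing rows of Table 3:
# (study design, measurement scale) -> suggested technique.
SUGGESTED_TECHNIQUE = {
    ("2 treatment groups", "interval"): "unpaired t test",
    ("2 treatment groups", "ordinal"): "Mann-Whitney rank sum test",
    ("2 treatment groups", "nominal"): "chi-square contingency table (Fisher exact if <=6 in any cell)",
    ("before/after 1 treatment, same individual", "interval"): "paired t test",
    ("before/after 1 treatment, same individual", "ordinal"): "Wilcoxon signed rank test",
    ("before/after 1 treatment, same individual", "nominal"): "McNemar test",
    ("association between 2 variables", "interval"): "linear regression and Pearson correlation",
    ("association between 2 variables", "ordinal"): "Spearman rank correlation",
    ("association between 2 variables", "nominal"): "contingency coefficients",
}

def suggest(design, scale):
    """Return the table's suggested technique, or a fallback message."""
    return SUGGESTED_TECHNIQUE.get((design, scale), "consult a statistician")
```

As the table's footnote b warns, the interval-scale column assumes normally distributed data; otherwise rank the observations and use the ordinal-scale methods.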
• abscissa: horizontal or x-axis of a graph.
• absolute risk: probability of an event occurring during a specified period. The absolute risk equals the relative risk times the average probability of the event during the same time, if the risk factor is absent.40(p327) See absolute risk reduction.
• absolute risk reduction: proportion in the control group experiencing an event minus the proportion in the intervention group experiencing an event. The inverse of the absolute risk reduction is the number needed to treat. See absolute risk.
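The two definitions above translate directly into arithmetic. A minimal Python sketch with hypothetical event counts (20/100 events in the control arm vs 10/100 in the treated arm):

```python
def absolute_risk_reduction(control_events, control_n, treated_events, treated_n):
    """ARR = event proportion in the control group
    minus event proportion in the intervention group."""
    return control_events / control_n - treated_events / treated_n

def number_needed_to_treat(arr):
    """NNT is the inverse of the absolute risk reduction."""
    return 1.0 / arr

arr = absolute_risk_reduction(20, 100, 10, 100)  # 0.20 - 0.10 = 0.10
nnt = number_needed_to_treat(arr)                # 1 / 0.10 = 10
```

So in this hypothetical trial, 10 patients would need to be treated to prevent one additional event.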
• accuracy: ability of a test to produce results that are close to the true measure of the phenomenon.40(p327) Generally, assessing accuracy of a test requires that there be a criterion standard with which to compare the test results. Accuracy encompasses a number of measures including reliability, validity, and lack of bias.
• actuarial life-table method: see life table, Cutler-Ederer method.
• adjustment: statistical techniques used after the collection of data to adjust for the effect of known or potential confounding variables.40(p327) A typical example is adjusting a result for the independent effect of age of the participants (age is the independent variable).
• aggregate data: data accumulated from disparate sources.
• agreement: statistical test performed to determine the equivalence of the results obtained by 2 tests when one test is compared with another (one of which is usually but not always a criterion standard).
→ Agreement should not be confused with correlation. Correlation is used to test the degree to which changes in a variable are related to changes in another, whereas agreement tests whether 2 variables are equivalent. For example, an investigator compares results obtained by 2 methods of measuring hematocrit. Method A gives a result that is always exactly twice that of method B. The correlation between A and B is perfect since A is always twice B, but the agreement is very poor; method A is not equivalent to method B (written communication, George W. Brown, MD, September 1993). One appropriate way to assess agreement has been described by Bland and Altman.41
• algorithm: systematic process carried out in an ordered, typically branching sequence of steps; each step depends on the outcome of the previous step.42(p6) An algorithm may be used clinically to guide treatment decisions for an individual patient on the basis of the patient’s clinical outcome or result.
• α (alpha), α level: size of the likelihood acceptable to the investigators that a relationship observed between 2 variables is due to chance (the probability of a type I error); usually α = .05. If α = .05, P < .05 will be considered significant.
• analysis: process of mathematically summarizing and comparing data to confirm or refute a hypothesis. Analysis serves 3 functions: (1) to test hypotheses regarding differences in large populations based on samples of the populations, (2) to control for confounding variables, and (3) to measure the size of differences between groups or the strength of the relationship between variables in the study.40(p25)
• analysis of covariance (ANCOVA): statistical test used to examine data that include both continuous and nominal independent variables and a continuous dependent variable. It is basically a hybrid of multiple regression (used for continuous independent variables) and analysis of variance (used for nominal independent variables).40(p299)
• analysis of residuals: see linear regression.
• analysis of variance (ANOVA): statistical method used to compare a continuous dependent variable and more than 1 nominal independent variable. The null hypothesis in ANOVA is tested by means of the F test.
In 1-way ANOVA there is a single nominal independent variable with 2 or more levels (eg, age categorized into strata of 20 to 39 years, 40 to 59 years, and 60 years and older). When there are only 2 mutually exclusive categories for the nominal independent variable (eg, male or female), the 1-way ANOVA is equivalent to the t test.
A 2-way ANOVA is used if there are 2 independent variables (eg, age strata and sex), a 3-way ANOVA if there are 3 independent variables, etc. If more than 1 nonexclusive independent variable is analyzed, the process is called factorial ANOVA, which assesses the main effects of the independent variables as well as their interactions. An analysis of main effects in the 2-way ANOVA above would assess the independent effects of age group or sex; an association between female sex and systolic blood pressure that exists in one age group but not another would mean that an interaction between age and sex exists. In a factorial 3-way ANOVA with independent variables A, B, and C, there is one 3-way interaction term (A × B × C), 3 different 2-way interaction terms (A × B, A × C, and B × C), and 3 main effect terms (A, B, and C). A separate F test must be computed for each different main effect and interaction term.
If repeated measures are made on an individual (such as measuring blood pressure over time) so that a matched form of analysis is appropriate, but potentially confounding factors (such as age) are to be controlled for simultaneously, repeated-measures ANOVA is used. Randomized-block ANOVA is used if treatments are assigned by means of block randomization.40(pp291-295)
→ An ANOVA can establish only whether a significant difference exists among groups, not which groups are significantly different from each other. To determine which groups differ significantly, a pairwise analysis of a continuous dependent variable and more than 1 nominal variable is performed by a procedure such as the Newman-Keuls test or Tukey test, as well as many others. These multiple comparison procedures avoid the potential of a type I error that might occur if the t test were applied at this stage. Such comparisons may also be computed through the use of orthogonal contrasts.
→ The F ratio is the statistical result of ANOVA and is a number between 0 and infinity. The F ratio is compared with tables of the F distribution, taking into account the α level and degrees of freedom (df) for the numerator and denominator, to determine the P value.
Example: The difference was found to be significant by 1-way ANOVA (F2,63=61.07; P < .001).43
The dfs are provided along with the F statistic. The first subscript (2) is the df for the numerator; the second subscript (63) is the df for the denominator. The P value can be obtained from an F statistic table that provides the P value that corresponds to a given F and df. In practice, however, the P value is generally calculated by a computerized algorithm. Because ANOVA does not determine which groups are significantly different from each other, this example would normally be accompanied by the results of the multiple comparisons procedure.43 Other models such as Latin square may also be used.
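A minimal sketch of the 1-way ANOVA computation, using invented data for 3 groups; the F ratio is the between-group mean square divided by the within-group mean square (the P value would ordinarily come from an F table or statistical software):

```python
from statistics import mean

# One-way ANOVA by hand for 3 invented groups; this computes only the
# F ratio and its dfs, not the P value.
groups = [[4.0, 5.0, 6.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0]]

grand = mean(x for g in groups for x in g)
n_total = sum(len(g) for g in groups)
k = len(groups)

ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

df_between, df_within = k - 1, n_total - k
f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between},{df_within}) = {f_ratio:.2f}")
```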
• ANCOVA: see analysis of covariance.
• ANOVA: see analysis of variance.
• Ansari-Bradley dispersion test: rank test to determine whether 2 distributions known to be of identical shape (but not necessarily of normal distribution) have equal parameters of scale.35(p6)
• area under the curve (AUC): technique used to measure the performance of a test plotted on a receiver operating characteristic (ROC) curve or to measure drug clearance in pharmacokinetic studies.42(p12) When measuring test performance, the larger the AUC, the better the test performance. When measuring drug clearance, the AUC assesses the total exposure of the individual, as measured by levels of the drug in blood or urine, to a drug over time. The curve of drug clearance used to calculate the AUC is also used to calculate the drug half-life.
→ The method used to determine the AUC should be specified (eg, the trapezoidal rule).
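As an illustration of the trapezoidal rule mentioned above, a sketch in Python with invented time-concentration data:

```python
# Trapezoidal-rule AUC for a drug-clearance curve; the time points and
# concentration levels are invented for illustration.
times = [0.0, 1.0, 2.0, 4.0]      # hours
levels = [0.0, 10.0, 8.0, 2.0]    # concentration

auc = sum((t2 - t1) * (y1 + y2) / 2
          for t1, t2, y1, y2 in zip(times, times[1:], levels, levels[1:]))
print(f"AUC = {auc}")
```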
• artifact: difference or change in measure of occurrence of a condition that results from the way the disease or condition is measured, sought, or defined.40(p327)
Example: An artifactual increase in the incidence of AIDS was expected because the definition of AIDS was changed to include a larger number of AIDS-defining illnesses.
• assessment: in the statistical sense, evaluating the outcome(s) of the study and control groups.
• assignment: process of distributing individuals to study and control groups. See also randomization.
• association: statistically significant relationship between 2 variables in which one does not necessarily cause the other. When 2 variables are measured simultaneously, association rather than causation generally is all that can be assessed.
Example: After confounding factors were controlled for by means of multivariate regression, a significant association remained between age and disease prevalence.
• attributable risk: disease that can be attributed to a given risk factor; conversely, if the risk factor were eliminated entirely, the amount of the disease that could be eliminated.40(pp327-328) Attributable risk assumes a causal relationship (ie, the factor to be eliminated is a cause of the disease and not merely associated with the disease). See attributable risk percentage and attributable risk reduction.
• attributable risk percentage: the percentage of risk associated with a given factor among those with the risk factor.40(pp327-328) For example, risk of stroke in an older person who smokes and has hypertension and no other risk factors can be divided among the risks attributable to smoking, hypertension, and age. Attributable risk percentage is often determined for a population and is the percentage of the disease related to the risk factor. See population attributable risk percentage.
• attributable risk reduction: the number of events that can be prevented by eliminating a particular risk factor from the population. Attributable risk reduction is a function of 2 factors: the strength of the association between the risk factor and the disease (ie, how often the risk factor causes the disease) and the frequency of the risk factor in the population (ie, a common risk factor may have a lower attributable risk in an individual than a less common risk factor, but could have a higher attributable risk reduction because of the risk factor’s high prevalence in the population). Attributable risk reduction is a useful concept for public health decisions. See also attributable risk.
• average: the central tendency of a number of measurements. This is often used synonymously with mean, but can also imply the median, mode, or some other statistic. Thus, the word should generally be avoided in favor of a more precise term.
• Bayesian analysis: theory of statistics involving the concept of prior probability, conditional probability or likelihood, and posterior probability.38(p16) For interpreting studies, the prior probability is based on previous studies and may be informative, or, if no studies exist or those that exist are not useful, one may assume a uniform prior. The study results are then incorporated with the prior probability to obtain a posterior probability. Bayesian analysis can be used to interpret how likely it is that a positive result indicates presence of a disease, by incorporating the prevalence of the disease in the population under study and the sensitivity and specificity of the test in the calculation.
→ Bayesian analysis has been criticized because the weight that a particular study is given when prior probability is calculated can be a subjective decision. Nonetheless, the process most closely approximates how studies are considered when they are incorporated into clinical practice. When Bayesian analysis is used to assess posterior probability for an individual patient in a clinic population, the process may be less subjective than usual practice because the prior probability, equal to the prevalence of the disease in the clinic population, is more accurate than if the prevalence for the population at large were used.32
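The updating described above (prevalence as the prior probability, combined with the sensitivity and specificity of a test) can be sketched with Bayes theorem; all values are invented:

```python
# Posterior probability of disease given a positive test result, from
# prevalence (prior), sensitivity, and specificity; numbers are invented.
prevalence = 0.01     # prior probability of disease
sensitivity = 0.90    # P(positive | disease)
specificity = 0.95    # P(negative | no disease)

# Total probability of a positive result (true plus false positives)
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_pos   # Bayes theorem
print(f"posterior probability of disease = {posterior:.3f}")
```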
• β (beta), β level: probability of showing no significant difference when a true difference exists; a false acceptance of the null hypothesis.42(p57) One minus β is the statistical power of the test to detect a true difference; the smaller the β, the greater the power. A value of .20 for β is equal to .80 or 80% power. A β of .1 or .2 is most frequently used in power calculations. The β error is synonymous with type II error.43
• bias: a systematic situation or condition that causes a result to depart from the true value in a consistent direction. Bias refers to defects in study design (often selection bias) or measurement.40(p328) One method to reduce measurement bias is to ensure that the investigator measuring outcomes for a participant is unaware of the group to which the participant belongs (ie, blinded assessment).
• bimodal distribution: nonnormal distribution with 2 peaks, or modes. The mean and median may be equivalent, but neither will describe the data accurately. A population composed entirely of schoolchildren and their grandparents might have a mean age of 35 years, although everyone in the population would in fact be either much younger or much older.
• binary variable: variable that has 2 mutually exclusive subgroups, such as male/female or pregnant/not pregnant; synonym for dichotomous variable.44(p75)
• binomial distribution: probability distribution with 2 possible mutually exclusive outcomes; used for modeling cumulative incidence and prevalence rates42(p17) (for example, the probability of a person having a stroke in a given population over a given period; the outcome must be stroke or no stroke). In a binomial sample with a probability p of the event and n number of participants, the predicted mean is np and the predicted variance is np(1 − p).
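A quick check of the binomial mean and variance formulas (mean = np, variance = np[1 − p]) with arbitrary illustrative values:

```python
# Binomial mean and variance from the formulas; n and p are arbitrary.
n, p = 100, 0.3
mean_ = n * p                 # expected number of events
variance = n * p * (1 - p)    # expected variance
print(mean_, variance)
```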
• biological plausibility: evidence that an independent variable can be expected to exert a biological effect on a dependent variable with which it is associated. For example, studies in animals were used to establish the biological plausibility of adverse effects of passive smoking.
• bivariable analysis: see bivariate analysis.
• bivariate analysis: used when 1 dependent and 1 independent variable are to be assessed.40(p263) Common examples include the t test for 1 continuous variable and 1 binary variable and the χ2 test for 2 binary variables. Bivariate analyses can be used for hypothesis testing in which only 1 independent variable is taken into account, to compare baseline characteristics of 2 groups, or to develop a model for multivariate regression. See also univariate and multivariate analysis.
→ Bivariate analysis is the simplest form of hypothesis testing but is often used incorrectly, either because it is used too frequently, resulting in an increased likelihood of a type I error, or because tests that assume a normal distribution (eg, the t test) are applied to nonnormally distributed data.
• Bland-Altman plot: a method to assess agreement (eg, between 2 tests) developed by Bland and Altman.41
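A minimal sketch of the Bland-Altman computation (the mean difference, or bias, and 95% limits of agreement taken as the bias ± 1.96 SD of the paired differences); the paired readings are invented:

```python
from statistics import mean, stdev

# Bland-Altman bias and limits of agreement for 2 measurement methods;
# the paired readings below are invented for illustration.
method_a = [10.0, 12.0, 11.0, 14.0, 13.0]
method_b = [11.0, 11.0, 12.0, 13.0, 14.0]

diffs = [a - b for a, b in zip(method_a, method_b)]
bias = mean(diffs)
loa_low = bias - 1.96 * stdev(diffs)
loa_high = bias + 1.96 * stdev(diffs)
print(f"bias = {bias}, limits of agreement = ({loa_low:.2f}, {loa_high:.2f})")
```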
• blinded (masked) assessment: evaluation or categorization of an outcome in which the person assessing the outcome is unaware of the treatment assignment. Masked assessment is the term preferred by some investigators and journals, particularly those in ophthalmology.
→ Blinded assessment is important to prevent bias on the part of the investigator performing the assessment, who may be influenced by the study question and consciously or unconsciously expect a certain test result.
• blinded (masked) assignment: assignment of individuals participating in a prospective study (usually random) to a study group and a control group without the investigator or the participants being aware of the group to which they are assigned. Studies may be single-blind, in which either the participant or the person administering the intervention does not know the treatment assignment, or double-blind, in which neither knows the treatment assignment. The term triple-blinded is sometimes used to indicate that the persons who analyze or interpret the data are similarly unaware of treatment assignment. Authors should indicate who exactly was blinded. The term masked assignment is preferred by some investigators and journals, particularly those in ophthalmology.
• block randomization: type of randomization in which the unit of randomization is not the individual but a larger group, sometimes stratified on particular variables such as age or severity of illness to ensure even distribution of the variable between randomized groups.
• Bonferroni adjustment: one of several statistical adjustments to the P value that may be applied when multiple comparisons are made. The α level (usually .05) is divided by the number of comparisons to determine the α level that will be considered statistically significant. Thus, if 10 comparisons are made, an α of .05 would become α = .005 for the study. Alternatively, the P value may be multiplied by the number of comparisons, while retaining the α of .05.44(pp31-32) For example, a P value of .02 obtained for 1 of 10 comparisons would be multiplied by 10 to get the final result of P = .20, a nonsignificant result.
→ The Bonferroni test is a conservative adjustment for large numbers of comparisons (ie, less likely than other methods to give a significant result) but is simple and used frequently.
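Both forms of the adjustment described above can be sketched in a few lines; the α, number of comparisons, and P values are illustrative:

```python
# Bonferroni adjustment, both ways: divide alpha by the number of
# comparisons, or multiply each P value (capped at 1.0). Values invented.
alpha, n_comparisons = 0.05, 10
adjusted_alpha = alpha / n_comparisons          # .005 significance threshold

p_values = [0.02, 0.001, 0.30]
adjusted_p = [min(p * n_comparisons, 1.0) for p in p_values]
print(adjusted_alpha, adjusted_p)
```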
• bootstrap method: statistical method for validating a new diagnostic parameter in the same group from which the parameter was derived. Thus, the validation of the method is based on a simulated sample, rather than a new sample. The parameter is first derived from the entire group, then applied sequentially to subsegments of the group to see whether the parameter performs as well for the subgroups as it does for the entire group (derived from “pulling oneself up by one’s own bootstraps”).42(p32)
For example, a number of prognostic indicators are measured in a cohort of hospitalized patients to predict mortality. To determine whether the model using the indicators is equally predictive of mortality for subsegments of the group, the bootstrap method is applied to the subsegments and confidence intervals are calculated to determine the predictive ability of the model. The jackknife dispersion test also uses the same sample for both derivation and validation.
→ Although the preferable means for validating a model is to apply the model to a new sample (eg, a new cohort of hospitalized patients in the previous example), the bootstrap method can be used to reduce the time, effort, and expense necessary to complete the study. However, the bootstrap method provides less assurance than validation in a new sample that the model is generalizable to another population.
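The resampling idea underlying the bootstrap can be shown with a generic percentile-bootstrap sketch (resampling with replacement to estimate a confidence interval for a mean), which is simpler than the model-validation use described above; the data and random seed are arbitrary:

```python
import random
from statistics import mean

# Percentile bootstrap: resample the observed data with replacement many
# times and take percentiles of the resampled means as a 95% CI.
random.seed(0)
data = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.8, 2.9]

boot_means = sorted(
    mean(random.choices(data, k=len(data))) for _ in range(2000)
)
ci_low, ci_high = boot_means[49], boot_means[1949]   # ~2.5th and 97.5th percentiles
print(f"bootstrap 95% CI for the mean: {ci_low:.2f} to {ci_high:.2f}")
```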
• Brown-Mood procedure: test used with a regression model that does not assume a normal distribution or common variance of the errors.38(p26) It is an extension of the median test.
• C statistic: a measure of the area under a receiver operating characteristic curve.
• case: in a study, an individual with the outcome or disease of interest.
• case-control study: retrospective study in which individuals with the disease (cases) are compared with those who do not have the disease (controls). Cases and controls are identified without knowledge of exposure to the risk factors under study. Cases and controls are matched on certain important variables, such as age, sex, and year in which the individual was treated or identified. A case-control study on individuals already enrolled in a cohort study is referred to as a nested case-control study.42(p111) This type of case-control study may be an especially strong study design if characteristics of the cohort have been carefully ascertained. See also 20.3.2, Observational Studies, Case-Control Studies.
→ Cases and controls should be selected from the same population to minimize confounding by factors other than those under study. Matching cases and controls on too many characteristics may obscure the association of interest, because if cases and controls are too similar, their exposures may be too similar to detect a difference (see overmatching).
• case-fatality rate: probability of death among people diagnosed as having a disease. The rate is calculated as the number of deaths during a specific period divided by the number of persons with the disease at the beginning of the period.44(p38)
• case series: retrospective descriptive study in which clinical experience with a number of patients is described. See 20.3.3, Observational Studies, Case Series.
• categorical data: counts of members of a category or class; for the analysis each member or item should fit into only 1 category or class38(p29) (eg, sex or race/ethnicity). The categories have no numerical significance. Categorical data are summarized by proportions, percentages, fractions, or simple counts. Categorical data is synonymous with nominal data.
• cause, causation: something that brings about an effect or result; to be distinguished from association, especially in cohort studies. To establish something as a cause it must be known to precede the effect. The concept of causation includes the contributory cause, the direct cause, and the indirect cause.
• censored data: censoring has 2 different statistical connotations: (1) data in which extreme values are reassigned to some predefined, more moderate value; (2) data in which values have been assigned to individuals for whom the actual value is not known, such as in survival analyses for individuals who have not experienced the outcome (usually death) at the time the data collection was terminated.
The term left-censored data means that data were censored from the low end or left of the distribution; right-censored data come from the high end or right of the distribution42(p26) (eg, in survival analyses). For example, if data for falls are categorized as individuals who have 0, 1, or 2 or more falls, falls exceeding 2 have been right-censored.
• central limit theorem: theorem that states that the distribution of the means of repeated samples will increasingly approximate a normal distribution as the sample size increases, even when the underlying population is not normally distributed (provided its variance is finite). This is the basis for the importance of the normal distribution in statistical testing.38(p30)
• central tendency: property of the distribution of data, usually measured by mean, median, or mode.42(p41)
• χ2 test (chi-square test): a test of significance based on the χ2 statistic, usually used for categorical data. The observed values are compared with the expected values under the assumption of no association. The χ2 goodness-of-fit test compares the observed with expected frequencies. The χ2 test can also compare an observed variance with hypothetical variance in normally distributed samples.38(p33) In the case of a continuous independent variable and a nominal dependent variable, the χ2 test for trend can be used to determine whether a linear relationship exists (for example, the relationship between systolic blood pressure and stroke).40(pp284-285)
→ The P value is determined from χ2 tables with the use of the specified α level and the df calculated from the number of cells in the χ2 table. The χ2 statistic should be reported to no more than 1 decimal place; if the Yates correction was used, that should be specified. See also contingency table.
Example: The exercise intervention group was least likely to have experienced a fall in the previous month (χ2₃ = 17.7, P = .02).
Note that the df for χ2 is specified using a subscript (here, 3). The df is derived from the dimensions of the χ2 table as (rows − 1)(columns − 1); for this example, a 2 × 4 table yields (2 − 1)(4 − 1) = 3 df. The value 17.7 is the χ2 value. The P value is determined from the χ2 value and df.
Results of the χ2 test may be biased if there are too few observations (generally 5 or fewer) per cell. In this case, the Fisher exact test is preferred.
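The χ2 statistic for a simple 2 × 2 contingency table can be sketched as follows (without the Yates correction; the counts are invented, and the P value would come from a χ2 table with 1 df):

```python
# Chi-square statistic for a 2 x 2 contingency table, computed from
# observed vs expected counts; the observed counts are invented.
table = [[30, 10],
         [20, 40]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Expected count for each cell is (row total x column total) / grand total
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(2) for j in range(2)
)
df = (2 - 1) * (2 - 1)
print(f"chi-square = {chi2:.2f}, df = {df}")
```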
• choropleth map: map of a region or country that uses shading to display quantitative data.42(p28) See also 4.2.3, Visual Presentation of Data, Figures, Maps.
• chunk sample: subset of a population selected for convenience without regard to whether the sample is random or representative of the population.38(p32) A synonym is convenience sample.
• Cochran Q test: method used to compare percentage results in matched samples (see matching), often used to test whether the observations made by 2 observers vary in a systematic manner. The analysis results in a Q statistic, which, with the df, determines the P value; if significant, the variation between the 2 observers cannot be explained by chance alone.38(p25) See also interobserver bias.
• coefficient of determination: square of the correlation coefficient, used in linear or multiple regression analysis. This statistic indicates the proportion of the variation of the dependent variable that can be predicted from the independent variable.40(p328) If the analysis is bivariate, the correlation coefficient is indicated as r and the coefficient of determination is r2. If the correlation coefficient is derived from multivariate analysis, the correlation coefficient is indicated as R and the coefficient of determination is R2. See also correlation coefficient.
Example: The sum of the R2 values for age and body mass index was 0.23. [Twenty-three percent of the variance could be explained by those 2 variables.]
→ When R2 values of the same dependent variable total more than 1.0 or 100%, then the independent variables have an interactive effect on the dependent variable.
• coefficient of variation: ratio of the standard deviation (SD) to the mean. The coefficient of variation is expressed as a percentage and is used to compare dispersions of different samples. The smaller the coefficient of variation, the greater the precision.43 The coefficient of variation is also used when the SD is dependent on the mean (eg, the increase in height with age is accompanied by an increasing SD of height in the population).
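The ratio in this definition is a one-line computation; the sample values are invented:

```python
from statistics import mean, stdev

# Coefficient of variation: SD divided by the mean, as a percentage.
sample = [98.0, 102.0, 100.0, 99.0, 101.0]
cv = stdev(sample) / mean(sample) * 100
print(f"CV = {cv:.2f}%")
```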
• cohort: a group of individuals who share a common exposure, experience, or characteristic, or a group of individuals followed up or traced over time in a cohort study.38(p31)
• cohort effect: change in rates that can be explained by the common experience or characteristic of a group or cohort of individuals. A cohort effect implies that a current pattern of variables may not be generalizable to a different cohort.38(p328)
Example: The decline in socioeconomic status with age was a cohort effect explained by fewer years of education among the older individuals.
• cohort study: study of a group of individuals, some of whom are exposed to a variable of interest (eg, a drug treatment or environmental exposure), in which participants are followed up over time to determine who develops the outcome of interest and whether the outcome is associated with the exposure. Cohort studies may be concurrent (prospective) or nonconcurrent (retrospective).40(pp328-329) See also 20.3.1, Observational Studies, Cohort Studies.
→ Whenever possible, a participant’s outcome should be assessed by individuals who do not know whether the participant was exposed (see blinded assessment).
• concordant pair: pair in which both individuals have the same trait or outcome (as opposed to discordant pair). Used frequently in twin studies.42(p35)
• conditional probability: probability that an event E will occur given the occurrence of F, called the conditional probability of E given F. The converse is not necessarily true: the probability of E given F may not be equal to the probability of F given E.44(p55)
• confidence interval (CI): range of numerical expressions within which one can be confident (usually 95% confident, to correspond to an α level of .05) that the population value the study is intended to estimate lies.40(p329) The CI is an indication of the precision of an estimated population value.
→ Confidence intervals used to estimate a population value usually are symmetric or nearly symmetric around a value, but CIs used for relative risks and odds ratios may not be. Confidence intervals are preferable to P values because they convey information about precision as well as statistical significance of point estimates.
→ Confidence intervals are expressed with a hyphen separating the 2 values. To avoid confusion, the word to replaces hyphens if one of the values is a negative number. Units that are closed up with the numeral are repeated for each CI; those not closed up are repeated only with the last numeral. See also 20.8, Significant Digits and Rounding Numbers, and 19.4, Numbers and Percentages, Use of Digit Spans and Hyphens.
Example: The odds ratio was 3.1 (95% CI, 2.2-4.8). The prevalence of disease in the population was 1.2% (95% CI, 0.8%-1.6%).
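As an illustration, a 95% CI for a prevalence estimate using the normal approximation to the binomial (the counts are invented; other interval methods exist and may be preferable for small samples or extreme proportions):

```python
import math

# 95% CI for a proportion via the normal approximation; counts invented.
events, n = 12, 1000
p = events / n
se = math.sqrt(p * (1 - p) / n)           # standard error of the proportion
ci_low, ci_high = p - 1.96 * se, p + 1.96 * se
print(f"prevalence {p:.1%} (95% CI, {ci_low:.1%}-{ci_high:.1%})")
```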
• confidence limits (CLs): upper and lower boundaries of the confidence interval, expressed with a comma separating the 2 values.42(p35)
Example: The mean (95% confidence limits) was 30% (28%, 32%).
• confounding: (1) a situation in which the apparent effect of an exposure on risk is caused by an association with other factors that can influence the outcome; (2) a situation in which the effects of 2 or more causal factors as shown by a set of data cannot be separated to identify the unique effects of any of them; (3) a situation in which the measure of the effect of an exposure on risk is distorted because of the association of exposure with another factor(s) that influences the outcome under study.42(p35) See also confounding variable.
• confounding variable: variable that can cause or prevent the outcome of interest, is not an intermediate variable, and is associated with the factor under investigation. Unless it is possible to adjust for confounding variables, their effects cannot be distinguished from those of the factors being studied. Bias can occur when adjustment is made for any factor that is caused in part by the exposure and also is correlated with the outcome.25(p35) Multivariate analysis is used to control the effects of confounding variables that have been measured.
• contingency coefficient: the coefficient C (Note: not to be confused with the C statistic), used to measure the strength of association between 2 characteristics in a contingency table.44(pp56-57)
• contingency table: table created when categorical variables are used to calculate expected frequencies in an analysis and to present data, especially for a χ2 test (2-dimensional data) or log-linear models (data with at least 3 dimensions). A 2 × 3 contingency table has 2 rows and 3 columns. The df are calculated as (number of rows − 1)(number of columns − 1). Thus, a 2 × 3 contingency table has 6 cells and 2 df.
• continuous data: data with an unlimited number of equally spaced values.40(p329) There are 2 kinds of continuous data: ratio data and interval data. Ratio-level data have a true 0, and thus numbers can meaningfully be divided by one another (eg, weight, systolic blood pressure, cholesterol level). For instance, 75 kg is half as heavy as 150 kg. Interval data may be measured with a similar precision but lack a true 0 point. Thus, 32°C is not half as warm as 64°C, although temperature may be measured on a precise continuous scale. Continuous data include more information than categorical, nominal, or dichotomous data. Use of parametric statistics requires that continuous data have a normal distribution, or that the data can be transformed to a normal distribution (eg, by computing logarithms of the data).
• contributory cause: independent variable (cause) that is thought to contribute to the occurrence of the dependent variable (effect). That a cause is contributory should not be assumed unless all of the following have been established: (1) an association exists between the putative cause and effect, (2) the cause precedes the effect in time, and (3) altering the cause alters the probability of occurrence of the effect.40(p329) Other factors that may contribute to establishing a contributory cause include the concept of biological plausibility, the existence of a dose-response relationship, and consistency of the relationship when evaluated in different settings.
• control: in a case-control study, the designation for an individual without the disease or outcome of interest; in a cohort study, the individuals not exposed to the independent variable of interest; in a randomized controlled trial, the group receiving a placebo or standard treatment rather than the intervention under study.
• controlled clinical trial: study in which a group receiving an experimental treatment is compared with a control group receiving a placebo or an active treatment. See also 20.2.1, Randomized Controlled Trials, Parallel-Design Double-blind Trials.
• convenience sample: sample of participants selected because they were available for the researchers to study, not because they are necessarily representative of a particular population.
→ Use of a convenience sample limits generalizability and can confound the analysis depending on the source of the sample. For instance, in a study comparing cardiac auscultation, echocardiography, and cardiac catheterization, the patients studied, simply by virtue of their having undergone cardiac catheterization and echocardiography, likely are not comparable to an unselected population.
• correlation: description of the strength of an association among 2 or more variables, each of which has been sampled by means of a representative or naturalistic method from a population of interest.40(p329) The strength of the association is described by the correlation coefficient. See also agreement. There are many reasons why 2 variables may be correlated, and thus correlation alone does not prove causation.
→ The Kendall τ rank correlation test is used when testing 2 ordinal variables, the Pearson product moment correlation is used when testing 2 normally distributed continuous variables, and the Spearman rank correlation is used when testing 2 non-normally distributed continuous variables.43
→ Correlation is often depicted graphically by means of a scatterplot of the data (see Example F4 in 4.2.1, Visual Presentation of Data, Figures, Statistical Graphs). The more circular a scatterplot, the smaller the correlation; the more linear a scatterplot, the greater the correlation.
• correlation coefficient: measure of the association between 2 variables. The coefficient falls between -1 and 1; the sign indicates the direction of the relationship and the number the magnitude of the relationship. A positive sign indicates that the 2 variables increase or decrease together; a negative sign indicates that increases in one are associated with decreases in the other. A value of 1 or -1 indicates that the sample values fall in a straight line, while a value of 0 indicates no relationship. The correlation coefficient should be followed by a measure of the significance of the correlation, and the statistical test used to measure correlation should be specified.
Example: Body mass index increased with age (Pearson r = 0.61; P < .001); years of education decreased with age (Pearson r = -0.48; P = .01).
→ When 2 variables are compared, the correlation coefficient is expressed by r; when more than 2 variables are compared by multivariate analysis, the correlation coefficient is expressed by R. The symbol r² or R² is termed the coefficient of determination and indicates the amount of variation in the dependent variable that can be explained by knowledge of the independent variable.
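→ The Pearson product moment correlation can be computed directly from its definition; this minimal sketch uses hypothetical data:

```python
import math

def pearson_r(x, y):
    """Pearson product moment correlation between 2 equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: y increases perfectly linearly with x, so r = 1
print(round(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]), 6))  # -> 1.0

# The coefficient of determination is simply r squared
r = pearson_r([1, 2, 3, 4, 5], [2, 5, 4, 9, 8])
print(round(r ** 2, 3))  # proportion of variation in y explained by x
```

Note that this is the parametric (Pearson) form; for non-normally distributed data the same calculation would be applied to the ranks of the observations (Spearman).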
• cost-benefit analysis: economic analysis that compares the costs accruing to an individual for some treatment, process, or procedure and the ensuing medical consequences, with the benefits of reduced loss of earnings resulting from prevention of death or premature disability. The cost-benefit ratio is the ratio of marginal benefit (financial benefit of preventing 1 case) to marginal cost (cost of preventing 1 case).42(p38) See also 20.5, Cost-effectiveness Analysis, Cost-Benefit Analysis.
• cost-effectiveness analysis: comparison of strategies to determine which provides the most clinical value for the cost.43 A preferred intervention is the one that will cost the least for a given result or be the most effective for a given cost.30(pp38-39) Outcomes are expressed by the cost-effectiveness ratio, such as cost per year of life saved. See also 20.5, Cost-effectiveness Analysis, Cost-Benefit Analysis.
• cost-utility analysis: form of economic evaluation in which the outcomes of alternative procedures are expressed in terms of a single utility-based measurement, most often the quality-adjusted life-year (QALY).42(p39)
• covariates: variables that may mediate or confound the relationship between the independent and dependent variables. Because patterns of covariates may differ systematically between groups in a trial or observational study, their effect should be accounted for during the analysis. This can be accomplished in a number of ways, including analysis of covariance, multiple regression, stratification, or propensity matching.
• Cox-Mantel test: method for comparing 2 survival curves that does not assume a particular distribution of data,44(p63) similar to the log-rank test.45(p113)
• Cox proportional hazards regression model (Cox proportional hazards model): in survival analysis, a procedure used to determine relationships between survival time and treatment and prognostic independent variables such as age.37(p290) The hazard function is modeled on the set of independent variables and assumes that the hazard function is independent of time. Estimates depend only on the order in which events occur, not on the times they occur.44(p64) Thus, authors should generally indicate that they have tested the proportionality assumption of the Cox model, which assumes that the ratio of the hazards between groups is similar at all points in time. The proportionality assumption would not be met, for instance, if one group experienced an early surge in mortality while the other group did not. In this case, the ratio of the hazards would be different early vs late during the time of follow-up.
• criterion standard: test considered to be the diagnostic standard for a particular disease or condition, used as a basis of comparison for other (usually noninvasive) tests. Ideally, the sensitivity and specificity of the criterion standard for the disease should be 100%. (A commonly used synonym, gold standard, is considered jargon by some.42(p70)) See also diagnostic discrimination.
• Cronbach α: index of the internal consistency of a test,44(p65) which assesses the correlation between the total score across a series of items and the comparable score that would have been obtained had a different series of items been used.42(p39) The Cronbach α is often used for psychological tests.
• cross-design synthesis: method for evaluating outcomes of medical interventions, developed by the US General Accounting Office, which pools results from databases of randomized controlled trials and other study designs. It is a form of meta-analysis (see 20.4, Meta-analysis).42(p39)
• crossover design: method of comparing 2 or more treatments or interventions. Individuals initially are randomized to one treatment or the other; after completing the first treatment they are crossed over to 1 or more other randomization groups and undergo other courses of treatment being tested in the experiment. Advantages are that a smaller sample size is needed to detect a difference between treatments, since a paired analysis is used to compare the treatments in each individual, but the disadvantage is that an adequate washout period is needed after the initial course of treatment to avoid carryover effect from the first to the second treatment. Order of treatments should be randomized to avoid potential bias.44(pp65-66) See 20.2.2, Randomized Controlled Trials, Crossover Trials.
• cross-sectional study: study that identifies participants with and without the condition or disease under study and the characteristic or exposure of interest at the same point in time.40(p329)
→ Causality is difficult to establish in a cross-sectional study because the outcome of interest and associated factors are assessed simultaneously.
• crude death rate: total deaths during a year divided by the midyear population. Deaths are usually expressed per 100 000 persons.44(p66)
• cumulative incidence: number of people who experience onset of a disease or outcome of interest during a specified period; may also be expressed as a rate or ratio.42(p40)
• Cutler-Ederer method: form of life-table analysis that uses actuarial techniques. The method assumes that the times at which follow-up ended (because of death or the outcome of interest) are uniformly distributed during the time period, as opposed to the Kaplan-Meier method, which assumes that termination of follow-up occurs at the end of the time block. Therefore, Cutler-Ederer estimates of risk tend to be slightly higher than Kaplan-Meier estimates.40(p308) Often an intervention and control group are depicted on 1 graph and the curves are compared by means of a log-rank test. This is also known as the actuarial life-table method.
• cut point: in testing, the arbitrary level at which “normal” values are separated from “abnormal” values, often selected at the point 2 SDs from the mean. See also receiver operating characteristic curve.42(p40)
• data: collection of items of information.42(p42) (Datum, the singular form of this word, is rarely used.)
• data dredging (aka “fishing expedition”): jargon meaning post hoc analysis, with no a priori hypothesis, of several variables collected in a study to identify variables that have a statistically significant association for purposes of publication.
→ Although post hoc analyses occasionally can be useful to generate hypotheses, data dredging increases the likelihood of a type I error and should be avoided. If post hoc analyses are performed, they should be declared as such and the number of post hoc comparisons performed specified.
• decision analysis: process of identifying all possible choices and outcomes for a particular set of decisions to be made regarding patient care. Decision analysis generally uses preexisting data to estimate the likelihood of occurrence of each outcome. The process is displayed as a decision tree, with each node depicting a branch point representing a decision in treatment or intervention to be made (usually represented by a square at the branch point), or possible outcomes or chance events (usually represented by a circle at the branch point). The relative worth of each outcome may be expressed as a utility, such as the quality-adjusted life-year.42(p44) See Figure 2.
• degrees of freedom (df): see df.
• dependent variable: outcome variable of interest in any study; the outcome that one intends to explain or estimate40(p329) (eg, death, myocardial infarction, or reduction in blood pressure). Multivariate analysis controls for independent variables or covariates that might modify the occurrence of the dependent variable (eg, age, sex, and other medical diseases or risk factors).
• descriptive statistics: method used to summarize or describe data with the use of the mean, median, SD, SE, or range, or to convey in graphic form (eg, by using a histogram, shown in Example F5 in 4.2.1, Visual Presentation of Data, Figures, Statistical Graphs) for purposes of data presentation and analysis.44(p73)
• df (degrees of freedom) (df is not expanded at first mention): the number of arithmetically independent comparisons that can be made among members of a sample. In a contingency table, df is calculated as (number of rows − 1)(number of columns − 1).
→ The df should be reported as a subscript after the related statistic, such as the t test, analysis of variance, and χ2 test (eg, χ₃² = 17.7, P = .02; in this example, the subscript 3 is the number of df).
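→ The contingency-table calculation of df given above can be sketched as:

```python
def contingency_df(n_rows, n_cols):
    """df for a chi-square test on an r x c contingency table:
    (number of rows - 1)(number of columns - 1)."""
    return (n_rows - 1) * (n_cols - 1)

# Hypothetical table: 4 exposure levels (rows) x 2 outcomes (columns)
print(contingency_df(4, 2))  # -> 3, reported as the subscript of the chi-square statistic
```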
• diagnostic discrimination: statistical assessment of how the performance of a clinical diagnostic test compares with the criterion standard. To assess a test’s ability to distinguish an individual with a particular condition from one without the condition, the researcher must (1) determine the variability of the test, (2) define a population free of the disease or condition and determine the normal range of values for that population for the test (usually the central 95% of values, but in tests that are quantitative rather than qualitative, a receiver operating characteristic curve may be created to determine the optimal cut point for defining normal and abnormal), and (3) determine the criterion standard for a disease (by definition, the criterion standard should have 100% sensitivity and specificity for the disease) with which to compare the test. Diagnostic discrimination is reported with the performance measures sensitivity, specificity, positive predictive value, and negative predictive value; false-positive rate; and the likelihood ratio.40(pp151-163) See Table 4.
→ Because the values used to report diagnostic discrimination are ratios, they can be expressed either as the ratio, using the decimal form, or as the percentage, by multiplying the ratio by 100.
Example: The test had a sensitivity of 0.80 and a specificity of 0.95; the false-positive rate was 0.05.
Or: The test had a sensitivity of 80% and a specificity of 95%; the false-positive rate was 5%.
→ When the diagnostic discrimination of a test is defined, the individuals tested should represent the full spectrum of the disease and reflect the population on whom the test will be used. For example, if a test is proposed as a screening tool, it should be assessed in the general population.
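→ The performance measures listed above can be computed from a 2 × 2 table of test results vs the criterion standard; this sketch uses hypothetical counts chosen to reproduce the sensitivity and specificity in the example above:

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Performance measures for a diagnostic test vs the criterion standard.
    tp/fp/fn/tn = true-positive, false-positive, false-negative, true-negative counts."""
    sens = tp / (tp + fn)        # sensitivity
    spec = tn / (tn + fp)        # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),               # positive predictive value
        "NPV": tn / (tn + fn),               # negative predictive value
        "false-positive rate": 1 - spec,
        "false-negative rate": 1 - sens,
        "LR+": sens / (1 - spec),            # positive likelihood ratio
    }

# Hypothetical counts: 100 diseased and 100 nondiseased individuals
m = diagnostic_measures(tp=80, fp=5, fn=20, tn=95)
print(m["sensitivity"], m["specificity"], m["false-positive rate"])  # 0.8, 0.95, 0.05
```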
• dichotomous variable: a variable with only 2 possible categories (eg, male/female, alive/dead); synonym for binary variable.44(p75)
→ A variable may have a continuous distribution during data collection but is made dichotomous for purposes of analysis (eg, age <65 years/age ≥ 65 years). This is done most often for nonnormally distributed data. Note that the use of a cut point generally converts a continuous variable to a dichotomous one (eg, normal vs abnormal).
• direct cause: contributory cause that is believed to be the most immediate cause of a disease. The direct cause is dependent on the current state of knowledge and may change as more immediate mechanisms are discovered.40(p330)
Example: Although several other causes were suggested when the disease was first described, the human immunodeficiency virus is the direct cause of AIDS.
• disability-adjusted life-years (DALY): quantitative indicator of the burden of disease that reflects the years lost due to premature mortality and years lived with disability, adjusted for severity.45
• discordant pair: pair in which the individuals have different outcomes. In twin studies, only the discordant pairs are informative about the association between exposure and disease.42(pp47-48) Antonym is concordant pair.
• discrete variable: variable that is counted as an integer; no fractions are possible.44(p77) Examples are counts of pregnancies or surgical procedures, or responses to a Likert scale.
• discriminant analysis: analytic technique used to classify participants according to their characteristics (eg, the independent variables, signs, symptoms, and diagnostic test results) to the appropriate outcome or dependent variable,44(pp77-78) also referred to as discriminatory analysis.37(pp59-60) This analysis tests the ability of the independent variable model to correctly classify an individual in terms of outcome. Conceptually, this may be thought of as the opposite of analysis of variance, in that the predictor variables are continuous, while the dependent variables are categorical.
• dispersion: degree of scatter shown by observations; may be measured by SD, various percentiles (eg, tertiles, quantiles, quintiles), or range.38(p60)
• distribution: group of ordered values; the frequencies or relative frequencies of all possible values of a characteristic.40(p330) Distributions may have a normal distribution (bell-shaped curve) or a nonnormal distribution (eg, binomial or Poisson distribution).
• dose-response relationship: relationship in which changes in levels of exposure are associated with changes in the frequency of an outcome in a consistent direction. This supports the idea that the agent of exposure (most often a drug) is responsible for the effect seen.40(p330) May be tested statistically by using a χ2 test for trend.
• Duncan multiple range test: modified form of the Newman-Keuls test for multiple comparisons.44(p82)
• Dunnett test: multiple comparisons procedure intended for comparing each of a number of treatments with a single control.44(p82)
• Dunn test: multiple comparisons procedure based on the Bonferroni adjustment.44(p84)
• Durbin-Watson test: test to determine whether the residuals from linear regression or multiple regression are independent or, alternatively, are serially correlated.44(p84)
• ecological fallacy: error that occurs when the existence of a group association is used to imply, incorrectly, the existence of a relationship at the individual level.40(p330)
• effectiveness: extent to which an intervention is beneficial when implemented under the usual conditions of clinical care for a group of patients,40(p330) as distinguished from efficacy (the degree of beneficial effect seen in a clinical trial) and efficiency (the intervention effect achieved relative to the effort expended in time, money, and resources).
• effect of observation: bias that results when the process of observation alters the outcome of the study.40(p330) See also Hawthorne effect.
• effect size: observed or expected change in outcome as a result of an intervention. Expected effect size is used during the process of estimating the sample size necessary to achieve a given power. Given a similar amount of variability between individuals, a large effect size will require a smaller sample size to detect a difference than will a smaller effect size.
• efficacy: degree to which an intervention produces a beneficial result under the ideal conditions of an investigation,40(p330) usually in a randomized controlled trial; it is usually greater than the intervention’s effectiveness.
• efficiency: effects achieved in relation to the effort expended in money, time, and resources. Statistically, the precision with which a study design will estimate a parameter of interest.42(pp52-53)
• effort-to-yield measures: amount of resources needed to produce a unit change in outcome, such as number needed to treat43; used in cost-effectiveness and cost-benefit analyses. See 20.5, Cost-effectiveness Analysis, Cost-Benefit Analysis.
• error: difference between a measured or estimated value and the true value. Three types are seen in scientific research: a false or mistaken result obtained in a study; measurement error, a random form of error; and systematic error that skews results in a particular direction.42(pp56-57)
• estimate: value or values calculated from sample observations that are used to approximate the corresponding value for the population.40(p330)
• event: end point or outcome of a study; usually the dependent variable. The event should be defined before the study is conducted and assessed by an individual blinded to the intervention or exposure category of the study participant.
• exclusion criteria: characteristics of potential study participants or other data that will exclude them from the study sample (such as being younger than 65 years, history of cardiovascular disease, expected to move within 6 months of the beginning of the study). Like inclusion criteria, exclusion criteria should be defined before any individuals are enrolled.
• explanatory variable: synonymous with independent variable, but preferred by some because “independent” in this context does not refer to statistical independence.38(p98)
• extrapolation: conclusions drawn about the meaning of a study for a target population that includes types of individuals or data not represented in the study sample.40(p330)
• factor analysis: procedure used to group related variables to reduce the number of variables needed to represent the data. This analysis reduces complex correlations between a large number of variables to a smaller number of independent theoretical factors. The researcher must then interpret the factors by looking at the pattern of “loadings” of the various variables on each factor.43 In theory, there can be as many factors as there are variables, and thus the authors should explain how they decided on the number of factors in their solution. The decision about the number of factors is a compromise between the need to simplify the data and the need to explain as much of the variability as possible. There is no single criterion on which to make this decision, and thus authors may consider a number of indexes of goodness of fit. There are a number of algorithms for rotation of the factors, which may make them more straightforward to interpret. Factor analysis is commonly used for developing scoring systems for rating scales and questionnaires.
• false negative: negative test result in an individual who has the disease or condition as determined by the criterion standard.40(p330) See also diagnostic discrimination.
• false-negative rate: proportion of test results found or expected to yield a false-negative result; equal to 1 − sensitivity.40 See also diagnostic discrimination.
• false positive: positive test result in an individual who does not have the disease or condition as determined by the criterion standard.40(p330) See also diagnostic discrimination.
• false-positive rate: proportion of tests found to or expected to yield a false-positive result; equal to 1 − specificity.40 See also diagnostic discrimination.
• F distribution: ratio of the distribution of 2 normally distributed independent variables; synonymous with variance ratio distribution.42(p61)
• Fisher exact test: assesses the independence of 2 variables by means of a 2 × 2 contingency table, used when the frequency in at least 1 cell is small44(p96) (usually <6). This test is also known as the Fisher-Yates test and the Fisher-Irwin test.38(p77)
• fixed-effects model: model used in meta-analysis that assumes that differences in treatment effect in each study all estimate the same true difference. This is not often the case, but the model assumes that it is close enough to the truth that the results will not be misleading.46(p349) Antonym is random-effects model.
• Friedman test: a nonparametric test for a design with 2 factors that uses the ranks rather than the values of the observations.38(p80) Nonparametric analog to analysis of variance.
• F test (score): alternative name for the variance ratio test (or F ratio),42(p74) which results in the F score. Often encountered in analysis of variance.44(p101)
Example: There were differences by academic status in perceptions of the quality of both primary care training (F1,682 = 6.71, P = .01) and specialty training (F1,682 = 6.71, P = .01). [The numbers set as subscripts for the F test are the df for the numerator and denominator, respectively.]
• funnel plot: in meta-analysis, a graph of the sample size or standard error of each study plotted against its effect size. Estimates of effect size from small studies should have more variability than estimates from larger studies, thus producing a funnel-shaped plot. Departures from a funnel pattern suggest publication bias.
• gaussian distribution: see normal distribution.
• gold standard: see criterion standard.
• goodness of fit: agreement between an observed set of values and a second set that is derived wholly or partly on a hypothetical basis.38(p86) The Kolmogorov-Smirnov test is one example.
• group association: situation in which a characteristic and a disease both occur more frequently in one group of individuals than another. The association does not mean that all individuals with the characteristic necessarily have the disease.40(p331)
• group matching: process of matching during assignment in a study to ensure that the groups have a nearly equal distribution of particular variables; also known as frequency matching.40(p331)
• Hartley test: test for the equality of variances of a number of populations that are normally distributed, based on the ratio between the largest and smallest sample variations.38(p90)
• Hawthorne effect: effect produced in a study because of the participants' awareness that they are participating in a study. The term usually refers to an effect on the control group that changes the group in the direction of the outcome, resulting in a smaller effect size.44(p115) A related concept is effect of observation. The Hawthorne effect is different from the placebo effect, which relates to participants' expectations that an intervention will have specific effects.
• hazard rate, hazard function: theoretical measure of the likelihood that an individual will experience an event within a given period.42(p73) A number of hazard rates for specific intervals of time can be combined to create a hazard function.
• hazard ratio: the ratio of the hazard rate in one group to the hazard rate in another. It is calculated from the Cox proportional hazards model. The interpretation of the hazard ratio is similar to that of the relative risk.
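→ Fitting a Cox model requires a statistical package, but the underlying idea can be sketched with a crude, unadjusted hazard rate ratio computed from hypothetical event counts and person-time:

```python
def hazard_rate(events, person_years):
    """Crude hazard rate: events per unit of person-time at risk."""
    return events / person_years

# Hypothetical follow-up data for a treated group vs a control group
hr = hazard_rate(30, 1000) / hazard_rate(60, 1000)
print(round(hr, 2))  # -> 0.5: treated patients experience the event at half the control rate
```

A Cox model additionally adjusts for covariates and requires checking the proportionality assumption described above; this crude ratio does neither.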
• heterogeneity: inequality of a quantity of interest (such as variance) in a number of groups or populations. Antonym is homogeneity.
• histogram: graphical representation of data in which the frequency (quantity) within each class or category is represented by the area of a rectangle centered on the class interval. The heights of the rectangles are proportional to the observed frequencies. See also Example F5 in 4.2.1, Visual Presentation of Data, Figures, Statistical Graphs.
• Hoeffding independence test: bivariate test of nonnormally distributed continuous data to determine whether the elements of the 2 groups are independent of each other.42(p93)
• Hollander parallelism test: determines whether 2 regression lines for 2 independent variables plotted against a dependent variable are parallel. The test does not require a normal distribution, but there must be an equal and even number of observations corresponding to each line. If the lines are parallel, then both independent variables predict the dependent variable equally well. The Hollander parallelism test is a special case of the signed rank test.38(p94)
• homogeneity: equality of a quantity of interest (such as variance) specifically in a number of groups or populations.38(p94) Antonym is heterogeneity.
• homoscedasticity: statistical determination that the variance of the different variables under study is equal.42(p78) See also heterogeneity.
• Hosmer-Lemeshow goodness-of-fit test: a series of statistical steps used to assess goodness of fit; approximates the χ2 statistic.47
• Hotelling T statistic: generalization of the t test for use with multivariate data; results in a T statistic. Significance can be tested with the variance ratio distribution.38(p94)
• hypothesis: supposition that leads to a prediction that can be tested to be either supported or refuted.42(p80) The null hypothesis is generally that there is no difference between groups or relationships among variables and that any such difference or relationship, if found, would occur strictly by chance. Hypothesis testing includes (1) generating the study hypothesis and defining the null hypothesis, (2) determining the level below which results are considered statistically significant, or α level (usually α = .05), and (3) identifying and applying the appropriate statistical test to accept or reject the null hypothesis.
• imputation: a group of techniques for replacing missing data with values that would have been likely to have been observed. Among the simplest methods of imputation is last-observation-carried-forward, in which missing values are replaced by the last observed value. This provides a conservative estimate in cases in which the condition is expected to improve on its own, but may be overly optimistic in conditions that are known to worsen over time. Missing values may also be imputed based on the patterns of other variables. In multiple imputation, repeated random samples are simulated, each of which produces a set of values to replace the missing values. This provides not only an estimate of the missing values but also an estimate of the uncertainty with which they can be predicted.
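→ Last-observation-carried-forward, the simplest of the imputation methods described above, can be sketched as follows (the blood pressure series is hypothetical):

```python
def locf(values):
    """Last-observation-carried-forward: replace each missing value (None)
    with the most recent observed value; leading missing values stay missing."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

# Hypothetical systolic blood pressure readings with 3 missed visits
print(locf([120, None, None, 130, None]))  # -> [120, 120, 120, 130, 130]
```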
• incidence: number of new cases of disease among persons at risk that occur over time,42(p82) as contrasted with prevalence, which is the total number of persons with the disease at any given time. Incidence is usually expressed as a percentage of individuals affected during an interval (eg, year) or as a rate calculated as the number of individuals who develop the disease during a period divided by the number of person-years at risk.
Example: The incidence rate for the disease was 1.2 cases per 100 000 per year.
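→ The rate form of the calculation above can be sketched as (counts hypothetical):

```python
def incidence_rate(new_cases, person_years, per=100_000):
    """Incidence rate: new cases divided by person-years at risk,
    scaled to a conventional denominator (here, per 100 000 per year)."""
    return new_cases / person_years * per

# Hypothetical: 12 new cases observed over 1 000 000 person-years of follow-up
print(round(incidence_rate(12, 1_000_000), 1))  # -> 1.2 cases per 100 000 per year
```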
• inclusion criteria: characteristics a study participant must possess to be included in the study population (such as age 65 years or older at the time of study enrollment and willing and able to provide informed consent). Like exclusion criteria, inclusion criteria should be defined before any participants are enrolled.
• independence, assumption of: assumption that the occurrence of one event is in no way linked to another event. Many statistical tests depend on the assumption that each outcome is independent.42(p83) This may not be a valid assumption if repeated tests are performed on the same individuals (eg, blood pressure is measured sequentially over time), if more than 1 outcome is measured for a given individual (eg, myocardial infarction and death or all hospital admissions), or if more than 1 intervention is made on the same individual (eg, blood pressure is measured during 3 different drug treatments). Tests for repeated measures may be used in those circumstances.
• independent variable: variable postulated to influence the dependent variable within the defined area of relationships under study.42(p83) The term does not refer to statistical independence, so some use the term explanatory variable instead.38(p98)
Example: Age, sex, systolic blood pressure, and cholesterol level were the independent variables entered into the multiple logistic regression.
• indirect cause: contributory cause that acts through the biological mechanism that is the direct cause.40(p331)
Example: Overcrowding in the cities facilitated transmission of the tubercle bacillus and precipitated the tuberculosis epidemic. [Overcrowding is an indirect cause; the tubercle bacillus is the direct cause.]
• inference: process of passing from observations to generalizations, usually with calculated degrees of uncertainty.42(p85)
Example: Intake of a high-fat diet was significantly associated with cardiovascular mortality; therefore, we infer that eating a high-fat diet increases the risk of cardiovascular death.
• instrument error: error introduced in a study when the testing instrument is not appropriate for the conditions of the study or is not accurate enough to measure the study outcome40(p331) (may be due to deficiencies in such factors as calibration, accuracy, and precision).
• intention-to-treat analysis, intent-to-treat analysis: analysis of outcomes for individuals based on the treatment group to which they were randomized, rather than on which treatment they actually received and whether they completed the study. The intention-to-treat analysis generally avoids biases associated with the reasons that participants may not complete the study and should be the main analysis of a randomized trial.44(p125) See 20.2, Randomized Controlled Trials.
→ Although other analyses, such as evaluable patient analysis or per-protocol analyses, are often performed to evaluate outcomes based on treatment actually received, the intention-to-treat analysis should be presented regardless of other analyses because the intervention may influence whether treatment was changed and whether participants dropped out. Intention-to-treat analyses may bias the results of equivalence and noninferiority trials; for those trials, additional analyses should be presented. See 20.2.3, Randomized Controlled Trials, Equivalence and Noninferiority Trials.
• interaction: see interactive effect.
• interaction term: variable used in analysis of variance or analysis of covariance in which 2 independent variables interact with each other (eg, when assessing the effect of energy expenditure on cardiac output, the increase in cardiac output per unit increase in energy expenditure might differ between men and women; the interaction term would enable the analysis to take this difference into account).40(p301)
• interactive effect: effect of 2 or more independent variables on a dependent variable in which the effect of an independent variable is influenced by the presence of another.38(p101) The interactive effect may be additive (ie, equal to the sum of the 2 effects present separately), synergistic (ie, the 2 effects together have a greater effect than the sum of the effects present separately), or antagonistic (ie, the 2 effects together have a smaller effect than the sum of the effects present separately).
• interim analysis: data analysis carried out during a clinical trial to monitor treatment effects. Interim analysis should be determined as part of the study protocol prior to patient enrollment and specify the stopping rules if a particular treatment effect is reached.7(p130)
• interobserver bias: likelihood that one observer is more likely to give a particular response than another observer because of factors unique to the observer or instrument. For example, one physician may be more likely than another to identify a particular set of signs and symptoms as indicative of religious preoccupation on the basis of his or her beliefs, or a physician may be less likely than another physician to diagnose alcoholism in a patient because of the physician’s expectations.44(p25) The Cochran Q test is used to assess interobserver bias.44(p25)
• interobserver reliability: test used to measure agreement among observers about a particular measure or outcome.
→ Although the proportion of times that 2 observers agree can be reported, this does not take into account the number of times they would have agreed by chance alone. For example, if 2 observers must decide whether a factor is present or absent, they should agree 50% of the time according to chance. The κ statistic assesses agreement while taking chance into account and is described by the equation [(observed agreement) − (agreement expected by chance)]/(1 − agreement expected by chance). The value of κ may range from 0 (poor agreement) to 1 (perfect agreement) and may be classified by various descriptive terms, such as slight (0-0.20), fair (0.21-0.40), moderate (0.41-0.60), substantial (0.61-0.80), and near perfect (0.81-0.99).48(pp27-29)
→ In cases in which disagreement may have especially grave consequences, such as one pathologist rating a slide “negative” and another rating a slide “invasive carcinoma,” a weighted κ may be used to grade disagreement according to the severity of the consequences.48(p29) See also Pearson product moment correlation.
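As a minimal sketch, the κ calculation can be worked through in Python for a hypothetical 2 × 2 agreement table (all counts below are illustrative, not from the text):

```python
# Cohen's kappa for 2 raters classifying 100 items as positive or negative.
# a and d count agreements; b and c count disagreements (hypothetical data).
a, b, c, d = 40, 5, 10, 45
n = a + b + c + d
observed = (a + d) / n                      # proportion of observed agreement
# Agreement expected by chance: product of marginal proportions per category.
p_pos = ((a + b) / n) * ((a + c) / n)
p_neg = ((c + d) / n) * ((b + d) / n)
expected = p_pos + p_neg
kappa = (observed - expected) / (1 - expected)   # 0.70: substantial agreement
```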
• interobserver variation: see interobserver reliability.
• interquartile range: the distance between the 25th and 75th percentiles, which is used to describe the dispersion of values. Like other quantiles (eg, tertiles, quintiles), such a range more accurately describes nonnormally distributed data than does the SD. The interquartile range describes the inner 50% of values; the interquintile range describes the inner 60% of values; the interdecile range describes the inner 80% of values.38(pp102-103)
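A brief illustration of the interquartile range in Python, using the standard library's `statistics.quantiles` on a hypothetical skewed sample:

```python
import statistics

# Skewed sample with one extreme value (illustrative data, not from the text).
values = [2, 4, 4, 5, 6, 7, 8, 9, 12, 15, 40]
# quantiles(n=4) returns the 25th, 50th, and 75th percentiles.
q1, median, q3 = statistics.quantiles(values, n=4)
iqr = q3 - q1   # the inner 50% of values spans q1 to q3
```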
• interrater reliability: reproducibility among raters or observers; synonymous with interobserver reliability.
• interval estimate: see confidence interval.40(p331)
• intraobserver reliability (or variation): reliability (or, conversely, variation) in measurements by the same person at different times.40(p331) Similar to interobserver reliability, intraobserver reliability is the agreement between measurements by one individual beyond that expected by chance and can be measured by means of the κ statistic or the Pearson product moment correlation.
• intrarater reliability: synonym for intraobserver reliability.
• jackknife dispersion test: technique for estimating the variance and bias of an estimator, applied to a predictive model derived from a study sample to determine whether the model fits subsamples from the model equally well. The estimator or model is applied to subsamples of the whole, and the differences in the results obtained from the subsample compared with the whole are analyzed as a jackknife estimate of variance. This method uses a single data set to derive and validate the model.48(p131)
→ Although validating a model in a new sample is preferable, investigators often use techniques such as jackknife dispersion or the bootstrap method to validate a model to save the time and expense of obtaining an entirely new sample for purposes of validation.
• Kaplan-Meier method: nonparametric method of compiling life tables. Unlike the Cutler-Ederer method, the Kaplan-Meier method assumes that termination of follow-up occurs at the end of the time block. Therefore, Kaplan-Meier estimates of risk tend to be slightly lower than Cutler-Ederer estimates.40(p308) Often an intervention and control group are depicted on one graph and the groups are compared by a log-rank test. Because the method is nonparametric, there is no attempt to fit the data to a theoretical curve. Thus, Kaplan-Meier plots have a jagged appearance, with discrete drops at the end of each time interval in which an event occurs. This method is also known as the product-limit method.
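A minimal product-limit sketch in Python, with hypothetical follow-up data: survival is multiplied by (at risk − deaths)/(at risk) at each time an event occurs, producing the characteristic discrete drops.

```python
# Each tuple is (time, event): event = 1 for death, 0 for censored follow-up.
# The data are illustrative, not from the text.
data = sorted([(2, 1), (3, 0), (4, 1), (4, 1), (6, 0), (8, 1)])
at_risk = len(data)
survival = 1.0
curve = []                     # (event time, estimated survival probability)
i = 0
while i < len(data):
    t = data[i][0]
    deaths = sum(1 for time, ev in data if time == t and ev == 1)
    n_at_t = sum(1 for time, ev in data if time == t)
    if deaths:
        survival *= (at_risk - deaths) / at_risk
        curve.append((t, survival))
    at_risk -= n_at_t   # deaths and censored cases leave the risk set
    i += n_at_t
```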
• κ (kappa) statistic: statistic used to measure nonrandom agreement between observers or measurements.42(p94) See interobserver and intraobserver reliability.
• Kendall τ (tau) rank correlation: rank correlation coefficient for ordinal data.48(p134)
• Kolmogorov-Smirnov test: comparison of 2 independent samples of continuous data without requiring that the data be normally distributed44(p136); may be used to test goodness of fit.43
• Kruskal-Wallis test: comparison of 3 or more groups of nonnormally distributed data to determine whether they differ significantly.44(p137) The Kruskal-Wallis test is a nonparametric analog of analysis of variance and generalizes the 2-sample Wilcoxon rank sum test to the multiple-sample case.38(p111)
• kurtosis: the way in which a unimodal curve deviates from a normal distribution; may be more peaked (leptokurtic) or more flat (platykurtic) than a normal distribution.44(p137)
• Latin square: form of complete treatment crossover design used for crossover drug trials that eliminates the effect of treatment order. Each patient receives each drug, but each drug is followed by another drug only once in the array. For example, in the following 4 × 4 array, letters A through D correspond to each of 4 drugs, each row corresponds to a patient, and each column corresponds to the order in which the drugs are given.8(p142)
              First Drug    Second Drug    Third Drug    Fourth Drug
Patient 1         C              D              A              B
Patient 2         A              C              B              D
Patient 3         D              B              C              A
Patient 4         B              A              D              C
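The defining property of the 4 × 4 array above — each drug appearing exactly once per patient (row) and once per treatment position (column) — can be checked with a short Python snippet:

```python
# Rows are patients; columns are treatment order (array copied from above).
square = [
    ["C", "D", "A", "B"],   # Patient 1
    ["A", "C", "B", "D"],   # Patient 2
    ["D", "B", "C", "A"],   # Patient 3
    ["B", "A", "D", "C"],   # Patient 4
]
drugs = {"A", "B", "C", "D"}
rows_ok = all(set(row) == drugs for row in square)
cols_ok = all(set(col) == drugs for col in zip(*square))
is_latin_square = rows_ok and cols_ok   # True for this array
```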
• lead-time bias: artifactual increase in survival time that results from earlier detection of a disease, usually cancer, during a time when the disease is asymptomatic. Lead-time bias produces longer survival from the time of diagnosis but not longer survival from the time of onset of the disease.40(p331) See also length-time bias.
→ Lead-time bias may give the appearance of a survival benefit from screening, when in fact the increased survival is only artifactual. Lead-time bias is used more generally to indicate a systematic error arising when follow-up of groups does not begin at comparable stages in the natural course of the condition.
• least significant difference test: test for comparing mean values arising in analysis of variance. An extension of the t test.40(p115)
• least squares method: method of estimation, particularly in regression analysis, that minimizes the sum of the differences between the observed responses and the values predicted by the model.44(p140) The regression line is created so that the sum of the squares of the residuals is as small as possible.
• left-censored data: see censored data.
• length-time bias: bias that arises when a sampling scheme is based on patient visits, because patients with more frequent clinic visits are more likely to be selected than those with less frequent visits. In a screening study of cancer, for example, screening patients with frequent visits is more likely to detect slow-growing tumors than would sampling patients who visit a physician only when symptoms arise.44(p140) See also lead-time bias.
• life table: method of organizing data that allows examination of the experience of 1 or more groups of individuals over time with varying periods of follow-up. For each increment of the follow-up period, the number entering, the number leaving, and the number dying of disease or developing disease can be calculated. An assumption of the life-table method is that an individual not completing follow-up is exposed for half the incremental follow-up period.44(p143) (The Kaplan-Meier method and the Cutler-Ederer method are also forms of life-table analysis but make different assumptions about the length of exposure.) See Figure 3.
→ The clinical life table describes the outcomes of a cohort of individuals classified according to their exposure or treatment history. The cohort life table is used for a cohort of individuals born at approximately the same time and followed up until death. The current life table is a summary of mortality of the population over a brief (1- to 3-year) period, classified by age, often used to estimate life expectancy for the population at a given age.42(p97)
• likelihood ratio: probability of getting a certain test result if the patient has the condition relative to the probability of getting the result if the patient does not have the condition. For dichotomous variables, this is calculated as sensitivity/(1 − specificity). The greater the likelihood ratio, the more likely that a positive test result will occur in a patient who has the disease. A ratio of 2 means a person with the disease is twice as likely to have a positive test result as a person without the disease.43 The likelihood ratio test is based on the ratio of 2 likelihood functions.38(p118) See also diagnostic discrimination.
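The dichotomous-variable formula can be sketched in Python with hypothetical test characteristics (the values are illustrative, not from the text):

```python
# Likelihood ratios from sensitivity and specificity (illustrative values).
sensitivity = 0.90
specificity = 0.80
lr_positive = sensitivity / (1 - specificity)   # 4.5: a positive result is 4.5
                                                # times as likely with disease
lr_negative = (1 - sensitivity) / specificity   # 0.125: a negative result is
                                                # 8 times as likely without disease
```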
• Likert scale: scale often used to assess opinion or attitude, ranked by attaching a number to each response such as 1, strongly agree; 2, agree; 3, undecided or neutral; 4, disagree; 5, strongly disagree. The score is a sum of the numerical responses to each question.44(p144)
• Lilliefors test: test of normality (using the Kolmogorov-Smirnov test statistic) in which mean and variance are estimated from the data.38(p118)
• linear regression: statistical method used to compare continuous dependent and independent variables. When the data are depicted on a graph as a regression line, the independent variable is plotted on the x-axis and the dependent variable on the y-axis. The residual is the vertical distance from the data point to the regression line43(p110); analysis of residuals is a commonly used procedure for linear regression. (See Example F4 in 4.2.1, Visual Presentation of Data, Figures, Statistical Graphs.) This method is frequently performed using least squares regression.37(pp202-203)
→ The description of a linear regression model should include the equation of the fitted line with the slope and 95% confidence interval if possible, the fraction of variation in y explained by the x variables (correlation), and the variances of the fitted coefficients a and b (and their SDs).37(p227)
Example: The regression model identified a significant positive relationship between the dependent variable weight and height (slope = 0.25; 95% CI, 0.19-0.31; y = 12.6 + 0.25x; t₄₅₁ = 8.3, P < .001; r² = 0.67).43
(In this example, the slope is positive, indicating that as one variable increases the other increases; the t test with 451 df is significant; the regression line is described by the equation and includes the slope 0.25 and the constant 12.6. The coefficient of determination r² demonstrates that 67% of the variance in weight is explained by height.)43
→ Four important assumptions are made when linear regression is conducted: the dependent variable is sampled randomly from the population; the spread or dispersion of the dependent variable is the same regardless of the value of the independent variable (this equality is referred to as homogeneity of variances or homoscedasticity); the relationship between the 2 variables is linear; and the independent variable is measured with complete precision.40(pp273-274)
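The least squares fit can be sketched directly from its closed-form formulas in Python; the data points below are hypothetical:

```python
# Fit y = a + b*x by minimizing the sum of squared residuals.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
b = sxy / sxx                 # slope: change in y per unit change in x
a = mean_y - b * mean_x       # intercept
syy = sum((yi - mean_y) ** 2 for yi in y)
r_squared = sxy ** 2 / (sxx * syy)   # fraction of variance in y explained by x
```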
• location: central tendency of a normal distribution, as distinguished from dispersion. The location of 2 curves may be identical (means are the same), but the kurtosis may vary (one may be peaked and the other flat, producing small and large SDs, respectively).49(p28)
• logistic regression: type of regression model used to analyze the relationship between a binary dependent variable (expressed as a natural log after a logit transformation) and 1 or more independent variables. Often used to determine the independent effect of each of several explanatory variables by controlling for several factors simultaneously in a multiple logistic regression analysis. Results are usually expressed by odds ratios or relative risks and 95% confidence intervals.40(pp311-312) (The multiple logistic regression equation may also be provided but, because it involves exponents, is substantially more complicated than a linear regression equation. Therefore, in JAMA and the Archives Journals, the equation is generally not published but can be made available on request from authors. Alternatively, it may be placed on the Web.)
→ To be valid, a multiple regression model must have an adequate sample size for the number of variables examined. A rough rule of thumb is to have at least 25 individuals in the study for each explanatory variable examined.
• log-linear model: linear models used in the analysis of categorical data.38(p122)
• log-rank test: method of using the relative death rates in subgroups to compare overall differences between survival curves for different treatments; same as the Mantel-Haenszel test.38(pp122,124)
• main effect: estimate of the independent effect of an explanatory (independent) variable on a dependent variable in analysis of variance or analysis of covariance.44(p153)
• Mann-Whitney test: nonparametric equivalent of the t test, used to compare ordinal dependent variables with either nominal independent variables or continuous independent variables converted to an ordinal scale.42(p100) Similar to the Wilcoxon rank sum test.
• MANOVA: multivariate analysis of variance. This involves examining the overall significance of all dependent variables considered simultaneously and thus has less risk of type I error than would a series of univariate analysis of variance procedures on several dependent variables.
• Mantel-Haenszel test: another name for the log-rank test.
• Markov process: process of modeling possible events or conditions over time that assumes that the probability that a given state or condition will be present depends only on the state or condition immediately preceding it and that no additional information about previous states or conditions would create a more accurate estimate.44(p155)
• masked assessment: synonymous with blinded assessment, preferred by some investigators and journals to the term blinded, especially in ophthalmology.
• masked assignment: synonymous with blinded assignment, preferred by some investigators and journals to the term blinded, especially in ophthalmology.
• matching: process of making study and control groups comparable with respect to factors other than the factors under study, generally as part of a case-control study. Matching can be done in several ways, including frequency matching (matching on frequency distributions of the matched variable[s]), category (matching in broad groups such as young and old), individual (matching on individual rather than group characteristics), and pair matching (matching each study individual with a control individual).42(p101)
• McNemar test: form of the χ2 test for binary responses in comparisons of matched pairs.42(p103) The ratio of discordant to concordant pairs is determined; the greater the number of discordant pairs with the better outcome being associated with the treatment intervention, the greater the effect of the intervention.44(p158)
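A minimal sketch of the McNemar statistic in Python, computed from the discordant pairs of a hypothetical matched-pair table:

```python
# b = pairs in which only the treated member had the outcome;
# c = pairs in which only the control member did (illustrative counts).
b, c = 25, 10
# McNemar chi-square with continuity correction; refer to chi-square, 1 df.
chi_square = (abs(b - c) - 1) ** 2 / (b + c)   # 5.6, P < .05 at 1 df
```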
• mean: sum of values measured for a given variable divided by the number of values; a measure of central tendency appropriate for normally distributed data.49(p29)
→ If the data are not normally distributed, the median is preferred. See also average.
• measurement error: estimate of the variability of a measurement. Variability of a given parameter (eg, weight) is the sum of the true variability of what is measured (eg, day-to-day weight fluctuations) plus the variability of the instrument or observer measurement, or variability caused by measurement error (error variability, eg, the scale used for weighing). The intraclass correlation coefficient R measures the relationship of these 2 types of variability: as the error variability declines with respect to true variability, R increases, up to 1 when error variance is 0. If all variability is a result of error variability, then R = 0.46(p30)
• median: midpoint of a distribution chosen so that half the values for a given variable appear above and half occur below.40(p332) For data that do not have a normal distribution, the median provides a better measure of central tendency than does the mean, since it is less influenced by outliers.47(p29)
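A 2-line illustration of this point in Python, with a hypothetical sample containing one extreme outlier:

```python
import statistics

values = [3, 4, 5, 5, 6, 7, 95]           # one extreme outlier, 95
mean_value = statistics.mean(values)       # about 17.9, pulled toward the outlier
median_value = statistics.median(values)   # 5, still reflects the typical value
```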
• median test: nonparametric rank-order test for 2 groups.38(p128)
• meta-analysis: See 20.4, Meta-analysis.
• missing data: incomplete information on individuals resulting from any of a number of causes, including loss to follow-up, refusal to participate, and inability to complete the study. Although the simplest approach would be to remove such participants from the analysis, this would violate the intention-to-treat principle. Furthermore, certain health conditions may be systematically associated with the risk of having missing data, and thus removal of these individuals could bias the analysis. It is generally better to attempt imputation of these missing values, which are then included in the analysis.
• mode: in a series of values of a given variable, the number that occurs most frequently; used most often when a distribution has 2 peaks (bimodal distribution).49(p29) This is also appropriate as a measure of central tendency for categorical data.
• Monte Carlo simulation: a family of techniques for modeling complex systems for which it would otherwise be difficult to obtain sufficient data. In general, Monte Carlo simulations use a computer algorithm to generate a large number of random “observations.” The patterns of these numbers are then assessed for underlying regularities.
• mortality rate: death rate described by the following equation: (number of deaths during period)/[(number of individuals observed) × (period of observation)]. For values such as the crude mortality rate, the denominator is the number of individuals observed at the midpoint of observation. See also crude death rate.44(p66)
→ Mortality rate is often expressed in terms of a standard ratio, such as deaths per 100 000 persons per year.
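A worked illustration in Python of a rate expressed per 100 000 persons per year (all numbers are hypothetical):

```python
# Deaths per 100 000 person-years over a 2-year observation period.
deaths = 450
midperiod_population = 1_250_000   # individuals observed at midpoint
years_observed = 2
rate = deaths / (midperiod_population * years_observed) * 100_000   # 18.0
```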
• Moses ranklike dispersion test: rank test of the equality of scale of 2 identically shaped populations, applicable when the population medians are not known.38(p134)
• multiple analyses problem: problem that occurs when several statistical tests are performed on one group of data because of the potential to introduce a type I error. The problem is particularly an issue when the analyses were not specified as primary outcome measures. Multiple analyses can be appropriately adjusted for by means of a Bonferroni adjustment or any of several multiple comparisons procedures.
• multiple comparisons procedures: any of several tests used to determine which groups differ significantly after another more general test has identified that a significant difference exists but not between which groups. These tests are intended to avoid the problem of a type I error caused by sequentially applying tests, such as the t test, not intended for repeated use. Authors should specify whether these tests were planned a priori, or whether the decision to perform them was post hoc.
→ Some tests result in more conservative estimates (less likely to be significant) than others. More conservative tests include the Tukey test and the Bonferroni adjustment; the Duncan multiple range test is less conservative. Other tests include the Scheffé test, the Newman-Keuls test, and the Gabriel test,38(p137) as well as many others. There is ongoing debate among statisticians about when it is appropriate to use these tests.
• multiple regression: general term for analysis procedures used to estimate values of the dependent variable for all measured independent variables that are found to be associated. The procedure used depends on whether the variables are continuous or nominal. When all variables are continuous variables, multiple linear regression is used and the mean of the dependent variable is expressed using the equation Y = α + β₁x₁ + β₂x₂ + ··· + βₖxₖ, where Y is the dependent variable and k is the total number of independent variables. When independent variables may be either nominal or continuous and the dependent variable is continuous, analysis of covariance is used. (Analysis of covariance often requires an interaction term to account for differences in the relationship between the independent and dependent variables.) When all variables are nominal and the dependent variable is time-dependent, life-table methods are used. When the independent variables may be either continuous or nominal and the dependent variable is nominal and time-dependent (such as incidence of death), the Cox proportional hazards model may be used. Nominal dependent variables that are not time-dependent are analyzed by means of logistic regression or discriminant analysis.37(pp296-312)
• multivariable analysis: another name for multivariate analysis.
• multivariate analysis: any statistical test that deals with 1 dependent variable and at least 2 independent variables. It may include nominal or continuous variables, but ordinal data must be converted to a nominal scale for analysis. The multivariate approach has 3 advantages over bivariate analysis: (1) it allows for investigation of the relationship between the dependent and independent variables while controlling for the effects of other independent variables; (2) it allows several comparisons to be made statistically without increasing the likelihood of a type I error; and (3) it can be used to compare how well several independent variables individually can estimate values of the dependent variable.40(pp289-291) Examples include analysis of variance, multiple (logistic or linear) regression, analysis of covariance, Kruskal-Wallis test, Friedman test, life table, and Cox proportional hazards model.
• N: total number of units (eg, patients, households) in the sample under study.
Example: We assessed the admission diagnoses of all patients admitted from the emergency department during a 1-month period (N = 127).
• n: number of units in a subgroup of the sample under study.
Example: Of the patients admitted from the emergency department (N = 127), the most frequent admission diagnosis was unstable angina (n = 38).
• natural experiment: investigation in which a change in a risk factor or exposure occurs in one group of individuals but not in another. The distribution of individuals into a particular group is nonrandom and, as opposed to controlled clinical trials, the change is not brought about by the investigator.40(p332) The natural experiment is often used to study effects that cannot be studied in a controlled trial, such as the incidence of medical illness immediately after an earthquake. This is also referred to as a “found” experiment.
• naturalistic sample: set of observations obtained from a sample of the population in such a way that the distribution of independent variables in the sample is representative of the distribution in the population.40(p332)
• necessary cause: characteristic whose presence is required to bring about or cause the disease or outcome under study.50(p332) A necessary cause may not be a sufficient cause.
• negative predictive value: the probability that an individual does not have the disease (as determined by the criterion standard) if the test result is negative.40(p334) This measure takes into account the prevalence of the condition or the disease. A more general term is posttest probability. See diagnostic discrimination.
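Both predictive values follow directly from a 2 × 2 table of test result vs criterion standard; a sketch in Python with hypothetical counts:

```python
# tp/fn: diseased with positive/negative results; fp/tn: nondiseased likewise.
tp, fp, fn, tn = 90, 40, 10, 860            # illustrative counts
negative_predictive_value = tn / (tn + fn)  # about 0.99
positive_predictive_value = tp / (tp + fp)  # about 0.69
```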
• nested case-control study: case-control study in which cases and controls are drawn from a cohort study. The advantages of a nested case-control study over a case-control study are that the controls are selected from participants at risk at the time of occurrence of each case that arises in a cohort, thus avoiding the confounding effect of time in the analysis, and that cases and controls are by definition drawn from the same population.40(p111) See also 20.3.1, Observational Studies, Cohort Studies, and 20.3.2, Observational Studies, Case-Control Studies.
• Newman-Keuls test: a type of multiple comparisons procedure, used to compare more than 2 groups. It first compares the 2 groups that have the highest and lowest means, then sequentially compares the next most extreme groups, and stops when a comparison is not significant.39(p92)
• n-of-1 trial: randomized controlled trial that uses a single patient and an outcome measure agreed on by the patient and physician. The n-of-1 trial may be used by clinicians to assess which of 2 or more possible treatment options is better for the individual patient.50
• nominal variable: also called categorical variable. There is no arithmetic relationship among the categories, and thus there is no intrinsic ranking or order between them (for example, sex, gene alleles, race, eye color). The nominal or discrete variable usually is assessed to determine its frequency within a population.40(p332) The variable can have either a binomial or Poisson distribution (if the nominal event is extremely rare, eg, a genetic mutation).
• nomogram: a visual means of representing a mathematical equation.
• nonconcurrent cohort study: cohort study in which an individual’s group assignment is determined by information that exists at the time a study begins. The extreme of a nonconcurrent cohort study is one in which the outcome is determined retrospectively from existing records.40(p332)
• nonnormal distribution: data that do not have a normal (bell-shaped curve) distribution; includes binomial, Poisson, and exponential distributions, as well as many others.
→ Nonnormally distributed continuous data must be either transformed to a normal distribution to use parametric methods or, more commonly, analyzed by nonparametric methods.
• nonparametric statistics: statistical procedures that do not assume that the data conform to any theoretical distribution. Nonparametric tests are most often used for ordinal or nominal data, or for nonnormally distributed continuous data converted to an ordinal scale40(p332) (for example, weight classified by tertile).
• normal distribution: continuous data distributed in a symmetrical, bell-shaped curve with the mean value corresponding to the highest point of the curve. This distribution of data is assumed in many statistical procedures.40(p330) This is also called a gaussian distribution.
→ Descriptive statistics such as mean and SD can be used to accurately describe data only if the values are normally distributed or can be transformed into a normal distribution.
• normal range: measure of the range of values on a particular test among those without the disease. Cut points for abnormal tests are arbitrary and are often defined as the central 95% of values, or the mean of values ± 2 SDs.
• null hypothesis: the assertion that no true association or difference in the study outcome or comparison of interest between comparison groups exists in the larger population from which the study samples are obtained.40(p332) In general, statistical tests cannot be used to prove the null hypothesis. Rather, the results of statistical testing can reject the null hypothesis at the stated α likelihood of a type I error.
• number needed to harm: computed similarly to number needed to treat, but number of patients who, after being treated for a specific period of time, would be expected to experience 1 bad outcome or not experience 1 good outcome.
• number needed to treat (NNT): number of patients who must be treated with an intervention for a specific period to prevent 1 bad outcome or result in 1 good outcome.40(pp332-333) The NNT is the reciprocal of the absolute risk reduction, the difference between event rates in the intervention and placebo groups in a clinical trial. See also number needed to harm.
→ The study patients from whom the NNT is calculated should be representative of the population to whom the numbers will be applied. The NNT does not take into account adverse effects of the intervention.
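The reciprocal relationship can be sketched in Python with hypothetical event rates:

```python
# NNT = 1 / absolute risk reduction (event rates are illustrative).
event_rate_control = 0.12
event_rate_treated = 0.08
arr = event_rate_control - event_rate_treated   # absolute risk reduction, 0.04
nnt = 1 / arr   # 25: treat 25 patients to prevent 1 event
# In practice the NNT is reported rounded up to a whole number of patients.
```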
• odds ratio (OR): ratio of 2 odds. Odds ratio may have different definitions depending on the study and therefore should be defined. For example, it may be the odds of having the disease if a particular risk factor is present to the odds of not having the disease if the risk factor is not present, or the odds of having a risk factor present if the person has the disease to the odds of the risk factor being absent if the person does not have the disease.
→ The odds ratio typically is used for a case-control or cohort study. For a study of incident cases with an infrequent disease (for example, <2% incidence), the odds ratio approximates the relative risk.42(p118) When the incidence is relatively frequent the odds ratio may be arithmetically corrected to better approximate the relative risk.51
→ The odds ratio is usually expressed by a point estimate and 95% confidence interval (CI). An odds ratio for which the CI includes 1 indicates no statistically significant effect on risk; if the point estimate and CI are both less than 1, there is a statistically significant reduction in risk; if the point estimate and CI are both greater than 1, there is a statistically significant increase in risk.
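A sketch in Python of the point estimate and the common Woolf (log) approximation to the 95% CI, using a hypothetical 2 × 2 table; here the interval excludes 1, indicating a statistically significant increase in risk:

```python
import math

# 2x2 table: a,b = exposed with/without disease; c,d = unexposed likewise.
a, b, c, d = 30, 70, 15, 85                      # illustrative counts
odds_ratio = (a * d) / (b * c)                   # about 2.43
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of ln(OR), Woolf's method
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)   # about 1.2
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)   # about 4.9
```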
• 1-tailed test: test of statistical significance in which deviations from the null hypothesis in only 1 direction are considered.40(p333) Most commonly used for the t test.
→ One-tailed tests are more likely to produce a statistically significant result than are 2-tailed tests. Since the use of a 1-tailed test implies that the intervention could have only 1 direction of effect, ie, beneficial or harmful, the use of a 1-tailed test must be justified.
• ordinal data: type of data with a limited number of categories with an inherent ordering of the category from lowest to highest, but without fixed or equal spacing between increments.40(p333) Examples are Apgar scores, heart murmur rating, and cancer stage and grade. Ordinal data can be summarized by means of the median and quantiles or range.
→ Because increments between the numbers for ordinal data generally are not fixed (eg, the difference between a grade 1 and a grade 2 heart murmur is not quantitatively the same as the difference between a grade 3 and a grade 4 heart murmur), ordinal data should be analyzed by nonparametric statistics.
• ordinate: vertical or y-axis of a graph.
• outcome: dependent variable or end point of an investigation. In retrospective studies such as case-control studies, the outcomes have already occurred before the study is begun; in prospective studies such as cohort studies and controlled trials, the outcomes occur during the time of the study.40(p333)
• outliers (outlying values): values at the extremes of a distribution. Because the median is far less sensitive to outliers than is the mean, it is preferable to use the median to describe the central tendency of data that have extreme outliers.
→ If outliers are excluded from an analysis, the rationale for their exclusion should be explained in the text. A number of tests are available to determine whether an outlier is so extreme that it should be excluded from the analysis.
• overmatching: obscuring, through the matching process of a case-control study, of a true causal relationship between the independent and dependent variables, which occurs when the variable used for matching is strongly related to the mechanism by which the independent variable exerts its effect.40(pp119-120) For example, matching cases and controls on residence within a certain area could obscure an environmental cause of a disease. Overmatching may also be used to refer to matching on variables that have no effect on the dependent variable, and therefore are unnecessary, or the use of so many variables for matching that no suitable controls can be found.42(p120)
• oversampling: in survey research, a technique that selectively increases the likelihood of including certain groups or units that would otherwise produce too few responses to provide reliable estimates.
• paired samples: form of matching that can include self-pairing, where each participant serves as his or her own control, or artificial pairing, where 2 participants are matched on prognostic variables.42(p186) Twins may be studied as pairs to attempt to separate the effects of environment and genetics. Paired analyses provide greater power to detect a difference for a given sample size than do nonpaired analyses, since interindividual differences are minimized or eliminated. Pairing may also be used to match participants in case-control or cohort studies. See Table 3.
• paired t test: t test for paired data.
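As a sketch of the computation (with hypothetical blood pressure readings), the paired t statistic is the mean of the within-participant differences divided by its standard error; the result is compared against a t distribution with n − 1 df:

```python
import math

# Hypothetical before/after systolic pressures for 8 participants (paired data).
before = [140, 135, 150, 148, 142, 138, 145, 152]
after = [132, 130, 148, 140, 135, 136, 140, 146]

diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))  # refer to the t distribution with n - 1 df
```

Because each participant serves as his or her own control, the interindividual variation drops out of sd_d, which is why paired analyses gain power over unpaired ones.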
• parameter: measurable characteristic of a population. One purpose of statistical analysis is to estimate population parameters from sample observations.40(p333) The statistic is the numerical characteristic of the sample; the parameter is the numerical characteristic of the population. Parameter is also used to refer to aspects of a model (eg, a regression model).
• parametric statistics: tests used for continuous data and that require the assumption that the data being tested are normally distributed, either as collected initially or after transformation to the ln or log of the value or other mathematical conversion.40(p121) The t test is a parametric statistic. See Table 3.
• Pearson product moment correlation: test of correlation between 2 groups of normally distributed data. See diagnostic discrimination.
• percentile: see quantile.
• placebo: a biologically inactive substance administered to some participants in a clinical trial. A placebo should ideally appear similar in every other way to the experimental treatment under investigation. Assignment, allocation, and assessment should be blinded.
• placebo effect: refers to specific expectations that participants may have of the intervention. These can make the intervention appear more effective than it actually is. Comparison of a group receiving placebo vs those receiving the active intervention allows researchers to identify effects of the intervention itself, as the placebo effect should affect both groups equally.
• point estimate: single value calculated from sample observations that is used as the estimate of the population value, or parameter40(p333); in most circumstances accompanied by an interval estimate (eg, 95% confidence interval).
• Poisson distribution: distribution that occurs when a nominal event (often disease or death) occurs rarely.42(p125) The Poisson distribution is used instead of a binomial distribution when sample size is calculated for a study of events that occur rarely.
• population: any finite or infinite collection of individuals from which a sample is drawn for a study to obtain estimates to approximate the values that would be obtained if the entire population were sampled.44(p197) A population may be defined narrowly (eg, all individuals exposed to a specific traumatic event) or widely (eg, all individuals at risk for coronary artery disease).
• population attributable risk percentage: percentage of risk within a population that is associated with exposure to the risk factor. Population attributable risk takes into account the frequency with which a particular event occurs and the frequency with which a given risk factor occurs in the population. Population attributable risk does not necessarily imply a cause-and-effect relationship. It is also called attributable fraction, attributable proportion, and etiologic fraction.40(p333)
• positive predictive value: proportion of those participants or individuals with a positive test result who have the condition or disease as measured by the criterion standard. This measure takes into account the prevalence of the condition or the disease. Clinically, it is the probability that an individual has the disease if the test result is positive.40(p334) See Table 4 and diagnostic discrimination.
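With a hypothetical 2 × 2 table against the criterion standard, the four measures of diagnostic discrimination reduce to simple proportions (counts here are illustrative, not from Table 4):

```python
# Hypothetical counts: true positives, false positives, false negatives, true negatives.
tp, fp, fn, tn = 90, 40, 10, 160

sensitivity = tp / (tp + fn)  # true positives among all with disease
specificity = tn / (tn + fp)  # true negatives among all without disease
ppv = tp / (tp + fp)          # probability of disease given a positive result
npv = tn / (tn + fn)          # probability of no disease given a negative result
```

Because the predictive values depend on prevalence, the same test applied in a lower-prevalence population would yield a lower PPV even though sensitivity and specificity are unchanged.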
• posterior probability: in Bayesian analysis, the probability obtained after the prior probability is combined with the probability from the study of interest.42(p128) If one assumes a uniform prior (no useful information for estimating probability exists before the study), the posterior probability is the same as the probability from the study of interest alone.
• post hoc analysis: analysis performed after completion of a study and not based on a hypothesis considered before the study. Such analyses should be performed without prior knowledge of the relationship between the dependent and independent variables. A potential hazard of post hoc analysis is the type I error.
→ While post hoc analyses may be used to explore intriguing results and generate new hypotheses for future testing, they should not be used to test hypotheses, because the comparison is not hypothesis-driven. See also data dredging.
• posttest probability: the probability that an individual has the disease if the test result is positive (positive predictive value) or that the individual does not have the disease if the test result is negative (negative predictive value).40(p158)
• power: ability to detect a significant difference with the use of a given sample size and variance; determined by frequency of the condition under study, magnitude of the effect, study design, and sample size.40(p128) Power should be calculated before a study is begun. If the sample is too small to have a reasonable chance (usually 80% or 90%) of rejecting the null hypothesis if a true difference exists, then a negative result may indicate a type II error rather than a true failure to reject the null hypothesis.
→ Power calculations should be performed as part of the study design. A statement providing the power of the study should be included in the Methods section of all randomized controlled trials (see Table 1) and is appropriate for many other types of studies. A power statement is especially important if the study results are negative, to demonstrate that a type II error was unlikely to have been the reason for the negative result. Performing a post hoc power analysis is controversial, especially if it is based on the study results. Nonetheless, if such calculations were performed, they should be described in the Discussion section and their post hoc nature clearly stated.
Example: We determined that a sample size of 800 patients would have 80% power to detect the clinically important difference of 10% at α = .05.
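The arithmetic behind such a statement can be sketched with the usual normal-approximation formula for comparing 2 proportions; the event rates, α, and power below are hypothetical:

```python
import math

# Hypothetical design: detect 50% vs 40% event rates, 2-sided alpha = .05, power = 80%.
p1, p2 = 0.50, 0.40
z_alpha, z_beta = 1.96, 0.8416  # standard normal quantiles for alpha/2 and power

p_bar = (p1 + p2) / 2
n = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
     + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
n_per_group = math.ceil(n)  # round up to the next whole participant
```

Note how the required n scales with the inverse square of the difference to be detected: halving the clinically important difference roughly quadruples the sample size.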
• precision: inverse of the variance in measurement (see measurement error)42(p129); the degree of reproducibility that an instrument produces when measuring the same event. Note that precision and accuracy are independent concepts; if a blood pressure cuff is poorly calibrated against a standard, it may produce measurements that are precise but inaccurate.
• pretest probability: see prevalence.
• prevalence: proportion of persons with a particular disease at a given point in time. Prevalence can also be interpreted to mean the likelihood that a person selected at random from the population will have the disease (synonym: pretest probability).40(p334) See also incidence.
• principal components analysis: procedure used to group related variables to help describe data. The variables are grouped so that the original set of correlated variables is transformed into a smaller set of uncorrelated variables called the principal components.42(p131) Variables are not grouped according to dependent and independent variables, unlike many forms of statistical analysis. Principal components analysis is similar to factor analysis.
• prior probability: in Bayesian analysis, the probability of an event based on previous information before the study of interest is considered. The prior probability may be informative, based on previous studies or clinical information, or not, in which case the analysis uses a uniform prior (no information is known before the study of interest). A reference prior is one with minimal information, a clinical prior is based on expert opinion, and a skeptical prior is used when large treatment differences are not expected.44(p201) When Bayesian analysis is used to determine the posterior probability of a disease after a patient has undergone a diagnostic test, the prior probability may be estimated as the prevalence of the disease in the population from which the patient is drawn (usually the clinic or hospital population).
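A sketch of how a prior probability (here, a hypothetical 10% prevalence) combines with a test's characteristics to yield the posterior probability, using the odds form of Bayes theorem:

```python
prevalence = 0.10        # prior (pretest) probability; hypothetical
sens, spec = 0.90, 0.80  # hypothetical test sensitivity and specificity

prior_odds = prevalence / (1 - prevalence)
lr_positive = sens / (1 - spec)          # positive likelihood ratio
post_odds = prior_odds * lr_positive
posterior = post_odds / (1 + post_odds)  # posttest probability after a positive result
```

With these inputs the posterior probability is about 0.33: even a reasonably accurate test leaves substantial uncertainty when the prior probability is low.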
• probability: in clinical studies, the number of times an event occurs in a study group divided by the number of individuals being studied.40(p334)
• product-limit method: see Kaplan-Meier method.
• propensity analysis: in observational studies, a way of minimizing bias by selecting controls who have similar statistical likelihoods of having the outcome or intervention under investigation. In general, this involves examining a potentially large number of variables for their multivariate relationship with the outcome. The resulting model is then used to predict cases’ individual propensities to the outcome or intervention. Each case can then be matched to a control participant with a similar propensity. Propensity analysis is thus a way of correcting for underlying sources of bias when computing relative risk.
• proportionate mortality ratio: number of individuals who die of a particular disease during a span of time, divided by the number of individuals who die of all diseases during the same period.40(p334) This ratio may also be expressed as a rate, ie, a ratio per unit of time (eg, cardiovascular deaths per total deaths per year).
• prospective study: study in which participants with and without an exposure are identified and then followed up over time; the outcomes of interest have not occurred at the time the study commences.44(p205) Antonym is retrospective study.
• pseudorandomization: assigning of individuals to groups in a nonrandom manner, eg, selecting every other individual for an intervention or assigning participants by Social Security number or birth date.
• publication bias: tendency of articles reporting positive and/or “new” results to be submitted and published, and studies with negative or confirmatory results not to be submitted or published; especially important in meta-analysis, but also in other systematic reviews. Substantial publication bias has been demonstrated from the “file-drawer” problem.52 See funnel plot.
• purposive sample: set of observations obtained from a population in such a way that the sample distribution of independent variable values is determined by the researcher and is not necessarily representative of distribution of the values in the population.40(p334)
• P value: probability of obtaining the observed data (or data that are more extreme) if the null hypothesis were exactly true.44(p206)
→ While hypothesis testing often results in the P value, P values themselves can only provide information about whether the null hypothesis is rejected. Confidence intervals (CIs) are much more informative since they provide a plausible range of values for an unknown parameter, as well as some indication of the power of the study as indicated by the width of the CI.37(pp186-187) (For example, an odds ratio of 0.5 with a 95% CI of 0.05 to 4.5 indicates to the reader the [im]precision of the estimate, whereas P = .63 does not provide such information.) Confidence intervals are preferred whenever possible. Including both the CI and the P value provides more information than either alone.37(p187) This is especially true if the CI is used to provide an interval estimate and the P value to provide the results of hypothesis testing.
→ When any P value is expressed, it should be clear to the reader what parameters and groups were compared, what statistical test was performed, and the degrees of freedom (df) and whether the test was 1-tailed or 2-tailed (if these distinctions are relevant for the statistical test).
→ For expressing P values in manuscripts and articles, the actual value for P should be expressed to 2 digits for P ≥.01, whether or not P is significant. (When rounding a P value expressed to 3 digits would make the P value nonsignificant, such as P = .049 rounded to .05, the P value can be left as 3 digits.) If P < .01, it should be expressed to 3 digits. The actual P value should be expressed (P = .04), rather than expressing a statement of inequality (P < .05), unless P < .001. Expressing P to more than 3 significant digits does not add useful information to P < .001, since precise P values with extreme results are sensitive to biases or departures from the statistical model.37(p198)
→ P values should not be listed simply as not significant or NS, since for meta-analysis the actual values are important and not providing exact P values is a form of incomplete reporting.37(p195) Because the P value represents the result of a statistical test and not the strength of the association or the clinical importance of the result, P values should be referred to simply as statistically significant or not significant; terms such as highly significant and very highly significant should be avoided.
→ JAMA and the Archives Journals do not use a zero to the left of the decimal point, since statistically it is not possible to prove or disprove the null hypothesis completely when only a sample of the population is tested (P cannot equal 0 or 1, except by rounding). If P < .00001, P should be expressed as P < .001 as discussed. If P > .999, P should be expressed as P > .99.
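The formatting rules above can be collected into a small helper function (a hypothetical sketch; the keep-3-digits exception for values such as .049 is handled explicitly):

```python
def format_p(p):
    """Format a P value following the style rules sketched above."""
    if p < 0.001:
        return "P < .001"
    if p > 0.999:
        return "P > .99"
    if p < 0.01:
        return "P = " + f"{p:.3f}".lstrip("0")  # 3 digits, no leading zero
    rounded = round(p, 2)
    if p < 0.05 <= rounded:  # keep 3 digits rather than round .049 up to .05
        return "P = " + f"{p:.3f}".lstrip("0")
    return "P = " + f"{rounded:.2f}".lstrip("0")
```

For example, format_p(0.049) yields "P = .049" while format_p(0.043) yields "P = .04".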
• qualitative data: data that fit into discrete categories according to their attributes, such as nominal or ordinal data, as opposed to quantitative data.42(p136)
• qualitative study: form of study based on observation and interview with individuals that uses inductive reasoning and a theoretical sampling model, with emphasis on validity rather than reliability of results. Qualitative research is used traditionally in sociology, psychology, and group theory but also occasionally in clinical medicine to explore beliefs and motivations of patients and physicians.53
• quality-adjusted life-year (QALY): method used in economic analyses to reflect the existence of chronic conditions that cause impairment, disability, and loss of independence. Numerical weights representing severity of residual disability are based on assessments of disability by study participants, parents, physicians, or other researchers made as part of utility analysis.42(p136)
• quantile: method used for grouping and describing dispersion of data. Commonly used quantiles are the tertile (3 equal divisions of data into lower, middle, and upper ranges), quartile (4 equal divisions of data), quintile (5 divisions), and decile (10 divisions). Quantiles are also referred to as percentiles.38(p165)
→ Data may be expressed as median (quantile range), eg, length of stay was 7.5 days (interquartile range, 4.3-9.7 days). See also interquartile range.
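With Python's statistics module, quartiles and the interquartile range for a hypothetical set of lengths of stay might be computed as follows:

```python
import statistics

los_days = [3, 4, 4, 5, 6, 7, 8, 8, 9, 10, 12]  # hypothetical lengths of stay

q1, q2, q3 = statistics.quantiles(los_days, n=4, method="inclusive")
median = statistics.median(los_days)  # equals q2
# report as: median 7 days (interquartile range, 4.5-8.5 days)
```

Changing n (eg, n=3 for tertiles, n=10 for deciles) yields the other common quantiles.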
• quantitative data: data in numerical quantities such as continuous data or counts42(p137) (as opposed to qualitative data). Nominal and ordinal data may be treated either qualitatively or quantitatively.
• quasi-experiment: experimental design in which variables are specified and participants assigned to groups, but interventions cannot be controlled by the experimenter. One type of quasi-experiment is the natural experiment.42(p137)
• r: correlation coefficient for bivariate analysis.
• R: correlation coefficient for multivariate analysis.
• r2: coefficient of determination for bivariate analysis. See also correlation coefficient.
• R2: coefficient of determination for multivariate analysis. See also correlation coefficient.
• random-effects model: model used in meta-analysis that assumes that there is a universe of conditions and that the effects observed in the studies are only a sample, ideally a random sample, of the possible effects.34(p349) Antonym is fixed-effects model.
• randomization: method of assignment in which all individuals have the same chances of being assigned to the conditions in a study. Individuals may be randomly assigned at a 2:1 or 3:1 frequency, in addition to the usual 1:1 frequency. Participants may or may not be representative of a larger population.37(p334) Simple methods of randomization include coin flip or use of a random numbers table. See also block randomization.
• randomized controlled trial: see 20.2.1, Randomized Controlled Trials, Parallel-Design Double-blind Trials.
• random sample: method of obtaining a sample that ensures that every individual in the population has a known (but not necessarily equal, for example, in weighted sampling techniques) chance of being selected for the sample.40(p335)
• range: the highest and lowest values of a variable measured in a sample.
Example: The mean age of the participants was 45.6 years (range, 20-64 years).
• rank sum test: see Mann-Whitney test or Wilcoxon rank sum test.
• rate: measure of the occurrence of a disease or outcome per unit of time, usually expressed as a decimal if the denominator is 100 (eg, the surgical mortality rate was 0.02). See also 19.7.3, Numbers and Percentages, Forms of Numbers, Reporting Proportions and Percentages.
• ratio: fraction in which the numerator is not necessarily a subset of the denominator, unlike a proportion40(p335) (eg, the assignment ratio was 1:2:1 for each drug dose [twice as many individuals were assigned to the second group as to the first and third groups]).
• recall bias: systematic error resulting from individuals in one group being more likely than individuals in the other group to remember past events.42(p141)
→ Recall bias is especially common in case-control studies that assess risk factors for serious illness in which individuals are asked about past exposures or behaviors, such as environmental exposure in an individual who has cancer.40(p335)
• receiver operating characteristic curve (ROC curve): graphic means of assessing the extent to which a test can be used to discriminate between persons with and without disease,42(p142) and to select an appropriate cut point for defining normal vs abnormal results. The ROC curve is created by plotting sensitivity vs (1 − specificity). The area under the curve provides some measure of how well the test performs; the larger the area, the better the test. See Figure 4. The C statistic is a measure of the area under the ROC curve.
→ The appropriate cut point is a function of the test. A screening test would require high sensitivity, whereas a diagnostic or confirmatory test would require high specificity. See Table 4 and diagnostic discrimination.
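The area under the ROC curve (the C statistic) equals the probability that a randomly chosen diseased individual scores higher than a randomly chosen nondiseased one; a brute-force sketch with hypothetical scores:

```python
# Hypothetical test scores for diseased and nondiseased individuals.
diseased = [0.9, 0.8, 0.6]
nondiseased = [0.7, 0.4, 0.3]

pairs = [(d, nd) for d in diseased for nd in nondiseased]
# Count pairwise "wins" for the diseased score; ties contribute 1/2.
auc = sum(1.0 if d > nd else 0.5 if d == nd else 0.0 for d, nd in pairs) / len(pairs)
```

An AUC of 0.5 corresponds to a test no better than chance; 1.0 corresponds to perfect discrimination.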
• reference group: group of presumably disease-free individuals from which a sample of individuals is drawn and tested to establish a range of normal values for a test.40(p335)
• regression analysis: statistical techniques used to describe a dependent variable as a function of 1 or more independent variables; often used to control for confounding variables.40(p335) See also linear regression, logistic regression.
• regression line: diagrammatic presentation of a linear regression equation, with the independent variable plotted on the x-axis and the dependent variable plotted on the y-axis. As many as 3 variables may be depicted on the same graph.42(p145)
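For a single independent variable, the least-squares line underlying such a plot can be computed directly (hypothetical data); the residuals are the observed-minus-predicted discrepancies:

```python
# Hypothetical bivariate data; least-squares fit of y = a + b*x.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
     / sum((xi - mx) ** 2 for xi in x))  # slope
a = my - b * mx                          # intercept
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
```

By construction the residuals sum to zero; their SD measures the goodness of fit of the line to the data.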
• regression to the mean: the principle that extreme values are unlikely to recur. If a test that produced an extreme value is repeated, it is likely that the second result will be closer to the mean. Thus, after repeated observations results tend to “regress to the mean.” A common example is blood pressure measurement; on repeated measurements, individuals who are initially hypertensive often will have a blood pressure reading closer to the population mean than the initial measurement was.40(p335)
• relative risk (RR): probability of developing an outcome within a specified period if a risk factor is present, divided by the probability of developing the outcome in that same period if the risk factor is absent. The relative risk is applicable to randomized clinical trials and cohort studies40(p335); for case-control studies the odds ratio can be used to approximate the relative risk if the outcome is infrequent.
→ The relative risk should be accompanied by confidence intervals.
Example: The individuals with untreated mild hypertension had a relative risk of 2.4 (95% confidence interval, 1.9-3.0) for stroke or transient ischemic attack. [In this example, individuals with untreated mild hypertension were 2.4 times more likely than were individuals in the comparison group to have a stroke or transient ischemic attack.]
• relative risk reduction (RRR): proportion of the control group experiencing a given outcome minus the proportion of the treatment group experiencing the outcome, divided by the proportion of the control group experiencing the outcome.
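The risk measures can be computed from hypothetical cohort counts; note that the odds ratio approximates the relative risk only when the outcome is infrequent, which it deliberately is not here:

```python
# Hypothetical cohort: 20/100 events with treatment, 30/100 with control.
events_rx, n_rx = 20, 100
events_ctl, n_ctl = 30, 100

risk_rx, risk_ctl = events_rx / n_rx, events_ctl / n_ctl
rr = risk_rx / risk_ctl                # relative risk
rrr = (risk_ctl - risk_rx) / risk_ctl  # relative risk reduction
odds_ratio = (events_rx / (n_rx - events_rx)) / (events_ctl / (n_ctl - events_ctl))
```

Here the relative risk is 0.67 but the odds ratio is 0.58; with a 3% rather than 30% control-group risk, the two would nearly coincide.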
• reliability: ability of a test to replicate a result given the same measurement conditions, as distinguished from validity, which is the ability of a test to measure what it is intended to measure.42(p145)
• repeated measures: analysis designed to take into account the lack of independence of events when measures are repeated in each participant over time (eg, blood pressure, weight, or test scores). This type of analysis emphasizes the change measured for a participant over time, rather than the differences between participants over time.
• repeated-measures ANOVA: see analysis of variance.
• reporting bias: a bias in assessment that can occur when individuals in one group are more likely than individuals in another group to report past events. Reporting bias is especially likely to occur when different groups have different reasons to report or not report information.40(pp335-336) For example, when examining behaviors, adolescent girls may be less likely than adolescent boys to report being sexually active. See also recall bias.
• reproducibility: ability of a test to produce consistent results when repeated under the same conditions and interpreted without knowledge of the prior results obtained with the same test40(p336); same as reliability.
• residual: measure of the discrepancy between observed and predicted values. The residual SD is a measure of the goodness of fit of the regression line to the data and gives the uncertainty of estimating a point y from a point x.38(p176)
• residual confounding: in observational studies, the possibility that differences in outcome may be caused by unmeasured or unmeasurable factors.
• response rate: number of complete interviews with reporting units divided by the number of eligible units in the sample.36 See 20.7, Survey Studies.
• retrospective study: study performed after the outcomes of interest have already occurred42(p147); most commonly a case-control study, but also may be a retrospective cohort study or case series. Antonym is prospective study.
• right-censored data: see censored data.
• risk: probability that an event will occur during a specified period. Risk is equal to the number of individuals who develop the disease during the period divided by the number of disease-free persons at the beginning of the period.40(p336)
• risk factor: characteristic or factor that is associated with an increased probability of developing a condition or disease. Also called a risk marker, a risk factor does not necessarily imply a causal relationship. A modifiable risk factor is one that can be modified through an intervention42(p148) (eg, stopping smoking or treating an elevated cholesterol level, as opposed to a genetically linked characteristic for which there is no effective treatment).
• risk ratio: the ratio of 2 risks. See also relative risk.
• robustness: term used to indicate that a statistical procedure’s assumptions (most commonly, normal distribution of data) can be violated without a substantial effect on its conclusions.42(p149)
• root-mean-square: see standard deviation.
• rule of 3: method used to estimate the number of observations required to have a 95% chance of observing at least 1 episode of a serious adverse effect. For example, to observe at least 1 case of penicillin anaphylaxis that occurs in about 1 in 10 000 cases treated, 30 000 treated cases must be observed. If an adverse event occurs 1 in 15 000 times, 45 000 cases need to be treated and observed.40(p114)
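The rule follows from (1 − 1/k)^3k ≈ e⁻³ ≈ 0.05; a quick numerical check with the 1-in-10 000 example:

```python
rate = 1 / 10_000
n = 3 * 10_000                        # rule of 3: observe 3k cases for a 1-in-k event
p_at_least_one = 1 - (1 - rate) ** n  # chance of seeing >= 1 event; about 0.95
```
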
• run-in period: a period at the start of a trial when no treatment is administered (although a placebo may be administered). This can help to ensure that patients are stable and will adhere to treatment. This period may also be used to allow patients to discontinue any previous treatments, and so is sometimes also called a washout period.
• sample: subset of a larger population, selected for investigation to draw conclusions or make estimates about the larger population.52(p336)
• sampling error: error introduced by chance differences between the estimate obtained from the sample and the true value in the population from which the sample was drawn. Sampling error is inherent in the use of sampling methods and is measured by the standard error.40(p336)
• Scheffé test: see multiple comparisons procedures.
• SD: see standard deviation.
• SE: see standard error.
• SEE: see standard error of the estimate.
• selection bias: bias in assignment that occurs when the way the study and control groups are chosen causes them to differ from each other by at least 1 factor that affects the outcome of the study.40(p336)
→ A common type of selection bias occurs when individuals from the study group are drawn from one population (eg, patients seen in an emergency department or admitted to a hospital) and the control participants are drawn from another (eg, clinic patients). Regardless of the disease under study, the clinic patients will be healthier overall than the patients seen in the emergency department or hospital and will not be comparable controls. A similar example is the “healthy worker effect”: people who hold jobs are likely to have fewer health problems than those who do not, and thus comparisons between these groups may be biased.
• SEM: see standard error of the mean.
• sensitivity: proportion of individuals with the disease or condition as measured by the criterion standard who have a positive test result (true positives divided by all those with the disease).40(p336) See Table 4 and diagnostic discrimination.
• sensitivity analysis: method to determine the robustness of an assessment by examining the extent to which results are changed by differences in methods, values of variables, or assumptions40(p154); applied in decision analysis to test the robustness of the conclusion to changes in the assumptions.
• signed rank test: see Wilcoxon signed rank test.
• significance: statistically, the testing of the null hypothesis of no difference between groups. A significant result rejects the null hypothesis. Statistical significance is highly dependent on sample size and provides no information about the clinical significance of the result. Clinical significance, on the other hand, involves a judgment as to whether the risk factor or intervention studied would affect a patient’s outcome enough to make a difference for the patient. The level of clinical significance considered important is sometimes defined prospectively (often by consensus of a group of physicians) as the minimal clinically important difference, but the cutoff is arbitrary.
• sign test: a nonparametric test of significance that depends on the signs (positive or negative) of variables and not on their magnitude; used when combining the results of several studies, as in meta-analysis.42(p156) See also Cox-Stuart trend test.
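A minimal sketch of the exact 2-sided sign test, under the null hypothesis that positive and negative differences are equally likely (counts are hypothetical):

```python
import math

# Hypothetical paired data: 8 of 10 nonzero differences are positive.
n, k = 10, 8
upper_tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
p_two_sided = min(1.0, 2 * upper_tail)  # exact binomial P value
```

Here P = .11, so the null hypothesis would not be rejected despite 8 of 10 differences pointing the same way; the test uses only signs, not magnitudes.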
• skewness: the degree to which the data are asymmetric on either side of the central tendency. Data for a variable with a longer tail on the right of the distribution curve are referred to as positively skewed; data with a longer left tail are negatively skewed.44(pp238-239)
• snowball sampling: a sampling method in which survey respondents are asked to recommend other respondents who might be eligible to participate in the survey. This may be used when the researcher is not entirely familiar with demographic or cultural patterns in the population under investigation.
• Spearman rank correlation (ρ): statistical test used to determine the covariance between 2 ordinal variables.44(p243) The nonparametric equivalent to the Pearson product moment correlation, it can also be used to calculate the coefficient of determination.
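In the absence of ties, ρ reduces to the familiar d² formula applied to the ranks (hypothetical ranks shown):

```python
# Hypothetical ranks for 5 participants on 2 ordinal measures (no ties).
x_ranks = [1, 2, 3, 4, 5]
y_ranks = [2, 1, 4, 3, 5]

n = len(x_ranks)
d2 = sum((a - b) ** 2 for a, b in zip(x_ranks, y_ranks))
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
```

With ties present, ρ is instead computed as the Pearson correlation of the (mid-)ranks.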
• specificity: proportion of those without the disease or condition as measured by the criterion standard who have negative results by the test being studied40(p326) (true negatives divided by all those without the disease). See Table 4 and diagnostic discrimination.
• standard deviation (SD): commonly used descriptive measure of the spread or dispersion of data; the positive square root of the variance.40(p336) The mean ± 2 SDs represents the middle 95% of values obtained.
→ Describing data by means of SD implies that the data are normally distributed; if they are not, then the interquartile range or a similar measure involving quantiles is more appropriate to describe the data, particularly if the mean ± 2 SDs would be nonsensical (eg, mean [SD] length of stay = 9 [15] days, or mean [SD] age at evaluation = 4 [5.3] days). Note that the format mean (SD) should be used, rather than the ± construction.
• standard error (SE): positive square root of the variance of the sampling distribution of the statistic.38(p195) Thus, the SE provides an estimate of the precision with which a parameter can be estimated. There are several types of SE; the type intended should be clear.
In text and tables that provide descriptive statistics, SD rather than SE is usually appropriate; by contrast, parameter estimates (eg, regression coefficients) should be accompanied by SEs. In figures where error bars are used, the 95% confidence interval is preferred54 (see Example F10 in 4.2.1, Visual Presentation of Data, Figures, Statistical Graphs).
• standard error of the difference: measure of the dispersion of the differences between samples of 2 populations, usually the differences between the means of 2 samples; used in the t test.
• standard error of the estimate: SD of the observed values about the regression line.38(p195)
• standard error of the mean (SEM): an inferential statistic that describes the certainty with which the mean computed from a random sample estimates the true mean of the population from which the sample was drawn.39(p21) If multiple samples of a population were taken, then 95% of the samples would have means that fall within ± 2 SEMs of the mean of all the sample means. Larger sample sizes will be accompanied by smaller SEMs, because larger samples provide a more precise estimate of the population mean than do smaller samples.
→ The SEM is not interchangeable with SD. The SD generally describes the observed dispersion of data around the mean of a sample. By contrast, the SEM provides an estimate of the precision with which the true population mean can be inferred from the sample mean. The mean itself can thus be understood as either a descriptive or an inferential statistic; it is this intended interpretation that governs whether it should be accompanied by the SD or SEM. In the former case the mean simply describes the average value in the sample and should be accompanied by the SD, while in the latter it provides an estimate of the population mean and should be accompanied by the SEM. The interpretation of the mean is often clear from the text, but authors may need to be queried to discern their intent in presenting this statistic.
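The distinction is easy to see numerically (hypothetical data): the SD describes the sample's spread, while the SEM shrinks as the sample size grows:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical measurements

mean = statistics.mean(data)
sd = statistics.stdev(data)      # describes dispersion of the sample
sem = sd / math.sqrt(len(data))  # precision of the mean as a population estimate
```

Quadrupling the sample size would leave the expected SD unchanged but halve the SEM.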
• standard error of the proportion: SD of the population of all possible values of the proportion computed from samples of a given size.39(p109)
• standardization (of a rate): adjustment of a rate to account for factors such as age or sex.40(pp336-350)
• standardized mortality ratio: ratio in which the numerator contains the observed number of deaths and the denominator contains the number of deaths that would be expected in a comparison population. This ratio implies that confounding factors have been controlled for by means of indirect standardization. It is distinguished from proportionate mortality ratio, which is the mortality rate for a specific disease.40(p337)
• standard normal distribution: a normal distribution in which the raw scores have been recomputed to have a mean of 0 and an SD of 1.44(p245) Such recomputed values are referred to as z scores or standard scores. The mean, median, and mode are all equal to zero.
• standard score: see z score.38(p196)
• statistic: value calculated from sample data that is used to estimate a value or parameter in the larger population from which the sample was obtained,40(p337) as distinguished from data, which refers to the actual values obtained via direct observation (eg, measurement, chart review, patient interview).
• stochastic: type of measure that implies the presence of a random variable.38(p197)
• stopping rule: rule, based on a test statistic or other function, specified as part of the design of the trial and established before patient enrollment, that specifies a limit for the observed treatment difference for the primary outcome measure, which, if exceeded, will lead to the termination of the trial or one of the study groups.7(p258) The stopping rules are designed to ensure that a study does not continue to enroll patients after a significant treatment difference has been demonstrated that would still exist regardless of the treatment results of subsequently enrolled patients.
• stratification: division into groups. Stratification may be used to compare groups separated according to similar confounding characteristics. Stratified sampling may be used to increase the number of individuals sampled in rare categories of independent variables, or to obtain an adequate sample size to examine differences among individuals with certain characteristics of interest.29(p337)
• Student-Newman-Keuls test: see Newman-Keuls test.
• Student t test: see t test. W. S. Gossett, who originated the test, wrote under the name Student because his employment precluded individual publication.42(p166) Simply using the term t test is preferred.
• study group: in a controlled clinical trial, the group of individuals who undergo an intervention; in a cohort study, the group of individuals with the exposure or characteristic of interest; and in a case-control study, the group of cases.40(p337)
• sufficient cause: characteristic that will bring about or cause the disease.40(p337)
• supportive criteria: substantiation of the existence of a contributory cause. Potential supportive criteria include the strength and consistency of the relationship, the presence of a dose-response relationship, and biological plausibility.40(p337)
• surrogate end points: in a clinical trial, outcomes that are not of direct clinical importance but that are believed to be related to those that are. Such variables are often physiological measurements (eg, blood pressure) or biochemical (eg, cholesterol level). Such end points can usually be collected more quickly and economically than clinical end points, such as myocardial infarction or death, but their clinical relevance may be less certain.
• survival analysis: statistical procedures for estimating the survival function and for making inferences about how it is affected by treatment and prognostic factors.42(p163) See life table.
• target population: group of individuals to whom one wishes to apply or extrapolate the results of an investigation, not necessarily the population studied.40(p337) If the target population is different from the population studied, whether the study results can be extrapolated to the target population should be discussed.
• τ (tau): see Kendall τ rank correlation.
• trend, test for: see χ2 test.
• trial: controlled experiment with an uncertain outcome38(p208); used most commonly to refer to a randomized study.
• triangulation: in qualitative research, the simultaneous use of several different techniques to study the same phenomenon, thus revealing and avoiding biases that may occur if only a single method were used.
• true negative: negative test result in an individual who does not have the disease or condition as determined by the criterion standard.40(p338) See also Table 4.
• true-negative rate: number of individuals who have a negative test result and do not have the disease by the criterion standard divided by the total number of individuals who do not have the disease as determined by the criterion standard; usually expressed as a decimal (eg, the true-negative rate was 0.85). See also Table 4.
• true positive: positive test result in an individual who has the disease or condition as determined by the criterion standard.40(p338) See also Table 4.
• true-positive rate: number of individuals who have a positive test result and have the disease as determined by the criterion standard divided by the total number of individuals who have the disease as measured by the criterion standard; usually expressed as a decimal (eg, the true-positive rate was 0.92). See also Table 4.
• t test: statistical test used when the independent variable is binary and the dependent variable is continuous. Use of the t test assumes that the dependent variable has a normal distribution; if not, nonparametric statistics must be used.40(p266)
→ Usually the t test is unpaired, unless the data have been measured in the same individual over time. A paired t test is appropriate to assess the change of the parameter in the individual from baseline to final measurement; in this case, the dependent variable is the change from one measurement to the next. These changes are usually compared against 0, on the null hypothesis that there is no change from time 1 to time 2.
→ Presentation of the t statistic should include the degrees of freedom (df), whether the t test was paired or unpaired, and whether a 1-tailed or 2-tailed test was used. Since a 1-tailed test assumes that the study effect can have only 1 possible direction (ie, only beneficial or only harmful), justification for use of the 1-tailed test must be provided. (The 1-tailed test at α = .05 is similar to testing at α = .10 for a 2-tailed test and therefore is more likely to give a significant result.)
Example: The difference was significant by a 2-tailed test for paired samples (t15 = 2.78, P = .05).
→ The t test can also be used to compare different coefficients of variation.
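As a concrete illustration of the paired case described above, the t statistic can be computed directly from the within-individual changes; a minimal sketch (the before/after data are invented):

```python
import math
import statistics

# Hypothetical paired measurements on the same 5 individuals.
before = [10.0, 12.0, 11.0, 9.0, 13.0]
after = [11.0, 14.0, 14.0, 13.0, 18.0]
diffs = [a - b for a, b in zip(after, before)]  # dependent variable: the change

n = len(diffs)
df = n - 1  # degrees of freedom for a paired t test
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"t({df}) = {t:.3f}")  # → t(4) = 4.243
```

The resulting t value is then compared against the t distribution with df = n − 1 to obtain P (under the null hypothesis that the mean change is 0).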
• Tukey test: a type of multiple comparisons procedure.
• 2-tailed test: test of statistical significance in which deviations from the null hypothesis in either direction are considered.40(p338) For most outcomes, the 2-tailed test is appropriate unless there is a plausible reason why only 1 direction of effect is considered and a 1-tailed test is appropriate. Commonly used for the t test, but can also be used in other statistical tests.
• 2-way analysis of variance: see analysis of variance.
• type I error: a result in which the sample data lead to a rejection of the null hypothesis despite the fact that the null hypothesis is actually true in the population. The α level is the size of a type I error that will be permitted, usually .05.
→ A frequent cause of a type I error is performing multiple comparisons, which increase the likelihood that a significant result will be found by chance. To avoid a type I error, one of several multiple comparisons procedures can be used.
• type II error: the situation where the sample data lead to a failure to reject the null hypothesis despite the fact that the null hypothesis is actually false in the population.
→ A frequent cause of a type II error is insufficient sample size. Therefore, a power calculation should be performed when a study is planned to determine the sample size needed to avoid a type II error.
• uncensored data: continuous data reported as collected, without adjustment, as opposed to censored data.
• uniform prior: assumption that no useful information regarding the outcome of interest is available prior to the study, and thus that all individuals have an equal prior probability of the outcome. See Bayesian analysis.
• unity: synonymous with the number 1; a relative risk of 1 is a relative risk of unity, and a regression line with a slope of 1 is said to have a slope of unity.
• univariable analysis: another name for univariate analysis.
• univariate analysis: statistical tests involving only 1 dependent variable; uses measures of central tendency (mean or median) and location or dispersion. The term may also apply to an analysis in which there are no independent variables. In this case, the purpose of the analysis is to describe the sample, determine how the sample compares with the population, and determine whether chance has resulted in a skewed distribution of 1 or more of the variables in the study. If the characteristics of the sample do not reflect those of the population from which the sample was drawn, the results may not be generalizable to that population.40(pp245-246)
• unpaired analysis: method that compares 2 treatment groups when the 2 treatments are not given to the same individual. Most case-control studies also use unpaired analysis.
• unpaired t test: see t test.
• U test: see Wilcoxon rank sum test.
• utility: in decision theory and clinical decision analysis, a scale used to judge the preference of achieving a particular outcome (used in studies to quantify the value of an outcome vs the discomfort of the intervention to a patient) or the discomfort experienced by the patient with a disease.42(p170) Commonly used methods are the time trade-off and the standard gamble. The result is expressed as a single number along a continuum from death (0) to full health or absence of disease (1.0). This quality number can then be multiplied by the number of years a patient is in the health state produced by a particular treatment to obtain the quality-adjusted life-year. See also 20.5, Cost-effectiveness Analysis, Cost-Benefit Analysis.
• validity (of a measurement): degree to which a measurement is appropriate for the question being addressed or measures what it is intended to measure. For example, a test may be highly consistent and reproducible over time, but unless it is compared with a criterion standard or other validation method, the test cannot be considered valid (see also diagnostic discrimination). Construct validity refers to the extent to which the measurement corresponds to theoretical concepts. Because there are no criterion standards for constructs, construct validity is generally established by comparing the results of one method of measurement with those of other methods. Content validity is the extent to which the measurement samples the entire domain under study (eg, a measurement to assess delirium must evaluate cognition). Criterion validity is the extent to which the measurement is correlated with some quantifiable external criterion (eg, a test that predicts reaction time). Validity can be concurrent (assessed simultaneously) or predictive (eg, ability of a standardized test to predict school performance).42(p171)
→ Validity of a test is sometimes mistakenly used as a synonym of reliability; the two are distinct statistical concepts and should not be used interchangeably. Validity is related to the idea of accuracy, while reliability is related to the idea of precision.
• validity (of a study): internal validity means that the observed differences between the control and comparison groups may, apart from sampling error, be attributed to the effect under study; external validity or generalizability means that a study can produce unbiased inferences regarding the target population, beyond the participants in the study.42(p171)
• Van der Waerden test: nonparametric test that is sensitive to differences in location for 2 samples from otherwise identical populations.38(p216)
• variable: characteristic measured as part of a study. Variables may be dependent (usually the outcome of interest) or independent (characteristics of individuals that may affect the dependent variable).
• variance: variation measured in a set of data for one variable, defined as the sum of the squared deviations of each data point from the mean of the variable, divided by the df (number of observations in the sample − 1).44(p266) The SD is the square root of the variance.
• variance components analysis: process of isolating the sources of variability in the outcome variable for the purpose of analysis.
• variance ratio distribution: synonym for F distribution.42(p61)
• visual analog scale: scale used to quantify subjective factors such as pain, satisfaction, or values that individuals attach to possible outcomes. Participants are asked to indicate where their current feelings fall by marking a straight line with 1 extreme, such as “worst pain ever experienced,” at one end of the scale and the other extreme, such as “pain-free,” at the other end. The feeling (eg, degree of pain) is quantified by measuring the distance from the mark on the scale to the end of the scale.42(p268)
• washout period: see 20.2.2, Randomized Controlled Trials, Crossover Trials.
• Wilcoxon rank sum test: a nonparametric test that ranks and sums observations from combined samples and compares the result with the sum of ranks from 1 sample.38(p220) U is the statistic that results from the test. Alternative name for the Mann-Whitney test.
• Wilcoxon signed rank test: nonparametric test in which 2 treatments that have been evaluated by means of matched samples are compared. Each observation is ranked according to size and given the sign of the treatment difference (ie, positive if the treatment effect was positive and vice versa) and the ranks are summed.38(p220)
• Wilks Λ (lambda): a test used in multivariate analysis of variance (MANOVA) that tests the effect size for all the dependent variables considered simultaneously. It thus adjusts significance levels for multiple comparisons.
• x-axis: horizontal axis of a graph. By convention, the independent variable is plotted on the x-axis. Synonym is abscissa.
• Yates correction: continuity correction used to bring a distribution based on discontinuous frequencies closer to the continuous χ2 distribution from which χ2 tables are derived.42(p176)
• y-axis: vertical axis of a graph. By convention, the dependent variable is plotted on the y-axis. Synonym is ordinate.
• z-axis: third axis of a 3-dimensional graph, generally placed so that it appears to project out toward the reader. The z-axis and x-axis are both used to plot independent variables and are often used to demonstrate that the 2 independent variables each contribute independently to the dependent variable. See x-axis and y-axis.
• z score: score used to analyze continuous variables that represents the deviation of a value from the mean value, expressed as the number of SDs from the mean. The z score is frequently used to compare children’s height and weight measurements, as well as behavioral scores.42(p176) It is sometimes referred to as the standard score.
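The entry above amounts to a one-line formula, z = (value − mean) / SD; a minimal sketch with invented reference values:

```python
def z_score(value: float, mean: float, sd: float) -> float:
    """Deviation of a value from the mean, expressed in SD units."""
    return (value - mean) / sd

# Hypothetical example: a child's height of 110 cm against a reference
# population with mean 100 cm and SD 5 cm.
z = z_score(110.0, 100.0, 5.0)
print(z)  # → 2.0, i.e., two SDs above the reference mean
```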
Figure 2. Decision tree showing decision nodes (squares) and chance outcomes (circles). End branches are labeled with outcome states. The subtrees to which the decision tree refers are depicted in a separate figure for simplicity. Adapted from Mason JJ, Owens DK, Harris RA, Cooke JP, Hlatky MA. The role of coronary angiography and coronary revascularization before noncardiac vascular surgery. JAMA. 1995;273(24):1919–1925.
Table 4. Diagnostic Discrimination

| Test Result | Disease by Criterion Standard | Disease Free by Criterion Standard |
|-------------|-------------------------------|------------------------------------|
| Positive    | a (true positives)            | b (false positives)                |
| Negative    | c (false negatives)           | d (true negatives)                 |

a + c = total number of persons with disease

b + d = total number of persons without disease

Sensitivity = $\frac{a}{a+c}$

Specificity = $\frac{d}{b+d}$

Positive predictive value = $\frac{a}{a+b}$

Negative predictive value = $\frac{d}{c+d}$
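The four measures defined from Table 4 follow directly from the cell counts a, b, c, and d; a minimal sketch (the counts are invented):

```python
def diagnostic_measures(a: int, b: int, c: int, d: int) -> dict:
    """a = true positives, b = false positives,
    c = false negatives, d = true negatives (see Table 4)."""
    return {
        "sensitivity": a / (a + c),                 # true-positive rate
        "specificity": d / (b + d),                 # true-negative rate
        "positive predictive value": a / (a + b),
        "negative predictive value": d / (c + d),
    }

# Hypothetical counts from a screening study:
m = diagnostic_measures(a=90, b=20, c=10, d=80)
print(m["sensitivity"], m["specificity"])  # → 0.9 0.8
```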
Figure 3. Survival curve showing outcomes for 2 treatments groups with number at risk at each time point. While numbers at risk are not essential to include in a survival analysis figure, this presentation conveys more information than the curve alone would. Adapted from Rotman M, Pajak TF, Choi K, et al. Prophylactic extended-field irradiation of para-aortic lymph nodes in stages IIB and bulky IB and IIA cervical carcinomas: ten-year treatment results of RTOG 79–20. JAMA. 1995;274(5):387–393.
Figure 4. Receiver operating characteristic curve. The 45° line represents the point at which the test is no better than chance. The area under the curve measures the performance of the test; the larger the area under the curve, the better the test performance. Adapted from Grover SA, Coupal L, Hu X-P. Identifying adults at increased risk of coronary disease: how well do the current cholesterol guidelines work? JAMA. 1995;274(10):801–806.
|
|
# Recommended way to Implement One Time Password?
I need help with the process/commands needed to implement the below functionality:
I'm modeling an OTP System, in which the user would request an OTP and the app will send him one. Later the user will use the OTP to use some functionality or something, and the OTP needs to be validated. Constraints:
• The OTP cannot be stored in the Clear.
• Need to use Thales payShield 9000, that I have available, sending HOST COMMANDS.
This is the process:
• App A requests an OTP using a User Account
• App B generates the OTP for that User Account and sends it to App A
• App B can't store the OTP in the clear, so it must use the HSM to, in some way, store the OTP in a form that can be validated later
• App A later sends the OTP to App B, which validates it (using the HSM) and then proceeds to send the approval to App A
My apologies if this doesn't make sense; I don't have any idea what I'm doing.
I do not fully understand the scheme you are describing, especially the interactions between your apps A and B. I'll try to describe a sensible HSM usage for OTP-based authentication.
I think you have a little misconception about how a typical OTP is usually handled. There is no need to ever store an OTP; it is always calculated upon use. The typical scenario is that the user (or rather his OTP-calculating app) and the server both store a shared secret. From that secret and some additional information (like a timestamp if you use TOTP, or a counter if you use HOTP), the one-time password is calculated using the HMAC algorithm.
This HMAC calculation can be done in software (if you use a normal app on your phone, like FreeOTP) or in hardware (like a PKCS#11 token) on both sides. The server side in your case, could use your HSM which stores the user's secrets to do the HMAC calculation. For a real massive multi user setting, you will probably have to create the shared secrets on the HSM dynamically from a master secret to overcome storage restrictions.
The specific procedure for calculating your 6-digit (or whichever length you need) OTP is outlined in the RFCs for HOTP (RFC 4226) and TOTP (RFC 6238).
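The derivation is not an invertible encoding of the HMAC: RFC 4226 applies "dynamic truncation" (take 4 bytes of the digest at an offset given by its last nibble, mask the sign bit, and reduce modulo 10^digits), and verification simply recomputes the code from the shared secret rather than comparing stored HMACs. A minimal HOTP sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the 8-byte counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # last nibble selects a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Test vector from RFC 4226, Appendix D:
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Server-side validation is then `hmac.compare_digest(hotp(secret, counter), submitted_code)`; only the secret (which the HSM can hold encrypted) and the moving factor are needed.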
• Thanks a lot! So you mean when I receive what the user typed, at the server side I would have an HMAC key cryptogram (encrypted with the HSM Local Master Key), and a HMAC (that was generated with the key and the OTP), and with that I would validate What the user typed using the HSM? if that's the case, would I need a different HMAC key for every "transaction"? – Gatitopo Jul 30 '17 at 6:04
• Not exactly. You do not calculate an HMAC value from the OTP, but you derive the OTP from the HMAC. The calculation is more or less $HMAC(Secret, Timestamp) \rightarrow$ convert to decimal OTP. The HMAC key is different per user but static for each operation. The only thing that changes is the moving factor (timestamp or counter) – mat Jul 31 '17 at 7:55
• I see. But the OTP I need is 6 decimal digits, and the resulting HMAC (even if I use SHA-1) is 40 hexadecimal digits. In my HSM the minimum length I can specify for SHA-1-based HMAC is 20 hexadecimal digits (10 bytes), so how can I use it to derive the OTP? If I lose the excess data, I won't be able to verify it later (because I will need the original HMAC). – Gatitopo Aug 1 '17 at 13:58
• I updated my answer with links to the relevant RFCs. – mat Aug 2 '17 at 7:51
Go for a time based OTP
otp = HMAC(secret, timestamp)
and on the server, check a range of timestamps (the client's timestamp will not be exactly the same as the server's, so checking a range is reasonable)
This way the app doesn't have to ask for an OTP. It's basically the same approach used by some banking hardware tokens
Another possibility is to use a rolling code but you need to provide a sufficiently large window or you could end up out of sync
P.S. the first time you need to "pair" the app with the server to acquire the shared secret
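A sketch of this time-based approach (TOTP per RFC 6238, reusing the RFC 4226 truncation) with the server-side range check; the 30-second step and ±1-step window are typical defaults, not requirements:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, now: float, step: int = 30) -> str:
    return hotp(secret, int(now // step))

def verify(secret: bytes, submitted: str, now: float,
           step: int = 30, window: int = 1) -> bool:
    """Accept the code for the current time step or any of its `window`
    neighbours, since client and server clocks will not match exactly."""
    return any(hmac.compare_digest(totp(secret, now + i * step), submitted)
               for i in range(-window, window + 1))

secret = b"12345678901234567890"     # shared once, during pairing
code = totp(secret, time.time())     # what the client app would display
print(verify(secret, code, time.time()))  # → True
```

Too large a window weakens security; too small a one rejects users with skewed clocks, so the window is a deployment trade-off.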
|
|
## Differential and Integral Equations
### Weak continuity of dynamical systems for the KdV and mKdV equations
#### Abstract
In this paper we study weak continuity of the dynamical systems for the KdV equation in $H^{-3/4}(\mathbb{R})$ and the modified KdV equation in $H^{1/4}(\mathbb{R})$. This topic should have significant applications in the study of other properties of these equations such as finite time blow-up and asymptotic stability and instability of solitary waves. The spaces considered here are borderline Sobolev spaces for the corresponding equations from the viewpoint of the local well-posedness theory. We first use a variant of the method of [5] to prove weak continuity for the mKdV, and next use a similar result for an mKdV system and the generalized Miura transform to get weak continuity for the KdV equation.
#### Article information
Source
Differential Integral Equations, Volume 23, Number 11/12 (2010), 1001-1022.
Dates
First available in Project Euclid: 20 December 2012
https://projecteuclid.org/euclid.die/1356019070
Mathematical Reviews number (MathSciNet)
MR2742475
Zentralblatt MATH identifier
1240.35448
Subjects
|
|
# When can unlabeled data improve the learning rate?
COLT 2019
## Abstract
In semi-supervised classification, one is given access both to labeled and unlabeled data. As unlabeled data is typically cheaper to acquire than labeled data, this setup becomes advantageous as soon as one can exploit the unlabeled data in order to produce a better classifier than with labeled data alone. However, the conditions under which such an improvement is possible are not fully understood yet. Our analysis focuses on improvements in the *minimax* learning rate in terms of the number of labeled examples (with the number of unlabeled examples being allowed to depend on the number of labeled ones). We argue that for such improvements to be realistic and indisputable, certain specific conditions should be satisfied and previous analyses have failed to meet those conditions. We then demonstrate simple toy examples where these conditions can be met, in particular showing rate changes from $1/\sqrt{\ell}$ to $e^{-c\ell}$ and $1/\sqrt{\ell}$ to $1/\ell$. These results allow us to better understand what is and isn't possible in semi-supervised learning.
|
|
# zbMATH — the first resource for mathematics
Feasibility issues in a primal-dual interior-point method for linear programming. (English) Zbl 0726.90050
The author proposes a new method (based on the generic primal–dual algorithm) for obtaining an initial feasible interior-point solution to a linear program which avoids the use of a “big-$\mathcal{M}$”.
Reviewer: J.Rohn (Praha)
##### MSC:
90C05 Linear programming
90-08 Computational methods for problems pertaining to operations research and mathematical programming
65K05 Numerical mathematical programming methods
MINOS; symrcm
##### References:
[1] I. Adler, N. Karmarkar, M.G.C. Resende and G. Veiga, “An implementation of Karmarkar’s algorithm for linear programming,” Mathematical Programming 44 (1989) 297–336. · Zbl 0682.90061
[2] E. Barnes, “A variation on Karmarkar’s algorithm for solving linear programming problems,” Mathematical Programming 36 (1985) 174–182. · Zbl 0626.90052
[3] I.C. Choi, C.L. Monma and D.F. Shanno, “Further development of a primal–dual interior point method,” manuscript, Columbia University (New York, NY, 1988), to appear in: ORSA Journal on Computing. · Zbl 0757.90051
[4] G.B. Dantzig, Linear Programming and Extensions (Princeton University Press, Princeton, NJ, 1963).
[5] I.I. Dikin, “Iterative solution of problems of linear and quadratic programming,” Soviet Mathematics Doklady 8 (1967) 674–675. · Zbl 0189.19504
[6] D.M. Gay, “Electronic mail distribution of linear programming test problems,” Mathematical Programming Society COAL Newsletter (December, 1985).
[7] J.A. George and J.W.H. Liu, Computer Solution of Large Sparse Positive Definite Systems (Prentice-Hall, Englewood Cliffs, NJ, 1981). · Zbl 0516.65010
[8] P.E. Gill, W. Murray, M.A. Saunders, J.A. Tomlin and M.H. Wright, “On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method,” Mathematical Programming 36 (1986) 183–209. · Zbl 0624.90062
[9] P.E. Gill, W. Murray, M.A. Saunders and M.H. Wright, “A practical anti-cycling procedure for linearly constrained optimization,” Mathematical Programming 45 (1990) 437–474. · Zbl 0688.90038
[10] N. Karmarkar, “A new polynomial-time algorithm for linear programming,” Combinatorica 4 (1984) 373–395. · Zbl 0557.90065
[11] M. Kojima, S. Mizuno and A. Yoshise, “A primal–dual interior point algorithm for linear programming,” in: N. Megiddo, ed., Progress in Mathematical Programming (Springer, New York, 1988) pp. 29–48. · Zbl 0708.90049
[12] J.W.H. Liu, “The multifrontal method and paging in sparse Cholesky factorization,” Technical Report CS-87-09, Department of Computer Science, York University (Downsview, Ontario, Canada, 1987a). · Zbl 0900.65062
[13] J.W.H. Liu, “A collection of routines for an implementation of the multifrontal method,” Technical Report CS-87-10, Department of Computer Science, York University (Downsview, Ontario, Canada, 1987b).
[14] I.J. Lustig, “A generic primal–dual interior point algorithm,” Technical Report SOR 88-3, Program in Statistics and Operations Research, Department of Civil Engineering and Operations Research, School of Engineering and Applied Science, Princeton University (Princeton, NJ, 1988).
[15] I.J. Lustig, “An analysis of an available set of linear programming test problems,” Computers and Operations Research 16 (1989) 173–184. · Zbl 0661.90056
[16] K.A. McShane, C.L. Monma and D.F. Shanno, “An implementation of a primal–dual interior point method for linear programming,” ORSA Journal on Computing 1 (1989) 70–83. · Zbl 0752.90047
[17] N. Megiddo, “Pathways to the optimal set in linear programming,” in: N. Megiddo, ed., Progress in Mathematical Programming (Springer, New York, 1988) pp. 131–158.
[18] C.L. Monma and A.J. Morton, “Computational experience with a dual affine variant of Karmarkar’s method for linear programming,” Operations Research Letters 6 (1987) 261–267. · Zbl 0627.90065
[19] R.C. Monteiro and I. Adler, “Interior path following primal–dual algorithms – Part I: linear programming,” Mathematical Programming 44 (1989) 27–42. · Zbl 0676.90038
[20] B.A. Murtagh and M.A. Saunders, MINOS 5.1 user’s guide, Technical Report SOL 83-20R, Department of Operations Research, Stanford University (Stanford, CA, 1987).
[21] R.J. Vanderbei, M.S. Meketon and B.A. Freedman, “A modification of Karmarkar’s linear programming algorithm,” Algorithmica 1 (1986) 395–408. · Zbl 0626.90056
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
|
Are quantum-entangled particles affected by relativistic speeds? [duplicate]
In my answer to a recent question on World Building, I suggested that quantum-entangled particles would be a good way for ships traveling at relativistic speeds to communicate. My understanding is that quantum-entanglement allows for instantaneous action at a distance.
Is this accurate? Do quantum-entangled particles, when flipped, effect an immediate/instantaneous flip on the other side? Do relativistic speeds affect quantum-entangled particles in their speed of "communication?"
• Quantum entanglement is the product of a local theory, and hence will not permit information to be transmitted faster than light. This is a duplicate of, e.g., physics.stackexchange.com/q/78118/50583 Dec 11, 2014 at 23:15
• Quantum entanglement is possibly the most extreme example of "correlation is not causation" that we know about. It produces pure correlation without any causal relationship, whatsoever. As such it plays very nicely into our sense for paradoxes, which is basically another way of saying that the human mind is very stubborn to accept facts that contradict "obvious" but fundamentally false lines of intuitive reasoning. Dec 11, 2014 at 23:35
1 Answer
First of all I recommend you to read the answer of Hypnosifl to the question "Entangled photons never show interference in the total pattern without coincidence count" implies FTL.
Next, what flip do you want to transmit? Flipping the spin of a particle is a local, unitary operation, not even a measurement. You pass your (spin-endowed) particle through a spin-flipper, or, if it is a photon, you can rotate its polarization if it is linear. It's a local, completely controllable operation, with no implication for the other side. By contrast, if you introduce a certain decoherence between the states of one particle, that influences the entanglement as a whole; but flipping is not of this type.
To be clear, here is an example of introducing the decoherence that I am talking about: consider the well-known maximally entangled state
$$\frac{\lvert x \rangle \lvert x \rangle + \lvert y \rangle \lvert y \rangle}{\sqrt{2}}$$
where the first vector in a product refers to a particle in the lab of one experimenter, and the second vector to a particle in the lab of the other experimenter. Let's name the two experimenters A and B, respectively. If A measures the property with eigenstates $\lvert x \rangle$ and $\lvert y \rangle$, and finds, say, x, that decoheres the whole wave-function. The topic of decoherence is widely covered in this forum - see related topics. But experimenter B will get no information about what A did.
|
|
# Shortest common supersequence
## Problem description
Given an alphabet and a set of sequences composed of characters of that alphabet, the goal is to find a supersequence S, i.e. a sequence containing all of the original sequences, of a minimum length. Sequence r is contained in S if and only if all characters of r are present in S in the same order as they appear in r.
Input:
• Alphabet A, of which the sequences are composed.
• N sequences (not necessarily of the same length).
Output: Supersequence S of the minimum length.
## Possible representations
• Linear string of characters of the given Alphabet.
```
s1: ca ag cca cc ta cat c a
s2: c gag ccat ccgtaaa g tt g
s3: aga acc tgc taaatgc t a ga
----------------------------
Supersequence S: cagagaccatgccgtaaatgcattacga
```
## Uniform evaluation function
Supersequence quality is computed from two quantities: the total number of characters of the original sequences that the supersequence S covers, and a contribution for the length of S, which in turn depends on the total number of all characters in the original sequences.
The function is to be maximized.
If solved as a pure black-box optimization problem, one could use information about partial sequences only in the evaluation function. Here, it is allowed to use or analyze partial sequences elsewhere as well, e.g., in recombination operators.
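For the special case of N = 2 sequences, the problem is solvable exactly by dynamic programming in O(|s1|·|s2|) time (the general case with many sequences is NP-hard, which is why heuristic and evolutionary approaches are used); a minimal sketch:

```python
def scs(a: str, b: str) -> str:
    """Shortest common supersequence of two strings via dynamic programming."""
    m, n = len(a), len(b)
    # dp[i][j] = length of an SCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 or j == 0:
                dp[i][j] = i + j
            elif a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to reconstruct one optimal supersequence.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] < dp[i][j - 1]:
            out.append(a[i - 1]); i -= 1
        else:
            out.append(b[j - 1]); j -= 1
    out.extend(reversed(a[:i]))
    out.extend(reversed(b[:j]))
    return "".join(reversed(out))

print(scs("abac", "cab"))  # a 5-character supersequence of both inputs
```

A naive multi-sequence heuristic can repeatedly merge pairs this way, though the merged result is no longer guaranteed to be of minimum length.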
|
|
# Properties
Label: 441.6.a.y
Level: $441$
Weight: $6$
Character orbit: 441.a
Self dual: yes
Analytic conductor: $70.729$
Analytic rank: $0$
Dimension: $4$
CM: no
Inner twists: $4$
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$441 = 3^{2} \cdot 7^{2}$$
Weight: $$k$$ $$=$$ $$6$$
Character orbit: $$[\chi]$$ $$=$$ 441.a (trivial)
## Newform invariants
Self dual: yes
Analytic conductor: $$70.7292645375$$
Analytic rank: $$0$$
Dimension: $$4$$
Coefficient field: $$\Q(\sqrt{19}, \sqrt{69})$$
Defining polynomial: $$x^{4} - 2x^{3} - 71x^{2} + 72x - 15$$
Coefficient ring: $$\Z[a_1, \ldots, a_{5}]$$
Coefficient ring index: $$2^{8}\cdot 3^{2}$$
Twist minimal: yes
Fricke sign: $$-1$$
Sato-Tate group: $\mathrm{SU}(2)$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + \beta_1 q^{2} + 44 q^{4} + \beta_{2} q^{5} + 12 \beta_1 q^{8}+O(q^{10})$$
$$q + \beta_1 q^{2} + 44 q^{4} + \beta_{2} q^{5} + 12 \beta_1 q^{8} - \beta_{3} q^{10} + 43 \beta_1 q^{11} - \beta_{3} q^{13} - 496 q^{16} - 11 \beta_{2} q^{17} + \beta_{3} q^{19} + 44 \beta_{2} q^{20} + 3268 q^{22} + 341 \beta_1 q^{23} + 6811 q^{25} + 76 \beta_{2} q^{26} + 518 \beta_1 q^{29} + 11 \beta_{3} q^{31} - 880 \beta_1 q^{32} + 11 \beta_{3} q^{34} - 5466 q^{37} - 76 \beta_{2} q^{38} - 12 \beta_{3} q^{40} + 99 \beta_{2} q^{41} + 12540 q^{43} + 1892 \beta_1 q^{44} + 25916 q^{46} - 100 \beta_{2} q^{47} + 6811 \beta_1 q^{50} - 44 \beta_{3} q^{52} - 1738 \beta_1 q^{53} - 43 \beta_{3} q^{55} + 39368 q^{58} + 428 \beta_{2} q^{59} + 55 \beta_{3} q^{61} - 836 \beta_{2} q^{62} - 51008 q^{64} + 9936 \beta_1 q^{65} - 29996 q^{67} - 484 \beta_{2} q^{68} + 7051 \beta_1 q^{71} + 56 \beta_{3} q^{73} - 5466 \beta_1 q^{74} + 44 \beta_{3} q^{76} + 80168 q^{79} - 496 \beta_{2} q^{80} - 99 \beta_{3} q^{82} + 308 \beta_{2} q^{83} - 109296 q^{85} + 12540 \beta_1 q^{86} + 39216 q^{88} - 209 \beta_{2} q^{89} + 15004 \beta_1 q^{92} + 100 \beta_{3} q^{94} - 9936 \beta_1 q^{95} + 154 \beta_{3} q^{97}+O(q^{100})$$
$$\operatorname{Tr}(f)(q)$$ $$=$$ $$4 q + 176 q^{4}+O(q^{10})$$
$$4 q + 176 q^{4} - 1984 q^{16} + 13072 q^{22} + 27244 q^{25} - 21864 q^{37} + 50160 q^{43} + 103664 q^{46} + 157472 q^{58} - 204032 q^{64} - 119984 q^{67} + 320672 q^{79} - 437184 q^{85} + 156864 q^{88}+O(q^{100})$$
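Since the four embeddings of each $\beta_i$ come in $\pm$ pairs (the inner twists), each $\beta_i$ has trace zero, so every coefficient of the trace form is just $4$ times the rational part of the corresponding $a_n$. A quick consistency check in Python, with coefficients transcribed from the expansions above:

```python
# Rational parts of a_n read off the q-expansion of f (the beta terms trace to 0)
rational_an = {1: 1, 4: 44, 16: -496, 22: 3268, 25: 6811, 37: -5466,
               43: 12540, 46: 25916, 58: 39368, 64: -51008, 67: -29996,
               79: 80168, 85: -109296, 88: 39216}

# Nonzero coefficients of the trace form Tr(f)(q), transcribed from above
trace_form = {1: 4, 4: 176, 16: -1984, 22: 13072, 25: 27244, 37: -21864,
              43: 50160, 46: 103664, 58: 157472, 64: -204032, 67: -119984,
              79: 320672, 85: -437184, 88: 156864}

# Dimension-4 newform: trace coefficient = 4 * rational part of a_n
for n, tr in trace_form.items():
    assert tr == 4 * rational_an[n], n
```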
Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} - 2x^{3} - 71x^{2} + 72x - 15$$ :
$$\beta_{1}$$ $$=$$ $$( 4\nu^{3} - 6\nu^{2} - 294\nu + 148 ) / 7$$
$$\beta_{2}$$ $$=$$ $$( 48\nu^{3} - 72\nu^{2} - 3360\nu + 1692 ) / 7$$
$$\beta_{3}$$ $$=$$ $$24\nu^{2} - 24\nu - 864$$
$$\nu$$ $$=$$ $$( \beta_{2} - 12\beta _1 + 12 ) / 24$$
$$\nu^{2}$$ $$=$$ $$( \beta_{3} + \beta_{2} - 12\beta _1 + 876 ) / 24$$
$$\nu^{3}$$ $$=$$ $$( \beta_{3} + 50\beta_{2} - 572\beta _1 + 872 ) / 16$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1 0.705587 −8.71780 0 44.0000 −99.6795 0 0 −104.614 0 868.986
1.2 9.01221 −8.71780 0 44.0000 99.6795 0 0 −104.614 0 −868.986
1.3 −8.01221 8.71780 0 44.0000 −99.6795 0 0 104.614 0 −868.986
1.4 0.294413 8.71780 0 44.0000 99.6795 0 0 104.614 0 868.986
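As a sanity check of my own (numpy, not part of the LMFDB page), the embedded $a_2$ and $a_5$ values can be reproduced from the defining polynomial and the basis expressions for $\beta_1$ and $\beta_2$:

```python
import numpy as np

# Real roots nu of the defining polynomial x^4 - 2x^3 - 71x^2 + 72x - 15
nu = np.roots([1, -2, -71, 72, -15]).real

# Basis expressions from the table above: a_2 = beta_1 and a_5 = beta_2
a2 = (4 * nu**3 - 6 * nu**2 - 294 * nu + 148) / 7
a5 = (48 * nu**3 - 72 * nu**2 - 3360 * nu + 1692) / 7

# Each embedding gives a_2 = ±sqrt(76) ≈ ±8.71780 and a_5 = ±sqrt(9936) ≈ ±99.6795,
# consistent with the Hecke characteristic polynomials (T^2 - 76)^2 and (T^2 - 9936)^2
assert np.allclose(np.abs(a2), np.sqrt(76))
assert np.allclose(np.abs(a5), np.sqrt(9936))
```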
## Atkin-Lehner signs
$$p$$ Sign
$$3$$ $$1$$
$$7$$ $$-1$$
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
3.b odd 2 1 inner
7.b odd 2 1 inner
21.c even 2 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 441.6.a.y 4
3.b odd 2 1 inner 441.6.a.y 4
7.b odd 2 1 inner 441.6.a.y 4
21.c even 2 1 inner 441.6.a.y 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
441.6.a.y 4 1.a even 1 1 trivial
441.6.a.y 4 3.b odd 2 1 inner
441.6.a.y 4 7.b odd 2 1 inner
441.6.a.y 4 21.c even 2 1 inner
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{6}^{\mathrm{new}}(\Gamma_0(441))$$:
$$T_{2}^{2} - 76$$, $$T_{5}^{2} - 9936$$, $$T_{13}^{2} - 755136$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$(T^{2} - 76)^{2}$$
$3$ $$T^{4}$$
$5$ $$(T^{2} - 9936)^{2}$$
$7$ $$T^{4}$$
$11$ $$(T^{2} - 140524)^{2}$$
$13$ $$(T^{2} - 755136)^{2}$$
$17$ $$(T^{2} - 1202256)^{2}$$
$19$ $$(T^{2} - 755136)^{2}$$
$23$ $$(T^{2} - 8837356)^{2}$$
$29$ $$(T^{2} - 20392624)^{2}$$
$31$ $$(T^{2} - 91371456)^{2}$$
$37$ $$(T + 5466)^{4}$$
$41$ $$(T^{2} - 97382736)^{2}$$
$43$ $$(T - 12540)^{4}$$
$47$ $$(T^{2} - 99360000)^{2}$$
$53$ $$(T^{2} - 229568944)^{2}$$
$59$ $$(T^{2} - 1820116224)^{2}$$
$61$ $$(T^{2} - 2284286400)^{2}$$
$67$ $$(T + 29996)^{4}$$
$71$ $$(T^{2} - 3778461676)^{2}$$
$73$ $$(T^{2} - 2368106496)^{2}$$
$79$ $$(T - 80168)^{4}$$
$83$ $$(T^{2} - 942568704)^{2}$$
$89$ $$(T^{2} - 434014416)^{2}$$
$97$ $$(T^{2} - 17908805376)^{2}$$
|
|
Volume 345 - International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions (HardProbes2018) - Electroweak Probes
PoS(HardProbes2018)179: Evidence for light-by-light scattering and limits on axion-like particles from ultraperipheral PbPb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
J. Niedziela* on behalf of the CMS collaboration
*corresponding author
Full text: pdf
Pre-published on: January 11, 2019
Published on: April 24, 2019
Abstract
A measurement of light-by-light scattering, $\gamma\gamma \rightarrow \gamma\gamma$, in ultraperipheral PbPb collisions at a centre-of-mass energy per nucleon pair of $5.02$ TeV is reported. The analysis is conducted using a data sample corresponding to an integrated luminosity of $390~\mu \rm{b}^{-1}$ recorded by the CMS experiment at the LHC. Light-by-light scattering processes are selected in events with two exclusively produced photons, each with transverse energy $E_T > 2$ GeV, pseudorapidity $|\eta| < 2.4$, diphoton invariant mass $m_{\gamma\gamma} > 5$ GeV, diphoton transverse momentum $p_T < 1$ GeV, and diphoton acoplanarity below $0.01$. After all selection criteria are applied, $14$ events are observed, compared to expectations of $11.1 \pm 1.1$ (th) events for the signal and $4.0 \pm 1.2$ (stat) for the background processes. The significance of the light-by-light signal hypothesis against the background-only hypothesis is 4.1 standard deviations. The measured fiducial light-by-light scattering cross section, $\sigma_{\rm fid} (\gamma\gamma \rightarrow \gamma\gamma ) = 120 \pm 46~\rm{(stat)} \pm 29~\rm{(syst)} \pm 4~\rm{(th)}$ nb, is consistent with the Standard Model prediction. The present results also allow new competitive limits, reported for the first time, to be placed on the production of pseudoscalar axion-like particles in the process $\gamma\gamma \rightarrow a \rightarrow \gamma\gamma$, over the mass range $m_a = 5-50$ GeV.
DOI: https://doi.org/10.22323/1.345.0179
Open Access
|
|
D3D10 + Deferred Shading = MSAA!
For the last few days I've been working at learning D3D10 and using it to whip up a quick prototype of doing fully-deferred shading while using MSAA (multisample antialiasing). If you're not sure what deferred shading is, but are curious, check out the deferred shading paper at NVIDIA's developer site. Here are my notes on the experience.
On D3D10
D3D10 is very, very well-designed. It has some interesting new bits of functionality, but the way that the API has been rearranged is really quite nice, though it does tend to produce MUCH more verbose code.
One thing in particular that is very different is the buffer structure. Rather than creating a buffer of a specific type (IDirect3DVertexBuffer9, etc.), you simply create a generic buffer (ID3D10Buffer). When you create a buffer, you specify the flags with which it can be bound (as a vertex buffer, index buffer, render target, constant buffer [another new feature], etc.).
For instance, here's my helper function to create a vertex buffer:
HRESULT CreateVertexBuffer(void *initialData, DWORD size, ID3D10Buffer** vb)
{
    D3D10_BUFFER_DESC bd;
    bd.Usage = D3D10_USAGE_IMMUTABLE; // The buffer will be filled with data on initialization and never updated again.
    bd.ByteWidth = size;
    bd.BindFlags = D3D10_BIND_VERTEX_BUFFER; // It will be used as a vertex buffer.
    bd.CPUAccessFlags = 0;
    bd.MiscFlags = 0;

    // Pass the pointer to the vertex data into the creation.
    D3D10_SUBRESOURCE_DATA vInit;
    vInit.pSysMem = initialData;

    return g_device->CreateBuffer(&bd, &vInit, vb);
}
It's pretty straightforward, but you can see that it's a tad more verbose than a single-line call to IDirect3DDevice9::CreateVertexBuffer.
Another thing that I really like is the whole constant buffer idea. Basically, when passing state to shaders, rather than setting individual shader constants, you build constant buffers, which you apply to constant buffer slots (15 buffer slots that can hold 4096 constants each - which adds up to a crapton of constants). So you can have different constant blocks that you Map/Unmap (the D3D10 version of Lock/Unlock) to write data into, and you can update them based on frequency of change. For instance, I plan to have cblocks that are per-world, per-frame, per-material, and per-object.
But the feature that's most relevant to this project is this little gem:
You can read individual samples from a multisample render target.
This is what allows you to do deferred shading with true multisample anti-aliasing in D3D10.
The only thing that really, really sucks about D3D10 is the documentation. It is missing a lot of critical information, some of the function definitions are wrong, sample code has incorrect variable names, etc, etc. It's good at giving a decent overview, but when you start to drill into specifics, there's still a lot of work to be done.
SV_Position: What Is It Good For (Absolutely Lots!)
SV_Position is the D3D10 equivalent of the POSITION semantic: it's what you write out of your vertex shader to set the vertex position.
However, you can also use it in a pixel shader. But what set of values does it contain when it reaches the pixel shader? The documentation was (unsurprisingly) not helpful in determining this.
Quite simply, it gives you viewport coordinates. That is, x and y give you the absolute coordinates of the current texel you're rendering in the framebuffer (if your framebuffer is 640x480, then an SV_Position.xy in the middle would be (320, 240)).
The Z coordinate is a viewport Z coordinate (if your viewport's MinZ is 0.5 and your MaxZ is 1, then this z coordinate will be confined to that range as well).
The W coordinate I'm less sure about - it seemed to be the (interpolated) w value from the vertex shader, but I'm not positive on that.
I thought this viewport-coordinate thing was a tad odd...I mean, who cares which absolute pixel you're at on the view? Why not just give me a [0..1] range? As it turns out, when sampling multisample buffers, you actually DO care, because you don't "sample" them. You "load" them.
Doing a texture Load does not work quite like doing a texture Sample. Load takes integer coordinates that correspond to the absolute pixel value to read. Load is also the only way to grab a specific sample out of the pack.
But, in conjunction with our delicious SV_Position absolute-in-the-render-target coordinates, you have exactly the right information!
Pulling a given sample out of the depth texture is as easy as:
int sample; // This contains the index of the sample to load. If this is a 4xAA texture, then sample is in the range [0, 3].
VertexInput i; // i.position is the input SV_Position. It contains the absolute pixel coordinates of the current render.

// This is the depth texture - it's a 2D multisample texture, defined as
// having a single float, and having NUMSAMPLES samples.
Texture2DMS<float, NUMSAMPLES> depthTexture;

// Here's the actual line of sampling code
float depth = depthTexture.Load(int3((int2)i.position.xy, 0), sample).x;
Simple! I do believe it is for exactly this type of scenario (using Load to do postprocess work) that SV_Position in the PS was designed the way it is. Another mystery of the universe solved. Next on the list: "What makes creaking doors so creepy?"
Workin' It
Simply running the deferred algorithm for each sample in the deferred GBuffers' current texel and averaging them together works just fine. That gets you the effect with a minimum of hassle. But I felt that it could be optimized a bit.
The three deferred render targets that get used in this demo are the unlit diffuse color buffer (standard A8R8G8B8), the depth render (R32F), and the normal map buffer (A2R10G10B10). The depth render is not necessary in D3D10 when there is no multisampling, because you can read from a non-ms depth buffer in D3D10. However, you can't map a multisampled depth buffer as a texture, so I have to render depth on my own.
Anyway, I wanted a flag that denotes whether a given location's samples differ. That is, if it's along a poly edge, the samples are probably different. But, due to the nature of multisampling, if a texel is entirely within a polygon's border, all of the samples will contain the same data. There is really no need to do deferred lighting calculations on MULTIPLE samples when one would do just fine. So I added a pass that runs through each pixel and tests the color and depth samples for differences. If there ARE differences, it writes a 1 to the previously-useless 2-bit alpha channel in the normal map buffer. Otherwise, it writes a 0.
What this does is let me selectively decide whether to do the processing on multiple samples (normalMap.w == 1) or just a single one (normalMap.w == 0).
Here is a visualization:
I've tinted it red where extra work is done (the shading is done per-sample) and blue where shading is only done once.
This didn't have the massive performance boost that I was expecting - I figured having a single pass through almost all samples then only loading them as-needed would save massive amounts of texture bandwidth during the lighting phase, as well as cutting down on the processing.
I was half-right.
In fact, the performance boost was much smaller than expected. The reason, I'd guess, is that when caching the multisample texture, the hardware caches all of the samples (because it's likely that they'll all be read under normal circumstances), so it really doesn't cut down on the memory bandwidth at all. What it DOES cut down on is the processing which, as the lighting gets more complex (shadowing is added, etc.), WILL become important. Also, since my shader is set up to do up to 8 lights in a single pass, it renders 25 full-scene directional lights (in 4 passes) at about 70fps with 4xAA at 1280x964 (maximized window, so not the whole screen) on my 8800GTX. As a comparison, it's about 160 fps without the AA.
With a more reasonable 4 lights (single-pass) it's 160fps at that resolution with AA, and 550 without. Not bad at all!
Here are two screenshots, one with AA, one without (respectively). Note that they look exactly the same in thumbnails. I could have probably used the same thumbnail for them, but whatever :)
And here it is!
Crappy D3D10 Deferred-with-AA demo (with hideous, hideous source!)
Pressing F5 will toggle the AA on and off (it just uses 4xAA). It defaults to off.
|
|
You can divide by $2$ and solve for $w$. So the set of all solutions is parametrized by $(x,y,z)\in\mathbb{Z}^3$ arbitrary.
|
|
WIKISKY.ORG
# θ And (Theta Andromedae)
### Related articles
Unconstrained Astrometric Orbits for Hipparcos Stars with Stochastic Solutions
A considerable number of astrometric binaries whose positions on the sky do not obey the standard model of mean position, parallax, and linear proper motion were observed by the Hipparcos satellite. Some of them remain undiscovered, and their observational data have not been properly processed with the more adequate astrometric model that includes nonlinear orbital motion. We develop an automated algorithm, based on "genetic optimization", to solve the orbital fitting problem in the most difficult setup, when no prior information about the orbital elements is available (from, e.g., spectroscopic data or radial velocity monitoring). We also offer a technique to accurately compute the probability that an orbital fit is bogus, that is, that an orbital solution is obtained for a single star, and to estimate the probability distributions for the fitting orbital parameters. We test this method on Hipparcos stars with known orbital solutions in the catalog and further apply it to 1561 stars with stochastic solutions, which may be unresolved binaries. At a confidence level of 99%, orbital fits are obtained for 65 stars, most of which have not been known as binary. It is found that reliable astrometric fits can be obtained even if the period is somewhat longer than the time span of the Hipparcos mission, that is, if the orbit is not closed. A few of the new probable binaries with A-type primaries with periods 444-2015 days are chemically peculiar stars, including Ap and λ Bootis types. The anomalous spectra of these stars are explained by admixtures of light from the unresolved, sufficiently bright and massive companions. We estimate the apparent orbits of four stars that have been identified as members of the ~300 Myr old Ursa Major kinematic group. Another four new nearby binaries may include low-mass M-type or brown dwarf companions.
Follow-up spectroscopic observations in conjunction with more accurate inclination estimates will lead to better estimates of the secondary mass. Similar astrometric models and algorithms can be used for binary stars and planet hosts observed by SIM and Gaia.
The physical properties of normal A stars
Designating a star as of A-type is a result of spectral classification. After separating the peculiar stars from those deemed to be normal using the results of a century of stellar astrophysical wisdom, I define the physical properties of the "normal" stars. The hotter A stars have atmospheres almost in radiative equilibrium. In the A stars convective motions can be found which increase in strength as the temperature decreases.
Magnetospheres and Disk Accretion in Herbig Ae/Be Stars
We present evidence of magnetically mediated disk accretion in Herbig Ae/Be stars. Magnetospheric accretion models of Balmer and sodium profiles calculated with appropriate stellar and rotational parameters are in qualitative agreement with the observed profiles of the Herbig Ae star UX Ori and yield a mass accretion rate of ~10^-8 M_solar yr^-1. If more recent indications of an extremely large rotation rate for this object are correct, the magnetic field geometry must deviate from that of a standard dipole in order to produce line emission consistent with observed flux levels. Models of the associated accretion shock qualitatively explain the observed distribution of excess fluxes in the Balmer discontinuity for a large ensemble of Herbig Ae/Be stars and imply typically small mass accretion rates, <~10^-7 M_solar yr^-1. In order for accretion to proceed onto the star, significant amounts of gas must exist inside the dust destruction radius, which is potentially problematic for recently advocated scenarios of "puffed" inner dust wall geometries.
However, our models of the inner gas disk show that for the typical accretion rates we have derived, the gas should generally be optically thin, thus allowing direct stellar irradiation of the inner dust edge of the disk.
Elemental abundance analyses with DAO spectrograms. XXVII. The superficially normal stars theta And (A2 IV), epsilon Del (B6 III), epsilon Aqr (A1.5 V), and iota And (B9 V)
The superficially normal stars theta And (A2 V), epsilon Del (B6 III), epsilon Aqr (A1.5 V), and iota And (B9 V), which have rotationally broadened line profiles, are analyzed in a manner consistent with previous studies of this series using 2.4 Å mm^-1 spectrograms obtained with CCD detectors and S/N >= 200. Their variable radial velocities strongly suggest they are spectroscopic binaries. As no evidence is seen for lines of their companions they are analyzed as single stars. Their derived abundances are generally near solar. But those for theta And suggest that it is possibly a fast rotating Am star. Table 5 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/406/975
Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin i
This work is the second part of the set of measurements of v sin i for A-type stars, begun by Royer et al. (2002a). Spectra of 249 B8 to F2-type stars brighter than V=7 have been collected at Observatoire de Haute-Provence (OHP). Fourier transforms of several line profiles in the range 4200-4600 Å are used to derive v sin i from the frequency of the first zero. Statistical analysis of the sample indicates that measurement error mainly depends on v sin i and this relative error of the rotational velocity is found to be about 5% on average. The systematic shift with respect to standard values from Slettebak et al. (1975), previously found in the first paper, is here confirmed.
Comparisons with data from the literature agree with our findings: v sin i values from Slettebak et al. are underestimated and the relation between both scales follows the linear law v sin i_new = 1.03 v sin i_old + 7.7. Finally, these data are combined with those from the previous paper (Royer et al. 2002a), together with the catalogue of Abt & Morrell (1995). The resulting sample includes some 2150 stars with homogenized rotational velocities. Based on observations made at Observatoire de Haute-Provence (CNRS), France. Tables \ref{results} and \ref{merging} are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897
On the effective temperatures and surface gravities of superficially normal main sequence band B and A stars
Effective temperatures and surface gravities for 48 main sequence band B and A stars were found by matching optical region spectrophotometry and Hγ profiles with the predictions of ATLAS9 solar composition model atmospheres. When these values were compared with those found using Strömgren uvbybeta photometry based on ATLAS6 model atmospheres, we found a difference (photometry - spectrophotometry) of 25 +/- 118 K for 29 stars with 8000 K <= Teff <= 10 050 K compared to 76 +/- 105 K for 14 stars with 10 050 K <= Teff <= 17 000 K. The surface gravity scales are in agreement. These stars are sufficiently hot that their effective temperature and surface gravity determinations are unaffected by discrepancies due to the choice of Mixing-Length or Canuto-Mazzitelli convection theories.
CO 1st overtone spectra of cool evolved stars: Diagnostics for hydrodynamic atmosphere models
We present spectra covering the wavelength range 2.28 to 2.36 μm at a resolution of Δλ = 0.0007 μm (or R = 3500) for a sample of 24 cool evolved stars. The sample comprises 8 M supergiants, 5 M giants, 3 S stars, 6 carbon stars, and 2 RV Tauri variables.
The wavelengths covered include the main parts of the ^12C^16O v = 2-0 and 3-1 overtone bands, as well as the v = 4-2 and ^13CO v = 2-0 bandhead regions. CO lines dominate the spectrum for all the stars observed, and at this resolution most of the observed features can be identified with individual CO R- or P-branch lines or blends. The observed transitions arise from a wide range of energy levels extending from the ground state to E/k > 20 000 K. We looked for correlations between the intensities of various CO absorption line features and other stellar properties, including IR colors and mass loss rates. Two useful CO line features are the v = 2-0 R14 line and the CO v = 2-0 bandhead. The intensity of the 2-0 bandhead shows a trend with K-[12] color such that the reddest stars (K-[12] > 3 mag) exhibit a wide range in 2-0 bandhead depth, while the least reddened have the deepest 2-0 bandheads, with a small range of variation from star to star. Gas mass loss rates for both the AGB stars and the red supergiants in our sample correlate with the K-[12] color, consistent with other studies. The data imply that stars with Mdot_gas < 5 x 10^-7 M_sun yr^-1 exhibit a much narrower range in the relative strengths of CO 2-0 band features than stars with higher mass loss rates. The range in observed spectral properties implies that there are significant differences in atmospheric structure among the stars in this sample. Figures 4-9, 11-14, 16, 17, 19-21, 23, 24 are only available in electronic form at http://www.edpsciences.org
Speckle Interferometry of New and Problem Hipparcos Binaries. II. Observations Obtained in 1998-1999 from McDonald Observatory
The Hipparcos satellite made measurements of over 9734 known double stars, 3406 new double stars, and 11,687 unresolved but possible double stars. The high angular resolution afforded by speckle interferometry makes it an efficient means to confirm these systems, first discovered from space, from the ground.
Because of its coverage of a different region of angular separation - magnitude difference (ρ-Δm) space, speckle interferometry also holds promise to ascertain the duplicity of the unresolved Hipparcos "problem" stars. Presented are observations of 116 new Hipparcos double stars and 469 Hipparcos "problem stars," as well as 238 measures of other double stars and 246 other high-quality nondetections. Included in these are observations of double stars listed in the Tycho-2 Catalogue and possible grid stars for the Space Interferometry Mission.
Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statistics
The Catalogue, available at the Centre de Données Stellaires de Strasbourg, consists of 13 573 records concerning the results obtained from different methods for 7778 stars, reported in the literature. The following data are listed for each star: identifications, apparent magnitude, spectral type, apparent diameter in arcsec, absolute radius in solar units, method of determination, reference, remarks. Comments and statistics obtained from CADARS are given. The Catalogue is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521
Research Note: Hipparcos photometry: The least variable stars
The data known as the Hipparcos Photometry obtained with the Hipparcos satellite have been investigated to find those stars which are least variable. Such stars are excellent candidates to serve as standards for photometric systems. Their spectral types suggest in which parts of the HR diagram stars are most constant. In some cases these values strongly indicate that previous ground-based studies claiming photometric variability are incorrect or that the level of stellar activity has changed.
Table 2 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/367/297
ICCD Speckle Observations of Binary Stars. XXIII. Measurements during 1982-1997 from Six Telescopes, with 14 New Orbits
We present 2017 observations of 1286 binary stars, observed by means of speckle interferometry using six telescopes over a 15 year period from 1982 April to 1997 June. These measurements constitute the 23rd installment in CHARA's speckle program at 2 to 4 m class telescopes and include the second major collection of measurements from the Mount Wilson 100 inch (2.5 m) Hooker Telescope. Orbital elements are also presented for 14 systems, seven of which have had no previously published orbital analyses.
Spectroscopy of Pre-Main-Sequence Candidates of Spectral Type AF in the Young Galactic Cluster IC 4996
We present the results of a spectroscopic analysis of the pre-main-sequence (PMS) candidates in IC 4996, proposed by Delgado et al. Spectral types and heliocentric radial velocities are calculated for 16 stars in the field observed by these authors, 13 of them located in the region of the color-magnitude diagram where their proposed PMS stars are located. The estimated heliocentric radial velocity of the cluster is centered around -12+/-5 km s^-1. From the radial velocity distribution, six stars are rejected as cluster members, one of them showing spectral features characteristic of an Am star. The remaining 10 stars are confirmed as cluster members: three B-type stars and seven PMS stars of spectral types A4-F0 (six stars) and early G (one star). One of the proposed PMS members clearly shows radial velocity and spectral type variations, as well as relatively broad Hα absorption. The G-type cluster member is a weak-lined T Tauri star with strong Li I λ6708 absorption [W_λ(Li I) ~= 0.26 Å]. These results strongly support the presence in the cluster of a populated sequence of PMS stars of AF spectral type.
A Second Catalog of Orbiting Astronomical Observatory 2 Filter Photometry: Ultraviolet Photometry of 614 Stars
Ultraviolet photometry from the Wisconsin Experiment Package on the Orbiting Astronomical Observatory 2 (OAO 2) is presented for 614 stars. Previously unpublished magnitudes from 12 filter bandpasses with wavelengths ranging from 1330 to 4250 Å have been placed on the white dwarf model atmosphere absolute flux scale. The fluxes were converted to magnitudes using V=0 for F(V)=3.46x10^-9 ergs cm^-2 s^-1 Å^-1, or m_lambda = -2.5 log F_lambda - 21.15. This second catalog effectively doubles the amount of OAO 2 photometry available in the literature and includes many objects too bright to be observed with modern space observatories.
Averaged energy distributions in the stellar spectra.
Not Available
Effective temperatures of Ap stars
A new method of determination of the effective temperatures of Ap stars is proposed. The method is based on the fact that the slopes of the energy distribution in the Balmer continuum near the Balmer jump for "normal" main sequence stars and chemically peculiar stars with the same Teff are identical. The effective temperature calibration is based on a sample of main sequence stars with well known temperatures (Sokolov 1995). It is shown that the effective temperatures of Ap stars are derived by this method in good agreement with those derived by the infrared flux method and by the method of Stepien & Dominiczak (1989). On the other hand, the comparison of the obtained Teff with Teff derived from the color index (B2-G) of Geneva photometry shows a large scatter of the points; nevertheless there are no systematic differences between the two sets of data.
This new method is shown to be as sensitive a detection tool as the photometrical techniques, and provides a higher resolution view of the excess blocking. Application to stars in the field of NGC 2244 allows us to estimate and eliminate reddening effects. CP stars detected in this field include two members (#334 and #276) of the very young stellar group NGC 2244 (age ~ 3 x 10^6 yr) and two or three foreground stars (#381, #625 and maybe #629). #334 and #625 are strongly peculiar. Based on observations obtained at the Observatoire de Haute-Provence (OHP), France.

Spectroscopic and photometric investigations of MAIA candidate stars

Including our own observational material and the Hipparcos photometry data, we investigate the radial velocity and brightness of suspected Maia variable stars, which are also classified in some examples as peculiar stars, mainly for the existence of periodic variations with time-scales of hours. The results lead to the following conclusions: (1) Short-term radial velocity variations have been unambiguously proved for the A0 V star gamma CrB and the A2 III star gamma UMi. The stars pulsate in an irregular manner. Moreover, gamma CrB shows a multiperiod structure quite similar to some of the best-studied neighbouring delta Scu stars. (2) In the Hipparcos photometry as well as in our photometric runs we find significant short- and long-term variations in the stars HD 8441, 2 Lyn, theta Vir, gamma UMi, and gamma CrB. For ET And the Hipparcos data confirm a short-period variation found already earlier. Furthermore, we find changes of the colour index in theta Vir and gamma CrB on a time-scale of days. (3) No proofs for the existence of a separate class of variables, designated as Maia variables, are found.
If the irregular behaviour of our two best-investigated stars gamma CrB and gamma UMi is typical for pulsations in this region of the Hertzsprung-Russell diagram, however, our observational runs are too short and the accuracy of the measurements too low to exclude such pulsations in the other stars. (4) The radial velocities of the binaries alpha Dra and ET And have been further used for a recalculation of the orbital elements. For HD 8441 and 2 Lyn we estimated the orbital elements for the first time. (5) Zeeman observations of the stars gamma Gem, theta Vir, alpha Dra, 4 Lac, and ET And give no evidence of the presence of longitudinal magnetic field strengths larger than about 150 gauss. Based on spectroscopic observations taken with the 2 m telescope at the Thü

Identification of lambda Bootis stars using IUE spectra. I. Low resolution data

An analysis of the stars included in the catalogue of lambda Bootis stars by Paunzen et al. (1997) which also have IUE observations is presented here. Population I A-F type stars as well as field horizontal branch stars were also included in the analysis. Using line ratios of carbon to heavier elements (Al and Ni) allows us to establish unambiguous membership criteria for the lambda Bootis group. Tables 1-3 are only available in electronic form at CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

The absolute magnitude of the early type MK standards from HIPPARCOS parallaxes

We analyse the standards of the MK system with the help of Hipparcos parallaxes, using only stars for which the error of the absolute magnitude is <= 0.3 mag. We find that the main sequence is a wide band and that, although in general giants and dwarfs have different absolute magnitudes, the separation between luminosity classes V and III is not clear. Furthermore, there are a number of exceptions to the strict relation between luminosity class and absolute magnitude.
We analyse similarly the system of standards defined by Garrison & Gray (1994), separating low and high rotational velocity standards. We find similar effects as in the original MK system. We propose a revision of the MK standards, to eliminate the most deviant cases. Based on data from the ESA Hipparcos astrometry satellite.

A catalogue of [Fe/H] determinations: 1996 edition

A fifth edition of the Catalogue of [Fe/H] determinations is presented herewith. It contains 5946 determinations for 3247 stars, including 751 stars in 84 associations, clusters or galaxies. The literature is complete up to December 1995. The 700 bibliographical references correspond to [Fe/H] determinations obtained from high resolution spectroscopic observations and detailed analyses, most of them carried out with the help of model atmospheres. The Catalogue is made up of three formatted files: File 1: field stars; File 2: stars in galactic associations and clusters, and stars in SMC, LMC, M33; File 3: numbered list of bibliographical references. The three files are only available in electronic form at the Centre de Donnees Stellaires in Strasbourg, via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5), or via http://cdsweb.u-strasbg.fr/Abstract.html

The Pulkovo Spectrophotometric Catalog of Bright Stars in the Range from 320 to 1080 nm

A spectrophotometric catalog is presented, combining results of numerous observations made by Pulkovo astronomers at different observing sites. The catalog consists of three parts: the first contains the data for 602 stars in the spectral range of 320--735 nm with a resolution of 5 nm, the second contains 285 stars in the spectral range of 500--1080 nm with a resolution of 10 nm, and the third contains 278 stars combined from the preceding catalogs in the spectral range of 320--1080 nm with a resolution of 10 nm. The data are presented in absolute energy units W/m^2 m, with a step of 2.5 nm and with an accuracy not lower than 1.5--2.0%.

Be stars in open clusters I. uvbyβ photometry

We present uvbyβ photometry for Be stars in eight open clusters and two OB associations. It is shown that Be stars occupy anomalous positions in the photometric diagrams, which can be explained in terms of the circumstellar continuum radiation contribution to the photometric indices. In the (b-y)_0 - M_V plane Be stars appear redder than the non-emission B stars, due to the additional reddening caused by the hydrogen free-bound and free-free recombination in the circumstellar envelope. In the c_0 - M_V plane the earlier Be stars present lower c_0 values than absorption-line B stars, which is caused by emission in the Balmer discontinuity, while the later Be stars deviate towards higher c_0 values, indicating absorption in the Balmer discontinuity of circumstellar origin.

The photoelectric astrolabe catalogue of Yunnan Observatory (YPAC)

The positions of 53 FK5, 70 FK5 Extension and 486 GC stars are given for the equator and equinox J2000.0 and for the mean observation epoch of each star. They are determined with the photoelectric astrolabe of Yunnan Observatory. The internal mean errors in right ascension and declination are +/- 0.046" and +/- 0.059", respectively. The mean observation epoch is 1989.51.

The Consistency of Stromgren-Beta Photometry for Northern Galactic Clusters. II. Praesepe and NGC 752

We have measured stars in Praesepe and NGC 752 in an internally-consistent Stromgren-Beta system. This system is based in large part on published Hyades and Coma measurements. On comparing our Praesepe results to those of Crawford and Barnes (1969, AJ, 74, 818), we find that the published color indices require corrections of 10-18 mmag to put them on the Hyades-Coma system. This deduction applies for b-y, m_1 and Beta (but not c_1). For the NGC 752 data of Crawford and Barnes (1970, AJ, 75, 946), we obtain a nonzero correction only for Beta. This correction is about 9 mmag.
Also for NGC 752, we find that the data of Twarog (1983, ApJ, 267, 207) require corrections ranging from 4-17 mmag, with all Stromgren indices being affected and the largest correction being for m_1. These corrections resolve the long-standing problem posed by the differences between the Twarog and Crawford-Barnes data. For three published sources of V magnitudes, we obtain offsets ranging from -14 to +27 mmag relative to our zero point, and we suggest that such offsets are fairly common in published photometry for galactic clusters. For Praesepe, we use new and corrected data to test for a c_1 anomaly and find that it is indistinguishable from Coma in that regard. (SECTION: Stellar Clusters and Associations)

The Relation between Rotational Velocities and Spectral Peculiarities among A-Type Stars

Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST

Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtitle: Radial velocities: The Wilson-Evans-Batten catalogue

We give a common version of the two catalogues of Mean Radial Velocities by Wilson (1963) and Evans (1978), to which we have added the catalogue of spectroscopic binary systems (Batten et al. 1989). For each star, when possible, we give: 1) an acronym to enter SIMBAD (Set of Identifications, Measurements and Bibliography for Astronomical Data) of the CDS (Centre de Donnees Astronomiques de Strasbourg); 2) the number HIC of the HIPPARCOS catalogue (Turon 1992); 3) the CCDM number (Catalogue des Composantes des etoiles Doubles et Multiples) by Dommanget & Nys (1994). For the cluster stars, a precise study has been done on the identificator numbers. Numerous remarks point out the problems we have had to deal with.

The determination of T_eff of B, A and F main sequence stars from the continuum between 3200 A and 3600 A

A method of determination of the effective temperature of B, A and F main sequence stars is proposed, using the slope of the continuum between 3200 A and 3600 A.
The effective temperature calibration is based on a sample of stars with energy distributions known from the UV to the red. We have determined the Balmer jump and the effective temperatures for 235 main sequence stars. The temperatures found have been compared with those derived by Underhill et al. (1979), Kontizas & Theodossiou (1980), Theodossiou (1985), and Morossi & Malagnini (1985). The comparison showed good agreement for most of the stars in common. On the other hand, the temperatures derived from the reddening-free colour factor QUV, from the colour index (m1965-V) and from (B-V), given in Gulati et al. (1989), are systematically lower than our temperatures; however, the differences are within the one-sigma error.

Variability investigations of possible Maia stars

Series of spectrograms and a limited number of photometric measurements of selected early-type stars have been used to search the measured radial velocities and light curves for stellar pulsations with timescales of a few hours. For these stars, located in the HR diagram between the β Cep and the δ Scu stars and designated sometimes as Maia variables, the presence of pulsations is claimed to be a common property. In our sample we found no hints for a general existence of such pulsations. RV variations with the expected short time-scale could be observed for only two of the program stars, γ UMi and γ CrB. The variations are highly irregular in amplitude and frequency. On the other hand, for both stars a typical mean timescale of the RV variation of about 2.4 hr has been found, which gives some hints to possibly common physical causes of short-term variations of stars in this part of the HR diagram. The RV variation of γ CrB in the years 1992/93 could be attributed to a rotational splitting of nonradial pulsations.

Compositional differences among the A-type stars. 2: Spectrum synthesis up to V sin i = 110 km/s

The results of an abundance analysis of 15 early A-type stars are presented here.
These stars have v sin i in the range 6 to 109 km/s and all but one of them (an Am star) are classified as normal. Reliable abundances (uncertainties about a quarter dex or less) are obtained for twelve elements. Considerable star-to-star abundance variations are present among both the narrow-lined and broader-lined stars. Most elements have abundances which scale with those of iron. Carbon and calcium are interesting exceptions. Carbon abundances seem to be anti-correlated with those of iron, while calcium abundances may exhibit a more complex behavior. Heavy elements seem enhanced for most stars.

What is Normal among the A-Stars

Not Available
### Concerning Hikayat Hang Tuah
The nucleus of the story of Hikayat Hang Tuah was born during the Malaccan period in the 15th century, but its cytoplasm contains elements from Sejarah Melayu and the Johore Sultanate in the mid 17th century. The final form of Hikayat Hang Tuah as we now know it was probably last edited in the 1710s, some 250 years after the Malaccan period and 300 years before our time.
Some web sources suggest that Hang Tuah was actually Chinese, and recently a friend of mine raised the same question to me. I personally do not think Mr. Hang was Chinese, although it is very inviting to think of "Hang" as a word of Chinese origin, like what I postulated in the case of Hang Li Po. In fact, my family name was spelt as "Hang" instead of "Ang" during the time of my grandfather's father, when he followed his boss Tan Kah Kee (陈嘉庚) to Nanyang.
Now back to Hang Tuah: he is definitely a pseudo-character, probably loosely modelled after a real person, Laksamana Abdul Jamil. Abdul Jamil was once the most powerful man in the Johore Sultanate. A lot of the story narrated in Hikayat Hang Tuah was actually modelled after real events which took place between 1650 and 1680.
Our local expert on the Hikayat Hang Tuah is Kassim Ahmad. Kassim studied in the University of Malaya, Singapore, and graduated in 1959. His B.A. final year paper, supervised by J. C. Bottoms, was a research on the characterization in Hikayat Hang Tuah. This research work was later chosen to be published by Dewan Bahasa dan Pustaka in 1966. From 1963 to 1966, Kassim was a lecturer in the School of Oriental and African Studies (SOAS), London.
## Graph Editor For Love2D.
General discussion about LÖVE, Lua, game development, puns, and unicorns.
zorg
Party member
Posts: 2992
Joined: Thu Dec 13, 2012 2:55 pm
Location: Absurdistan, Hungary
Contact:
### Re: Graph Editor For Love2D.
pgimeno wrote:@zorg, "Dummy" is there because, when you have in Lua something like a,b=fn() you have one source value and two target values, and in the graph you'd add a dummy source node to let you enter another target node without generating code. It's not really equivalent to a,b=fn(),nil so I didn't use nil.
Code: Select all
-- So it'd be more equivalent to this?:
dummy = function() end
-- Still, there's no guarantee that fn() returns only one value, not in lua anyway, so basically all functions have a, hardlimited in practice, but theoretically infinite number of inputs and outputs.
Me and my stuff True Neutral Aspirant. Why, yes, i do indeed enjoy sarcastically correcting others when they make the most blatant of spelling mistakes. No bullying or trolling the innocent tho.
pgimeno
Party member
Posts: 2306
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Graph Editor For Love2D.
Dummy is an artifact of the way I imagined the UI would work, as you could only add an assigned variable if you have added a node for that value first. Empty would have worked just as well. The idea was that as you attach new nodes to one node, you could enter new variables for the assigned-to side. You should have at least as many RHS nodes attached as you have variables assigned. The UI would not let you insert dummy nodes in the output of an assignment node (or a for...in node) before others that are not dummy.
Code: Select all
--[[ For example: ]]--
local a, b, c, d = e, f
--[[ would translate to:
+-------------------+
| Local assignment |
| .
| [a] o---- [Expression [e] ]
| .
| [b] o---- [Expression [f] ]
| .
| [c] o---- [Dummy/Empty]
| .
| [d] o---- [Dummy/Empty]
| .
| |
+-------------------+
while something like
--]]
local a, b = c, d, e, f
--[[ would translate to:
+-------------------+
| Local assignment |
| .
| [a] o---- [Expression [c] ]
| .
| [b] o---- [Expression [d] ]
| .
| [ ] o---- [Expression [e] ]
| .
| [ ] o---- [Expression [f] ]
| .
| |
+-------------------+
--]]
The little circles in the original picture (I drew them far too small, sorry), and the dots in the above, are insertion points: when you attach a line there, it is promoted to a new circle that keeps the same position as where you inserted, and dots are created as well around it. And every circle on the right side of a local assignment has an associated assigned-to variable (you can leave the field empty to not add a variable, as in the a,b=c,d,e,f case). Hope it's clearer now what role the dummies are fulfilling.
Edit: Anyway, in the end it doesn't matter. While the AST approach is appealing for the UI coder because of the simplicity of coding it, it's really not very practical for the user of the UI, and as Nicholas has noted, an approach where the program flow "follows the arrows" instead of being top to bottom, would be much more intuitive for the user. I just still have my reservations about its universality and applicability to LÖVE.
Nicholas Scott
Prole
Posts: 49
Joined: Sun Jun 07, 2015 9:12 am
### Re: Graph Editor For Love2D.
Ok, well here's a quick example of what the nodes will look like, this took me like 1 hour to finish all the dynamic placements cuz.. well my math kind of sucks, and I had to have a pen and notepad x3 http://i.imgur.com/wX6punq.jpg
Here's the video https://streamable.com/ohy5
I'm gonna go ahead and create the actual thing now that I have the create node function made
Edit: Here's another mockup of a node in action https://streamable.com/rn56
Nicholas Scott
Prole
Posts: 49
Joined: Sun Jun 07, 2015 9:12 am
### Re: Graph Editor For Love2D.
pgimeno wrote:I'm looking forward to seeing your mockup in LÖVE.
@pgimeno Here's my mockup, it's reasonably there and I'm going to work on the code generation next and all that nonsense. There's still a few bugs, but it's pretty efficient
https://streamable.com/sn3v
As you can see, the node titled "Event love.update" is, as you can assume, the love.update function. The one titled "print" is the print function, taking a string (it can take multiples; I can work on that when I get to implementing the actual function libraries). The one titled "Branch" is an "if-statement" taking a boolean: you could use other nodes, such as a "<=" operator that takes 2 numbers and compares them, with an output pin of a boolean that could be plugged into the branch. The final node, labeled "Function MovePlayer", is an example of what a custom function you've added would look like.
pgimeno
Party member
Posts: 2306
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Graph Editor For Love2D.
OK, I got that far. But note that the part I have issue with is this, from your 2nd post in this thread:
Nicholas Scott wrote:at some point in time, be able to import lua files and turn them into nodes in the best way possible
Problem is, there are some language constructs that I don't immediately see how to convert to nodes. I've created a somewhat artificial example that draws a certain line (a parabola followed by a straight line) in a not very efficient way, with the goal to include some of the constructs I have difficulty figuring out how to translate to nodes:
Code: Select all
local points = {}
local x, y
do
local x, y, v = 0, 0, 0
while #points < 100 do
points[#points + 1] = {x = x, y = y}
x = x + v
v = v + .1
if v > 6 then
v = 15
end
y = y + 5
end
end
function love.draw()
local linepoints = {}
for i = 1, #points do
x = points[i].x
y = points[i].y
linepoints[#linepoints + 1] = x
linepoints[#linepoints + 1] = y
end
love.graphics.line(linepoints)
end
It's somewhat long, sorry. But it illustrates the following concepts that I don't see how to translate to nodes:
• Locality of variables.
• Dynamic loop.
• Dynamic list of tables.
• Multiple assignments to the same variable within a loop.
• Accessing fields of tables nested within a list.
Do you think you can make a mockup of how that program could be translated into nodes?
Nicholas Scott
Prole
Posts: 49
Joined: Sun Jun 07, 2015 9:12 am
### Re: Graph Editor For Love2D.
pgimeno wrote: Do you think you can make a mockup of how that program could be translated into nodes?
Idk about making a mockup of that exactly, I'll look into working that code out in a bit, currently trying to revamp "Thrandruil" GUI system to work with my nodes the way I want
But it's rather simple in all honesty, it can go line by line, use RegEx formats within the string to find certain keywords such as "local", "while", "function", etc. that says that the proceeding characters should extend some form of that function, if it's a "local" statement then the program knows that the proceeding text until a "," or a "=" should be used as the local variables new name, subtracting whitespace of course, and after the "=" it can then parse whether the variable type is a table, string, number, nil, etc.
Rather complicated RegEx formatting, which I'm not to amazing with, hence the reason I'm kind of putting that off until I actually have the ability to represent them as nodes and be able to debug that shit x3
pgimeno
Party member
Posts: 2306
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Graph Editor For Love2D.
The question is not about how to make the parser. I'm familiar with parsing. It's about how to represent certain language concepts in node form so that an arbitrary program can be represented in that form.
Nicholas Scott
Prole
Posts: 49
Joined: Sun Jun 07, 2015 9:12 am
### Re: Graph Editor For Love2D.
pgimeno wrote:It's about how to represent certain language concepts in node form so that an arbitrary program can be represented in that form.
I'm not really sure what to tell you man, I've taken up this project and I'm working on it, but unless someone shows interest, it'll probably be for my own personal use. All I can tell you is to look at Blueprints in Unreal and how successful it is in "representing" the C++ programming in "AST" format.
Positive07
Party member
Posts: 1006
Joined: Sun Aug 12, 2012 4:34 pm
Location: Argentina
### Re: Graph Editor For Love2D.
Well then, how can you "represent" Lua code like the one pgimeno posted?
If at some point you intend to "translate" Lua files into this graph tree, then you need to be able to represent all parts of the Lua language as nodes, lines or whatever.
Unreal is "good" at "representing" C++ because it just represents a subset of C++ with the syntax Unreal expects; it can't represent all C++ code though... They use a reduced AST.
Also, don't expect an explosion of interest; every time someone proposes a new tool to aid development, it has to prove that it is actually needed or somewhat better than existing ones.
Your tool may help newcomers, but existing developers don't actually need it. In order to gain traction you must post a working prototype.
Note that even then it may not raise interest; as I said, you have to prove that this is better than writing code in a code editor, as most are used to. You decide if it's worth it... If you don't want to post it, then just don't.
for i, person in ipairs(everybody) do
[tab]if not person.obey then person:setObey(true) end
end
love.system.openURL(Github.com/Positive07)
pgimeno
Party member
Posts: 2306
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Graph Editor For Love2D.
I have interest in the project, I just don't see how the nodes format that you have in mind is suitable for representing certain programming constructs. I can see how they can be represented in the AST-like form that I proposed. I showed it by taking an arbitrary example program and making a mockup of how I could imagine an AST-like representation in graph format, but which is actually not suitable for actual programming (it still had problems, but these can be discussed). Again, the program: https://github.com/love2d-community/LOV ... _modes.lua and the mockup: http://www.formauri.es/personal/pgimeno ... mockup.png
Maybe when I see the program I posted in the format that you foresee, the light bulb goes on and then I can contribute. So far I don't see how to carry on this project without significantly reducing the admissible Lua grammar, like Positive07 says UE does with C++.
At the moment, I still think that something Scratch-like is more suitable to making a Lua-based visual programming language than UE-like nodes where execution "follows the arrows".
# breaking encapsulation ?
## Recommended Posts
This is more of a request for justification of encapsulation than anything else. I've always made all my members private and all my functions to get/set them public: if you want my members, use my functions. I have some members in an object; some are used all the time, others would be very rarely used, if ever. Many of these get/sets are very simple though, just setting a new int or returning a double. Why should I not just make these values public and more easily modified? Someone told me it's because it breaks encapsulation... well, if that's the case, it's a practice that needs to die, because it's silly. I can understand doing it this way for more complicated members, like other objects, where setting can be a multi-step process. If I have the class booger, and it has members int a, b; double c, d; and string e, f; should I or should I not just make these members public?
##### Share on other sites
Generally speaking, if your class has a lot of "get" and "set" members and it is not a POD type, the data isn't where it is supposed to be anyway.
Take a look at what is calling all those "get" and "set" members, and think about whether it would be smarter to move the data into that class.
##### Share on other sites
Only reason to have get/set functions instead of modifying the variables themselves is to make it harder to modify them by accident.
##### Share on other sites
Heavily practiced in Java (somewhat less in C# due to get/set), it's a recommended technique.
Personally, I prefer to use functional approach where possible.
Let class keep its members private, but let it expose *functions*.
Consider something like this:
class Person {
public:
    // A whole bunch of getters/setters
private:
    std::string firstName;
    std::string middleName;
    std::string lastName;
    int yearOfBirth;
    int monthOfBirth;
    int dayOfBirth;
};
Why should anyone who cares about person know about these fields? This isn't encapsulation! It's grouping, a naming convention, an obsoleted paradigm.
What do I want my class to do? I don't want it to sit there and look pretty.
What will a Person do? Let's say we want to store it to database, print it on a label, display it, and do a few more things.
class Person {
public:
    // No more getters/setters
    void persist( Database *db ) {
        db->insert( Person, firstName, middleName, lastName, ... );
    }
    std::string fullName() {
        return firstName + " " + middleName + " " + lastName;
    }
    void print( PrinterCanvas *page ) {
        page->drawText(0, 0, fullName() );
    }
    // for std::cout or other streams.
    friend std::ostream &operator<<( std::ostream &os, const Person &person );
private:
    std::string firstName;
    std::string middleName;
    std::string lastName;
    int yearOfBirth;
    int monthOfBirth;
    int dayOfBirth;
};
Here is where encapsulation comes in. I can change internal representation of date, of name, use different structures, omit them - it doesn't change the public interface.
By using get/set for every private property, you not only don't encapsulate anything - you make a future mess, since the getters and setters will become entangled in the rest of your code.
If you're using getters and setters - might as well not. It's not encapsulation, it doesn't help anything, it merely triples the codebase.
EDIT:
A valuable read for getters/setters in C++.
##### Share on other sites
The problem is not get/set, the problem is how you are using get/set.
A few reasons for using get/set. First you can change the underlying data type and still not break anything because everything is still going through the get/set and you just change the implementation of that.
The biggest reason for get/set instead of public. One point of failure. I rarely have setters that just set the private member. I use setters to make sure the value is valid. Because everything has to go through the setter it is a great place to make sure you are going to have valid data.
theTroll
##### Share on other sites
Yeah, Java is notorious for the getter/setter mentality, and it doesn't always help. Usually I try to avoid it. If it's a compound data type, there's no real reason to add getters and setters. Those are generally only useful when you want to perform some complex operations or bound checking on the input. So I do not want incrementDate() on a Calendar class to accept negative numbers because my calendar is only meant to go forward (trivial example and maybe not too useful since you might want a calendar to go backward, but you get the idea). If it's something that operates on data, like how Antheus points out, then let the class operate on the data and not expose the data to other classes, breaking encapsulation.
##### Share on other sites
Quote:
Original post by Pzc: Only reason to have get/set functions instead of modifying the variables themselves is to make it harder to modify them by accident.
That's a decent point, however I think it's fair to note that if all a class does is hold data that is only manipulated through outside stimuli (calling a "setter") or returned unchanged (calling a "getter") that's a code smell that needs to be refactored.
Quote:
A valuable read for getters/setters in C++.
That's an interesting read... though it seems like a terribly naughty idea at first :) Then you realize the internals of the helper class are not exposed. Interesting.
Quote:
The biggest reason for get/set instead of public. One point of failure. I rarely have setters that just set the private member. I use setters to make sure the value is valid. Because everything has to go through the setter it is a great place to make sure you are going to have valid data.
Another good point.
I am finding that TDD tends to generate plenty of getters/setters which for some reason annoy me [smile]... (into refactoring).
If a setter is only used once to put your object into a good state, consider moving it into the constructor as Fowler suggests.
If you're using a getter, make sure that you're not just checking that the data is valid (which should be done inside the class), and that the returned data cannot be mutated outside the class.
EDIT: Another good article to consider
[Edited by - Verg on July 22, 2007 7:41:18 PM]
##### Share on other sites
Quote:
Original post by Verg: Generally speaking, if your class has a lot of "get" and "set" members and it is not a POD type, the data isn't where it is supposed to be anyway. Take a look at what is calling all those "get" and "set" members, and think about whether it would be smarter to move the data into that class.
Or (more likely IMX), smarter to move that functionality into the class with the accessors and mutators. (It's often repeated in a few places, too, further indicating refactoring in that direction.)
For example, if you commonly find structures like 'x.set_foo(x.get_foo() + y);', it's a strong indication that you should instead have something like 'x.increase_foo(y)'.
##### Share on other sites
Quote:
Original post by Antheus
Excellent summary.
You are given the following four bytes :
10100011 00110111 11101001 10101011
Which of the following are substrings of the base $64$ encoding of the above four bytes?
1. $\text{zdp}$
2. $\text{fpq}$
3. $\text{qwA}$
4. $\text{oze}$
# Geometry and the Imagination by David R. Hilbert, S. Cohn-Vossen PDF
By David R. Hilbert, S. Cohn-Vossen
ISBN-10: 0828410879
ISBN-13: 9780828410878
This wonderful book has endured as a true masterpiece of mathematical exposition. There are few mathematics books that are still so widely read and that continue to have so much to offer, even after more than half a century! The book is overflowing with mathematical ideas, which are always explained clearly and skillfully and, above all, with penetrating insight. It is a joy to read, both for beginners and for experienced mathematicians.
"Hilbert and Cohn-Vossen" is full of interesting facts, many of which you wish you had known before, or had wondered where they could be found. The book begins with examples of the simplest curves and surfaces, including thread constructions of certain quadrics and other surfaces. The chapter on regular systems of points leads to the crystallographic groups and the regular polyhedra in $\mathbb{R}^3$. In this chapter, they also discuss plane lattices. By considering unit lattices, and throwing in a small amount of number theory when necessary, they effortlessly derive Leibniz's series: $\pi/4 = 1 - 1/3 + 1/5 - 1/7 + \cdots$. In the section on lattices in three and more dimensions, the authors consider sphere-packing problems, including the famous Kepler problem.
One of the most remarkable chapters is "Projective Configurations". In a short introductory section, Hilbert and Cohn-Vossen give perhaps the most concise and lucid description of why a general geometer would care about projective geometry and why such an ostensibly plain setup is actually rich in structure and ideas. Here, we see regular polyhedra again, from a different perspective. One of the high points of the chapter is the discussion of Schläfli's Double-Six, which leads to the description of the 27 lines on the general smooth cubic surface. As is true throughout the book, the wonderful drawings in this chapter immeasurably help the reader.
A particularly fascinating section in the chapter on differential geometry is Eleven Properties of the Sphere. Which eleven properties of such a ubiquitous mathematical object caught their discerning eye, and why? Many mathematicians are familiar with the plaster models of surfaces found in many mathematics departments. The book includes pictures of some of the models that are found in the Göttingen collection. Furthermore, the mysterious lines that mark these surfaces are finally explained!
The chapter on kinematics includes a nice discussion of linkages and the geometry of configurations of points and rods that are connected and, perhaps, constrained in some way. This topic in geometry has become increasingly important in recent years, especially in applications to robotics. This is another example of a simple situation that leads to a rich geometry.
It would be hard to overestimate the continuing influence Hilbert and Cohn-Vossen's book has had on mathematicians of this century.
It surely belongs in the "pantheon" of great mathematics books.
Read or Download Geometry and the Imagination PDF
Best geometry books
Get Guide to Computational Geometry Processing: Foundations, PDF
This book reviews the algorithms for processing geometric data, with a practical focus on important techniques not covered by traditional courses on computer vision and computer graphics. Features: presents an overview of the underlying mathematical theory, covering vector spaces, metric spaces, affine spaces, differential geometry, and finite difference methods for derivatives and differential equations; reviews geometry representations, including polygonal meshes, splines, and subdivision surfaces; examines techniques for computing curvature from polygonal meshes; describes algorithms for mesh smoothing, mesh parametrization, and mesh optimization and simplification; discusses point location databases and convex hulls of point sets; investigates the reconstruction of triangle meshes from point clouds, including methods for registration of point clouds and surface reconstruction; provides additional material at a supplementary website; includes self-study exercises throughout the text.
New PDF release: Lectures on Algebraic Geometry I, 2nd Edition: Sheaves,
This book and the following second volume are an introduction to modern algebraic geometry. In the first volume, the methods of homological algebra, theory of sheaves, and sheaf cohomology are developed. These methods are indispensable for modern algebraic geometry, but they are also fundamental for other branches of mathematics and of great interest in their own right.
Download e-book for kindle: Geometry and analysis on complex manifolds : festschrift for by Shoshichi Kobayashi; Toshiki Mabuchi; JunjiroМ„ Noguchi;
This article examines the genuine variable idea of HP areas, focusing on its functions to varied elements of study fields
Read e-book online Geometry of Numbers PDF
This volume contains a fairly complete picture of the geometry of numbers, including relations to other branches of mathematics such as analytic number theory, diophantine approximation, coding and numerical analysis. It deals with convex or non-convex bodies and lattices in Euclidean space, etc. This second edition was prepared jointly by P.
Extra info for Geometry and the Imagination
Download PDF sample
### Geometry and the Imagination by David R. Hilbert, S. Cohn-Vossen
|
|
# Areas Related to Circles
#### Quizzes
Quiz 1
CBSE, 10, MATHS, Areas Related to Circles
This quiz has 5 Questions.
Quiz 2
CBSE, 10, MATHS, Areas Related to Circles
This quiz has 5 Questions.
Quiz 3
CBSE, 10, MATHS, Areas Related to Circles
This quiz has 5 Questions.
This quiz is available only on the Brainnr App.
Quiz 4
CBSE, 10, MATHS, Areas Related to Circles
This quiz has 5 Questions.
This quiz is available only on the Brainnr App.
Quiz 5
CBSE, 10, MATHS, Areas Related to Circles
This quiz has 5 Questions.
This quiz is available only on the Brainnr App.
Quiz 6
CBSE, 10, MATHS, Areas Related to Circles
This quiz has 5 Questions.
This quiz is available only on the Brainnr App.
Area of Sectors and Segments of a Circle: Area of Sector of Angle $\theta$, Length of an Arc of Sector of Angle $\theta$
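The two quantities named in this topic can be sketched numerically (a quick illustration, not part of the quiz content): for a sector of angle $\theta$ degrees in a circle of radius $r$, area $= \frac{\theta}{360}\pi r^2$ and arc length $= \frac{\theta}{360}\cdot 2\pi r$.

```python
import math

def sector_area(r, theta_deg):
    # Fraction of the full circle's area pi*r^2 subtended by theta degrees.
    return (theta_deg / 360) * math.pi * r * r

def arc_length(r, theta_deg):
    # Fraction of the full circumference 2*pi*r subtended by theta degrees.
    return (theta_deg / 360) * 2 * math.pi * r

# A quarter circle (theta = 90) of radius 7: area = pi*49/4, arc = 3.5*pi.
print(round(sector_area(7, 90), 2))  # 38.48
print(round(arc_length(7, 90), 2))   # 11.0
```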
|
|
CGAL 5.0 - Surface Mesh
Surface Mesh Reference
Mario Botsch, Daniel Sieger, Philipp Moeller, and Andreas Fabri
The surface mesh class provided by this package is an implementation of the halfedge data structure that allows the representation of polyhedral surfaces. It is an alternative to the packages Halfedge Data Structures and 3D Polyhedral Surface. The main differences are that it is index-based rather than pointer-based, and that the mechanism for adding information to vertices, halfedges, edges, and faces is much simpler and can be used at runtime and not at compile time.
Introduced in: CGAL 4.6
BibTeX: cgal:bsmf-sm-19b
## Classes
• CGAL::Surface_mesh<P>
## Modules
Draw a Surface Mesh
## Classes
class CGAL::Surface_mesh< P >
This class is a data structure that can be used as a halfedge data structure or polyhedral surface. It is an alternative to the classes HalfedgeDS and Polyhedron_3 defined in the packages Halfedge Data Structures and 3D Polyhedral Surface. The main differences are that it is index-based rather than pointer-based, and that the mechanism for adding information to vertices, halfedges, edges, and faces is much simpler and done at runtime and not at compile time. When elements are removed, they are only marked as removed, and a garbage collection function must be called to really remove them. More...
## Functions
template<typename P >
bool CGAL::write_off (std::ostream &os, const Surface_mesh< P > &sm)
template<typename P >
bool CGAL::read_off (std::istream &is, Surface_mesh< P > &sm)
template<typename P >
Surface_mesh< P > & operator+= (Surface_mesh< P > &sm, const Surface_mesh< P > &other)
template<typename P , typename NamedParameters >
bool write_off (std::ostream &os, const Surface_mesh< P > &sm, const NamedParameters &np)
template<typename P >
std::ostream & operator<< (std::ostream &os, const Surface_mesh< P > &sm)
template<typename P >
bool write_ply (std::ostream &os, const Surface_mesh< P > &sm, const std::string &comments=std::string())
template<typename P , typename NamedParameters >
bool read_off (std::istream &is, Surface_mesh< P > &sm, NamedParameters np)
template<typename P >
bool CGAL::read_ply (std::istream &is, Surface_mesh< P > &sm, std::string &comments)
Extracts the surface mesh from an input stream in ASCII or Binary PLY format and appends it to the surface mesh sm. More...
template<typename P >
std::istream & operator>> (std::istream &is, Surface_mesh< P > &sm)
## ◆ operator+=()
template<typename P >
Surface_mesh< P > & operator+= ( Surface_mesh< P > & sm, const Surface_mesh< P > & other )
related
Inserts other into sm. Shifts the indices of vertices of other by sm.number_of_vertices() + sm.number_of_removed_vertices() and analogously for halfedges, edges, and faces. Copies entries of all property maps which have the same name in sm and other; that is, property maps which are only in other are ignored. Also copies elements which are marked as removed, and concatenates the freelists of sm and other.
## ◆ operator>>()
template<typename P >
std::istream & operator>> ( std::istream & is, Surface_mesh< P > & sm )
related
This operator calls read_off(std::istream& is, CGAL::Surface_mesh& sm).
Attention
Up to CGAL 4.10 this operator called sm.clear().
template<typename P , typename NamedParameters >
bool read_off ( std::istream & is, Surface_mesh< P > & sm, NamedParameters np )
related
Extracts the surface mesh from an input stream in ASCII OFF, COFF, NOFF, CNOFF format and appends it to the surface mesh sm. The operator reads the point property as well as "v:normal", "v:color", and "f:color". Vertex texture coordinates are ignored. If an alternative vertex_point map is given through np, then it will be used instead of the default one.
Precondition
operator>>(std::istream&,const P&) must be defined.
The data in the stream must represent a two-manifold. If this is not the case, the failbit of is is set and the mesh cleared.
template<typename P >
bool read_ply ( std::istream & is, Surface_mesh< P > & sm, std::string & comments )
related
Extracts the surface mesh from an input stream in ASCII or Binary PLY format and appends it to the surface mesh sm.
• the operator reads the vertex point property and the face vertex_index (or vertex_indices) property;
• if three PLY properties nx, ny and nz with type float or double are found for vertices, a "v:normal" vertex property map is added;
• if three PLY properties red, green and blue with type uchar are found for vertices, a "v:color" vertex property map is added;
• if three PLY properties red, green and blue with type uchar are found for faces, a "f:color" face property map is added;
• if any other PLY property is found, a "[s]:[name]" property map is added, where [s] is v for vertex and f for face, and [name] is the name of the PLY property.
The comments parameter can be omitted. If provided, it will be used to store the potential comments found in the PLY header. Each line starting by "comment " in the header is appended to the comments string (without the "comment " word).
Precondition
The data in the stream must represent a two-manifold. If this is not the case, the failbit of is is set and the mesh cleared.
## ◆ write_off()
template<typename P , typename NamedParameters >
bool write_off ( std::ostream & os, const Surface_mesh< P > & sm, const NamedParameters & np )
related
Inserts the surface mesh in an output stream in ASCII OFF format. Only the point property is inserted in the stream. If an alternative vertex_point map is given through np, then it will be used instead of the default one.
Precondition
operator<<(std::ostream&,const P&) must be defined.
Note
The precision() of the output stream might not be sufficient depending on the data to be written.
## ◆ write_ply()
template<typename P >
bool write_ply ( std::ostream & os, const Surface_mesh< P > & sm, const std::string & comments = std::string() )
related
Inserts the surface mesh in an output stream in PLY format. If found, "v:normal", "v:color" and "f:color" are inserted in the stream. All other vertex and face properties with simple types are inserted in the stream. Edges are only inserted in the stream if they have at least one property with simple type: if they do, all edge properties with simple types are inserted in the stream. The halfedges follow the same behavior.
If provided, the comments string is included line by line in the header of the PLY stream (each line will be preceded by "comment ").
|
|
# Frequency for a photon emitted from a hydrogen atom
1. May 3, 2005
### pak213
I am having some trouble w/ a take home Physics final.
I am down to my last 2 questions:
1) What is the frequency for a photon emitted from a hydrogen atom when an electron makes a transition from an energy state n=3 to n=1? What is the energy of the photon in Joules and MeV?
2) A positron and an electron collide and annihilate each other. Since all of each mass was converted into energy, how much energy was released in Joules?
All the help/direction you can give would be much appreciated.
2. May 3, 2005
### quasar987
1) The energy of an electron in an atom depends on the energy state it's in. The energy of the fundamental level (n=1) is -13.6 eV. Find the energy of n=3 in your book. You are told the electron jumps from the energy of n=3 to the energy of n = 1. Now according to the principle of conservation of energy, that energy must have gone somewhere: into the photon. The frequency of a photon is related to its energy according to $E = h\nu$
2) They just want to know the energy associated with the masses of the electron and positron. Use E = mc².
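Putting rough numbers to both hints (a numerical sketch with rounded constants; treat the results as approximate):

```python
# Hydrogen levels: E_n = -13.6/n^2 eV; the photon carries the difference.
h = 6.626e-34      # Planck constant, J*s
eV = 1.602e-19     # J per eV
c = 2.998e8        # speed of light, m/s
m_e = 9.109e-31    # electron (and positron) mass, kg

# 1) n=3 -> n=1 transition
E_photon_eV = -13.6 / 3**2 - (-13.6 / 1**2)   # about 12.09 eV
E_photon_J = E_photon_eV * eV
nu = E_photon_J / h                           # from E = h*nu
print(f"E = {E_photon_eV:.2f} eV = {E_photon_J:.3e} J, nu = {nu:.2e} Hz")

# 2) electron-positron annihilation: E = 2*m_e*c^2
E_ann = 2 * m_e * c**2
print(f"annihilation energy = {E_ann:.3e} J")
```

The transition energy comes out near 12.1 eV (a frequency of roughly 2.9e15 Hz), and the annihilation releases about 1.6e-13 J.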
|
|
# A multicaloric cooling cycle that exploits thermal hysteresis
## Abstract
The giant magnetocaloric effect, in which large thermal changes are induced in a material on the application of a magnetic field, can be used for refrigeration applications, such as the cooling of systems from a small to a relatively large scale. However, commercial uptake is limited. We propose an approach to magnetic cooling that rejects the conventional idea that the hysteresis inherent in magnetostructural phase-change materials must be minimized to maximize the reversible magnetocaloric effect. Instead, we introduce a second stimulus, uniaxial stress, so that we can exploit the hysteresis. This allows us to lock in the ferromagnetic phase as the magnetizing field is removed, which drastically reduces the volume of the magnetic field source and so reduces the amount of expensive Nd–Fe–B permanent magnets needed for a magnetic refrigerator. In addition, the mass ratio between the magnetocaloric material and the permanent magnet can be increased, which allows scaling of the cooling power of a device simply by increasing the refrigerant body. The technical feasibility of this hysteresis-positive approach is demonstrated using Ni–Mn–In Heusler alloys. Our study could lead to an enhanced usage of the giant magnetocaloric effect in commercial applications.
## Main
Although there have been improvements in efficiency, the working principle for conventional cooling—vapour compression—has remained largely unchanged for more than 100 years1. However, with the world’s increasingly affluent population demanding more comfortable living and working conditions, it is vital that we address the development of much more efficient cooling technologies as an urgent priority2,3. Most research is focused on solid-state refrigeration and one of the caloric effects—electrocaloric4,5, magnetocaloric6,7, barocaloric8,9 or elastocaloric10,11—in which the material’s entropy is forced to change under the application of an electrical, magnetic or mechanical field. The magnetocaloric effect (MCE), the most studied of these, manifests itself as a change in the material’s temperature as it is exposed to a magnetic field. This remarkable effect makes it possible to set up a magnetic cooling cycle12.
Even though dozens of MCE demonstrators have been constructed13, no commercially competitive magnetic refrigerator has been produced14. The problem is that in a field produced by permanent magnets (in the range of 1 T), the MCE of the existing materials is too small, which leads to small operating ranges. It is possible to increase the temperature range of a MCE device by employing what is called an active magnetic regeneration (AMR) cycle15, but then only a small percentage of the MCE material contributes to the cooling16. The design of such an AMR machine is sketched in Fig. 1a—the magnetic field is varied by rotating either the MCE material or the magnet. Conventionally, the rare earth element (REE) gadolinium17 is used as the MCE material, but there are also demonstrators that operate with La–Fe–Si or Fe2P-type compounds18. As the MCE in these materials is completely, or at least mostly, reversible, the magnetic field needs to be maintained during the whole heat-exchange process. Consequently, the quantity of REE-based Nd–Fe–B permanent magnets needed is at least four times the amount of MCE material and even larger when Halbach arrangements are used, which makes the system expensive and overly dependent on critical raw materials19,20.
## An alternative to conventional solid-state refrigeration
We present an alternative solid-state cooling cycle that exploits the thermal hysteresis inherent in MCE materials that undergo a first-order transition. After the phase transformation is driven by the magnetic field, removing the field leaves the material ‘locked’ in the phase with high magnetization because of the hysteresis, and so the magnetic field is only required for a very short time to tap the cold of the material. This concept drastically reduces the costs of the magnetic field source because, in this case, the volume over which the magnetic field needs to be generated and maintained is very small21. At the same time, focusing the magnetic field means we can double the magnetic field strength up to 2 T. As the MCE scales with the change in magnetic field, this can expand the temperature range of the refrigeration cycle. However, it requires an additional external stimulus, that is, a mechanical stress, to ‘unlock’ the MCE material and return it to its original state22. Consequently, a material with susceptibility to multiple external stimuli is a prerequisite to exploit the thermal hysteresis.
The working principle of such a cycle as well as a scheme for the corresponding design of the machine is illustrated in Fig. 1b,c. In general, any first-order magnetocaloric material with a tunable thermal hysteresis could be utilized in this cycle, even though inverse magnetocaloric materials, such as Heusler alloys, are more favourable because they cool when the magnetic field is applied, and thereby simplify the heat exchange. In contrast, conventional compounds, such as the La–Fe–Si type, heat up when being magnetized and cool when applying the mechanical load23. The necessity for a direct contact between the loading unit and the cold heat exchanger makes the implementation of an efficient design more complicated.
## Experimental proof of concept
In this study we focus on Ni–Mn–In Heusler alloys with a metamagnetic martensitic transition that shows an inverse MCE, which means a decrease in temperature when they are adiabatically magnetized (step 1 in Fig. 1c), and a tunable thermal hysteresis24. Besides other extrinsic and intrinsic effects, the hysteresis width is dominantly determined by the lattice mismatch between the cubic austenite and the tetragonally distorted martensite phase, which can be adjusted by varying the chemical composition25,26. The high elastic energy barriers that separate the parent and martensitic phases ensure, to a very good approximation, the athermal character of the transition. Consequently, the transformed fraction does not change while the temperature, stress and magnetic field are kept constant27. In our hysteresis-positive approach, the magnetic field needs to be sufficiently high to transform the material completely. Then, due to the appropriately tuned thermal hysteresis, the reverse transition does not take place during demagnetization, as sketched in step 2 (Fig. 1c). This is the fundamental difference compared to the conventional AMR cooling cycle. In this way, we turn the thermal hysteresis from being a problem for magnetocalorics into an advantage for multistimuli caloric materials. The irreversibility of the magnetostructural transition allows us to reduce the magnetized volume to a minimum, which means that we can abandon the large, expensive REE magnets that are required to produce a magnetic field over a large volume.
After locking the material in the ferromagnetic phase in step 3 in Fig. 1c, the heat can be extracted from the cooling compartment in the absence of a magnetic field. To return the material to its original state, a loading unit is required, as illustrated by the wheel system in Fig. 1b. In the case of Ni–Mn–In, the application of a mechanical load shifts the hysteresis curve to higher temperatures and consequently the material transforms back into the low-temperature phase. This process is accompanied by a large heating effect because the reverse transformation is induced (steps 4 and 5 in Fig. 1c). The excess heat can then be expelled to the surroundings in step 6, the final step (Fig. 1c). Note that part of the excess heat from the multicaloric material could also flow into the loading unit due to the direct contact. Therefore, the wheel system should either be thermally insulated or kept at the temperature of the hot reservoir. As for the magnetization step, the high stress field is only required over a very small volume. Further details of the six-step hysteresis-positive cycle is given in Supplementary Information, in particular Supplementary Fig. 1.
Figure 2 shows a demonstration of the cycle on a laboratory scale. In this experiment, the Heusler material was loaded by a uniaxial stress of 75 MPa to turn it into the low-temperature martensite phase, which resulted in significant heating. The mechanical load was created by a piston connected to a screw system. For technical reasons, the application of the mechanical stress must be performed with care to prevent overshooting, whereas unloading the sample can happen instantaneously. After a certain waiting time, a short magnetic field pulse of 1.8 T with a duration of approximately 2 s was applied by an electromagnet and the Heusler sample cooled down. The key point is that the MCE was not reversed when removing the magnetic field because of the thermal hysteresis. As the sample is in thermal contact with the surroundings, its temperature relaxed back with time. This simulates the extraction of heat from the cooling compartment. Afterwards, the cycle can start over again.
For the cyclic tests, a Heusler alloy in bulk form with the composition Ni49.6Mn35.6In14.8 was selected. Its martensitic transition with a thermal hysteresis of about 10 K occurs slightly below room temperature, as can be seen from Fig. 3a; the magnetization measurements in 0.2, 1 and 2 T are shown in the upper panel. The magnetic field shifts the hysteresis curve towards lower temperatures as the austenitic high-magnetization state is stabilized. In contrast, the application of uniaxial stress favours the low-temperature martensite phase and therefore the transition temperature is increased. In the lower panel of Fig. 3a, the compressive strain of the sample is plotted as a function of temperature under a constant uniaxial load up to 60 MPa. For these measurements, a mechanical testing machine was equipped with a heating and cooling chamber that allowed us to sweep the sample temperature. We observed that the magnetic field shifts the transition by approximately −3.7 K T−1, whereas uniaxial stress increases the transformation temperature by about +0.23 K MPa−1. These values are in agreement with data in the literature for similar samples28. In other words, a uniaxial load of 16 MPa is comparable to a magnetic field of 1 T, although the transition is shifted in the opposite direction.
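The stated equivalence follows directly from the two measured sensitivities (a one-line arithmetic check using the values from the text):

```python
# Transition shifts: about -3.7 K per tesla of field, +0.23 K per MPa of stress.
dT_dH = 3.7        # K/T (magnitude of the field-induced shift)
dT_dsigma = 0.23   # K/MPa (stress-induced shift)

# Stress needed to shift the transition as much as 1 T of magnetic field.
stress_per_tesla = dT_dH / dT_dsigma
print(f"{stress_per_tesla:.0f} MPa of uniaxial stress ~ 1 T")  # 16 MPa
```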
The results of the cycling experiments under multiple stimuli are illustrated in Fig. 3b. The magnetic field and the uniaxial stress were applied alternately at minute intervals. The whole measurement set-up was heated in the background with a sweeping rate of 0.1 K min−1 to test the properties of the materials at different temperatures. Despite the simplicity of the set-up and its poor thermal insulation, the feasibility of the hysteresis-exploiting concept is demonstrated in a sequence of cycles. The basic idea to obtain an irreversible MCE in a cyclic manner is demonstrated, even though it accounts for little more than 1 K. In this example the sample was loaded with about 75 MPa. However, the application of such a large uniaxial stress is a harsh process that can lead to fatigue and even to the magnetocaloric material being destroyed. In tests on similar arc-melted Heusler alloys, several samples failed. One reason for this is that the grain sizes in those compounds are in the range of millimetres29. Therefore, cracks can propagate easily. One possibility to enhance the mechanical stability of the material is to refine the microstructure by suction casting the materials.
Figure 4 summarizes the results for the suction-cast material with a similar chemical composition to the bulk material. In Fig. 4a both the magnetization (top) and the compressive strain (bottom) of the sample are plotted. When a magnetic field is applied, the transition shifts by about −1 K T−1, whereas uniaxial stress increases the transition temperature Tt by approximately 0.1 K MPa−1. Compared to the bulk sample in Fig. 3, the dTt/dH and dTt/dσ values are significantly smaller because the transition of the suction-cast sample is close to the Curie temperature of the austenite phase. In comparative measurements of the adiabatic temperature change, ΔTad (Supplementary Information) in cyclic magnetic fields of 1.9 T, we were able to demonstrate that the suction-cast material provides only a vanishingly small reversible MCE related to the first-order transition. As shown in Supplementary Fig. 2, the irreversible ΔTad in this field change accounts for −1.28 K in the first field application of a ‘fresh’ material. By applying the multistimuli hysteresis cycle, it is possible to exploit almost the entire potential ΔTad. For the cyclic tests in Fig. 4c, it is apparent that an irreversible temperature change of −1.2 K can be achieved. Furthermore, the mechanical integrity of the specimen is much improved due to the fine microstructure. Figure 4b shows a light-microscopy image of a sample in the martensite state with fine needle-like structures. However, the grain boundaries of the parent austenite phase that separate the different martensitic regions are still visible. During the suction casting of the melt, the grains grow in a radial direction towards the centre of the rod and have a typical width between 100 and 250 μm. This special microstructure can withstand much larger uniaxial stresses than the conventionally arc-melted counterparts, even beyond 100 MPa.
Figure 4c shows the thermal response of the suction-cast material when a magnetic field pulse of about 1.8 T (even minutes) and mechanical load of 80 MPa (odd minutes) are alternately applied. The temperature of the holder was increased from 290 to 298 K at 0.25 K min−1 to study the response in different temperature regions of the martensitic transition. The diagonal curve in Fig. 4c represents the absolute temperature profile of the sample (right-hand axis). To illustrate the temperature profile of the material in a clearer form, the baseline heating curve was subtracted, as shown on the left-hand axis of Fig. 4c. It is possible to distinguish three different regions. Below approximately 292 K (minutes 0–7, blue shading), the magnetocaloric cooling effect is predominantly reversible and the sample temperature reverts instantly after the field pulse. At this low temperature, the material is mainly in the martensite state.
Above approximately 292 K (minutes 7–23, red shading in Fig. 4c), increasing amounts of austenite are locked by the thermal hysteresis, which prevents it from transforming back into martensite. For this reason, the cooling effect becomes irreversible during the magnetic field pulse with a maximum value of −1.2 K obtained at 295.5 K (minute 22). Here the temperature change of the material is maintained long after the magnetic field pulse and it takes about 1 min to relax back to the background temperature. Furthermore, the heating of the sample is intensified during the stress application, which indicates that a larger amount of austenite is switched back into martensite. In contrast, above 296 K (minutes 24–29, pink shading in Fig. 4c), the transition turns into a conventional MCE, which is reversible. At these high temperatures, the austenite phase is dominant and, because its Curie temperature, being approximately 305 K, is rather close, a temperature increase of 1.3 K is observed. However, the temperature change is immediately reversed when the magnetic field decreases. A small irreversible part is also present at these temperatures, as a small amount of martensite could be formed by the applied stress application, but this is not significant.
## On the potential of exploiting the hysteresis cycle
In conclusion, our experiments demonstrate that the hysteresis-exploiting cycle works on a laboratory scale. In particular, the suction-cast Ni–Mn–In Heusler alloy is a very promising innovation, due to its enhanced mechanical strength and reasonably large thermal hysteresis. These inverse magnetocaloric materials have the advantage that they cool during the field-application step, in which no mechanical contact is required, which simplifies the heat transfer. However, most first-order MCE alloys could be used for such a cycle. The only requirements are a tuned thermal hysteresis, a sufficient magnetic field dependence of the transition temperature and an external stimulus that can transform the material back to its original state. This cycle represents a multicaloric approach to magnetic refrigeration that exploits the thermal hysteresis of a MCE material instead of attempting to avoid it. The advantage is that the full potential of the Heusler compound can now be utilized in a cyclic process, even if it shows no reversible ΔTad in conventional magnetic field cycling. In the Supplementary Information, an approximation of the energy balance derived from experimental data is given (Supplementary Figs. 3–5). In an exemplary six-step cooling cycle, the extra work associated with the deformation, Wela (per unit mass) due to the application of a uniaxial stress can be approximated at the material level from Fig. 4a to Wela ≈ 17.6 J kg−1 (mass density ρ = 7.0 g cm−3), which is about 60% larger than the magnetization work Wmag ≈ 10.6 J kg−1. Instead, a conventional four-step magnetocaloric cooling cycle requires a magnetic field change of about 4 T to obtain a similar ΔTad. The corresponding magnetic work then amounts to about 22.2 J kg−1, which is less than the sum of Wela and Wmag in the exploiting hysteresis cycle. 
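The energy figures quoted above can be tallied quickly (a sketch reproducing the article's arithmetic; all values per unit mass, taken from the text):

```python
W_ela = 17.6   # deformation work in the hysteresis-exploiting cycle, J/kg
W_mag = 10.6   # magnetization work in the hysteresis-exploiting cycle, J/kg
W_conv = 22.2  # magnetic work of a conventional cycle needing ~4 T, J/kg

# W_ela exceeds W_mag by roughly two thirds ("about 60%" in the text).
excess = 100 * (W_ela / W_mag - 1)
print(f"W_ela is about {excess:.0f}% larger than W_mag")

# The conventional cycle's magnetic work is below the combined work here.
print(f"hysteresis cycle: {W_ela + W_mag:.1f} J/kg vs conventional: {W_conv} J/kg")
```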
However, it is important to remember that the magnetic field strength cannot be increased much beyond 2 T when using permanent magnets as the field source for the refrigerator. Fortunately, this impasse might be overcome by combining the magneto- and elastocaloric effects as described here.
In the proof of concept, we obtained a temperature change of −1.2 K. The largest irreversible temperature changes reported in a field of approximately 2 T amount to −8 K in Ni–Mn–In–Co (ref. 30) and as much as −9.2 K for the compound Fe–Rh (ref. 31). The next step is to match these large temperature changes to the hysteresis-exploiting cycle with designed-for-purpose materials. A simple refrigerator would only require two stages with tailored transition temperatures switched in series to build up a sufficiently large temperature span in which 50% of the magnetocaloric material is active in cooling compared to the few percent that is active in a conventional AMR cycle (further information on the heat flow management, on segmentation issues and on cascade devices with two and more stages is given in Supplementary Figs. 6 and 7).
To make use of the fact that the multicaloric material retains its temperature drop after the magnetization and demagnetization step, there is no necessity for a fast flow of the exchange fluid (the Supplementary Information gives further details) as required in the classical AMR12. This claim originates from the restriction that the amount of the magnetocaloric material is more or less fixed for a given magnet arrangement. Therefore, an enhancement of the cooling power is obtainable solely by increasing the operating frequency of the AMR device. However, many technical problems, such as the pressure drop of the fluid through the regenerator or the slowness of the valve system, limit the frequency to typically 1 Hz (ref. 13). In great contrast, in the hysteresis-positive approach, the ratio between the magnetocaloric and magnet material, and therefore the cooling power, is scalable by enlarging the heat-exchanger body while the magnetic system, loading unit and operating frequency are kept the same (Supplementary Fig. 8), which is an essential advantage of the multistimuli approach. This concept will allow a drastic reduction in the amount of expensive and raw-material-critical Nd–Fe–B permanent magnets needed for a cooling machine and, at the same time, outperform magnetic refrigerators that use the AMR principle in terms of efficiency.
## Methods
Samples with the nominal composition Ni50.0Mn35.5In14.5 were prepared by arc melting. The ingots were turned upside down and remelted several times to ensure chemical homogeneity. One batch was further treated using the suction-casting option of the arc melter to prepare rods with a diameter of 3 mm. Both specimens were subsequently annealed at 900 °C for 24 h, followed by water quenching. The bulk sample was cut and polished into a block with dimensions of 2 × 2 × 5.5 mm3. The suction-cast material was prepared with a length of 4.9 mm. The magnetic measurements were made on a commercial vibrating sample magnetometer using small fragments. Temperature-dependent dilatometry measurements involved a mechanical-testing machine that operated in the constant-load mode. The set-up included a variable-temperature chamber to cool and heat the piston unit at a rate of 1 K min−1. The sample temperature and height were directly measured with a type-T thermocouple and a strain-gauge sensor, respectively, attached to both pistons. For the experimental demonstration of the cooling cycle, an in-house-constructed set-up was used in which uniaxial stress was applied mechanically by a screw. The load was determined via a force sensor installed in the piston axis. The temperature of the sample, measured directly by a type-T thermocouple, could be varied by a thermal bath connected to the base plate of the piston unit. For the application of the magnetic field pulse, a commercial electromagnet was used. A Hall probe situated near the sample detected the magnetic field strength. Comparative measurements of the adiabatic temperature change (in the absence of a mechanical load) of the suction-cast material in a magnetic-field change of 1.9 T in the continuous and discontinuous protocol were performed in a purpose-built device using standard type-T thermocouples24.
## Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
## References
1. Belman-Flores, J. M., Barroso-Maldonado, J. M., Rodríguez-Muñoz, A. P. & Camacho-Vázquez, G. Enhancements in domestic refrigeration, approaching a sustainable refrigerator—a review. Renew. Sust. Energ. Rev. 51, 955–968 (2015).
2. Gutfleisch, O. et al. Magnetic materials and devices for the 21st century: stronger, lighter, and more energy efficient. Adv. Mater. 23, 821–842 (2011).
3. Sandeman, K. G. Magnetocaloric materials: the search for new systems. Scripta Mater. 67, 566–571 (2012).
4. Moya, X., Kar-Narayan, S. & Mathur, N. D. Caloric materials near ferroic phase transitions. Nat. Mater. 13, 439–450 (2014).
5. Takeuchi, I. & Sandeman, K. Solid-state cooling with caloric materials. Phys. Today 68, 48–54 (December, 2015).
6. Krenke, T. et al. Inverse magnetocaloric effect in ferromagnetic Ni–Mn–Sn alloys. Nat. Mater. 4, 450–454 (2005).
7. Liu, J., Gottschall, T., Skokov, K. P., Moore, J. D. & Gutfleisch, O. Giant magnetocaloric effect driven by structural transitions. Nat. Mater. 11, 620–626 (2012).
8. Mañosa, L. et al. Giant solid-state barocaloric effect in the Ni–Mn–In magnetic shape-memory alloy. Nat. Mater. 9, 478–481 (2010).
9. Matsunami, D., Fujita, A., Takenaka, K. & Kano, M. Giant barocaloric effect enhanced by the frustration of the antiferromagnetic phase in Mn3GaN. Nat. Mater. 14, 73–78 (2015).
10. Bonnot, E., Romero, R., Mañosa, L., Vives, E. & Planes, A. Elastocaloric effect associated with the martensitic transition in shape-memory alloys. Phys. Rev. Lett. 100, 125901 (2008).
11. Tušek, J. et al. The elastocaloric effect: a way to cool efficiently. Adv. Energy Mater. 5, 1500361 (2015).
12. Smith, A. et al. Materials challenges for high performance magnetocaloric refrigeration devices. Adv. Energy Mater. 2, 1288–1318 (2012).
13. Scarpa, F., Tagliafico, G. & Tagliafico, L. A. A classification methodology applied to existing room temperature magnetic refrigerators up to the year 2014. Renew. Sust. Energ. Rev. 50, 497–503 (2015).
14. Kitanovski, A., Plaznik, U., Tomc, U. & Poredoš, A. Present and future caloric refrigeration and heat-pump technologies. Int. J. Refrig. 57, 288–298 (2015).
15. Yu, B., Liu, M., Egolf, P. W. & Kitanovski, A. A review of magnetic refrigerator and heat pump prototypes built before the year 2010. Int. J. Refrig. 33, 1029–1060 (2010).
16. Gómez, J. R., Garcia, R. F., Catoira, A. D. M. & Gómez, M. R. Magnetocaloric effect: a review of the thermodynamic cycles in magnetic refrigeration. Renew. Sust. Energ. Rev. 17, 74–82 (2013).
17. Engelbrecht, K. et al. Experimental results for a novel rotary active magnetic regenerator. Int. J. Refrig. 35, 1498–1505 (2012).
18. Zimm, C. et al. Design and performance of a permanent-magnet rotary refrigerator. Int. J. Refrig. 29, 1302–1306 (2006).
19. Bjørk, R., Smith, A., Bahl, C. & Pryds, N. Determining the minimum mass and cost of a magnetic refrigerator. Int. J. Refrig. 34, 1805–1816 (2011).
20. Monfared, B., Furberg, R. & Palm, B. Magnetic vs. vapor-compression household refrigerators: a preliminary comparative life cycle assessment. Int. J. Refrig. 42, 69–76 (2014).
21. Gottschall, T., Skokov, K. P. & Gutfleisch, O. Kühlvorrichtung und ein Verfahren zum Kühlen. German patent 10 2016 110 385.3 (2016).
22. Gottschall, T. et al. A matter of size and stress: understanding the first-order transition in materials for solid-state refrigeration. Adv. Funct. Mater. 27, 1606735 (2017).
23. Mañosa, L. et al. Inverse barocaloric effect in the giant magnetocaloric La–Fe–Si–Co compound. Nat. Commun. 2, 595 (2011).
24. Gutfleisch, O. et al. Mastering hysteresis in magnetocaloric materials. Phil. Trans. R. Soc. A 374, 20150308 (2016).
25. Song, Y., Chen, X., Dabade, V., Shield, T. W. & James, R. D. Enhanced reversibility and unusual microstructure of a phase-transforming material. Nature 502, 85–88 (2013).
26. Gottschall, T., Skokov, K. P., Benke, D., Gruner, M. E. & Gutfleisch, O. Contradictory role of the magnetic contribution in inverse magnetocaloric Heusler materials. Phys. Rev. B 93, 184431 (2016).
27. Pérez-Reche, F. J., Vives, E., Mañosa, L. & Planes, A. Athermal character of structural phase transitions. Phys. Rev. Lett. 87, 195701 (2001).
28. Karaca, H. E. et al. Magnetic field-induced phase transformation in NiMnCoIn magnetic shape-memory alloys—a new actuation mechanism with large work output. Adv. Funct. Mater. 19, 983–998 (2009).
29. Gottschall, T., Skokov, K. P., Frincu, B. & Gutfleisch, O. Large reversible magnetocaloric effect in Ni–Mn–In–Co. Appl. Phys. Lett. 106, 021901 (2015).
30. Gottschall, T. et al. Reversibility of minor hysteresis loops in magnetocaloric Heusler alloys. Appl. Phys. Lett. 110, 223904 (2017).
31. Chirkova, A. et al. Giant adiabatic temperature change in FeRh alloys evidenced by direct measurements under cyclic conditions. Acta Mater. 106, 15–21 (2016).
## Acknowledgements
The work was supported by funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant no. 743116—project Cool Innov), the DFG (grant no. SPP 1599), the CICyT (Spain) project MAT2016-75823-R and the HLD at HZDR, a member of the European Magnetic Field Laboratory.
## Author information
### Contributions
T.G., M.F., A.T., L.P. and K.P.S. were responsible for the sample preparation. T.G., A.G.-C., A.P. and L.M. designed and performed the tensile test and cycling experiments. A.T., M.F. and K.P.S. took care of the adiabatic temperature-change measurements and microscopy. All the authors discussed the results and developed the explanation of the experiments. T.G. wrote the manuscript supported by all the co-authors. O.G. led the project.
### Corresponding authors
Correspondence to Tino Gottschall or Oliver Gutfleisch.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Supplementary information
### Supplementary Information
Supplementary Figures 1–8, Supplementary References 1–2
Gottschall, T., Gràcia-Condal, A., Fries, M. et al. A multicaloric cooling cycle that exploits thermal hysteresis. Nature Mater 17, 929–934 (2018). https://doi.org/10.1038/s41563-018-0166-6
|
|
# Holonomy, SO(6), SU(3) and SU(4)
1. Sep 12, 2007
### AlphaNumeric2
This springs from section 15.1.3 of Superstring Theory (Vol 2) by GS&W (should anyone have that to hand).
K is a compact 6 dimensional space, thus its holonomy group is a subgroup of SO(6). Fine. $$\eta$$ is covariantly constant on K (this comes from SUSY constraints). Thus we need a subgroup H of SO(6) which has, for any U in H, $$U\eta = \eta$$. Okay so far.
GS&W then point out that $$\mathcal{L}(SO(6)) \equiv \mathcal{L}(SU(4))$$. That I understand. Spinors of definite chirality are then in the $$\mathbf{4}$$ or $$\mathbf{\bar{4}}$$ of SU(4). Okay with this. However, I don't see why this applies to $$\eta$$ since, from my understanding, $$\eta$$ would, in a complex basis on a complex manifold, be a 3-component complex spinor, yet GS&W then talk about SU(4) matrices acting on a 4-component $$\eta$$.
Am I missing something? I can see SO(6) having a $$\mathbf{4}$$, which splits into a $$\mathbf{3}$$ and a $$\mathbf{1}$$, with the holonomy preserving the singlet (and SU(3) acting on the $$\mathbf{3}$$), so that there's one and only one covariantly constant spinor on K (as is needed by the string constraints), but going into a 4-component complex basis just seems confusing.
Is this just a particular way of representing a spinor on a 6 dimensional manifold? Wouldn't the 4 components give $$\eta$$ too many degrees of freedom? I thought I had my head around the whole Calabi–Yau thing and its construction via supersymmetry breaking, but the 4-component spinor has thrown me.
Thanks in advance for any help.
2. Sep 15, 2007
### Haelfix
Maybe I'm missing the question, but SU(4) has a maximal subalgebra (spinor) that goes like Sp(4) or SU(2) × SU(2), so it makes good sense to work in a 4-component complex basis. If you were looking at the real irreps, then yes, you would look at the maximal subalgebra that goes like SU(3) × U(1).
3. Oct 7, 2007
### AlphaNumeric2
Sorry for the delay in replying.
Yeah, I was getting mixed up about real and complex reps and the symmetries involved which kept the number of degrees of freedom the same. A lot more reading and thinking has helped.
Thanks :)
|
|
# Use Bayes Model to Forecast
If I have the following dataset and I'm interested in finding the probability of it raining today given that the last 2 days were raining, it seems like I could use a Bayesian model to forecast that probability.
I'm not sure how do I start.
1) The event A|B has 3 occurrences: day 3, day 4 and day 15. Does that mean that $P(A|B) = \frac{3}{20}$? If so, given the dataset that I have, do I still need the following Bayes formula to obtain $P(A|B)$?
$$P(A|B) =\frac{P(B| A)\cdot P(A)}{P(B)}$$
A = rain on 3rd day
B = rain previous 2 days
P(A|B) = probability of 3rd day raining given the last 2 days raining
P(B|A) = probability of the last 2 days raining given the 3rd day raining (this doesn't make intuitive sense). If I'm to obtain this probability from the dataset, how do I do that?
P(A) = probability of 1 day raining
P(B) = probability of 2 consecutive days raining
2) I don't know if the above approach is the right way to forecast the chances of the 3rd day raining given that the previous 2 days were raining. Having said that, are there any other stochastic models that I could use?
Note: I'm currently studying Kai Lai Chung's A Course in Probability Theory, and bought the book, Probability and Stochastic Modeling by Vladimir Rotar to self-study
$$\begin{array}{|c|c|} \hline Day & Rain \\ \hline 1&rain\\ \hline 2&rain\\ \hline 3&rain\\ \hline 4&rain\\ \hline 5&no\\ \hline 6&rain\\ \hline 7&no\\ \hline 8&no\\ \hline 9&no\\ \hline 10&rain\\ \hline 11&rain\\ \hline 12&no\\ \hline 13&rain\\ \hline 14&rain\\ \hline 15&rain\\ \hline 16&no\\ \hline 17&no\\ \hline 18&rain\\ \hline 19&rain\\ \hline 20&no\\ \hline \end{array}$$
• I think the simplest approach is to consider the fact that that there were 7 instances of rain 2 days in a row and that 3 of those instances were followed by a third day of rain while the other 4 did not. I would say $P(A|B)=3/7$. – Laars Helenius Mar 13 '18 at 23:41
The Bayes' approach of finding $\mathsf P(A\mid B)$ by calculating it as $\mathsf P(B\mid A)\mathsf P(A)/\mathsf P(B)$ is only helpful if you have ways to evaluate $\mathsf P(B\mid A)$, $\mathsf P(A)$, and $\mathsf P(B)$ that are easier than just evaluating $\mathsf P(A\mid B)$ directly.
All you have is the data. You can use a frequentist approximation to evaluate $\mathsf P(B\mid A)$, $\mathsf P(A)$, and $\mathsf P(B)$, but you may as well just use it to evaluate $\mathsf P(A\mid B)$.
There are 18 blocks of three consecutive days. 10 of these have rain on their third day (event $A$). 7 of them have rain on the first and second days (event $B$). 3 of them have rain on all three days (event $A\cap B$).
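The counting argument can be reproduced directly from the question's table; a short sketch (variable names are mine), using exact fractions to show that the Bayes route gives the same answer as direct counting:

```python
from fractions import Fraction

# 20-day record from the question's table (True = rain), days 1..20
rain = [True, True, True, True, False, True, False, False, False, True,
        True, False, True, True, True, False, False, True, True, False]

# all blocks of three consecutive days
windows = [(rain[i], rain[i+1], rain[i+2]) for i in range(len(rain) - 2)]

n_A  = sum(w[2] for w in windows)            # rain on the 3rd day of the block
n_B  = sum(w[0] and w[1] for w in windows)   # rain on the first two days
n_AB = sum(all(w) for w in windows)          # rain on all three days

p_A_given_B = Fraction(n_AB, n_B)            # direct frequentist estimate

# Bayes' rule gives the same number, since every factor is estimated
# from the same windows: P(B|A) P(A) / P(B)
p_bayes = (Fraction(n_AB, n_A) * Fraction(n_A, len(windows))
           / Fraction(n_B, len(windows)))

print(len(windows), n_A, n_B, n_AB)   # 18 10 7 3
print(p_A_given_B, p_bayes)           # 3/7 3/7
```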
|
|
# How do you multiply (3+2i) (2-i)?
Feb 11, 2016
This is $\left(8 + i\right)$.
#### Explanation:
This can be multiplied just like any other binomial * binomial:
FOIL.
so we have $\left(6 + 4 i - 3 i - 2 {i}^{2}\right)$
Since ${i}^{2} = - 1$, the last term is equal to $+ 2$
So we have $\left(6 + 4 i - 3 i + 2\right)$
Combine like terms and get the answer $\left(8 + i\right)$
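The same FOIL bookkeeping can be verified mechanically, for instance in Python, which writes the imaginary unit as j (a quick check, not part of the original answer):

```python
# FOIL expansion of (3+2i)(2-i), checked against Python's complex arithmetic
a, b = (3, 2), (2, -1)           # (real, imag) parts of the two factors

first = a[0] * b[0]              # 3*2    = 6
outer = a[0] * b[1]              # 3*(-1) = -3   (coefficient of i)
inner = a[1] * b[0]              # 2*2    = 4    (coefficient of i)
last  = a[1] * b[1]              # 2*(-1) = -2   (coefficient of i^2 = -1)

real = first - last              # i^2 = -1 flips the sign of "last"
imag = outer + inner

print(real, imag)                                  # 8 1
print((3 + 2j) * (2 - 1j) == complex(real, imag))  # True
```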
|
|
# Séminaire N. Bourbaki – Designs Exist (after Peter Keevash) – the paper
In June I gave a lecture at Bourbaki's séminaire devoted to Keevash's breakthrough result on the existence of designs. Here is a draft of the paper: Designs exist (after P. Keevash).
Remarks, corrections and suggestions are most welcome!
I would have loved to expand a little on
1) How designs are connected to statistics
2) The algebraic part of Keevash’s proof
3) The “Rodl-style probabilistic part” (that I largely took for granted)
4) The greedy-random method in general
5) Difficulties when you move from graph decomposition to hypergraph decomposition
6) Wilson’s proofs of his theorem
7) Teirlink’s proof of his theorem
I knew at some point in rough details both Wilson's proof (I heard 8 lectures about and around it from Wilson himself in 1978) and Teirlink's (Eran London gave a detailed lecture at our seminar), but I largely forgot. I'd be happy to see a good source.
8) Other cool things about designs that I should mention.
9) The Kuperberg-Lovett-Peled work
(To be realistic, adding something for half these items will be nice.)
Here is the seminar page, (with videotaped lectures), and the home page of Association des collaborateurs de Nicolas Bourbaki . You can find there cool links to old expositions since 1948 which overall give a very nice and good picture of modern mathematics and its highlights. Here is the link to my slides.
In my case (but probably also for some other Bourbaki speakers), it is not that I had full understanding (or close to it) of the proof and just had to decide how to present it; my presentation largely represents what I know, and the séminaire forced me to learn. I was lucky that Peter gave a series of lectures (Video 1, Video 2, Video 3, Video 4) about it in the winter at our Midrasha, and that he decided to write a paper "Counting designs" based on the lectures, and even luckier that Jeff Kahn taught some of it in class (based on Peter's lectures and subsequent article) and later explained to me some core ingredients. Here is a link to Keevash's full paper "The existence of designs," and an older post on his work.
Curiously the street was named only after Pierre Curie until the 60s and near the sign of the street you can still see the older sign.
# In And Around Combinatorics: The 18th Midrasha Mathematicae. Jerusalem, JANUARY 18-31
The 18th yearly school in mathematics is devoted this year to combinatorics. It will feature lecture series by Irit Dinur, Joel Hass, Peter Keevash, Alexandru Nica, Alexander Postnikov, Wojciech Samotij, and David Streurer and additional activities. As usual grants for local and travel expences are possible.
# Amazing: Peter Keevash Constructed General Steiner Systems and Designs
Here is one of the central and oldest problems in combinatorics:
Problem: Can you find a collection S of q-subsets of an n-element set X so that every r-subset of X is included in precisely λ sets in the collection?
A collection S of this kind is called a design with parameters (n,q,r,λ); of special interest is the case λ=1, in which case S is called a Steiner system.
For such an S to exist, n must be admissible, namely ${{q-i} \choose {r-i}}$ should divide $\lambda {{n-i} \choose {r-i}}$ for every $0 \le i \le r-1$.
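The divisibility conditions defining admissibility are easy to test by brute force; a small sketch (the function name `admissible` and the choice of examples are mine):

```python
from math import comb

def admissible(n, q, r, lam=1):
    """Check the necessary divisibility conditions for an (n, q, r, lam) design:
    C(q-i, r-i) must divide lam * C(n-i, r-i) for each i = 0, ..., r-1."""
    return all(lam * comb(n - i, r - i) % comb(q - i, r - i) == 0
               for i in range(r))

# Steiner triple systems (q=3, r=2, lam=1): the conditions reduce to the
# classical n ≡ 1 or 3 (mod 6)
print([n for n in range(3, 22) if admissible(n, 3, 2)])
# [3, 7, 9, 13, 15, 19, 21]
```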
There are only a few known examples of designs when r>2. It was even boldly conjectured that for every q, r and λ, if n is sufficiently large then a design of parameters (n,q,r,λ) exists, but the known constructions came very, very far from this… until last week. Last week, Peter Keevash gave a twenty-minute talk at Oberwolfach where he announced the proof of the bold existence conjecture. Today his preprint, The existence of designs, has become available on the arXiv.
### Brief history
The existence of designs and Steiner systems is one of the oldest and most important problems in combinatorics.
1837-1853 – The existence of designs and Steiner systems was asked by Plücker (1835), Kirkman (1846) and Steiner (1853).
1972-1975 – For r=2, which was of special interest, Rick Wilson proved their existence for large enough admissible values of n.
1985 – Rödl proved the existence of approximate objects (the property holds for a (1-o(1)) fraction of the r-subsets of X), thus answering a conjecture by Erdös and Hanani.
1987 – Teirlink proved their existence for infinitely many values of n when r and q are arbitrary and λ is a certain large number depending on q and r but not on n. (His construction also does not have repeated blocks.)
2014 – Keevash proved the existence of Steiner systems for all but finitely many admissible values of n for every q and r. He uses a new method referred to as Randomised Algebraic Constructions.
Update: Just 2 weeks before Peter Keevash announced his result I mentioned the problem in my lecture in “Natifest” in a segment of the lecture devoted to the analysis of Nati’s dreams. 35:38-37:09.
Update: Some other blog posts on this achievement: Van Vu, Jordan Ellenberg, The Aperiodical. A related post from Cameron’s blog: Subsets and partitions.
Update: Danny Calegari pointed out a bird's-eye similarity between Keevash’s strategy and the strategy of the recent Kahn–Markovic proof of the Ehrenpreis conjecture (http://arxiv.org/abs/1101.1330), a strategy used again by Danny and Alden Walker to show that random groups contain fundamental groups of closed surfaces (http://arxiv.org/abs/1304.2188).
|
|
# Super-radiance, Berry phase, Photon phase diffusion and Number squeezed state in the U(1) Dicke ( Tavis-Cummings ) model
Jinwu Ye and CunLin Zhang Beijing Key Laboratory for Terahertz Spectroscopy and Imaging, Key Laboratory of Terahertz Optoelectronics, Ministry of Education, Department of Physics, Capital Normal University, Beijing, 100048 China
Department of Physics and Astronomy, Mississippi State University, MS, 39762, USA
July 14, 2019
###### Abstract
Recently, strong coupling regimes of superconducting qubits or quantum dots inside a micro-wave circuit cavity and BEC atoms inside an optical cavity were achieved experimentally. The strong coupling regimes in these systems are described by the Dicke model. Here, we solve the Dicke model by a 1/N expansion. In the normal state, we find a √N behavior of the collective Rabi splitting. In the superradiant phase, we identify an important Berry phase term which has dramatic effects on both the ground state and the excitation spectra of the strongly interacting system. The single photon excitation spectrum has a low energy quantum phase diffusion mode in imaginary time with a large spectral weight and also a high energy optical mode with a low spectral weight. The photons are in a number squeezed state which may have wide applications in high-sensitivity measurements and quantum information processing. Comparisons with exact diagonalization studies are made. Possible experimental schemes to realize the superradiant phase are briefly discussed.
Recently, several experiments qedbec () successfully achieved the strong coupling of a BEC of atoms to the photons inside an ultrahigh-finesse optical cavity. In parallel, the strong coupling regime was also achieved with artificial atoms such as superconducting qubits inside a micro-wave circuit cavity and quantum dots inside a semi-conductor micro-cavity system circuit (). In these experiments, the individual maximum coupling strength between the (artificial) atoms and the field is larger than the spontaneous decay rate of the upper state and the intra-cavity field decay rate. The collective Rabi splitting was found to scale as √N. All these systems are described by the Dicke model dicke (), Eqn.1, where a single mode of photons is coupled to an assembly of N atoms with the same coupling strength g.
The importance of various kinds of Dicke models in quantum optics ranks the same as the boson Hubbard model, fermionic Hubbard model and Heisenberg model in strongly correlated systems and the Ising model in statistical mechanics. Since the Dicke model was proposed in 1954, it has been solved in the thermodynamic limit N → ∞ by various methods dicke1 (); popov (); staircase (); zero (); chaos (). It was found that when the collective atom-photon coupling strength g is sufficiently large (Fig.1), the system gets into a new phase called the super-radiant phase, where there are a large number of inverted atoms and also a large number of photons in the system’s ground state noneq (). However, so far, there are only a few very preliminary exact diagonalization (ED) studies of Dicke models at finite N staircase (); chaos (), and the underlying physics remains unexplored bethe (). It is known that any real symmetry breaking happens only in the thermodynamic limit N → ∞, so in principle there is no real symmetry breaking, and hence no real super-radiant phase, at any finite N. But there is a very important piece of new physics for a finite system, called quantum phase diffusion in imaginary time, which occurs at finite N for a ground state that breaks a continuous symmetry at N = ∞. The quantum phase diffusion process in a finite system is as fundamental and universal as symmetry breaking in an infinite system. Here, we will explore the quantum phase diffusion process of the Dicke model by a 1/N expansion. We determine the ground state and single photon excitation spectrum in both the normal and the superradiant phase. In the normal state, we find a √N behavior of the collective Rabi splitting in the single photon excitation spectrum, consistent with the experimental data, and also determine the corresponding spectral weights. In the superradiant phase, we identify a Berry phase term which has dramatic effects on both the ground state and the excitation spectra.
The single photon excitation spectrum has a very low energy quantum phase diffusion mode with a high spectral weight and also a high energy optical mode with a low spectral weight. Their energies and the corresponding spectral weights are calculated. The photons are in a number squeezed state. The squeezing parameter (namely, the Mandel factor) is determined. It is the Berry phase which leads to the ”Sydney Opera” shape in the single photon excitation spectrum and the consecutive plateaus in photon numbers in Fig.1. The Berry phase is also vital for making quantitative comparisons between the analytical results in this paper and the very preliminary ED results in staircase () and the much more extensive ED in long (). Being very strong in intensity and having a much enhanced signal-to-noise ratio, the number squeezed state from the superradiant phase may have wide applications in quantum information processing subrev () and also in the field of high resolution and high-sensitivity measurement ligo (). Several experimental schemes to realize the superradiant phase of the Dicke model are briefly discussed.
In the Dicke model dicke (), a single mode of photons couples to N two-level atoms with the same coupling constant g. The two-level atoms can be expressed in terms of the Pauli matrices σzi and σ±i. Under the Rotating Wave (RW) approximation, the Dicke model can be written as:
$$H_{U(1)} = \omega_c\, a^\dagger a + \frac{\omega_a}{2}\sum_{i=1}^{N}\sigma^z_i + \frac{g}{\sqrt{N}}\sum_{i=1}^{N}\left(a^\dagger \sigma^-_i + \mathrm{h.c.}\right) \qquad (1)$$
where ωc and ωa are the cavity photon frequency and the energy difference of the two atomic levels respectively, g is the collective photon-atom coupling (g/√N is the individual photon-atom coupling), and the cavity mode could be any one of the two orthogonal polarizations of cavity modes in qedbec (). One can also add an atom-atom interaction to Eqn.1. Because it does not change the symmetry of the model, we expect all the results achieved in this paper to remain qualitatively valid. The Hamiltonian Eqn.1 has a global U(1) symmetry. In the normal phase, g < gc, the symmetry is respected. In the super-radiant phase, g > gc, the symmetry is spontaneously broken. The model Eqn.1 was studied by a large-N expansion in popov (); zero (). However, those works did not extract any important physics at the order of 1/N. In this paper, we will show that by carefully analyzing the effects of the zero mode in the super-radiant phase, one can extract the most important physics of quantum phase diffusion at the order 1/N. In the large-N expansion in magnetic systems spn (), N is the order of the magnetic symmetry group, which is small in the physically relevant cases. Here, however, N is the number of atoms, so the 1/N expansion is quite accurate.
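For small N, Eqn. 1 can be diagonalized exactly in the symmetric (collective spin j = N/2) sector with a photon-number cutoff, which is the setting of the ED studies mentioned above. A minimal sketch (the parameter values and the cutoff are illustrative choices of mine, not taken from the paper):

```python
import numpy as np

def tavis_cummings(N, wc, wa, g, n_max):
    """U(1) Dicke (Tavis-Cummings) Hamiltonian of Eqn. 1 restricted to the
    symmetric collective-spin (j = N/2) sector, photon Fock space cut at n_max."""
    a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # photon annihilation
    n_ph = a.T @ a
    Ip = np.eye(n_max + 1)
    j = N / 2.0
    m = np.arange(-j, j + 1)                             # Jz eigenvalues
    Jz = np.diag(m)
    # J+ raises m: <m+1|J+|m> = sqrt(j(j+1) - m(m+1))
    Jp = np.diag(np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)), k=-1)
    Jm = Jp.T
    Is = np.eye(N + 1)
    H = (wc * np.kron(n_ph, Is) + wa * np.kron(Ip, Jz)
         + g / np.sqrt(N) * (np.kron(a.T, Jm) + np.kron(a, Jp)))
    Nex = np.kron(n_ph, Is) + np.kron(Ip, Jz)            # a†a + Σσz/2 (+ const)
    return H, Nex

H, Nex = tavis_cummings(N=4, wc=1.0, wa=1.0, g=0.5, n_max=8)
print(np.allclose(H @ Nex, Nex @ H))     # True: the RWA conserves N_ex
print(np.linalg.eigvalsh(H)[0])          # ground-state energy of the truncation
```

Because the RW coupling raises the photon number exactly when it lowers Jz, the U(1) conservation law survives the Fock-space truncation, which the check above verifies.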
Following the standard large-N techniques developed in spn (), after re-scaling the photon field a → √N a and integrating out the spin degrees of freedom, one can get an effective action in terms of the photon field only, and then perform a 1/N expansion. At N = ∞, the photon mean-field value λ is determined dicke (); dicke1 (); staircase (); popov () by a saddle-point equation, with β the inverse temperature. At T = 0, in the normal phase g < gc, λ = 0 and the photon number vanishes; in the super-radiant phase g > gc, λ ≠ 0, and the photon number Nλ² is small when g is slightly above gc. The phase diagram and the photon number at N = ∞ and at finite N are shown in Fig.1 and 2 respectively.
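The N = ∞ saddle point can be reproduced by minimizing a mean-field energy per atom. The explicit form E(λ) = ωcλ² − √(ωa²/4 + g²λ²) used below is the standard Tavis–Cummings mean-field energy at T = 0, assumed here rather than quoted from the paper (the paper's saddle-point equation did not survive extraction):

```python
import numpy as np

def lam2_meanfield(wc, wa, g):
    """Photon number per atom lam^2 from brute-force minimization of the
    assumed mean-field energy E(lam) = wc*lam^2 - sqrt(wa^2/4 + g^2*lam^2)."""
    lam = np.linspace(0.0, 3.0, 300001)          # grid with step 1e-5
    energy = wc * lam**2 - np.sqrt(wa**2 / 4 + g**2 * lam**2)
    return lam[np.argmin(energy)]**2

wc = wa = 1.0                                    # at resonance, g_c = sqrt(wc*wa) = 1
for g in (0.5, 1.5):
    analytic = max(0.0, g**2 / (4 * wc**2) - wa**2 / (4 * g**2))
    print(g, round(lam2_meanfield(wc, wa, g), 4), round(analytic, 4))
# g = 0.5 (normal phase): lam^2 = 0;  g = 1.5 (superradiant): lam^2 ≈ 0.4514
```

Minimizing this form gives λ² = g²/(4ωc²) − ωa²/(4g²) for g above gc = √(ωcωa), consistent with the continuous onset of the photon number described in the text.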
At finite N, writing the photon field as the sum of its mean-field value and a fluctuation ψ around it, one can expand the effective action to second order zero () in ψ:
$$S_2[\bar\psi,\psi] = \frac{N}{2\beta}\sum_{i\omega} \left(\bar\psi(\omega),\; \psi(-\omega)\right) G^{-1} \begin{pmatrix} \psi(\omega) \\ \bar\psi(-\omega) \end{pmatrix}, \qquad G^{-1} = \begin{pmatrix} K_1 & K_2 \\ K_2^* & K_1^* \end{pmatrix} \qquad (2)$$
where the explicit expressions of the kernels K1 and K2 are given in zero (), but are not needed in the following.

In the normal phase g < gc, λ = 0. One can see that the normal Green function in Eqn.2 has two poles E∓ with the spectral weights c∓ (Fig.1). After the analytic continuation to real frequencies, the one-photon Green function at T = 0 takes the form:
$$\langle a(t)\, a^\dagger(0)\rangle_N \sim c_- e^{-iE_-t} + c_+ e^{-iE_+t} \qquad (3)$$
where we also put back the re-scaling factor √N of the photon field. This leads to the two peaks at the two poles E∓ in the single photon energy spectrum shown in Fig.1. At the resonance ωc = ωa, the collective Rabi splitting shown in Fig.1 was measured in qedbec (). Note that in qedbec () the coefficients c∓ are different for the two different polarizations. The intensity ratio of the two peaks seems not to have been measured yet in qedbec ().
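In the normal phase the two poles E∓ are just the upper and lower polariton branches. The closed form below, E∓ = (ωc+ωa)/2 ∓ √((ωc−ωa)²/4 + g²), is the textbook two-mode RWA result, assumed here since the paper's explicit expressions are not reproduced; with g the collective coupling (g = √N × individual coupling), the resonant splitting 2g reproduces the √N scaling discussed above:

```python
import numpy as np

def polaritons(wc, wa, g):
    """Lower/upper polariton energies (E-, E+) of the RWA normal phase,
    with collective coupling g."""
    mid, half_det = (wc + wa) / 2, (wc - wa) / 2
    rabi = np.sqrt(half_det**2 + g**2)
    return mid - rabi, mid + rabi

E_minus, E_plus = polaritons(wc=1.0, wa=1.0, g=0.2)
print(E_plus - E_minus)   # ≈ 0.4: at resonance the splitting is 2g
```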
However, in the super-radiant phase g > gc, λ ≠ 0, the anomalous term K2 is non-zero, and the Green function contains a zero mode, shown as a red dashed line in Fig.1, in addition to a pole at a high frequency shown as the blue dashed line in Fig.1. This ”zero” mode is nothing but the ”Goldstone” mode due to the global U(1) symmetry breaking in the super-radiant phase. The important physics behind this ”zero” mode was never addressed in the previous literature popov (); zero (). Here we will explore the remarkable properties of this ”zero” mode. Because of the infra-red divergences from this zero mode, the 1/N expansion in Cartesian coordinates needs to be summed to infinite orders to lead to a finite physical result. It turns out that the non-perturbative effects of the zero mode can be more easily analyzed in the polar coordinate (or phase representation), writing the photon field in terms of its magnitude and phase fluctuations and working to linear order in both. In Eqn.2, by integrating out the massive magnitude mode, and paying special attention to the Berry phase term bertsubir () coming from the angle variable θ, one can show that the dynamics of the phase θ is given by:
$S_{2}[\theta]=iN\lambda^{2}\partial_{\tau}\theta+\frac{N}{2\beta}\sum_{i\omega}\frac{2\lambda^{2}\omega^{2}(\omega^{2}+E_{o}^{2})}{\omega_{c}(\omega^{2}+4g^{2}\lambda^{2})}\,|\theta(\omega)|^{2} \qquad (4)$
In the following, we will discuss the zero mode and the optical mode respectively.
In the low frequency limit where the magnitude fluctuations can be dropped, Eqn.4 reduces to:
$\mathcal{L}_{PD}[\theta]=iN\lambda^{2}\partial_{\tau}\theta+\frac{1}{2D}(\partial_{\tau}\theta)^{2}=\frac{1}{2D}\left(\partial_{\tau}\theta+i\alpha D\right)^{2} \qquad (5)$
with the quantum phase diffusion constant $D$. In Eqn.5, we have denoted $\alpha=N\lambda^{2}-[N\lambda^{2}]$, where $[N\lambda^{2}]$ is the closest integer to $N\lambda^{2}$, so $-1/2<\alpha\leq 1/2$.
The corresponding quantum phase diffusion Hamiltonian is:
$H_{PD}[\theta]=\frac{D}{2}\left(\delta N_{ph}-\alpha\right)^{2} \qquad (6)$
where $\delta N_{ph}$ is the photon number fluctuation around its ground state value and is conjugate to the phase $\theta$: $[\theta,\delta N_{ph}]=i$. In fact, Eqn.6 can be considered as the Hamiltonian of a particle moving along a ring with a very large moment of inertia $1/D$ subject to a fractional flux $\alpha$.
In Eqn.5, after defining the shifted phase $\tilde{\theta}$, one can easily show that
$\left\langle\left(\tilde{\theta}(\tau)-\tilde{\theta}(0)\right)^{2}\right\rangle=2D\int\frac{d\omega}{2\pi}\,\frac{1-e^{i\omega\tau}}{\omega^{2}}=D|\tau| \qquad (7)$
which is a phase diffusion in imaginary time collapse (); laser () with the phase diffusion constant $D$. Only in the thermodynamic limit $N\to\infty$ will a state with a given initial phase stick to that phase as the time evolves, so that the symmetry is spontaneously broken. However, for any finite $N$, the initial phase has to diffuse with the phase diffusion constant $D$. The diffusion time scale in the imaginary time, beyond which there is no more phase coherence, is $\tau_{D}\sim 1/D$, which is finite for any finite $N$. This can also be called the phase "de-coherence" time in the imaginary time laser ().
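The frequency integral in Eqn.7 can be checked explicitly. The odd $\sin$ part of $e^{i\omega\tau}$ drops out by symmetry, and we use the tabulated integral $\int_{0}^{\infty}dx\,(1-\cos x)/x^{2}=\pi/2$:

```latex
2D\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\frac{1-e^{i\omega\tau}}{\omega^{2}}
 =\frac{2D}{\pi}\int_{0}^{\infty}d\omega\,\frac{1-\cos\omega\tau}{\omega^{2}}
 =\frac{2D}{\pi}\,|\tau|\int_{0}^{\infty}dx\,\frac{1-\cos x}{x^{2}}
 =\frac{2D}{\pi}\,|\tau|\cdot\frac{\pi}{2}
 =D|\tau| .
```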
From Eqn.5, it is easy to see that the gapless nature of the phase diffusion mode in Eqn.5 leads to the vanishing of the order parameter inside the super-radiant phase:
$\langle a\rangle=0 \qquad (8)$
So the $U(1)$ symmetry is restored by the phase diffusion. From Eqn.5, after doing the analytic continuation $\tau\to it$, we can get:
$\langle a^{\dagger}(t)a(0)\rangle_{S}=N\lambda^{2}\,e^{-i(\frac{1}{2}+\alpha)Dt} \qquad (9)$
where we also put back the re-scaling factor of the photon field. It is also easy to see that there is no quadrature squeezing anymore at any finite $N$ squeezing (). In fact, all these results can also be achieved by using the Hamiltonian Eqn.6.
Eqn.9 leads to the result that the energy of the "zero energy mode" ( Goldstone mode ) at $N=\infty$ was "lifted" to a quantum phase "diffusion" mode at any finite $N$ with a finite small positive frequency laser ():
$E_{D}=\left(\tfrac{1}{2}+\alpha\right)D=\frac{2\omega_{c}g^{2}\left(\tfrac{1}{2}+\alpha\right)}{\left[(\omega_{c}+\omega_{a})^{2}+4g^{2}\lambda^{2}\right]N}\sim\omega_{c}/N \qquad (10)$
It is the Berry phase effect which leads to the periodic jumps in the Fig.1. The Fourier transform of Eqn.9 leads to the Fluorescence spectrum with the spectral weight $N\lambda^{2}$.
Now we study the photon statistics. Neglecting the magnitude fluctuation, the quantum phase diffusion Hamiltonian Eqn.6 shows that the ground state is a photon Fock state whose photon number jumps by 1 between the successive plateaus in Fig.2. Now we incorporate the magnitude fluctuation. In Eqn.2, by integrating out the imaginary part, one can get the effective action for the magnitude fluctuations
$\mathcal{L}_{2}(\delta\rho)=N\,\frac{\omega^{2}+E_{o}^{2}}{8\lambda^{2}\omega_{c}}\,|\delta\rho(\omega)|^{2} \qquad (11)$
where we find the Mandel factor close to $-1$, so the deviation from the Fock state at any given plateau in Fig.2 is small. In the large-$N$ limit, the state is very close to a Fock state. This is a highly non-classical state with sub-Poissonian photon statistics. It has a very strong signal, but nearly no photon number noise, so it has a very large signal-to-noise ratio, which could be crucial for quantum information processing subrev () and also in the field of high-resolution and high-sensitivity measurement ligo ().
Using the polar representation, one can evaluate the photon correlation function:
$\langle T a^{\dagger}(\tau)a(0)\rangle=N\lambda^{2}\left\langle e^{-i(\theta(\tau)-\theta(0))}\right\rangle+\frac{N}{4\lambda^{2}}\langle\delta\rho(\tau)\delta\rho(0)\rangle\left\langle e^{-i(\theta(\tau)-\theta(0))}\right\rangle+O(1/N) \qquad (12)$
where $T$ means imaginary-time ordering. By evaluating the first and the second term from Eqns.4 and 11, we can identify not only the quantum phase diffusion mode at $E_{D}$ with the corresponding spectral weight $N\lambda^{2}$, but also the optical mode at $E_{o}$ with its corresponding spectral weight ( Fig.1 ). Note that the weight of the optical mode in Fig.1 is independent of the Berry phase $\alpha$. So the total energy in the optical frequency peak is comparable to that in the phase diffusion mode.
If one introduces the total "spin" of the two level atoms and confines the Hilbert space to the maximum total spin sector, then Eqn.1 can be simplified to the Dicke model which was studied by exact diagonalization (ED) in staircase (). The authors in staircase () found a series of ground state energy level crossings as the system gets into the super-radiant regime and interpreted them as consecutive "quantum phase transitions". But they did not study any excited states. In long (), we performed a much more extensive ED study directly on the Dicke model Eqn.1 and not only calculated the ground states, but also all the excited energy levels. We also identified a series of ground state energy level crossings in the super-radiant regime. By comparing with the analytic results achieved in this paper, we found that all these ground state crossings are precisely due to the periodic changes of the Berry phase $\alpha$ in Eqns.4,5,6. They are not consecutive "quantum phase-like transitions" as claimed in staircase (). We also found one-to-one quantitative matches between the low energy phase diffusion mode, as well as the high energy optical mode, and the excited levels found by the ED in long (), even at small $N$. The complete comparisons will be presented in long ().
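As a toy illustration of these ground-state level crossings (not from the paper; it assumes the quadratic number-phase Hamiltonian of Eqn.6 with an illustrative diffusion constant), one can minimize $E(n)=\frac{D}{2}(n-N\lambda^{2})^{2}$ over integer photon numbers $n$ and watch the ground state photon number jump by 1 each time $N\lambda^{2}$ crosses a half-integer:

```python
import numpy as np

D = 1.0  # phase-diffusion constant, arbitrary units (illustrative value)

def ground_photon_number(n_lambda2, n_max=50):
    # Toy spectrum E(n) = (D/2)(n - N*lambda^2)^2 over integer photon numbers n
    ns = np.arange(0, n_max)
    E = 0.5 * D * (ns - n_lambda2) ** 2
    return int(ns[np.argmin(E)])

# The ground-state photon number jumps by 1 each time N*lambda^2 crosses a
# half-integer, producing the staircase of ground-state level crossings.
levels = [ground_photon_number(x) for x in (0.2, 0.6, 1.4, 1.6, 2.51)]
print(levels)  # -> [0, 1, 1, 2, 3]
```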
It remains experimentally challenging to move into the super-radiant regime, which requires a very large collective photon-atom coupling for the optical cavity used in qedbec (). The collective Rabi splitting in qedbec () is still much too small, so not even close to the super-radiant regime in Fig.1. It was proposed in dickej (); orbitalthermal (); orbital () that the super-radiant regime can be realized by using cavity-plus-laser-mediated Raman transitions between a pair of stable atomic ground states, which also suppresses the spontaneous emission. All the parameters in Eqn.1 can be controlled by the external laser frequencies and intensities, so the characteristic energy scales of the effective two level atoms are no longer those of optical photons and dipole coupling, but those associated with Raman transition rates and light shifts. Indeed, using this scheme, the super-radiant phase of the Dicke model chaos (); dickej () was reached by using both thermal atoms orbitalthermal () and cold atoms in a BEC orbital (). We expect this scheme may also be used to realize the super-radiant phase of the Dicke model shown in Fig.1. Because a microwave circuit cavity has a much lower cavity frequency and the individual photon-qubit coupling can also be made very large, the super-radiant phase could also be realized in superconducting qubits or quantum dots inside a circuit cavity in the future. In the experiments, there is also a weak dissipation. In a future publication, following the procedures in squeezing (), we will study its effects on the number squeezed state.
We thank G. Cheng, T. Esslinger, B. Halperin, Han Pu, S. Sachdev, J. K. Thompson, V. Vultic, Jun Ye, X.L. Yu and P. Zoller for very helpful discussions. JYe’s research was supported by NSF-DMR-0966413, NSFC-11074173, at KITP was supported in part by the NSF under grant No. PHY-0551164. CLZ’s work has been supported by National Keystone Basic Research Program (973 Program) under Grant No. 2007CB310408, No. 2006CB302901 and by the Funding Project for Academic Human Resources Development in Institutions of Higher Learning Under the Jurisdiction of Beijing Municipality.
## References
• (1) F. Brennecke, et. al Nature 450, 268 ( 2007), Yves Colombe, et al Nature 450, 272 ( 2007).
• (2) In fact, the behaviour was observed in thermal atoms before, M.G.Raizen, et al, PRL, 63, 240(1989); S. Leslie, et. al, Phys. Rev. A, 69, 043805 (2004); Xudong Yu, et al, Phys. Rev. A 79, 061803(R) (2009). However, the superradiant phase can be more easily realized in the cold atom BEC systems as demonstrated in the very recent experiment in orbital ().
• (3) A. Wallraff, et al, Nature 431, 162 (2004). J. M. Fink, et al, PRL 103, 083601 (2009); Lev S. Bishop, et al, Nature Physics 5, 105-109 (2009); J. P. Reithmaier, et al, Nature 432, 197 (2004). T. Yoshie, et al, Nature 432, 200 (2004). G. Gunter, et al, Nature, 458, 178 (2009).
• (4) R. H. Dicke, Phys. Rev. 93, 99 (1954).
• (5) K. Hepp and E. H. Lieb, Anns. Phys. ( N. Y. ), 76, 360 (1973); Y. K. Wang and F. T. Hioe, Phys. Rev. A, 7, 831 (1973).
• (6) V. N. Popov and S. A. Fedotov, Soviet Physics JETP, 67, 535 (1988); V. N. Popov and V. S. Yarunin, Collective Effects in Quantum Statistics of Radiation and Matter (Kluwer Academic, Dordrecht,1988).
• (7) V. Buzek, M. Orszag and M. Roko, Phys. Rev. Lett. 94, 163601 (2005).
• (8) P. R. Eastham and P. B. Littlewood, Phys. Rev. B 64, 235101 (2001).
• (9) For a different kind of Dicke model with a $Z_{2}$ symmetry, see C. Emary and T. Brandes, Phys. Rev. Lett. 90, 044101 (2003); Phys. Rev. E 67, 066203 (2003). N. Lambert, C. Emary, and T. Brandes, Phys. Rev. Lett. 92, 073602 (2004). For its possible recent experimental realization, see orbitalthermal (); orbital (). Due to the very different symmetries, that Dicke model at a finite $N$ has completely different properties than the model studied here at a finite $N$, and will be presented in a separate publication.
• (10) The Dicke ( Tavis-Cummings ) model is integrable at any finite $N$, so, at "face" value, the system's eigen-energy spectra could be "exactly" solvable by Bethe Ansatz like methods. For example, see N.M. Bogoliubov, R.K. Bullough, and J. Timonen, Exact solution of generalized Tavis-Cummings models in quantum optics, J. Phys. A: Math. Gen. 29 6305 (1996). However, so far, the Bethe Ansatz like solutions stay at a very "formal" level, from which one is not even able to get the system's eigen-energy analytically, let alone extract any underlying physics. Furthermore, it is well known that the Bethe Ansatz method is not able to get any dynamic correlation functions.
• (11) In this paper, we study the steady state super-radiant phase which involves quantum phase transition. It is quite different from the conventional non-equilibrium spontaneous super-radiant radiation which can be understood from a simple Golden rule calculation, see Chap.6.7 in Y. Yamamoto and A. Imamoglu, Mesoscopic quantum optics, John Wiley & Sons, Inc. 1999. For the recent advances on the superradiance prepared by a single photon absorption, see A. A. Svidzinsky, J.T. Chang, M. O. Scully, Phys. Rev. Lett. 100, 160504 (2008); M. O. Scully, Phys. Rev. Lett. 102, 143601 (2009), M. O. Scully and A. A. Svidzinsky, Science, 325, 1510 (2009).
• (12) The well studied "phase diffusion" process of cold atom BECs always involves a sudden quenching of the Hamiltonian of the system; it leads to collapse and revival in the real time. However, the width of the phase distribution increases linearly in the real time $t$, instead of as $\sqrt{t}$, so the phrase "phase diffusion" may not be a suitable one in this context. See Eqn.7 and laser () for comparisons. See E. M. Wright and D. F. Walls, J. C. Garrison, Phys. Rev. Lett. 77, 2158 (1996); M. Lewenstein and L. You, ibid, 77, 3489 (1996); A. Imamoglu, M. Lewenstein and L. You, ibid, 78, 2511 (1997); J. Javanainen and M. Wilkens, ibid, 78, 4675 (1997); M. Greiner, O. Mandel, T. W. Hansch and I. Bloch, Nature 419, 51 (2002).
• (13) L.Davidovich, Rev. Mod. Phys. 68, 127(1996).
• (14) C. F. Wildfeuer, et.al, Phys. Rev. A 80, 043822 (2009).
• (15) J. Ye and S. Sachdev, Phys. Rev. B 44, 10173 (1991). A. Chubukov, S. Sachdev and Jinwu Ye , Phys.Rev.B, 11919 (1994).
• (16) Jinwu Ye, et.al, in preparation.
• (17) We are indebted to B. Halperin and S. Sachdev for stressing the importance of this Berry phase term.
• (18) Note that the classical phase diffusion due to random spontaneous emission events in a Laser is in real time and leads to the natural linewidth of a Laser; see Eqn.11.4.8 in M. O. Scully and M. S. Zubairy, Quantum Optics, Cambridge University press, 1997. Because the Berry phase is a completely quantum effect, there is no Berry phase analog in this classical phase diffusion process.
• (19) For a two mode quadrature squeezing of photons emitted from an exciton superfluid, see Jinwu Ye, T. Shi and Longhua Jiang, Phys. Rev. Lett. 103, 177401 (2009); T. Shi, Longhua Jiang and Jinwu Ye, Phys. Rev. B 81, 235402 (2010).
• (20) F. Dimer, et.al, Phys. Rev. A 75, 013804 (2007).
• (21) A. T. Black, H. W. Chan and V. Vuletic, Phys. Rev. Lett. 91, 203001(2003).
• (22) K. Baumann, et.al, Nature 464, 1301-1306 (2010).
|
|
32. Exercise 5
## Instruction
Great! Let's do a visualization.
## Exercise
Draw a histogram for the vector running_time_improved, and observe the result. Can you tell what range (number of minutes improved) most employees fall in?
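If you want to experiment outside the course environment, here is a rough Python/NumPy equivalent. The real `running_time_improved` vector lives in the course, so synthetic data stands in for it here; `np.histogram` computes the same counts a plotted histogram would show, and the modal bin answers the "what range" question:

```python
import numpy as np

# Synthetic stand-in for the course's running_time_improved vector
# (minutes of improvement per employee); the real data lives in the course.
rng = np.random.default_rng(0)
running_time_improved = rng.normal(loc=3.0, scale=1.0, size=200)

# np.histogram returns the bin counts and the bin edges
counts, edges = np.histogram(running_time_improved, bins=10)
modal = int(np.argmax(counts))
print(f"Most employees fall in [{edges[modal]:.1f}, {edges[modal + 1]:.1f}) minutes")
```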
|
|
# Thermodynamics- How much Ice is melted?
1. Apr 30, 2008
### seichan
1. The problem statement, all variables and given/known data
0.175 kg of water at 88.0 degC is poured into an insulated cup containing 0.212 kg of ice initially at 0 degC. How many kg of liquid will there be when the system reaches thermal equilibrium?
2. Relevant equations
Qwater=-Qice
q=mc(Tf-Ti)
Cwater= 4187 J/kg degC
Cice=2090 J/kg degC [not sure on this one, had to look it up]
3. The attempt at a solution
Alright, I know how to get the final temperature of the solution:
q=mc(Tf-Ti)
.175(4187)(Tf-Ti)=-.212(2090)(Tf-Ti)
.175(4187)Tf-.175(4220)(88+273)=-.212(2090)Tf
.175(4187)Tf+.212(2090)Tf=.175(4187)(88)
Tf=[.175(4187)(88)]/[.175(4187)+.212(2090)]
What I'm not sure of is how much of the ice this melts into water... I considered putting the temperature back into the equilibrium equation and solving for how much mass it must take, but I'm very confused as to how to denote the change in mass, considering the fact that the final masses of both the ice and the water are unknown.
Qwater=-Qice
(mi(water)-mf(water))(4187)([.175(4187)(88)]/[.175(4187)+.212(2090)]-88)=-(mi(ice)-mf(ice)(4187)([.175(4187)(88)]/[.175(4187)+.212(2090)])
(.175-mf(water))(4187)([.175(4187)(88)]/[.175(4187)+.212(2090)]-88)=-(.212-mf(ice)(4187)([.175(4187)(88)]/[.175(4187)+.212(2090)])
Any help would be appreciated.
Last edited: Apr 30, 2008
2. Apr 30, 2008
### alphysicist
Hi seichan,
You do use $q=m c (\Delta T)$ for the heat flow in or out required to change the temperature of a mass m of a substance. However, here the ice is initially at zero degrees celsius. As the heat initially begins flowing into the ice, it begins melting, but its temperature does not change until it completely melts. How is heat flow related to the mass of ice that has melted?
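For what it's worth, here is a quick numerical check of that approach in Python (taking the latent heat of fusion of ice as roughly 334 kJ/kg; exact textbook values vary slightly):

```python
# Heat released by the hot water cooling from 88 °C to 0 °C goes into melting ice.
c_water = 4187.0     # specific heat of water, J/(kg*K)
L_fusion = 334e3     # latent heat of fusion of ice, J/kg (assumed textbook value)
m_water, T_water = 0.175, 88.0
m_ice = 0.212

Q = m_water * c_water * T_water    # heat available if water cools to 0 °C, J
m_melted = Q / L_fusion            # ice melted, assuming equilibrium at 0 °C
assert m_melted < m_ice            # not all ice melts, so equilibrium is at 0 °C
m_liquid = m_water + m_melted
print(round(m_melted, 3), round(m_liquid, 3))  # -> 0.193 0.368
```

Since less ice melts than is present, the self-consistent final state is a water/ice mixture at 0 °C, with about 0.37 kg of liquid.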
|
|
# Rotation matrix
In mathematics and physics a rotation matrix is synonymous with a 3×3 orthogonal matrix, which is a real 3×3 matrix R satisfying
$\mathbf{R}^\mathrm{T} = \mathbf{R}^{-1},$
where T stands for the transposed matrix and R−1 is the inverse of R.
## Connection of an orthogonal matrix to a rotation
In general a motion of a rigid body (which is equivalent to an angle and distance preserving transformation of affine space) can be described as a translation of the body followed by a rotation. By a translation all points of the body are displaced, while under a rotation at least one point of the body stays in place. Let the fixed point be O. By Euler's theorem it follows that not only the point is fixed but also an axis—the rotation axis—through the fixed point. Write $\hat{n}$ for the unit vector along the rotation axis and φ for the angle over which the body is rotated; the rotation operator on ℝ3 is then written as ℛ(φ, $\hat{n}$).
Erect three Cartesian coordinate axes with the origin in the fixed point O and take unit vectors $\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z$ along the axes, then the 3×3 rotation matrix $\mathbf{R}(\varphi, \hat{n})$ is defined by its elements $R_{ji}(\varphi, \hat{n})$ :
$\mathcal{R}(\varphi, \hat{n})(\hat{e}_i) = \sum_{j=x,y,z} \hat{e}_j R_{ji}(\varphi, \hat{n}) \quad\hbox{for}\quad i=x,y,z.$
In a more condensed notation this equation can be written as
$\mathcal{R}(\varphi, \hat{n})\left(\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z\right) = \left(\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z\right) \; \mathbf{R}(\varphi, \hat{n}).$
Given a basis of the linear space ℝ3, the association between a linear map and its matrix is one-to-one.
A rotation $\mathcal{R}$ (for convenience's sake the rotation axis and angle are suppressed in the notation) leaves the shape of a rotated rigid body intact, so that all distances within the body are invariant. If the body is 3-dimensional—it contains three linearly independent vectors with origins in the invariant point—it holds for any pair of vectors $\vec{a}$ and $\vec{b}$ in ℝ3 that the inner product is invariant, that is,
$\left(\mathcal{R}(\vec{a}),\;\mathcal{R}(\vec{b}) \right) = \left(\vec{a},\;\vec{b}\right).$
A linear map with this property is called orthogonal. It is easily shown that a similar vector-matrix relation holds. First we define column vectors (stacked triplets of real numbers given in bold face):
$\vec{a} =\left(\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z\right)\begin{pmatrix}a_x\\a_y\\a_z\end{pmatrix} \;\stackrel{\mathrm{def}}{=}\; \left(\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z\right) \mathbf{a} \quad\hbox{and}\quad \vec{b} =\left(\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z\right)\begin{pmatrix}b_x\\b_y\\b_z\end{pmatrix} \;\stackrel{\mathrm{def}}{=}\; \left(\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z\right) \mathbf{b}$
and observe that the inner product becomes by virtue of the orthonormality of the basis vectors
$\left( \vec{a},\; \vec{b} \right) = \mathbf{a}^\mathrm{T} \mathbf{b}\equiv \left(a_x,\;a_y,\;a_z\right) \begin{pmatrix}b_x\\b_y\\b_z\end{pmatrix} \equiv a_xb_x+a_yb_y+a_zb_z.$
The invariance of the inner product under the rotation operator $\mathcal{R}$ leads to
$\mathbf{a}^\mathrm{T}\; \mathbf{b} =\big(\mathbf{R}\mathbf{a}\big)^\mathrm{T}\; \mathbf{R}\mathbf{b} = \mathbf{a}^\mathrm{T} \mathbf{R}^\mathrm{T}\; \mathbf{R}\mathbf{b},$
since this holds for any pair a and b it follows that a rotation matrix satisfies
$\mathbf{R}^\mathrm{T} \mathbf{R} = \mathbf{E},$
where E is the 3×3 identity matrix. For finite-dimensional matrices one shows easily
$\mathbf{R}^\mathrm{T} \mathbf{R} = \mathbf{E} \quad \Longleftrightarrow\quad\mathbf{R}\mathbf{R}^\mathrm{T} = \mathbf{E}.$
A matrix with this property is called orthogonal. So, a rotation gives rise to a unique orthogonal matrix.
Conversely, consider an arbitrary point P in the body and let the vector $\overrightarrow{OP}$ connect the fixed point O with P. Expressing this vector with respect to a Cartesian frame in O gives the column vector p (three stacked real numbers). Multiply p by the orthogonal matrix R, then p′ = Rp represents the rotated point P′ (or, more precisely, the vector $\overrightarrow{OP'}$ is represented by column vector p′ with respect to the same Cartesian frame). If we map all points P of the body by the same matrix R in this manner, we have rotated the body. Thus, an orthogonal matrix leads to a unique rotation.
Note that the Cartesian frame is fixed here and that points of the body are rotated; this is known as an active rotation. Instead, the rigid body could have been left invariant and the Cartesian frame could have been rotated; this also leads to new column vectors of the form p′ ≡ Rp. Such rotations are referred to as passive.
## Properties of an orthogonal matrix
Writing out matrix products it follows that both the rows and the columns of the matrix are orthonormal (normalized and orthogonal). Indeed,
\begin{align} \mathbf{R}^\mathrm{T} \mathbf{R} &= \mathbf{E} \quad\Longleftrightarrow\quad \sum_{k=1}^{3} R_{ki}\, R_{kj} =\delta_{ij} \quad\hbox{(columns)} \\ \mathbf{R} \mathbf{R}^\mathrm{T} &= \mathbf{E} \quad\Longleftrightarrow\quad \sum_{k=1}^{3} R_{ik}\, R_{jk} =\delta_{ij} \quad\hbox{(rows)} \\ \end{align}
where δij is the Kronecker delta.
Orthogonal matrices come in two flavors: proper (det = 1) and improper (det = −1) rotations. Indeed, invoking some properties of determinants, one can prove
$1=\det(\mathbf{E})=\det(\mathbf{R}^\mathrm{T}\mathbf{R}) = \det(\mathbf{R}^\mathrm{T})\det(\mathbf{R}) = \det(\mathbf{R})^2 \quad\Longrightarrow \quad \det(\mathbf{R}) = \pm 1.$
### Compact notation
A compact way of presenting the same results is the following. Designate the columns of R by r1, r2, r3, i.e.,
$\mathbf{R} = \left(\mathbf{r}_1,\, \mathbf{r}_2,\, \mathbf{r}_3 \right)$.
The matrix R is orthogonal if
$\mathbf{r}_i^\mathrm{T} \mathbf{r}_j \equiv \mathbf{r}_i \cdot \mathbf{r}_j = \delta_{ij}, \quad i,j = 1,2,3 .$
The matrix R is a proper rotation matrix, if it is orthogonal and if r1, r2, r3 form a right-handed set, i.e.,
$\mathbf{r}_i \times \mathbf{r}_j = \sum_{k=1}^3 \, \varepsilon_{ijk} \mathbf{r}_k .$
Here the symbol × indicates a cross product and $\varepsilon_{ijk}$ is the antisymmetric Levi-Civita symbol,
\begin{align} \varepsilon_{123} =&\; \varepsilon_{312} = \varepsilon_{231} = 1 \\ \varepsilon_{213} =&\; \varepsilon_{321} = \varepsilon_{132} = -1 \end{align}
and $\varepsilon_{ijk} = 0$ if two or more indices are equal.
The matrix R is an improper rotation matrix if its column vectors form a left-handed set, i.e.,
$\mathbf{r}_i \times \mathbf{r}_j = - \sum_{k=1}^3 \, \varepsilon_{ijk} \mathbf{r}_k \; .$
The last two equations can be condensed into one equation
$\mathbf{r}_i \times \mathbf{r}_j = \det(\mathbf{R}) \sum_{k=1}^3 \; \varepsilon_{ijk} \mathbf{r}_k$
by virtue of the fact that the determinant of a proper rotation matrix is 1 and of an improper rotation −1. This was proved above; an alternative proof is the following: the determinant of a 3×3 matrix with column vectors a, b, and c can be written as a scalar triple product
$\det\left(\mathbf{a},\,\mathbf{b},\, \mathbf{c}\right) = \mathbf{a} \cdot (\mathbf{b}\times\mathbf{c})$.
It was just shown that for a proper rotation the columns of R are orthonormal and satisfy,
$\mathbf{r}_1 \cdot (\mathbf{r}_2 \times \mathbf{r}_3 ) = \mathbf{r}_1 \cdot\left(\sum_{k=1}^3 \, \varepsilon_{23k} \, \mathbf{r}_k \right) = \varepsilon_{231} = 1 .$
Likewise the determinant is −1 for an improper rotation.
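These column criteria are easy to check numerically. The following sketch (not part of the original article; it uses plain NumPy, with a 90° rotation about the z-axis as the proper example and a flipped third column as the improper one) verifies orthonormality of the columns and the condensed relation r1 × r2 = det(R) r3:

```python
import numpy as np

R_proper = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])   # 90 degree rotation about the z-axis
R_improper = R_proper.copy()
R_improper[:, 2] *= -1                    # flip third column -> left-handed set

for R in (R_proper, R_improper):
    assert np.allclose(R.T @ R, np.eye(3))   # columns are orthonormal
    r1, r2, r3 = R.T                         # rows of R.T are columns of R
    # r1 x r2 = det(R) r3 condenses the right- and left-handed cases
    assert np.allclose(np.cross(r1, r2), np.linalg.det(R) * r3)

print(int(round(np.linalg.det(R_proper))), int(round(np.linalg.det(R_improper))))
# -> 1 -1
```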
## Explicit expression of rotation operator
Rotation of vector $\scriptstyle \vec{r}$ around over φ.
(i) The red vectors and black axis are in the plane of the screen.
(ii) The blue vectors are obtained by rotation into the screen.
(iii) The green cross product in x-y plane is perpendicular to the screen, pointing away from the reader.
Let $\overrightarrow{OP} \equiv \vec{r}$ be a vector pointing from the fixed point O of a rigid body to an arbitrary point P of the body. A rotation of this arbitrary vector around the unit vector $\hat{n}$ over an angle φ can be written as
\begin{align} \mathcal{R}(\varphi, \hat{n})(\vec{r}\,)&= \vec{r}\,' = \\ & \vec{r}\;\cos\varphi + \hat{n} (\hat{n}\cdot\vec{r}\,)\; (1- \cos\varphi) + (\hat{n} \times \vec{r}\,) \sin\varphi . \\ \end{align}
where • indicates an inner product and the symbol × a cross product.
It is easy to derive this result. Indeed, decompose the vector to be rotated into two components, one along the rotation axis and one perpendicular to it (see the figure on the right)
$\vec{r} = \vec{r}_\parallel + \vec{r}_\perp\quad\hbox{with}\quad \vec{r}_\parallel = \hat{n} (\hat{n}\cdot\vec{r}\,),$
and
$\vec{r}_\perp = \vec{r} - \vec{r}_\parallel =\vec{r}- \hat{n} (\hat{n}\cdot\vec{r}\,),$
so that, in view of $|\hat{n}| = 1$,
$|\vec{r}_\perp|^2 = r^2 - (\hat{n}\cdot\vec{r}\,)^2$
and from |a×b|2 = a2b2 − (ab)2 follows
$|\hat{n} \times \vec{r}\,|^2 = r^2 - (\hat{n}\cdot\vec{r}\,)^2.$
Upon rotation, the component of the rotated vector along the rotation axis is invariant; only its component orthogonal to the rotation axis changes. The two perpendicular vectors $\vec{r}_\perp$ and $\hat{n} \times \vec{r}$, which serve as basis vectors in the rotation plane, have equal length:
$|\vec{r}_\perp| = |\hat{n} \times \vec{r}\,| = \sqrt{|\vec{r}\,|^2 - (\hat{n}\cdot\vec{r}\,)^2},$
the rotation property simply is
$\vec{r}_\perp \mapsto \vec{r}\,'_\perp = \cos\varphi\;\vec{r}_\perp + \sin\varphi\;(\hat{n}\times\vec{r}\,).$
Hence
$\vec{r}\,' = \hat{n} (\hat{n}\cdot\vec{r}\,) + \left[\vec{r}- (\hat{n}\cdot\vec{r}\,) \hat{n}\right]\cos\varphi+(\hat{n}\times\vec{r}\,)\sin\varphi.$
Some reshuffling of the terms gives the required result.
## Explicit expression of rotation matrix
It will be shown that
$\mathbf{R}(\varphi, \hat{n}) = \mathbf{E} + \sin\varphi \mathbf{N} +(1-\cos\varphi)\mathbf{N}^2,$
where (see cross product for more details)
$\mathbf{N} \equiv \begin{pmatrix} 0 & -n_z & n_y \\ n_z& 0 & -n_x \\ -n_y& n_x & 0 \end{pmatrix}\quad\hbox{and}\quad \hat{n} \equiv (\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z\,) \begin{pmatrix} n_x \\ n_y \\ n_z \\ \end{pmatrix} \equiv (\hat{e}_x,\;\hat{e}_y,\;\hat{e}_z\,) \; \hat{\mathbf{n}}$
Note further that the dyadic product satisfies (as can be shown by squaring N),
$\hat{\mathbf{n}}\otimes\hat{\mathbf{n}} = \mathbf{N}^2 + \mathbf{E}, \quad\hbox{with}\quad |\hat{\mathbf{n}}| = 1$
and that
$\hat{n} (\hat{n}\cdot\vec{r}\,) \leftrightarrow (\hat{\mathbf{n}}\otimes\hat{\mathbf{n}})\;\mathbf{r} = \left(\mathbf{N}^2 + \mathbf{E}\right) \mathbf{r}.$
Translating the result of the previous section to coordinate vectors and substituting these results gives
\begin{align} \mathbf{r}' &= \left[\cos\varphi\; \mathbf{E} +(1-\cos\varphi)(\mathbf{N}^2 + \mathbf{E}) + \sin\varphi \mathbf{N} \right] \mathbf{r} \\ &= \left[\mathbf{E} + (1-\cos\varphi) \mathbf{N}^2 + \sin\varphi \mathbf{N}\right] \mathbf{r}, \qquad\qquad\qquad\qquad\qquad(1) \end{align}
which gives the desired result for the rotation matrix. This equation is identical to Eq. (2.6) of Biedenharn and Louck[1], who give credit to Leonhard Euler (ca. 1770) for this result.
For the special case φ = π (rotation over 180°), equation (1) can be simplified to
$\mathbf{R}(\pi, \hat{n}) = \mathbf{E} + 2 \mathbf{N}^2 = -\mathbf{E} +2\, \hat{\mathbf{n}}\otimes\hat{\mathbf{n}}$
so that
$\mathbf{R}(\pi, \hat{n})\, \mathbf{r} = -\mathbf{r} +2\; \hat{\mathbf{n}}\; (\hat{\mathbf{n}} \cdot\mathbf{r}),$
which sometimes[2] is referred to as reflection over a line, although it is not a reflection.
Equation (1) is a special case of the transformation properties given on p. 7 of the classical work (first edition 1904) of Whittaker[3]. To see the correspondence with Whittaker's formula, which also includes a translation over a displacement d and rotates the vector (x−a, y−b, z−c), we must set a, b, c, and d equal to zero in Whittaker's equation. Furthermore Whittaker uses that the components of the unit vector n are the direction cosines of the rotation axis:
$n_x \equiv \cos\alpha,\quad n_y \equiv\cos\beta,\quad n_z\equiv\cos\gamma.$
Under these conditions the rotation becomes
\begin{align} x' &= x-(1-\cos\varphi)\left[x\;\sin^2\alpha - y\;\cos\alpha\cos\beta - z\;\cos\alpha\cos\gamma\right] +\left[z\;\cos\beta- y\;\cos\gamma\right]\sin\varphi \\ y' &= y-(1-\cos\varphi)\left[y\;\sin^2\beta - z\;\cos\beta\cos\gamma - x\;\cos\beta\cos\alpha\right] +\left[x\;\cos\gamma- z\;\cos\alpha\right]\sin\varphi \\ z' &= z-(1-\cos\varphi)\left[z\;\sin^2\gamma - x\;\cos\gamma\cos\alpha - y\;\cos\gamma\cos\beta\right] +\left[y\;\cos\alpha- x\;\cos\beta\right]\sin\varphi \\ \end{align}
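The matrix formula $\mathbf{R}(\varphi, \hat{n}) = \mathbf{E} + \sin\varphi\, \mathbf{N} + (1-\cos\varphi)\mathbf{N}^2$ translates directly into code. A minimal NumPy sketch (the function name and test values are illustrative, not part of the article), checking orthogonality, the unit determinant, the invariant axis, and the trace formula derived later:

```python
import numpy as np

def rotation_matrix(phi, n):
    """R(phi, n) = E + sin(phi) N + (1 - cos(phi)) N^2, with n a unit vector."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    # N is the antisymmetric matrix built from the axis components
    N = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return np.eye(3) + np.sin(phi) * N + (1.0 - np.cos(phi)) * (N @ N)

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
phi = 0.7
R = rotation_matrix(phi, n)

assert np.allclose(R.T @ R, np.eye(3))              # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)            # proper rotation
assert np.allclose(R @ n, n)                        # axis is an eigenvector
assert np.isclose(np.trace(R), 2 * np.cos(phi) + 1) # trace depends only on phi
```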
## Vector rotation
Sometimes it is necessary to rotate a rigid body so that a given vector in the body (for instance an oriented chemical bond in a molecule) is lined-up with an external vector (for instance a vector along a coordinate axis).
Let the vector in the body be f (the "from" vector) and the vector to which f must be rotated be t (the "to" vector). Let both vectors be unit vectors (have length 1). From the definitions of the inner product and the cross product it follows that
$\mathbf{f} \cdot \mathbf{t} = \cos\varphi,\quad |\mathbf{f}\times\mathbf{t}| = \sin\varphi,$
where φ is the angle (< 180°) between the two vectors. Since the cross product f × t is a vector perpendicular to the plane of both vectors, it is easily seen that a rotation around the cross product vector over an angle φ moves f to t.
Write
$\mathbf{u} = \mathbf{f}\times\mathbf{t}\quad \Longrightarrow\quad \mathbf{u} = \sin\varphi\hat{\mathbf{n}},$
where $\hat{\mathbf{n}}$ is a unit vector. Define accordingly
$\mathbf{U} \; \stackrel{\mathrm{def}}{=} \; \sin\varphi \mathbf{N}= \sin\varphi \begin{pmatrix} 0 & -n_z & n_y \\ n_z& 0 & -n_x \\ -n_y& n_x & 0 \end{pmatrix}= \begin{pmatrix} 0 & -u_z & u_y \\ u_z& 0 & -u_x \\ -u_y& u_x & 0 \end{pmatrix} ,$
so that
\begin{align} \mathbf{R}(\varphi, \hat{n})&= \mathbf{E} + \mathbf{U} + \frac{1-\cos\varphi}{\sin^2\varphi} \mathbf{U}^2 \\ &= \mathbf{E} + \mathbf{U} + \frac{1}{1+\cos\varphi} \mathbf{U}^2 \qquad\qquad\qquad\qquad\qquad\qquad\qquad(2)\\ &= \cos\varphi\,\mathbf{E}+ \mathbf{U} + \frac{1}{1+\cos\varphi} \mathbf{u}\otimes\mathbf{u}\\ \end{align}
The last equation, which follows from U2 = uu − sin2φ E, is Eq. (3) of Ref. [4] after substitution of
$\frac{1-\cos\varphi}{\sin^2\varphi} =\frac{1-\cos\varphi}{1-\cos^2\varphi} = \frac{1}{1+\cos\varphi} .$
Indeed, write the rotation matrix in full, using the short-hand notations c = cos φ and h = (1 − c)/(1 − c²)
$\mathbf{R}(\varphi, \hat{n}) = \begin{pmatrix} c+hu_x^2 & h u_xu_y - u_z & h u_x u_z + u_y \\ h u_x u_y + u_z & c + h u_y^2 & h u_y u_z - u_x \\ h u_x u_z -u_y & h u_y u_z + u_x & c + h u_z^2 \\ \end{pmatrix},$
which is the matrix in Eq. (3) of Ref. [4]. If the vectors are parallel, f = t, then u = 0, U = 0 and ft = cosφ = 1, so that the equation is well-defined—it gives R(φ, n) = E, as expected. Provided f and t are not anti-parallel, the matrix $\mathbf{R}(\varphi, \hat{n})$ can be computed easily and quickly. It requires not much more than the computation of the inner and cross product of f and t.
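Equation (2) gives a quick recipe for this "from-to" matrix. A hedged NumPy sketch (assuming unit input vectors that are not anti-parallel; names are illustrative):

```python
import numpy as np

def from_to_rotation(f, t):
    """Rotation sending unit vector f to unit vector t, per Eq. (2).
    Undefined when f and t are anti-parallel (cos(phi) = -1)."""
    u = np.cross(f, t)              # u = sin(phi) * n_hat, the rotation axis
    c = np.dot(f, t)                # cos(phi)
    U = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + U + (U @ U) / (1.0 + c)

f = np.array([1.0, 0.0, 0.0])
t = np.array([0.0, 0.6, 0.8])       # both unit vectors
R = from_to_rotation(f, t)
assert np.allclose(R @ f, t)                 # f is mapped onto t
assert np.allclose(R.T @ R, np.eye(3))       # and R is orthogonal
```

As the text notes, this costs little more than one inner and one cross product.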
### Case that "from" and "to" vectors are anti-parallel
If the vectors f and t are nearly anti-parallel, f ≈ − t, then ft = cosφ ≈ − 1, and the denominator in Eq. (2) becomes zero, so that Eq. (2) is not applicable. The vector defining the rotation axis is nearly zero: uf × tf × (−f) ≈ 0.
Clearly, a two-fold rotation (angle 180°) around any rotation axis perpendicular to f will map f onto −f. This freedom in choice of the two-fold axis is consistent with the indeterminacy in the equation that occurs for cosφ = −1.
Since −E (inversion) sends f to −f, one could naively assume that all position vectors of the rigid body may be inverted by −E. However, if the rigid body is not symmetric under inversion (does not have a symmetry center), inversion turns the body into a non-equivalent one. For instance, if the body is a right-hand glove, inversion sends it into a left-hand glove; it is well-known that a right-hand and a left-hand glove are different.
Recall that f is a unit vector. If f is on the z-axis, |fz| = 1, then the following rotation around the y-axis turns f into −f:
$\mathbf{R}(\pi, \hat{e}_{y}) \mathbf{f} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \\ \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\\pm 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\\mp 1 \end{pmatrix}$
If f is not on the z-axis, fz ≠ 1, then the following orthogonal matrix sends f to −f and hence gives a 180° rotation of the rigid body
$\mathbf{R}(\pi, \hat{e}_{y'}) \mathbf{f} = \frac{1}{1-f^2_z} \begin{pmatrix} -(f^2_x-f^2_y) & -2f_xf_y & 0 \\ -2f_xf_y &(f^2_x-f^2_y) & 0 \\ 0 & 0 & -(1-f^2_z) \\ \end{pmatrix} \begin{pmatrix} f_x \\ f_y \\ f_z \end{pmatrix} = \begin{pmatrix} -f_x \\ -f_y \\ -f_z \end{pmatrix}$
To show this we insert
$\hat{\mathbf{n}} = \frac{1}{\sqrt{1-f^2_z}} \begin{pmatrix} f_y \\-f_x \\0 \end{pmatrix}$
into the expression for rotation over 180° introduced earlier:
$\mathbf{R}(\pi, \hat{e}_{y'}) = -\mathbf{E} +2 \hat{\mathbf{n}}\otimes \hat{\mathbf{n}} .$
Note that the vector $\hat{\mathbf{n}}$ is normalized, because f is normalized, and that it lies in the x-y plane normal to the plane spanned by the z-axis and f; it is in fact a unit vector along a rotated y-axis obtained by rotation around the z-axis.
Given f = (fx, fy, fz), the matrix $\mathbf{R}(\pi, \hat{e}_{y'})$ is easily and quickly calculated. The matrix rotates the body over 180°, sending f to −f exactly. When −f and t are close but not exactly equal, the earlier rotation formula, Eq. (2), may be used to perform the final (small) rotation of the body that lets −f and t coincide exactly.
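A quick numerical sanity check of this 180° matrix (my own sketch, assuming only the formulas above):

```python
import numpy as np

def halfturn(f):
    """180-degree rotation sending the unit vector f to -f (needs |f_z| != 1).

    Implements R = -E + 2 n (x) n with the normalized in-plane axis
    n = (f_y, -f_x, 0) / sqrt(1 - f_z**2) from the text.
    """
    n = np.array([f[1], -f[0], 0.0]) / np.sqrt(1.0 - f[2] ** 2)
    return -np.eye(3) + 2.0 * np.outer(n, n)
```

Because n is perpendicular to f by construction, R f = 2 n (n·f) − f = −f holds exactly, and R is a proper rotation (orthogonal, determinant +1).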
## Change of rotation axis
Euler's theorem states that a rotation is characterized by a unique axis (an eigenvector with unit eigenvalue) and a unique angle. As a corollary of the theorem, it follows that the trace of a rotation matrix is equal to
$\mathrm{Tr}[\mathbf{R}(\varphi, \hat{n})] = 2\cos\varphi +1,$
and hence the trace depends only on the rotation angle and is independent of the axis. Let A be an orthogonal 3×3 matrix (not equal to E); then, because cyclic permutation of matrices under the trace is allowed (i.e., it leaves the trace invariant),
$\mathrm{Tr}[\mathbf{A} \mathbf{R}(\varphi, \hat{n}) \mathbf{A}^\mathrm{T}] = \mathrm{Tr}[ \mathbf{R}(\varphi, \hat{n}) \mathbf{A}^\mathrm{T}\mathbf{A}] = \mathrm{Tr}[\mathbf{R}(\varphi, \hat{n})] .$
As a consequence it follows that the two matrices
$\mathbf{R}(\varphi, \hat{n})\quad \hbox{and}\quad \mathbf{A} \mathbf{R}(\varphi, \hat{n}) \mathbf{A}^\mathrm{T}$
describe a rotation over the same angle φ but around different axes. The eigenvalue equation can be transformed
$\mathbf{R}(\varphi, \hat{n})\;\hat{n} = \hat{n} \;\Longrightarrow\; \left(\mathbf{A}\,\mathbf{R}(\varphi, \hat{n})\,\mathbf{A}^\mathrm{T}\right) \;\mathbf{A}\hat{n} = \mathbf{A}\hat{n}$
which shows that the transformed eigenvector is the rotation axis of the transformed matrix, or
$\mathbf{R}(\varphi, \mathbf{A}\hat{n})= \mathbf{A}\,\mathbf{R}(\varphi, \hat{n})\,\mathbf{A}^\mathrm{T} .$
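This conjugation property is easy to verify numerically. The sketch below assumes the standard Rodrigues form for R(φ, n̂), which is consistent with, but not spelled out in, this section:

```python
import numpy as np

def rotation(phi, n):
    """Rodrigues form of R(phi, n): rotation by angle phi about the unit axis n."""
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return (np.cos(phi) * np.eye(3)
            + (1.0 - np.cos(phi)) * np.outer(n, n)
            + np.sin(phi) * K)

phi = 0.7
n = np.array([1.0, 0.0, 0.0])                 # original rotation axis
A = rotation(0.3, np.array([0.0, 0.0, 1.0]))  # an orthogonal matrix A

left = A @ rotation(phi, n) @ A.T   # conjugated rotation
right = rotation(phi, A @ n)        # same angle, transformed axis
```

The two matrices agree to machine precision, and both have trace 2cosφ + 1, independent of the axis.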
## References
1. L. C. Biedenharn and J. D. Louck, Angular Momentum in Quantum Physics, Addison-Wesley, Reading, Mass. (1981) ISBN 0-201-13507-8
2. Wikipedia. Retrieved July 20, 2009.
3. E. T. Whittaker, A Treatise on the Dynamics of Particles and Rigid Bodies, Cambridge University Press (1965)
4. T. Möller and J. F. Hughes, J. Graphics Tools, 4, pp. 1–4 (1999).
# How to estimate variance-covariance matrix of assets with different length of historical data? [duplicate]
Consider you have 4 assets A, B, C and D, where
• Asset A started trading on 2 Jan 1990 (i.e. data is available since that point in time for every trading day until today)
• Asset B started trading on 2 Jan 1995
• Asset C also started trading on 2 Jan 1995
• Asset D started trading on 2 Jan 2010
In the simplest method, you would just use the joint history of all assets beginning on 2 Jan 2010, maybe fill missing data due to different holidays on different exchanges and compute the sample variance-covariance (VCV) matrix.
But this way you would throw away the longer history of assets A, B and C, resulting in a less stable VCV matrix.
Is there another way to come around the problem of estimating the VCV matrix for differing length financial time series? Can you e.g. construct the VCV matrix from pairwise covariance?
## marked as duplicate by Brian B, AfterWorkGuinness, Bob Jansen♦ Nov 12 '15 at 20:01
I agree with Richard: the simpler the approach you choose, the more reliable the estimates. What's your data frequency and purpose? For model construction, as far as I am concerned, daily data from 2010 is enough. Otherwise, you could use a proxy asset for asset D, depending on its nature. To clarify: if D is an ETF, let's say a CAC 40 ETF, concatenate its return series, back from its first available data point, with the return series of its underlying (here the CAC 40) up to your starting date. The same process applies if D is a derivative. When using this approach (concatenating return series), always make sure that the asset and its underlying are strongly correlated. Hope it helps. Cheers.
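On the pairwise idea raised in the question: one common sketch (my own illustration, not a complete answer) is a "pairwise-complete" covariance, where each entry is estimated from the longest joint history available for that pair, so the 1990–2010 overlap of A, B and C is not thrown away. One caveat worth stressing: a matrix assembled this way is not guaranteed to be positive semi-definite and usually needs a repair step (e.g. clipping negative eigenvalues) before use in optimization.

```python
import numpy as np

def pairwise_cov(returns):
    """Pairwise-complete sample covariance matrix.

    `returns` has shape (T, N) with np.nan where an asset has no history
    yet. Entry (i, j) uses only the rows where BOTH series i and j are
    observed, so pairs with long joint histories keep their full sample.
    """
    T, N = returns.shape
    cov = np.full((N, N), np.nan)
    for i in range(N):
        for j in range(i, N):
            mask = ~np.isnan(returns[:, i]) & ~np.isnan(returns[:, j])
            if mask.sum() > 1:
                xi = returns[mask, i] - returns[mask, i].mean()
                xj = returns[mask, j] - returns[mask, j].mean()
                cov[i, j] = cov[j, i] = (xi @ xj) / (mask.sum() - 1)
    return cov
```

With pandas, `DataFrame.cov()` computes essentially this pairwise-complete estimate automatically when the columns contain NaNs.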
# Perimeter of Triangles and Rectangles

We come across many shapes in daily life. Every polygon is a 2-D (two-dimensional) figure that lies flat on a plane and is bounded by a chain of line segments; triangles, rectangles and squares are all polygons. The distance around an object, shape or figure is known as its perimeter (peri: around; meter: measure). Because every shape is different, the formula for its perimeter varies from one shape to another:

- Perimeter of a triangle = a + b + c, where a, b and c are the three sides of the triangle.
- Perimeter of a rectangle = AB + BC + CD + AD = 2 × (length + breadth).
- Perimeter of a square = 4 × side, since all four sides are equal; the same holds for a rhombus.
- Perimeter of a circle (its circumference) = 2πr.
- Perimeter of a parallelogram or a trapezoid = the sum of its four sides. A parallelogram with the same side lengths as a rectangle therefore has the same perimeter, since perimeter depends only on the side lengths.

Note: do not forget to write the unit, such as centimetre (cm) or metre (m), each time you solve a perimeter problem.

A rectangle is a four-sided polygon having two dimensions, length and breadth, and each of its angles is 90 degrees. Its opposite sides are congruent, so to find the perimeter we can either add all four sides or multiply the length by two, multiply the width by two, and add the results. For instance, a rectangle with length 5 and breadth 3 has perimeter 5 + 3 + 5 + 3 = 16 units, and a rectangle with a length of 28 units and a breadth of 17 units has perimeter 2 × (28 + 17) = 90 units. The area of a rectangle, by contrast, is length times width: with length 8 and width 5 the area is 8 × 5 = 40, while the perimeter is 2 × (8 + 5) = 26. The diagonals of a rectangle are congruent to each other and bisect each other at their point of intersection.

An equilateral triangle has three equal sides, so its perimeter is 3 times the length of each side: for a side of 20 cm, 20 + 20 + 20 = 3 × 20 = 60 cm.

**Example 1:** Find the perimeter of a triangle with sides measuring 5 cm, 9 cm and 11 cm. Solution: P = 5 cm + 9 cm + 11 cm = 25 cm.

**Example 2:** Find the perimeter of a rectangle with a length of 12 and a width of 5. Solution: P = 2(l + w) = 2(12 + 5) = 2(17) = 34.

**Example 3:** Find the perimeter of a rectangle with length 7 in and width 11 in. Solution: P = 2(7 + 11) = 2(18) = 36, so the perimeter is 36 inches.

**Melody's walk:** As part of a new fitness program, Melody has to walk around her block three times every day, and each side of her block is 80 ft long. To determine the total length of her walk, we must figure out how many feet are around the entire block. Since all four sides have the same length, the perimeter is 4 × 80 = 320 ft, and walking the block three times means Melody will walk 3 × 320 = 960 ft every day.

**Christy's collage:** Christy is creating a triangular collage of pictures. One side of the collage is 12 in and the other two sides are 18 in. If she wants to frame the collage with ribbon, how much ribbon will she need, and how much will it cost at $0.75 per foot? The perimeter is 12 + 18 + 18 = 48 in. Recalling that 12 in = 1 ft, we conclude that 48 in = 4 ft of ribbon. Multiplying 4 × $0.75 gives a total of $3, so Christy will spend $3 on ribbon for her collage.

**Daniel's board:** Daniel has 25 ft of border for a board whose width is 3 ft and whose length is 10 ft. Does he have enough border to line the entire board? We can either add 3 + 3 + 10 + 10 or multiply 2 × 3 and 2 × 10 and then add 6 + 20; no matter which method we use, the perimeter of the board is 26 ft. Since Daniel only has 25 ft of border, he does not have enough to line the entire board.

With both shapes, we can also use the perimeter to work backwards and determine the length of each side.

**Rectangle DOTS:** Rectangle DOTS has a perimeter of 56. If the length of the rectangle is 2x - 3 and the width is x + 1, how long is the rectangle? Since both lengths and both widths are the same, our setup is 2(2x - 3) + 2(x + 1) = 56. Distributing gives 4x - 6 + 2x + 2 = 56, and combining like terms gives 6x - 4 = 56. We must now add 4 to both sides to cancel it out of the equation, leaving us with 6x = 60; dividing each side by 6 gives x = 10. Since we were only asked for the length, we plug 10 into 2x - 3 and, following the order of operations, multiply 2 × 10 before subtracting 3: the length of rectangle DOTS is 17.

**Triangle PUD:** Triangle PUD has a perimeter of 162 cm. Side PD is 4 less than twice a number, side PU is 8 less than the number, and side UD is 15 less than three times the number. What is the length of each side? Since we don't know the number, call it x; then PD = 2x - 4, PU = x - 8 and UD = 3x - 15. Adding all three expressions and setting the sum equal to the perimeter gives 2x - 4 + x - 8 + 3x - 15 = 162. Combining like terms takes us to 6x - 27 = 162; cancelling the 27 by adding it to both sides leaves 6x = 189, and dividing each side by 6 gives x = 31.5. Substituting 31.5 into each expression, PD = 59 cm, PU = 23.5 cm and UD = 79.5 cm. To check our work, 59 + 23.5 + 79.5 = 162, the same as the given perimeter, so we have solved the problem correctly.

Related area facts: the area of a triangle is 1/2 × b × h, where b is the base and h is the height, because a triangle is one half of the rectangle with the same base and height. To find the area of a composite shape, break the shape into smaller known shapes, calculate the area of each of the smaller shapes, and add them.

Practice problems:

1. The perimeter of a rectangle is 42 cm. If its width is 3 more than twice its length, find its length and width.
2. A rectangle has a perimeter of 15 and a length of 5. What is the width?
3. A rectangular carpet is 4 m wide and 9 m long. Find the area of the carpet in square metres.
4. Percy Cod takes a walk around the perimeter path of his local park, which is in the shape of a rectangle 500 m wide and 800 m long. Find the distance, in metres, that Percy walks.
5. The Bermuda Triangle has a perimeter of 3075 miles. The shortest side measures 75 miles less than the middle side, and the longest side measures 375 miles more than the middle side. Find the three sides.
6. The lengths of the sides of a triangle are in the extended ratio of 3 : 10 : 12, and the perimeter is 400 cm. Find the three sides.
7. The sides of a rectangle are increased by a scale factor of 4. The perimeter of the smaller rectangle is 20 cm. What is the perimeter of the larger rectangle?
8. If triangle FGH is similar to triangle PQR, FG = 6, PQ = 10, and the perimeter of triangle PQR is 35, find the perimeter of triangle FGH.
9. A farmer has 160 ft of fence and wants a pen to adjoin the whole side of a 130 ft barn. What should the dimensions be for the area to be a maximum?
10. What is the largest rectangular area one can enclose with 26 inches of string?
11. If a right triangle has an area of 28 square units and a hypotenuse of 12 units, what is its perimeter?
12. Find the perimeter of the triangle with vertices (2, 4, 1), (-1, 4, -2) and (2, -1, 3).
13. Find the maximum area of a triangle formed by the coordinate axes and a tangent line to the graph of y = 1/(x + 1)^2 with x greater than 0.

In review, the perimeter is defined as the distance around an object, shape or figure, and it is calculated by adding the lengths of all the sides of a polygon.
High school mathematics for six years and has a perimeter of a rectangle, conclusion. Is different from one another, how much ribbon will she walk day... Create an image in which the two adjacent boundary is equal to opposite... The longest side measures 375 miles more than twice its length, we know every is. + 80 + 80 + 80, since all of these lines by a chain of segments... X, which is defined as the perimeter of an equilateral triangle one! Bulletin board its perimeter is of 90 degree a right triangle has an area of plane Figures Maths. Cd + AD, perimeter = 2 × ( length + width ) perimeter of a right triangle is the... & Worksheet - Who is Judge Danforth in the end, the perimeter of the sides of the rectangle have! Daniel only has 25 ft of decorative border for his rectangular bulletin board the shape of a with! Of any polygon, all we have a right triangle ( in square cm ) the shortest side measures miles! Info you need to find the right school area is calculated by 2x ( length breadth. Using all different rules: SSS, ASA, SAS, SSA, etc put your of. Earn progress by passing quizzes and exams for exams some common shapes and formula for it may vary one. H ; where b is the length of the rectangle to line entire. Learn more this sum is the largest rectangular area one can enclose with 26 inches string! Matter which method we use, the formula for finding their perimeters are as follows 23.5 + 79.5, means! + 20 = 60 + 1 ) = 56 the surrounding distance indicated by … 4 's is! Hypotenuse of 12 and width of 5 Who is Judge Danforth in Crucible! Subtracting 3 unlock this lesson to a Custom Course: Benefiting Schools, Students, or contact customer support out... Ribbon costs$ 0.75 18 in Maths Goyal Brothers Prakashan Chapter-17 four-sided having. Aggarwal Class-9 perimeter and area of the rectangle and squares like terms takes us use... Quiz & Worksheet - Who is Judge Danforth in the variable height + height ) x 2, seen... 
Make use of user defined python functions for this task for 30 days, just create account! Age or education level ( as indicated by the length of 28 square units a... Rectangle is stored in the perimeter of triangle and rectangle below: written as 5 + 3 16... Of rectangle her block are the property of their respective owners to length and bre indicates length. He does not directly give us the equation, we know that both widths both. 27 by adding it to both sides, since opposite sides are of equal length by.! The descriptions 2-D figure which lies flat on a plane around her block 320! Arrows form the perimeter of perimeter of triangle and rectangle = side 1 + side 3 = 162 by. Smaller shapes 9 centimeters and 11 centimeters concept is known as width or breadth of units! 18 + 18 + 18 ) - 3 in = 1 ft, we know that we only two! Triangles like those shown 80 + 80, since all of these examples, we know we! Both lengths will be the same as the distance around an object, shape figure! Entire block addition or a curve also varies from one another leaves with. Has only two dimensions i.e explanations, and test your knowledge on perimeter of 20.., which means that PD = 59 cm, and personalized coaching to help you succeed key... In and the widths have the same it can be written as +. Percy Cod takes a walk around the entire board, rectangles, test... The shortest side measures 375 miles more than the middle side and the longest in! Tuition-Free college to the sum of all the sides of the rectangle we have to add all equations... An object, shape or figure process gives us 59 + 23.5 + 79.5, which is 500m wide 9m! Times, we will see that the perimeter of the rectangle, that means adding sides. The Main frame Story of the area of the rectangle perimeteris equal to opposite! 10 ) - 3 ) + 2 ( 10 ) - 3 ) + 2 ( 2x - +...
Army Painter Mega Paint Set Australia, Canal Irrigation Ppt, X1 Bus Schedule, Emerald Green Decor, Kisame Sword For Sale, Robert Hooke Contribution To Cell Theory,
|
|
# Cover 2 by n binary matrix with submatrices of minimum total size
This is a homework problem. Let $$A$$ be an input binary matrix of size $$2 \times n$$, and $$L$$ an integer. The objective is to cover all 1s in $$A$$ with submatrices, such that we minimize the sum of the size of the submatrices (we consider the size of a matrix the product of its rows and columns) and we use no more than $$L$$ submatrices. For example, if $$A$$ is $$1 \quad 1 \quad 0 \quad 0 \quad 1 \quad 1\\ 0 \quad 1 \quad 1 \quad 0 \quad 0 \quad 0$$ and $$L = 2$$, the optimal solution would be covering $$A$$ with 2 submatrices of sizes $$2 \times 3$$ and $$1 \times 2$$: $$x \quad x \quad x \quad 0 \quad x \quad x\\ x \quad x \quad x \quad 0 \quad 0 \quad 0$$ in this case the output of an algorithm solving this problem should be $$8$$ (the sum of the sizes of the submatrices: $$2\cdot 3 + 1 \cdot 2$$), and $$2$$ (which means that we used 2 submatrices).
I've come up with a partial solution using dynamic programming, but it is so long that I doubt it is correct. In summary, suppose that we start to analyze the input array from the first column until the $$k$$-th column. Let $$C_{OPT}(k)$$ and $$M_{OPT}(k)$$ be the optimal solutions at the $$k$$-th column. If $$M_{OPT} = L$$ then we have to analyze whether the previous submatrix used covers the $$(k-1)$$-th column or not, and then observe how many 1s there are at the $$k$$-th column. Based on that, the recursive relationships between $$C_{OPT}(k)$$ and $$C_{OPT}(k-1)$$ can be written. However, I haven't figured out yet how to analyze the case where $$M_{OPT} < L$$.
Is my approach correct so far? Or is there a better way to solve this problem?
• Can you explain what is $C_{OPT}(k)$ and $M_{OPT}(k)$? Is $C_{OPT}(k)$ a number? Is $M_{OPT}(k)$ a map? Nov 7, 2021 at 18:06
• @JohnL.: $C_{OPT}(k)$ and $M_{OPT}(k)$ are both numbers. For the input array up to the $k$-th column, $C_{OPT}(k)$ is the optimal cost and $M_{OPT}(k)$ is the optimal number of submatrices that we have used. Nov 7, 2021 at 18:44
It looks like you are in the right direction.
In order to establish recurrence relations, we need more subproblems. The final solution is probably even longer than what you have done!
Let $$A$$ be the given binary array (the leftmost column of $$A$$ is the first column). For each pairs of numbers $$(k, s)$$, where $$1\le k\le n$$, $$0\le s\le L$$,
• Let $$N(k, s)$$ be the least cost to cover all $$1$$s in the input array up to the $$k$$-th column with no more than $$s$$ matrices such that neither $$A[0][k]$$ nor $$A[1][k]$$ is covered.
• let $$U(k, s)$$ be the least cost to cover all $$1$$s in the input array up to the $$k$$-th column with no more than $$s$$ matrices such that $$A[0][k]$$ is covered but $$A[1][k]$$ is not.
• let $$L(k, s)$$ be the least cost to cover all $$1$$s in the input array up to the $$k$$-th column with no more than $$s$$ matrices such that $$A[1][k]$$ is covered but $$A[0][k]$$ is not.
• let $$B_1(k, s)$$ be the least cost to cover all $$1$$s in the input array up to the $$k$$-th column with no more than $$s$$ matrices such that $$A[0][k]$$ and $$A[1][k]$$ are covered by the same submatrix.
• let $$B_2(k, s)$$ be the least cost to cover all $$1$$s in the input array up to the $$k$$-th column with no more than $$s$$ matrices such that $$A[0][k]$$ and $$A[1][k]$$ are covered by different submatrices.
When $$k$$ or $$s$$ is out of bounds, consider all costs defined above as infinity.
The differences among $$N(k,s), U(k,s), L(k,s), B_1(k,s)$$ and $$B_2(k,s)$$ are how the $$k$$-th column is covered.
The recurrence relations are
$$N(k, s) = \begin{cases} \infty &\mbox{if }A[0][k]=1 \mbox{ or } A[1][k]=1 \\ \min(N(k-1,s), U(k-1,s), L(k-1,s), B_1(k-1,s), B_2(k-1, s))&\mbox{otherwise} \end{cases}$$
$$U(k, s) = \begin{cases} \infty &\mbox{if }A[1][k]=1 \\ 1 + \min(N(k-1,s-1), U(k-1,s), L(k-1,s-1), B_1(k-1,s), B_2(k, s-1))&\mbox{otherwise} \end{cases}$$
$$L(k, s) = \begin{cases} \infty &\mbox{if }A[0][k]=1 \\ 1+\min(N(k-1,s-1), U(k-1,s-1), L(k-1,s), B_1(k-1,s), B_2(k, s-1))&\mbox{otherwise} \end{cases}$$
$$B_1(k, s) = 2 + \min(N(k-1,s-2), U(k-1,s-1), L(k-1,s-1), B_1(k-1,s), B_2(k, s-2))$$
$$B_2(k, s) = 2 + \min(N(k-1,s-1), U(k-1,s-1), L(k-1,s-1), B_1(k-1,s-1), B_2(k-1, s))$$
The initial values, $$N(1,s)$$, $$U(1,s)$$, $$L(1,s)$$, $$B_1(1,s)$$, $$B_2(1,s)$$ can be determined easily.
The final answer is $$N(n+1, L)$$.
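Since recurrences like these are easy to get wrong, a brute-force reference implementation is handy for checking any candidate DP on small inputs. This is only a sketch, not the DP itself; `min_cover` and its return convention (total size, number of submatrices) are my own naming:

```python
from itertools import combinations

def min_cover(A, L):
    """Brute force: minimum total area of at most L axis-aligned
    submatrices covering every 1 in a 2 x n binary matrix A.
    Returns (total size, number of submatrices used)."""
    n = len(A[0])
    ones = {(r, c) for r in range(2) for c in range(n) if A[r][c] == 1}
    # Every possible submatrix, as inclusive bounds (r1, r2, c1, c2).
    rects = [(r1, r2, c1, c2)
             for r1 in range(2) for r2 in range(r1, 2)
             for c1 in range(n) for c2 in range(c1, n)]
    best = None
    for k in range(1, L + 1):
        for combo in combinations(rects, k):
            covered = set()
            for r1, r2, c1, c2 in combo:
                covered.update((r, c) for r in range(r1, r2 + 1)
                                      for c in range(c1, c2 + 1))
            if ones <= covered:
                size = sum((r2 - r1 + 1) * (c2 - c1 + 1)
                           for r1, r2, c1, c2 in combo)
                if best is None or (size, k) < best:
                    best = (size, k)
    return best

A = [[1, 1, 0, 0, 1, 1],
     [0, 1, 1, 0, 0, 0]]
print(min_cover(A, 2))  # (8, 2), matching the example in the question
```

This enumeration is exponential in $$L$$, so it is only usable for cross-checking the DP on small instances.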
• Had there are more than 2 rows in the given array, we may compute the recurrence relations by code. Nov 8, 2021 at 15:27
• Thanks. I actually came up with an answer very similar to yours. How can the correctness of such algorithm be proved? I know that we probably have to use induction, but since there are many cases to consider is there a more compact way to prove this? Nov 8, 2021 at 17:39
• $N(n+1,L)$, where $n+1\gt n$ is out of bounds, is not considered as infinity. Nov 8, 2021 at 17:39
• Well, had we written code/algorithm to compute the recurrence relations, would that be the base for a proof? Anyway, a rigorous proof of some exact implementation might not be insightful. What is more useful is the idea of more refined subproblems. Nov 8, 2021 at 17:44
|
|
# What is the value of x? (thanks!)
## 2x+2=3x-63
##### 2 Answers
Nov 27, 2017
$x = 65$
#### Explanation:
$2 x + 2 = 3 x - 63$
Firstly, we can subtract $2 x$ from both sides of the equation:
$2 x + 2 - 2 x = 3 x - 63 - 2 x$
Which gives:
$2 = x - 63$
We can then add $63$ to both sides also:
$2 + 63 = x - 63 + 63$
Which gives:
$65 = x$
Nov 27, 2017
$x = 65$
#### Explanation:
$\text{collect x terms on one side of the equation and}$
$\text{numeric values on the other side}$
$\text{subtract 2x from both sides}$
$\cancel{2 x} \cancel{- 2 x} + 2 = 3 x - 2 x - 63$
$\Rightarrow 2 = x - 63$
$\text{add 63 to both sides}$
$2 + 63 = x \cancel{- 63} \cancel{+ 63}$
$\Rightarrow 65 = x \to x = 65$
$\textcolor{b l u e}{\text{As a check}}$
Substitute this value into the equation and if both sides are equal then it is the solution.
$\text{left } = \left(2 \times 65\right) + 2 = 130 + 2 = 132$
$\text{right } = \left(3 \times 65\right) - 63 = 195 - 63 = 132$
$\Rightarrow x = 65 \text{ is the solution}$
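The same check can be scripted for readers who prefer a computational verification; this is just a sketch:

```python
# Check that x = 65 satisfies 2x + 2 = 3x - 63.
x = 65
left, right = 2 * x + 2, 3 * x - 63
print(left, right)  # 132 132
```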
|
|
## Paul Krugman on the "$600 Billion a Year" Number

Paul Krugman tries his hand at showing what's wrong with the Bush-Lieberman "waiting a year to fix Social Security costs $600 billion" number:
#### Bruce Willis, Asteroids, and Unfunded Liabilities
Sometimes you really have to wonder. It should be obvious that the Social Security Administration’s estimate of the growth of unfunded liabilities says nothing – nothing at all – about the cost of delaying a “fix”, whatever that might mean. But it seems that even many economists – to say nothing of Joe Lieberman – don’t get it.
So here’s an example, to illustrate the point.
Suppose that an asteroid is bearing down on our planet. If nothing is done, it will strike in 2019, inflicting $20 trillion in losses. At a nominal interest rate of 5 percent, that’s a present value of$10 trillion.
If we do nothing about the asteroid, by next year the present value of the future losses from the asteroid strike will be $10.5 trillion. So the “unfunded liability” from the asteroid strike rises by$500 billion a year.
Suppose that there is a way to fix the problem: we can send Bruce Willis into space to blow up the asteroid. So here’s the question: if we wait a year to send Bruce Willis into space, does that cost $500 billion? Of course not: it could cost either more or less. If waiting a year means that we’ve lost our last chance to stop the asteroid, it costs $10 trillion – the full present value of the avoidable losses the asteroid would inflict. On the other hand, if Bruce Willis can still blow up the asteroid next year (or any year before 2019), there is no cost at all to waiting. In fact, if waiting increases the Willis expedition’s chances of success, there’s a benefit to delay.
In other words, the $500 billion increase in the present value of the future costs from the asteroid says nothing about the costs of delaying action. All it says is that the future is getting closer.

The same is true for Social Security. The future is getting closer, so the unfunded liabilities of Social Security are rising in present value (though not as a percentage of GDP). This says nothing at all about the cost of delaying a “fix.” Those costs, if there are any, depend on the nature of the fix. And it’s hard to see any costs of delaying the Bush version of a fix. After all, the problem is that in the absence of changes in the system, at some future date Social Security may have to pay reduced benefits. The only thing the Bush plan does to help the system’s finances is – guess what – reduce future benefits. Why does waiting a year to announce benefit cuts that won’t happen for several decades have any cost?

One last point. Lieberman defends himself by saying that unfunded liabilities do too grow $600 billion a year. But that’s not what he said earlier: he said that each year we delay costs $600 billion, which isn’t at all the same thing.

Paul is, of course, right. There is no real economic cost associated with delay by itself: the $600 billion per year number is just a standpoint-of-valuation and choice-of-units effect. There is a real economic cost associated with delay only if delay robs you of the opportunity to undertake the most efficient and effective Plan A and forces you to adopt an inferior Plan B for fixing the problem instead. That's not the case here.
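The discounting arithmetic behind these numbers is easy to check; here is a sketch (the 14-year horizon, roughly 2005 to 2019, is an assumption chosen to match the quoted figures):

```python
def present_value(future_loss, rate, years_until):
    """Discount a future loss back to today at a constant nominal rate."""
    return future_loss / (1 + rate) ** years_until

pv_now = present_value(20e12, 0.05, 14)        # ~$10.1 trillion today
pv_next_year = present_value(20e12, 0.05, 13)  # same loss, one year closer
print(round((pv_next_year - pv_now) / 1e9))    # ~505 (billion dollars per year)
```

The "unfunded liability" grows at the interest rate simply because the valuation date moves forward, which is exactly Krugman's point.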
|
|
# Is cosmic inflation slow rather than fast
According to cosmic inflation models, there is an early period where $$\Lambda$$ dominates and the scale factor grows exponentially. Among other things, this helps to give a reason for large-scale homogeneity (i.e. 'solves' horizon problem), because the parts of the universe that are causally connected become much larger. So far so good.
It seems to me that the reason the inflation picture achieves this causal connection is that it asserts that the growth of the scale factor during this very early period is very much slower than it would be in another model, such as a Friedmann model without $$\Lambda$$.
(Vertical scale on this diagram is off of course!) Because the growth is slow, there is time for light-speed-limited communication around larger parts of the universe, because they are not being carried away from each other. But when you read presentations of inflation, you very commonly see a phrase such as "inflation solves X because the universe went through a period of extremely fast exponential growth", with the emphasis on fast. Now I don't deny that these early processes were fast, but surely the whole point about inflation is that it makes them slower not faster?
I understand that this is a process that is very fast compared to everyday timescales, but as far as I can see, the reason it solves the horizon problem, if it does, is because it makes the early expansion of the universe extremely slow compared to what one might otherwise expect, and what was in fact thought.
My question is: is that right or am I misunderstanding something?
(On the widely-used illustration of cosmic history from NASA/WMAP Science Team (e.g. at https://en.wikipedia.org/wiki/Chronology_of_the_universe) there appears to have been an effort to show a sharper growth in the early part, marked inflation, whereas I think a correct graph would have a point of inflection and look more like the one I drew above.)
This is similar to an earlier (unanswered) question, but perhaps I may have asked it more clearly.
> I understand that this is a process that is very fast compared to everyday timescales, but as far as I can see, the reason it solves the horizon problem, if it does, is because it makes the early expansion of the universe extremely slow compared to what one might otherwise expect, and what was in fact thought.
Well yes. In simple terms, the inflation model creates more conformal time due to the huge increase in the scale factor. So with the help of inflation, there is "enough time" to establish thermal equilibrium between two antipodal points. Without inflation, there's "not enough time" to establish this equilibrium.
But why does inflation create more conformal time?
The reason is the fast change in the scale factor: at the beginning the scale factor is very small, but it then increases quickly, so when we take the integral to calculate the conformal time we get a large value.
$$\eta=\int dt /a(t)$$
or
$$d\eta/dt=1/a(t)$$
As I said before, inflation creates more conformal time. You may refer to this as "slow", but in proper time it all happens within a very short period.
From the book by Andrew Liddle (with David Lyth), Cosmological Inflation and Large-Scale Structure:
The point is that there's a difference between conformal time and proper time; maybe that is what creates the confusion.
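A toy numerical check of this claim: during exponential expansion $a(t) = a_0 e^{Ht}$, an expansion factor $R$ costs proper time $T=\ln R/H$ but yields conformal time $\eta \approx 1/(Ha_0)$, which is enormous when $a_0$ is tiny. The numbers below ($H = 1$, $a_0 = 10^{-6}$) are purely illustrative, not physical:

```python
import math

H, a0 = 1.0, 1e-6   # toy Hubble rate and initial scale factor (assumed units)

def inflation_epoch(R, n=200_000):
    """Proper time T and conformal time eta = integral of dt / a(t),
    accumulated while a(t) = a0 * exp(H t) expands by a factor R
    (numerical trapezoid rule)."""
    T = math.log(R) / H
    h = T / n
    f = lambda t: 1.0 / (a0 * math.exp(H * t))
    eta = h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(T))
    return T, eta

for R in (1e2, 1e4, 1e6):
    T, eta = inflation_epoch(R)
    print(f"R = {R:.0e}: proper time T = {T:5.2f}, conformal time eta = {eta:,.0f}")
```

The conformal time saturates near $1/(Ha_0) = 10^6$ while the proper time grows only logarithmically: a short proper-time epoch that manufactures a great deal of conformal time.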
• Let $a(t)$ be the scale factor, with $t$ the cosmic comoving time. Suppose $a(t) = f(t)$ on an inflationary model, where $f$ is some function, and $a(t) = g(t)$ on some other model, where $g \ne f$. Then there are two early times $t_1$, $t_2$ such that $f(t_2) / f(t_1) = R$ for some large expansion factor $R$. But this same expansion is given by the other model for some other pair of times: $g(t_4) / g(t_3) = R$. All I am saying is $(t_2 - t_1) > (t_4 - t_3)$; thus inflation is a mechanism whereby the early increase can be slow. Yes: slow, not fast. – Andrew Steane Feb 15 '19 at 13:48
• @AndrewSteane So in the inflation period $a(t_2)/a(t_1)= 10^{26}$, where $t_1$ is where the inflation starts and $t_2$ is where it ends. And $t_2-t_1$ is really short, like $10^{-34}s$. In a normal matter/radiation-dominated universe, reaching this kind of expansion takes much more time. – Layla Feb 16 '19 at 3:55
|
|
Kinetics
Petrucci: Chapter 14
Introduction
In the previous thermodynamics chapters, we studied mostly the difference between final and initial states (state functions). There was no way for us to use thermodynamic information to tell us why some reactions, although spontaneous, will not proceed while others do. That is the job of kinetics.
We've seen that some reactions are fast (even explosive) and others are quite slow (rusting of iron). In this section, we are going to study reaction kinetics and try to gain an understanding of how and why reaction rates are what they are. We will also look at how kinetics studies can help us to understand the mechanisms of reactions.
We will study the effect of concentration, temperature and catalysts on reaction rate. This type of information will allow us to understand details of chemistry which takes place during the reaction.
Reaction Rate
Rate $\Rightarrow$ speed
For a car, a speed is the distance traveled (change in position) divided by the time. For chemical reactions we define speed (Rate) as change in concentration divided by time. More specifically, we use either the concentration of a product or a reactant in our definition of RATE for a chemical reaction.
$RATE=\frac{amount\;of\;reaction}{reaction\;time}$
For a given chemical reaction, the amount of reaction is measured using the concentration change of one measurable reactant or product and can be determined from the stoichiometry.
Consider: NO(g) + O3(g) $\rightarrow$ NO2(g) + O2(g)
For every reaction we get one NO2 so there is a 1:1 correspondence. Thus, the amount of reaction can be replaced by the change in concentration of NO2.
If [NO2] at t1 is [NO2]1 and at t2 is [NO2]2, then
Δ[NO2] = [NO2]2 - [NO2]1
and
Δt = t2 - t1
and
rate = Δ[NO2]/Δt
Since the rate is always positive and the forward reaction produces NO2 (i.e., Δ[NO2] is positive), the signs are consistent. If we had used a reactant concentration, the signs would be reversed. If a non-unit number of moles of a species is produced or consumed per mole of reaction, we need to account for that too.
For the above reaction we can write:

rate = -Δ[NO]/Δt = -Δ[O3]/Δt = Δ[NO2]/Δt = Δ[O2]/Δt
In the reaction: N2(g) + 3 H2(g) $\rightarrow$ 2NH3(g)
we could write:

rate = -Δ[N2]/Δt = -(1/3) Δ[H2]/Δt = (1/2) Δ[NH3]/Δt
The factor of 1/3 for the H2 comes into play, of course, since the rate of consumption of H2 is three times the rate of reaction. The factor of 1/2 for the NH3 is there for the same reason: the rate of production of ammonia is equal to twice the rate of reaction, since its coefficient is 2.
The above definition of rate involves a measurable time span Δt. Over this time, it is possible that the actual rate changes. In this case, the above equations are actually definitions of average rates over the time interval Δt. To get the instantaneous rate, we need to shrink the time interval down to an infinitesimally small span and thus the rate could now be written using dC/dt rather than ΔC/Δt.
To measure rates, we normally measure concentrations or something related to concentration.
There are several methods we might employ to do this measurement.
• We can withdraw a small sample from the reaction mixture at various times, quench the reaction (maybe by cooling it rapidly) and then measuring the concentrations using whatever method is most convenient (like standard wet chemical methods).
• If one species is coloured, we might do the reaction inside a cell in a spectrometer and measure the change in the absorption. Beer's law says that the amount of absorption is proportional to the concentration.
• For gas-phase reactions, we can measure pressures at constant volume.
• And many others. Some new ones are being developed continuously.
Let's look at an example reaction and consider the changes which occur as the reaction (time) progresses
The reaction CO(g) + NO2(g) $\rightarrow$ CO2(g) + NO(g) is initiated and concentrations are measured at several times, with the following results.
Time/s   [CO]    [NO2]   Inst. rate -d[CO]/dt   Avg. rate -Δ[CO]/Δt (over next 10 s)
  0      0.100   0.100   0.0049                 0.0033
 10      0.067   0.067   0.0022                 0.0017
 20      0.050   0.050   0.0012                 0.0010
 30      0.040   0.040
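The average rates can be computed directly from the concentration data; a quick sketch:

```python
# Average rate over each 10 s interval, -delta[CO]/delta t, from the data above.
times = [0, 10, 20, 30]               # s
co = [0.100, 0.067, 0.050, 0.040]     # [CO] in M

avg_rates = [-(c2 - c1) / (t2 - t1)
             for (t1, c1), (t2, c2) in zip(zip(times, co),
                                           zip(times[1:], co[1:]))]
print([round(r, 4) for r in avg_rates])  # [0.0033, 0.0017, 0.001]
```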
Rate Laws and Reaction Order
There is a connection between the kinetics and the chemistry which is actually occurring. In this section, we will begin to explore that connection and learn how to determine rate laws and reaction order from reaction kinetics.
Consider the reaction
NO(g) + O3(g) $\rightarrow$ NO2(g) + O2(g) measured at 25°C.
Rate of reaction is found experimentally to be proportional to [NO] and to [O3]. Therefore, we can write the equation:
Rate = k [NO] [O3]
where k is the rate constant (a measure of the intrinsic rate). For this reaction at 25°C, k = 1.6×10⁷ M⁻¹s⁻¹.
This reaction is first order with respect to NO and to O3 and is second order overall
Some general information about rate constants and reaction order:

• Fast reactions have large values of k; slow reactions have small values of k.

• In general, for a reaction aA + bB + cC + ... $\rightarrow$ products, Rate = k [A]^x [B]^y [C]^z × ..., where x, y and z are the orders of reaction with respect to A, B, and C, respectively, and are not necessarily related to the coefficients a, b and c in the balanced chemical reaction.

• The overall reaction gives, essentially, the initial and final states (reactants and products); the kinetics represent the process in between.
Consider the reaction
2 N2O5(g) $\rightarrow$ 4 NO2(g) + O2(g)
Experimental data collected and some calculated rates are tabulated here.
t/min   [N2O5]   Rate
  0     0.0172
 10     0.0113   3.4×10⁻⁴
 20     0.0084   2.5×10⁻⁴
 30     0.0062   1.8×10⁻⁴
 40     0.0046   1.3×10⁻⁴
 50     0.0035   1.0×10⁻⁴
 60     0.0026   0.8×10⁻⁴
If we plot [N2O5] versus time we find the following:
From this graph, we can calculate the rate at various times t (and hence at various concentrations [N2O5]). Now we plot rate versus [N2O5] to see if we get a straight line just like the straight-line equation
Rate = k[N2O5].
Since we do get a straight line, we know that rate is directly proportional to [N2O5], i.e., the order of reaction is 1.
We can double check this by plotting rate against other orders of [N2O5]; for example, let's try order = 2.
Obviously, Rate is not proportional to [N2O5]², i.e., Rate ≠ k[N2O5]². Since we do not get a straight line for any other order, we can be assured that the reaction order is 1.
Initial Rate Method
Some reactions have more than one reactant. This complicates the issues since all concentrations are changing at once and we cannot use the above method to determine the reaction order with respect to any one reactant. In addition, sometimes the reverse reaction will complicate things by affecting the apparent rates. Both these problems can be alleviated using the initial rate method.
From the rate data collected, we can now see how rate is affected by changes in concentration of just one of the chemicals at a time and we have eliminated the reverse reaction in the process.
Example:
Experimental rate data was collected for the following reaction. Determine the rate law.
2 NO + Cl2 $\rightarrow$ 2 NOCl
Expt.   [NO]    [Cl2]   Initial rate
  1     0.010   0.010   1.2×10⁻⁴
  2     0.010   0.020   2.3×10⁻⁴
  3     0.020   0.020   9.6×10⁻⁴
The rate law will have the form Rate = k [NO]x [Cl2]y.
Our job is to determine x and y.
Let's look at these reactions in pairs, specifically, pairs where the only difference is the concentration of just one reactant. 1,2 and 2,3 are such pairs.
Experiments 1 and 2: [NO] is constant while [Cl2] doubles, and the rate doubles. Conclusion: rate is proportional to [Cl2], so y = 1.

Experiments 2 and 3: [Cl2] is constant while [NO] doubles, and the rate quadruples. Conclusion: rate is proportional to [NO]², so x = 2.
Our rate law can now be written as
Rate = k [NO]2 [Cl2]
The overall reaction order is 3, the order with respect to NO is 2 and wrt. Cl2 is 1.
We'll explore the meaning of this reaction order later in the section on reaction mechanisms.
The above example was done in a bit of a hand-waving way here. If you wish to see a more mathematically rigorous derivation of the same rate law, click here.
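The same comparison can be automated by taking logarithms of the rate ratios; here is a sketch using the tabulated data (`order` is a hypothetical helper name, not a standard function):

```python
import math

# Initial-rate data for 2 NO + Cl2 -> 2 NOCl: ([NO], [Cl2], initial rate).
runs = [
    (0.010, 0.010, 1.2e-4),
    (0.010, 0.020, 2.3e-4),
    (0.020, 0.020, 9.6e-4),
]

def order(c1, c2, r1, r2):
    """Reaction order x from r2/r1 = (c2/c1)**x, everything else held fixed."""
    return math.log(r2 / r1) / math.log(c2 / c1)

y = order(runs[0][1], runs[1][1], runs[0][2], runs[1][2])  # expts 1,2: vary [Cl2]
x = order(runs[1][0], runs[2][0], runs[1][2], runs[2][2])  # expts 2,3: vary [NO]
print(round(x), round(y))  # 2 1
```

Rounding to the nearest integer recovers the orders x = 2 and y = 1 found above.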
Example
What is the rate constant at 300K for the following reaction:
2 NO + Cl2 $\rightarrow$ 2 NOCl ?
We already know the rate law is
Rate = k [NO]² [Cl2]. We can use the values from any one of the experiments from the previous example; we'll use expt. 1 (2 and 3 should also give the same value for k):

k = Rate / ([NO]² [Cl2]) = 1.2×10⁻⁴ M s⁻¹ / ((0.010 M)² (0.010 M)) = 1.2×10² M⁻²s⁻¹
What is the rate of this reaction when [NO] = 0.030 M and [Cl2] = 0.040 M?
Rate = k [NO]² [Cl2]

= 1.2×10² M⁻²s⁻¹ × (0.030 M)² × (0.040 M)

= 4.3×10⁻³ M s⁻¹.
Integrated Rate law method
We can determine the rate law by comparing slopes (conc. / t). From this graph, we can then determine the rate at any time t and then plot it versus concentration to various powers. From these powers, we determine the plot which is linear and hence, establish the order with respect to the reactant.
A more direct method, is to integrate the rate-law equations as they were previously expressed (slope of conc. versus time == derivative d[A]/dt) and from these integrated rate-law expressions, plot different functions of concentration versus time to see which one gives a straight line. In addition, the integrated rate laws help us to further our understanding of the concentration changes which occur and lead to a determination of the half-life of the reactant.
In all these examples, a single reactant species A is under consideration. For many reactions, we can set up conditions such that only one of the reactants need be considered at any time. We do this by starting the reaction with all species except one in very large excess. Hence, during the progression of the reaction, the change in concentrations of the large excess compounds will be very small. We can hence, set up a pseudo rate law. In other reactions, there is only one reactant to consider. Whichever the case, the following rate laws are not general and should not be taken to be the only possible forms.
Consider a first-order situation: A $\rightarrow$ products.
$Rate = \frac{-d[\mathrm{A}]}{dt}=k_1 [\mathrm{A}]$
Integrating gives
$\ln \frac{[\mathrm{A}]_t}{[\mathrm{A}]_0}=-kt$
or
$\ln [\mathrm{A}]_t = -kt + \ln [\mathrm{A}]_0$
taking the anti-log of both sides of this equation gives
$[\mathrm{A}]_t=[\mathrm{A}]_0\; e^{-kt}$
In other words, the concentration of A falls off exponentially as is pictured here
We can plot ln [A]t versus t, which should give a straight line with slope = -k.
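As a sketch, fitting ln[N2O5] versus t from the data table above by least squares recovers the first-order rate constant:

```python
import math

# ln[N2O5] vs t should be linear for a first-order reaction; slope = -k.
t = [0, 10, 20, 30, 40, 50, 60]                                  # min
conc = [0.0172, 0.0113, 0.0084, 0.0062, 0.0046, 0.0035, 0.0026]  # M
y = [math.log(c) for c in conc]

# Ordinary least-squares slope of y on t.
n = len(t)
t_bar, y_bar = sum(t) / n, sum(y) / n
slope = (sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
         / sum((ti - t_bar) ** 2 for ti in t))
k = -slope
print(f"k = {k:.3f} min^-1")  # about 0.031 min^-1
```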
Now consider a second-order system.
A $\rightarrow$ products
$Rate = k_2 [\mathrm{A}]^2 = \frac{-d[\mathrm{A}]}{dt}$
Integrating gives us
$\frac{1}{[\mathrm{A}]_t} = kt + \frac{1}{[\mathrm{A}]_0}$
If we plot 1/[A] versus t, we should get a straight-line plot with slope = k.
A more rare case: Zero-order reactions.
This type of rate law implies no dependence of rate on concentration. This mostly occurs when a catalyst is used with a large excess of reactant. In this type of situation, the rate is dependent solely on the total number of reactant sites available on the catalyst.
The rate law is simply $Rate = k = \frac{-d[\mathrm{A}]}{dt}$; integrating gives
$[\mathrm{A}]_t = -kt + [\mathrm{A}]_0$
A plot of [A] versus t gives a straight line with slope = -k.
Half-life
The half-life of a chemical species is defined as the time necessary to reduce the concentration of the species to one half its initial value. In mathematical terms, [A]t = [A]0/2 @ t = t1/2.
First-order reaction:
$\ln [\mathrm{A}]_t = -kt + \ln [\mathrm{A}]_0$
Substituting [A]t = [A]0/2 at t = t1/2 gives
$t_{1/2} = \frac{\ln 2}{k} = \frac{0.693}{k}$
Similarly, for second order, we get
$t_{1/2} = \frac{1}{k[\mathrm{A}]_0}$
For the rare zero-order case, we find
$t_{1/2} = \frac{[\mathrm{A}]_0}{2k}$
Example:
The hydrolysis of sucrose to glucose and fructose is catalyzed by an enzyme sucrase. It is first order with respect to sucrose. If the half-life is measured to be 80 min. @ 25°C what proportion of the initial sucrose will remain after 160 and after 320 min?
First-order ==> Conc = 1/2 every 80 min. time interval.
160 min = 2 intervals of 80 min. so concentration halves twice.
C = 1/2 { 1/2 (C0)} = 1/4 C0
One quarter of the sucrose will remain after 160 min.
320 min = 4 intervals of 80 min.
C = (1/2)⁴ × C0 = C0/16
One sixteenth of the initial amount of sucrose will remain after 320 min.
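The half-life bookkeeping in this example can be checked in a couple of lines (t1/2 = 80 min, first order, so after n half-lives the fraction left is (1/2)^n):

```python
# Half-life bookkeeping for the sucrose example: first order with
# t_1/2 = 80 min, so after n half-lives the fraction left is (1/2)^n.
t_half = 80.0  # min

def fraction_remaining(t_min):
    return 0.5 ** (t_min / t_half)

print(fraction_remaining(160))  # 0.25   (one quarter)
print(fraction_remaining(320))  # 0.0625 (one sixteenth)
```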
Another example
The half-life of N2O5 in the first-order decomposition @ 25°C is 4.03×10⁴ s. What is the rate constant? What percentage of N2O5 will remain after one day?
1 day is 8.64×10⁴ s.
From the integrated rate law:
$k = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{4.03\times 10^{4}\;\mathrm{s}} = 1.72\times 10^{-5}\;\mathrm{s}^{-1}$
$\frac{[\mathrm{A}]_t}{[\mathrm{A}]_0} = e^{-kt} = e^{-(1.72\times 10^{-5})(8.64\times 10^{4})} = 0.226$
22.6 % remaining after one day.
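A sketch of the same N2O5 arithmetic, using only the values given in the example:

```python
import math

t_half = 4.03e4                 # s, first-order half-life of N2O5 at 25 °C
k = math.log(2) / t_half        # first-order: k = ln 2 / t_1/2
t_day = 8.64e4                  # s in one day

frac = math.exp(-k * t_day)
print(f"{k:.2e} s^-1")          # 1.72e-05 s^-1
print(f"{frac:.3f}")            # 0.226 -> 22.6 % remaining
```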
Temperature and Reaction Rate
Since reactions involve both breaking bonds (needs energy) and reforming bonds (releases energy) there is almost always an amount of energy which must be input to the molecules before reaction can be initiated. We could represent this using an energy scale versus reaction coordinate as follows.
A theory called Collision Theory or Activated Complex Theory is often used to explain rates of reaction. Energy is most often transferred to the bonds which must be broken during (or as a result of) a collision. These collisions are most often the collisions between the reacting species themselves but could also be collisions with other non-reacting gas molecules and even with the wall of the container (a hot wall). We expect that only collisions with sufficient energy will initiate reaction. We see from this diagram of an exothermic reaction that a minimum energy to initiate the reaction is given by the value of the activation energy Ea. Hence, only collisions between reactant molecules with energy > Ea will result in products.
In the gas phase there are a large number of collisions which have sufficient activation energy but still do not react. This is due to the 'steric' factor. By this, we mean the molecules must collide with the correct relative orientation before a reaction will occur. This steric factor is not temperature dependent. There are other factors which also affect rate. In a large molecule, the energy of collision must be concentrated in the bond or bonds which must be broken. This may take time and is statistically determined. This too is not dependent on temperature.
If we look at an ensemble of gas molecules at a certain temperature, we will see a distribution of kinetic energies. This means there will be a distribution of kinetic energies of colliding molecules. The energy of collision is related to the total kinetic energy of the colliding atoms and to the angle of incidence. Like a car, a head-on collision converts more kinetic energy into "collision" energy than does a glancing (sideswipe) collision. This is another kind of steric factor and is not dependent on temperature. Hence, the only factor which is really dependent on temperature is the average kinetic energy of collision. If we plot frequency of collisions versus energy of collision, we see a distribution curve which changes as T changes.
We can see that the number of collisions which occur with a high enough energy (at least Ea) is less in the case of the low-temperature curve (area under the curve) versus the high-temperature curve. Hence, as temperature goes up, we expect that reaction rate will go up.
Arrhenius Equation
Experimentally, it can be demonstrated that the temperature dependence of k can be expressed as follows:
$k = A\, e^{-E_a/RT}$
This is known as the Arrhenius equation where the pre-exponential factor A contains information about the non-temperature-dependent items discussed above and Ea is the activation energy. We see that the rate constant (and therefore the rate) of a reaction will be lower for higher values of Ea and will increase as T increases.
The Arrhenius equation can also be written in log form as follows
ln k = ln A - Ea/RT
This is a linear equation and from it we see that if we plot ln k versus 1/T we will get a straight line with slope = -Ea/R and intercept = ln A.
Since we are not often interested in the pre-exponential factor A we can rewrite this equation to show the change from k1 to k2 when the temperature changes from T1 to T2:
$\ln\frac{k_2}{k_1} = \frac{-E_a}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)$
Example:
The conversion of cyclopropane to propene can be written as follows.
The following data was measured experimentally.
@300°C k = 2.41×10⁻¹⁰ s⁻¹
@400°C k = 1.16×10⁻⁶ s⁻¹
What is Ea?
Example (continues)
What is k at 450°C?
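Both questions can be answered from the two-point Arrhenius equation. A sketch of the arithmetic (using R = 8.314 J mol⁻¹ K⁻¹ and converting °C to K); the extrapolation to 450°C reuses the fitted Ea:

```python
import math

R = 8.314                           # J mol^-1 K^-1
k1, T1 = 2.41e-10, 300.0 + 273.15   # s^-1, K
k2, T2 = 1.16e-6, 400.0 + 273.15

# ln(k2/k1) = -(Ea/R)(1/T2 - 1/T1)  =>  solve for Ea
Ea = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
print(f"{Ea / 1000:.0f} kJ/mol")    # 272 kJ/mol

# Extrapolate to 450 °C with the same two-point equation
T3 = 450.0 + 273.15
k3 = k1 * math.exp(-(Ea / R) * (1.0 / T3 - 1.0 / T1))
print(f"{k3:.1e} s^-1")             # ~3.3e-05 s^-1
```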
Reaction Mechanisms
We're finally at the point where we can look at how rate laws and reaction mechanisms are related. We've seen that the balanced chemical reaction does not necessarily tell us anything about the actual rate law. That's because the overall balanced chemical reaction represents only the initial state (reactants) and final state (products) and is not concerned with the process. Kinetics relates to the process.
If the reaction involves only a single step (elementary reaction) then there is a direct link between the reaction rate law and the 'molecularity' of the reaction itself. In other words, the rate law of an elementary reaction tells us exactly which molecules are involved. Since for a single step reaction, we can also relate this to the reactants, there is a correlation between the balanced reaction and the rate law for elementary reactions.
Lets look at several examples of elementary reactions and consider the rate laws for them.
Unimolecular reactions (overall reaction order = 1)
cis-2-butene $\rightarrow$ trans-2-butene. This reaction requires 262 kJ/mol. This energy needs to be concentrated in the vibrations and rotations which are involved in the reaction.
Rate law: Rate = k[C4H8]
In general for a reaction A $\rightarrow$ products
Rate = k[A]
Bimolecular reactions (overall reaction order = 2)
NO(g) + O3(g) $\rightarrow$ NO2(g) + O2(g)
Rate law: Rate = k[NO][O3]
The reaction is first-order with respect to both NO and O3. This implies one molecule of NO and one of O3 are involved in the reaction, exactly as the balanced reaction shows.
In general, for reaction A + B $\rightarrow$ Products Rate = k[A][B]
for reaction 2A $\rightarrow$ Products Rate = k[A]²
Termolecular reactions (overall reaction order = 3)
Three molecules must collide with sufficient energy and all in the correct orientation. This is very rare. If the same reaction can be carried out using several bimolecular steps then that is the way it will happen.
In general,
for reaction A + B + C $\rightarrow$ Products Rate = k[A][B][C]
2A + B $\rightarrow$ Products Rate = k[A]²[B]
etc.
There are no higher order single step reactions.
Equilibrium Constants and Reaction Mechanisms
Consider the reaction
NO(g) + O3(g) $\rightleftharpoons$ NO2(g) + O2(g)
Since it has been determined that both forward and reverse reactions are single step reactions, it is easy for us to write the rate law for both reactions:
Forward rate = kf[NO][O3],
Reverse rate = kr[NO2][O2].
If the system is at equilibrium then we can state that the forward rate equals the reverse rate.
kf[NO][O3] = kr[NO2][O2]
$\frac{k_f}{k_r} = \frac{[\mathrm{NO_2}][\mathrm{O_2}]}{[\mathrm{NO}][\mathrm{O_3}]} = K$, the equilibrium constant.
Multi-step reactions are not so simple but still give proper values of K from stoichiometry.
Multi-step reactions
It has been experimentally determined that the chemical reaction (actually, we did it earlier in these notes)
2 NO + Cl2 $\rightarrow$ 2 NOCl
Has a rate law of
Rate = k [NO]² [Cl2].
We wish to try to create a model (reaction mechanism) that will explain this observed rate law. Our job as scientists is to find a model that works and to ensure that it is the best possible model.
Here is one possibility:
2 NO $\rightleftharpoons$ N2O2 Fast equilibrium with equilibrium constant K1.
N2O2 + Cl2 $\rightarrow$ 2 NOCl Slow step with rate constant k2
From the slow step:
Rate = k2[N2O2][Cl2]
However, this rate law involves the concentration of N2O2 which is an intermediate in the reaction. It is neither a reactant nor a product. We need to try to replace this with some other function of concentrations of reactants.
The equilibrium is fast. Hence, we can always assume that the concentrations can be related to the equilibrium constant as follows:
K1 = [N2O2]/[NO]² ==> [N2O2] = K1[NO]²
We can now substitute this into the rate law we determined solely from the slow step.
Rate = k2K1[NO]²[Cl2]
Or
Rate = k[NO]²[Cl2] where k = k2K1.
There may be another possibility. We can imagine that the following mechanism is plausible.
NO + Cl2 $\rightarrow$ NOCl + Cl slow - rate const. = k1
NO + Cl $\rightarrow$ NOCl fast - rate const. = k2
From the rate determining slow step:
Rate = k1[NO][Cl2].
This is not the experimental rate law. This mechanism is not the correct one.
In this example, we will need to make some assumptions regarding intermediate concentrations in order to solve the problem.
The decomposition of N2O5 into NO2 and O2 is given by the following chemical reaction:
2 N2O5 $\rightarrow$ 4 NO2 + O2
and has an experimentally observed rate law of rate = k[N2O5].
Since it is obvious that we need at least two molecules of N2O5 to produce one molecule of oxygen, we cannot pretend that this rate law indicates a single-step reaction, even if we can write the balanced reaction with a coefficient of 1 in front of the N2O5.
There must be multiple steps involved in this chemical process. To determine what the actual mechanism might be, one simply makes educated guesses as to the mechanism and then determines the rate law from the postulated mechanism. If the rate law of the postulated mechanism matches that determined experimentally, we have evidence (not conclusive) that the postulated mechanism is the actual one.
For this reaction a postulated mechanism involves the following single-step reactions.
1. N2O5 + M $\rightleftharpoons$ N2O5* + M activation step; rate constants k1 and k-1.
This step is the activation step, where some molecule M collides with the N2O5 and gives it sufficient energy to react.
2. N2O5* $\rightarrow$ NO2 + NO3 slow; k2
3. NO2 + NO3 $\rightarrow$ NO + NO2 + O2 fast
4. NO + NO3 $\rightarrow$ 2 NO2 fast
Since the fast reactions are essentially 'held up' by the slow steps, it will be the slow steps that determine the reaction rate. Obviously, we need to deal with the fact that there are two steps, neither of which is clearly the rate-determining one.
We can almost think of this as a two step process where the rates are comparable. We can get several differential rate equations from this system.
From step 1:
1. d[N2O5]/dt = -k1[N2O5][M] + k-1[N2O5*][M]
2. d[N2O5*]/dt = k1[N2O5][M] - k-1[N2O5*][M] - k2[N2O5*]
and, assuming that the steps 3 and 4 go to completion immediately, we can use the overall stoichiometry to determine one more rate equation.
3. Rate = -(1/2) d[N2O5]/dt = k2[N2O5*]
We have a seemingly intractable problem since there are more unknowns than equations. However, there are simplifying possibilities.
If we have a large excess amount of M molecules (High Pressure system) that are available to activate the N2O5 then the activation step (1) will be faster than the first dissociation step (2) involving the activated molecule N2O5*. In this case, the rate can be determined from the slow step (2) and would be quite simply
Rate = k2 [N2O5*], and using the equilibrium from step 1, we can easily get
K1 = (k1/k-1) = [N2O5*] / [N2O5]
Rate = K1k2[N2O5] = k[N2O5].
This implies that there is only one molecule of N2O5 involved in the rate determining step. While, even in this multi-step reaction, that may be true, there is no guarantee that we can relate reaction order to molecularity in a multi-step reaction.
In low or intermediate pressure cases, we cannot make the simplifying assumption that step 2 is rate determining. We need to consider both the activation step and the dissociation steps together.
N2O5* is produced and destroyed by the activation step and also used up in the dissociation step. The first assumption we will need to make is to assume that the amount of the activated molecules N2O5* will quickly reach a value that remains constant throughout the reaction. This is a steady-state and is written mathematically as
d[N2O5*]/dt = 0 = k1[N2O5][M] - k-1 [N2O5*][M] - k2[N2O5*].
The zero value means we've assumed no change in concentration over time, not zero concentration.
Now, we can solve for [N2O5*]:
[N2O5*] = k1[N2O5][M] / (k-1[M] + k2)
Now, using equation 3 above, we can solve for the overall rate:
Rate = k1k2[N2O5][M] / (k-1[M] + k2)
We see that at low pressure (small amounts of M) we can assume that k-1[M] is small compared to k2 (in the denominator) and our expression simplifies to
rate = k1[N2O5][M]
at high pressures, the reverse is true and we get
rate = (k1k2/k-1)[N2O5],
which is the same form as we predicted in our first simplification above.
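The limiting behaviour of the steady-state expression can be checked numerically. The rate constants below are invented purely to make the two limits easy to see; the rate is taken as k2[N2O5*] as in the text:

```python
# Numerical check of the steady-state rate expression
# Rate = k2[N2O5*] with [N2O5*] = k1[N2O5][M] / (k-1[M] + k2).
# The rate constants are invented (arbitrary units) for illustration only.
k1, km1, k2 = 1.0, 1.0e3, 1.0e-2

def rate(n2o5, m):
    n2o5_star = k1 * n2o5 * m / (km1 * m + k2)   # steady-state [N2O5*]
    return k2 * n2o5_star

n2o5 = 1.0

# Low pressure (tiny [M]): rate -> k1[N2O5][M]
m_low = 1e-9
print(round(rate(n2o5, m_low) / (k1 * n2o5 * m_low), 3))         # ~1.0

# High pressure (large [M]): rate -> (k1 k2 / k-1)[N2O5], [M]-independent
m_high = 1e6
print(round(rate(n2o5, m_high) / (k1 * k2 / km1 * n2o5), 3))     # ~1.0
```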
Rate and temperature dependence
2 NO(g) + Cl2(g) $\rightarrow$ 2 NOCl(g)
the experimental rate law is: Rate = k[NO]²[Cl2].
We might use this as evidence that the reaction is a single-step one, since the exponents in the rate law match the coefficients of the balanced reaction; this would make it a termolecular reaction. However, if it is possible to come up with a multi-step mechanism which avoids termolecular steps and still has this same rate law, then the reaction is more likely to go via the multi-step route.
Here's an example of another mechanism. In addition, we'll look at the temperature behaviour of the reaction.
2 NO + O2 $\rightarrow$ 2NO2
Has a rate law
Rate = k [NO]²[O2],
where k decreases with increasing temperature!
The following mechanism explains both the rate law and the apparent anti-Arrhenius behaviour.
NO + NO $\rightleftharpoons$ N2O2 Fast equilibrium with forward/reverse rate constants k1/k-1
N2O2 + O2 $\rightarrow$ 2 NO2 Slow step rate constant = k2
The second (slow) step is rate determining so
Rate = k2 [N2O2] [O2].
Just like the previous example [N2O2] is an intermediate, not a reactant or product. It would be very hard to measure this concentration with any certainty. We will reformulate the rate law in a way that uses only reactant (or product) concentrations.
Since the first equilibrium step is fast, we can say that the system is always at equilibrium, hence
forward rate = reverse rate
k1 [NO]² = k-1 [N2O2]
or
[N2O2] = K1 [NO]²
where the equilibrium constant K1 = k1/k-1.
Now we can rewrite the overall rate law as
Rate = k [NO]²[O2] where k = k2K1.
Since K1 is an equilibrium constant, we can write
$K_1=\exp \left[ \frac{-\Delta G_1^o}{RT}\right]=\exp \left[ \frac{-\left(\Delta H_1^o-T\Delta S_1^o\right)}{RT}\right]$
NOTE: exp[x] is the same as writing $e^x$. So, rearranging gives
$K_1=\exp\left[\frac{\Delta S_1^o}{R}\right]\;\;\exp\left[\frac{-\Delta H_1^o}{RT}\right]$
now, combining this with the equation for k above, gives
$k=k_2K_1 = A_2 \exp\left[\frac{-E_{a2}}{RT}\right]\times \exp \left[\frac{\Delta S_1^o}{R}\right] \times \exp \left[\frac{-\Delta H_1^o}{RT}\right]$
Now, let's group terms under the common denominators in the exponents. Remember to add the exponents when we multiply the values.
$k = \overset{A_{\mathrm{eff}}}{\overbrace{A_2 \;\exp\left[\frac{\Delta S_1^o}{R}\right]}} \times \exp\left[\frac{-(\overset{E_{\mathrm{a,eff}}}{\overbrace{E_{a2}\;+\;\Delta H_1^o}})}{RT}\right]$
We have grouped the factors based on their temperature independence. Notice that if we rename the first factor to be simply Aeff and the numerator of the exponent to be an 'effective' activation energy Ea,eff, we can now write an equation that looks like the Arrhenius equation.
$k = A_{\mathrm{eff}}\; \exp \left(\frac{-E_{\mathrm{a,eff}}}{RT}\right)$
where
$A_{\mathrm{eff}}=A_2 \;\exp\left[\frac{\Delta S_1^o}{R}\right]$
is temperature independent and is an effective pre-exponential factor A for the overall rate law
and the part
$E_{\mathrm{a,eff}}=E_{a2}+\Delta H_1^o$
in the exponent in the second factor is like an effective reaction barrier Ea,eff.
This looks like a normal Arrhenius equation except that the effective activation energy Ea is the sum of Ea2 plus ΔH1°. Since ΔH1° can be positive or negative the sign of the overall Ea could be negative if ΔH1° is negative enough. In this case, step 1 is highly exothermic. This would give an effective negative energy barrier and the observed anti-Arrhenius behaviour.
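A small numerical illustration of this argument, with invented values of Ea2 and ΔH1° chosen so that the effective barrier comes out negative:

```python
import math

R = 8.314        # J mol^-1 K^-1
# Invented values for illustration: a modest step-2 barrier and a
# strongly exothermic step-1 pre-equilibrium.
Ea2 = 80e3       # J/mol
dH1 = -120e3     # J/mol
Ea_eff = Ea2 + dH1   # -40 kJ/mol: a negative effective barrier

def k_eff(T, A_eff=1.0):
    return A_eff * math.exp(-Ea_eff / (R * T))

# Anti-Arrhenius behaviour: the effective rate constant falls as T rises.
print(k_eff(300.0) > k_eff(350.0))   # True
```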
An alternative view of this can be seen in light of Le Châtelier's principle. The equilibrium in step 1 is supplying reactant for step 2. Since the position of the equilibrium shifts to the left as the temperature is increased (Energy is a product of step 1) then the supply (concentration) of reactant for step 2 (the rate-limiting step) decreases as the temperature increases. If the concentration change is large enough, the rate of step two will decrease even as the temperature rises.
Catalysis
A catalyst is a substance which increases the rate of a reaction but is not consumed in that reaction and does not change the products of reaction. A catalyst may well be used in the reaction at some step in the reaction mechanism but if so, it is then regenerated at a later step. Hence, no net change in the amount of catalyst occurs. In some cases, the catalyst is simply a compound or chemical species which is in the same phase as the reaction mixture itself. This is called a homogeneous catalyst. Other catalysts exist as a separate phase and the reaction occurs on the phase boundary (surface) between the reaction mixture and the catalyst. This is called a heterogeneous catalyst.
An example of homogeneous catalysis can be found in the reaction which sees cis-2-butene transforming to trans-2-butene.
This reaction has a reaction rate which is relatively slow and an activation energy Ea = 262 kJ/mol.
If we add Iodine, I2, to the reaction mixture we find that the activation energy is lowered to Ea = 115 kJ/mol.
This can be explained by the following reaction mechanism.
Iodine sets up its own equilibrium with the iodine radical.
I2 $\rightleftharpoons$ 2 I·
We see that the iodine radical was used in the reaction and caused a different reaction mechanism to occur. This new mechanism has a lower activation energy than the uncatalyzed one. The iodine radical is regenerated at the end of the mechanism and hence is used but not used up.
A heterogeneous reaction most often involves a solid catalyst and a liquid or gaseous reaction mixture. Consider the catalytic converter in the exhaust system of your automobiles. Platinum metal is coated on a granular support material to ensure that as much platinum as possible is exposed to the fumes passing through the exhaust system. These fumes often contain unburned hydrocarbons and CO and NO gases. The catalyst helps complete the oxidation of these to CO2, NO2 and H2O. You often notice lots of water vapour coming from the exhaust systems of modern vehicles on a cold day because it condenses in the cold air to become visible.
I will illustrate a heterogeneous catalysis with the following example.
C2H4 + H2 $\rightarrow$ C2H6 Very slow without a catalyst.
Use Pt or Ni catalyst.
Our bodies use catalysts called enzymes to facilitate many kinds of reactions. For example, the oxidation of sucrose C12H22O11 to carbon dioxide and water only occurs at high temperature (sucrose burns) unless there is a catalyst present. The enzyme sucrase (a code name meaning "we don't know what it is but it catalyses the sucrose reaction") makes this process quite rapid at body temperature (lower activation energy). The catalyzed reaction at body temperatures is on the order of 10²⁰ times faster than the uncatalyzed one.
These types of catalytic reactions are not quite heterogeneous like the metal-catalyzed reactions are since the enzymes are dissolved in the same solutions as the reactants and products but their size is so large that they are almost like a solid phase in comparison to the tiny reactants and products. This type of reaction is usually simply classified as an enzyme-catalyzed reaction.
|
|
# Root Mean Square Error Formula
This value is commonly referred to as the normalized root-mean-square deviation or error (NRMSD or NRMSE), and is often expressed as a percentage, where lower values indicate less residual variance. For example, when measuring the average difference between two time series x1,t and x2,t, the formula becomes

RMSD = sqrt( Σt (x1,t − x2,t)² / n )

In simulation of energy consumption of buildings, the RMSE and CV(RMSE) are used to calibrate models to measured building performance. In X-ray crystallography, RMSD (and RMSZ) is used to measure the […]

To develop a RMSE:
1) Determine the error between each collected position and the "truth"
2) Square the difference between each collected position and the "truth"
3) Average the squared differences
4) Take the square root of that average

Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is the minimum variance unbiased estimator.
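The four-step RMSE recipe translates directly into code; the observed/predicted pairs below are made up purely for illustration:

```python
import math

# Made-up observed/predicted pairs, purely for illustration
observed  = [2.0, 4.0, 6.0, 8.0]
predicted = [2.5, 3.5, 6.5, 9.0]

errors  = [p - o for p, o in zip(predicted, observed)]  # 1) error per row
squared = [e * e for e in errors]                       # 2) square it
mean_sq = sum(squared) / len(squared)                   # 3) average
rmse    = math.sqrt(mean_sq)                            # 4) square root

print(round(rmse, 4))   # 0.6614
```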
## Root Mean Square Error Interpretation
The RMSD of predicted values ŷt for times t of a regression's dependent variable yt is computed for n different predictions as

RMSD = sqrt( Σt (ŷt − yt)² / n )

The RMSD represents the sample standard deviation of the differences between predicted values and observed values. (Susan Holmes, 2000-11-28)

Repeat for all rows below where predicted and observed values exist. This implies that a significant part of the error in the forecasts is due solely to the persistent bias.
Hence, to minimise the RMSE it is imperative that the biases be reduced to as little as possible. Each of these values is then summed.
## Root Mean Square Error Excel
In hydrogeology, RMSD and NRMSD are used to evaluate the calibration of a groundwater model.[5] In imaging science, the RMSD is part of the peak signal-to-noise ratio, a measure used to assess image quality. The minimum excess kurtosis is γ2 = −2, which is achieved by a Bernoulli distribution with p = 1/2 (a coin flip). Thus the RMS error is measured on the same scale, with the same units as the quantity being predicted.
In GIS, the RMSD is one measure used to assess the accuracy of spatial analysis and remote sensing. Two or more statistical models may be compared using their MSEs as a measure of how well they explain a given set of observations. You can swap the order of subtraction because the next step is to take the square of the difference. (The square of a negative or positive value will always be a positive value.)
When normalising by the mean value of the measurements, the term coefficient of variation of the RMSD, CV(RMSD), may be used to avoid ambiguity.[3] This is analogous to the coefficient of variation. After that, divide the sum of all values by the number of observations.
It tells us how much smaller the r.m.s. error will be than the SD. For a Gaussian distribution this is the best unbiased estimator (that is, it has the lowest MSE among all unbiased estimators), but not, say, for a uniform distribution.
To compute the r.m.s. error, you first need to determine the residuals. That is, the n units are selected one at a time, and previously selected units are still eligible for selection for all n draws. In this case we have the value 102. Some experts have argued that RMSD is less reliable than Relative Absolute Error.[4] In experimental psychology, the RMSD is used to assess how well mathematical or computational models of behavior explain the empirically observed behavior.
[Scatter plot of observed versus predicted values with the 1:1 line; fitted regression: Y = −2.409 + 1.073 × X, RMSE = 2.220, BIAS = 1.667.] Further, while the corrected sample variance is the best unbiased estimator (minimum mean square error among unbiased estimators) of variance for Gaussian distributions, if the distribution is not Gaussian then even […]
The goal of experimental design is to construct experiments in such a way that when the observations are analyzed, the MSE is close to zero relative to the magnitude of at least one of the estimated treatment effects.
Squaring the residuals and taking the average, then the root, is how we compute the r.m.s. error. To compute the RMSE one divides this number by the number of forecasts (here we have 12) to give 9.33, and then takes the square root of that value to finally come up with 3.055. If in hindsight the forecasters had subtracted 2 from every forecast, then the sum of the squares of the errors would have been reduced to 26, giving an RMSE of 1.47.
Definition of an MSE differs according to whether one is describing an estimator or a predictor. Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds.[1] The mathematical benefits of
|
|
# Thread: examples of a binary relations
1. ## examples of a binary relations
Is there any certain ways of thinking of these examples? Or just guess and try?
a) Reflexive and symmetric, but not transitive
= abs(a-b) < 1... right? any other example?
b) Reflexive but neither symmetric nor transitive
= a-b < 1, any other type?
c) Symmetric, but neither reflexive nor transitive
= abs(a-b) = 1, any other?
d) Transitive, but neiter reflexive nor symmetric
= a < b
any other examples will be appreciated, thanks
2. Originally Posted by ninano1205
Is there any certain ways of thinking of these examples? Or just guess and try?
a) Reflexive and symmetric, but not transitive
= abs(a-b) < 1... right? any other example?
b) Reflexive but neither symmetric nor transitive
= a-b < 1, any other type?
c) Symmetric, but neither reflexive nor transitive
= abs(a-b) = 1, any other?
d) Transitive, but neiter reflexive nor symmetric
= a < b
any other examples will be appreciated, thanks
a) Let R={(a,b): a,b are real Nos and |a-b|<1}
For R to be reflexive we must have :
For all a belonging to the reals, (a,a)εR, or |a-a|<1, which is correct since 0<1.
For R to be symmetric we must have:
For all a,b reals, if (a,b)εR, then (b,a)εR. But (a,b)εR ===> |a-b|<1, which implies that |b-a|<1, hence (b,a)εR, since |a-b|=|b-a|.
And finally, for R to be not transitive, we must prove that:
There exist x,y,z reals such that (x,y)εR and (y,z)εR BUT ~(x,z)εR ( (x,z) does not belong to R).
This is the negation of the definition for R being transitive in the real Nos:
for all x,y,z reals, if (x,y)εR and (y,z)εR, then (x,z)εR.
So let x = 8, y = 8.5, z = 9. Then:
|x-y| = |8-8.5| = 0.5 < 1, hence (x,y)εR
|y-z| = |8.5-9| = 0.5 < 1, hence (y,z)εR
But $\displaystyle |x-z|=|8-9| = 1\geq 1$ ..... (1)
Now for (x,z) not to belong to R we need to have ~(|x-z|<1), i.e. |x-z| must not be less than 1.
From the inequalities of real Nos, if ~(|x-z|<1), then $\displaystyle |x-z|\geq 1$,
which is exactly what we have proved in (1).
Hence R is not transitive.
You can do the rest of the problems more or less in the same way.
To understand what you are really doing, I suggest you first write the definitions of when a relation R on the real Nos is reflexive, symmetric, or transitive, and then take their negations.
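A brute-force way to sanity-check the posted examples is to test the three properties on a finite sample of reals. This can only confirm a counterexample, not prove that a property holds in general, but it settles every "no" above:

```python
import itertools

# Finite sample of half-integers; a counterexample found here settles a
# "no", while an "all pass" only suggests (does not prove) a "yes".
sample = [x / 2 for x in range(-6, 7)]   # -3.0, -2.5, ..., 3.0

def properties(rel):
    reflexive  = all(rel(a, a) for a in sample)
    symmetric  = all(rel(b, a)
                     for a, b in itertools.product(sample, repeat=2)
                     if rel(a, b))
    transitive = all(rel(a, c)
                     for a, b, c in itertools.product(sample, repeat=3)
                     if rel(a, b) and rel(b, c))
    return reflexive, symmetric, transitive

print(properties(lambda a, b: abs(a - b) < 1))   # (True, True, False)
print(properties(lambda a, b: a - b < 1))        # (True, False, False)
print(properties(lambda a, b: abs(a - b) == 1))  # (False, True, False)
print(properties(lambda a, b: a < b))            # (False, False, True)
```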
|
|
pipeline:utilities:sxpipe_moon_eliminator
Volume Adjustment: Eliminate moons or remove dusts from the background of a 3D density map based on the expected molecular mass.
Usage in command line
sp_pipe.py moon_eliminator input_volume_path input_volume_path_2nd output_directory --pixel_size=PIXEL_SIZE --mol_mass=KILODALTON --use_density_threshold=THRESHOLD --moon_distance=PIXEL_DISTANCE --ndilation=DILATION_PIXEL_WIDTH --edge_width=SIGMA_PIXEL_WIDTH --resample_ratio=RATIO_OR_DIR_PATH --box_size=BOX_SIZE --resampled_shift3d --shift3d_x=SHIFT3D_X --shift3d_y=SHIFT3D_Y --shift3d_z=SHIFT3D_Z --invert_handedness --fl=LPF_CUTOFF_FREQ --aa=LPF_FALLOFF_WIDTH --outputs_root=FILE_ROOT --edge_type=SOFT_EDGE_TYPE --debug
sp_pipe moon_eliminator does not support MPI.
There are two modes to run the program:
1. Single Volume Mode:
Create reference 3D structure and 3D mask from R-VIPER 3D model with the resample ratio used in ISAC2 using an expected molecular mass [kDa].
sp_pipe.py moon_eliminator 'outdir_rviper/main001/average_volume.hdf' 'outdir_pipe_moon_eliminator' --mol_mass=1400 --pixel_size=1.12 --resample_ratio='outdir_isac2' --box_size=352
Create reference 3D structure and 3D mask from R-VIPER 3D model with the resample ratio used in ISAC2 using ad-hoc density threshold instead of the expected molecular mass [kDa].
sp_pipe.py moon_eliminator 'outdir_rviper/main001/average_volume.hdf' 'outdir_pipe_moon_eliminator' --mol_mass=1400 --pixel_size=1.12 --use_density_threshold=13.2 --resample_ratio='outdir_isac2' --box_size=352
Create reference 3D structure and 3D mask from post-refined MERIDIEN 3D model using the expected molecular mass [kDa].
sp_pipe.py moon_eliminator 'outdir_postrefiner/postrefine3d.hdf' 'outdir_pipe_moon_eliminator' --mol_mass=1400 --pixel_size=1.12
2. Halfset Volumes Mode:
Create reference 3D structure and 3D mask from halfset unfiltered maps produced by MERIDIEN, using the expected molecular mass [kDa].
sp_pipe.py moon_eliminator 'outdir_meridien/vol_0_unfil_025.hdf' 'outdir_meridien/vol_0_unfil_025.hdf' 'outdir_pipe_moon_eliminator' --mol_mass=1400 --pixel_size=1.12
#### Main Parameters
input_volume_path
Input volume path: Path to input volume file containing the 3D density map. (default required string)
output_directory
Output directory: The results will be written here. It cannot be an existing one. (default required string)
--pixel_size
Output pixel size [A]: The original pixel size of the dataset. This must be the pixel size after resampling when resample_ratio != 1.0; that is, it will be the pixel size of the output map. (default required float)
--use_mol_mass
Use molecular mass
GUI OPTION ONLY - Defines whether one wants to use the molecular mass option as a masking threshold. (default False)
--use_density_threshold==none
--mol_mass
Molecular mass [kDa]: The estimated molecular mass of the target particle in kilodalton. (default required float)
--use_mol_mass==True
--use_density_threshold
Use ad-hoc density threshold: Use user-provided ad-hoc density threshold, instead of computing the value from the molecular mass. Below this density value, the data is assumed not to belong to the main body of the particle density. (default none)
--use_mol_mass==False
--moon_distance
Distance to the nearest moon [Pixels]: The moons further than this distance from the density surface will be eliminated. A value smaller than the default is not recommended because it is difficult to avoid the stair-like gray-level change at the edge of the density surface. (default 3.0)
--resample_ratio
Resample ratio: Specify a value larger than 0.0. By default, the program does not resample the input map (i.e. the resample ratio is 1.0). Use this option mainly to restore the original dimensions or pixel size of a VIPER or R-VIPER model. Alternatively, specify the path to the output directory of an ISAC2 run; the program automatically extracts the resampling ratio used by that run. (default '1.0')
--box_size
Output box size [Pixels]: The x, y, and z dimensions of cubic area to be windowed from input 3D volume for output 3D volumes. This must be the box size after resampling when resample_ratio != 1.0. (default none)
--invert_handedness
Invert handedness: Invert the handedness of the 3D map. (default False)
--fl
Low-pass filter resolution [A]: >0.0: low-pass filter to the value in Angstrom; =-1.0: no low-pass filter. The program applies this low-pass filter before the moon elimination. (default -1.0)
input_volume_path_2nd
Second input volume path: Path to second input volume file containing the 3D density map. Use this option to create a mask from the sum of two MERIDIEN half-set maps. (default none)
--ndilation
Dilation width [Pixels]: The pixel width to dilate the 3D binary volume corresponding to the specified molecular mass or density threshold prior to softening the edge. By default, it is set to half of --moon_distance so that the voxels with 1.0 values in the mask are the same as the hard-edged molecular-mass binary volume. (default -1.0)
--edge_width
Soft-edge width [Pixels]: The pixel width of the transition area for soft-edged masking. (default 1)
--edge_type
Soft-edge type: The type of soft-edge for the moon-eliminator 3D mask and a moon-eliminated soft-edged 3D mask. Available methods are (1) 'cosine' for a cosine soft-edge (used in PostRefiner) and (2) 'gauss' for a Gaussian soft-edge. (default cosine)
--outputs_root
Root name of outputs: Specify the root name of all outputs. It cannot be empty string or only white spaces. (default vol3d)
--resampled_shift3d
Providing resampled 3D shifts: Use this option when you are providing resampled 3D shifts (using the pixel size of the outputs) and --resample_ratio!=1.0. By default, the program assumes the provided shifts are not resampled. (default False)
--shift3d_x
3D x-shift [Pixels]: 3D x-shift value. (default 0)
--shift3d_y
3D y-shift [Pixels]: 3D y-shift value. (default 0)
--shift3d_z
3D z-shift [Pixels]: 3D z-shift value. (default 0)
--aa
Low-pass filter fall-off [1/Pixels]: Low-pass filter fall-off in absolute frequency. The program applies this low-pass filter before the moon elimination. Effective only when --fl > 0.0. (default 0.1)
--fl!=-1.0
--debug
Run with debug mode: Mainly for developer. (default False)
#### List of output Files
| File Name | Description |
| --- | --- |
| *_ref_before_moon_elimination.hdf | File containing the 3D reference map before moon elimination (i.e., the 3D map just before applying moon elimination). |
| *_ref_moon_eliminated.hdf | File containing the moon-eliminated 3D reference. |
| *_mask_moon_elminator.hdf | File containing the moon eliminator 3D mask. |
| *_bin_mol_mass.hdf | File containing the 3D binary corresponding to the molecular mass. |
| *_mask_moon_eliminated.hdf | File containing the moon-eliminated 3D mask. |
This command executes the following processes:
1. Extract the resample ratio from the ISAC2 run directory if necessary (mainly for R-VIPER models).
2. Resample and window the map if necessary (mainly for R-VIPER models).
3. Shift the 3D map if necessary.
4. Invert the handedness if necessary.
5. Apply a low-pass filter to the input map before the moon elimination if necessary.
6. Save the reference 3D map before eliminating the moons.
7. Create the reference 3D map by eliminating the moons from the input map and save the results.
8. Create a 3D mask from the 3D binary corresponding to the molecular mass and save the result if necessary.
#### 2018/06/18 Toshio Moriya
Wish
• Add options for 3D rotation of the map.
#### 2018/06/18 Toshio Moriya
Tips about balancing settings of moon_distance, dilation, and edge_sigma options for Gaussian soft-edge.
• moon_distance
• In principle, a shorter moon_distance is better (e.g. 3 [Pixels] is better than 6 [Pixels]).
• If moon_distance is too long, the moons will be connected and create a strange low-density shape at the edge of the moon_distance.
• On the other hand, if it is too short, the soft-edge will have a stair-like gray-level change because of quantization or digitization.
• ndilation
• Setting dilation to half of moon_distance generates a mask where the voxels with 1.0 values are the same as in the hard-edged molecular-mass binary map (default behaviour).
• Setting dilation to less than half of moon_distance generates a mask where the region of voxels with 1.0 values is smaller than in the hard-edged molecular-mass binary map.
• edge_sigma
• In principle, a smaller edge_sigma is better.
• However, edge_sigma must be larger than 1 [Pixel].
• If not, the density distribution of the moon-eliminator 3D mask won't be smooth (it becomes spiky) because of quantization or digitization.
• In addition, the moon-eliminated reference 3D map will have a strange dent near zero.
Felipe Merino and Toshio Moriya
Category 1:: APPLICATIONS
sparx/bin/sp_pipe.py
Alpha:: Under development.
There are no known bugs so far.
|
|
# When is the effective potential equal to the total energy?
I have a question about the energy of a particle in orbit due to a gravitational attraction. The effective potential given by the gravitational force is defined to be $$U_{\text{eff}} = \frac{L^2}{2mr^2}- \frac{GmM}{r}$$ On the other hand, using conservation of energy and writing $$v^2 = \vert{\dot{\vec{r}}}\rvert^2$$ in polar coordinates we see that $$\frac{1}{2}m\dot{r}^2 = E - U_{\text{eff}}\tag{1}$$ The above expression got me thinking, and I wanted to ask if I correctly understood what the equation implied.
If $$E = U_{\text{eff}}$$ then $$(1)$$ tells us that $$\dot{r} = 0 \iff r = \text{constant}$$, but since $$r = \text{constant}$$ describes a circular orbit, is the statement
The effective potential is equal to the total energy of a particle under a gravitation force if and only if the orbit of the particle is circular.
correct? Or am I misunderstanding? Thank you in advance!
• Your equations are incorrect. In the kinetic energy terms, m should be the reduced mass.
– Nick
Jun 22, 2021 at 21:09
• I think @Robert Lee assumes M is very much greater than m in which case the reduced mass is essentially m. This assumption should be explicitly stated. Jun 22, 2021 at 21:55
• Best to use equations that are correct, not ones that are almost correct under certain assumptions. This causes a lot of confusion.
– Nick
Jun 22, 2021 at 22:03
The effective potential is obtained as follows. The kinetic energy in polar coordinates is $$T = { 1\over 2}m v^2={ 1\over 2}m( \dot r^2 + r^2\dot \theta^2)$$. (Your expression for $$v^2$$ is incorrect.)
Using conservation of energy $$E = T + V$$ is constant where $$E$$ is total energy, $$T$$ is kinetic energy, and $$V$$ is the gravitational potential energy, $$V = -GmM/r$$. So $${ 1\over 2}m( \dot r^2 + r^2\dot \theta^2) - GmM/r = E$$, a constant. Consider the terms $${ 1\over 2}mr^2\dot \theta^2 - GmM/r$$. The angular momentum $$L = mr^2 \dot \theta$$ is constant. So these terms are $${ {L^2} \over {2mr^2}} - {{GmM} \over {r }}$$. We define the effective potential energy $$U_{eff} = { {L^2} \over {2mr^2}} - {{GmM} \over {r }}$$. Therefore, $${ 1\over 2}m\dot r^2 = E - U_{eff}$$.
If $$\dot r = 0$$ the orbit is circular and $$E = U_{eff}$$ as you say. Note, your conclusion is correct for the effective potential energy.
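As a numeric illustration (arbitrary units, not part of the original answer): place the particle at the radius minimizing the effective potential, take the circular-orbit energy E = U_eff there, and equation (1) then forces the radial velocity to vanish.

```python
L, m, GM = 2.0, 1.0, 1.0   # angular momentum, mass, and the product G*M (arbitrary units)

def u_eff(r):
    # effective potential: L^2 / (2 m r^2) - G m M / r
    return L**2 / (2 * m * r**2) - GM * m / r

# circular-orbit radius (the minimum of U_eff): r_c = L^2 / (G M m^2)
r_c = L**2 / (GM * m**2)

# on the circular orbit E = U_eff(r_c), so (1/2) m rdot^2 = E - U_eff = 0
E = u_eff(r_c)
rdot_sq = 2 * (E - u_eff(r_c)) / m
assert rdot_sq == 0.0

# r_c is indeed a minimum: nearby radii have strictly larger U_eff
assert u_eff(0.9 * r_c) > u_eff(r_c) and u_eff(1.1 * r_c) > u_eff(r_c)
```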
|
|
# skrf.calibration.calibration.TRL.remove_and_cal¶
TRL.remove_and_cal(std)
Remove a cal standard and correct it, returning correct and ideal
This requires overdetermination. Useful in troubleshooting a calibration in which one standard is junk, but you don't know which.
Parameters:
    std (int or str) – the integer index of the calibration standard to remove, or the name of the ideal or measured calibration standard to remove.
Returns:
    ideal, corrected – the ideal and corrected networks which were removed from the calibration (tuple of skrf.Networks).
|
|
# Number - (Numeral system|System of numeration|Representation of number)
A numeral system (or system of numeration) is a mathematical notation system for expressing numbers using digits (or other symbols). The numeral system gives the context that allows the digits or symbols to be interpreted.
A positional notation has the property that the value of a digit depends on the position in which it appears. For example, the symbols 11 are interpreted as eleven in base ten but as three in base two.
For systems for classifying numbers according to their type, see Number System (Classification|Type).
## 3 - List
The Hindu–Arabic numeral system, base-10, is the most commonly used system in the world today for most calculations.
| Base 2 | Base 8 | Base 10 | Base 16 |
| --- | --- | --- | --- |
| 00000 | 0 | 0 | 0 |
| 00001 | 1 | 1 | 1 |
| 00010 | 2 | 2 | 2 |
| 00011 | 3 | 3 | 3 |
| 00100 | 4 | 4 | 4 |
| 00101 | 5 | 5 | 5 |
| 00110 | 6 | 6 | 6 |
| 00111 | 7 | 7 | 7 |
| 01000 | 10 | 8 | 8 |
| 01001 | 11 | 9 | 9 |
| 01010 | 12 | 10 | A |
| 01011 | 13 | 11 | B |
| 01100 | 14 | 12 | C |
| 01101 | 15 | 13 | D |
| 01110 | 16 | 14 | E |
| 01111 | 17 | 15 | F |
| 10000 | 20 | 16 | 10 |
| 10001 | 21 | 17 | 11 |
| 10010 | 22 | 18 | 12 |
| 10011 | 23 | 19 | 13 |
| 10100 | 24 | 20 | 14 |
| 10101 | 25 | 21 | 15 |
### 3.2 - Machine Data Representation
Most computers store data in blocks of bytes (of eight bits each).
Representation of a full byte, 11111111, in:

| Base | Representation | Note |
| --- | --- | --- |
| 2 | 11111111 | Eight full digits |
| 8 | 377 | Digits are not used completely (not 777) |
| 10 | 255 | |
| 16 | FF | Digits are used completely |

As the octal digits are not used completely, the octal system is not the most practical number system for storing a byte.
## 4 - Example
Python with a set expression to calculate the set of numbers that can be made with:
• at-most-three-digit numbers.
base = 2
digits = {0, 1}
{(base**2)*x + base*y + z for x in digits for y in digits for z in digits}
{0, 1, 2, 3, 4, 5, 6, 7}
• at-most-four-digit numbers.
{(base**3)*w + (base**2)*x + base*y + z for x in digits for y in digits for z in digits for w in digits}
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}
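Positional notation can also be illustrated in the other direction. Below is a hypothetical helper (not from the original page) that expands a non-negative integer into its digits in a given base, reproducing the byte table above:

```python
def to_digits(n, base):
    """Return the positional digits of n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1]

assert to_digits(255, 2) == [1, 1, 1, 1, 1, 1, 1, 1]  # a full byte
assert to_digits(255, 8) == [3, 7, 7]
assert to_digits(255, 16) == [15, 15]                 # F F
```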
|
|
### Course
19R1 D. Anselmi
Theories of gravitation
Program
PDF
### Recent Papers
We study higher spin tensor currents in quantum field theory. Scalar, spinor and vector fields admit unique “improved” currents of arbitrary spin, traceless and conserved. Off-criticality as well as at interacting fixed points conservation is violated and the dimension of the current is anomalous. In particular, currents $J^{(s,I)}$ with spin $s$ between 0 and 5 (and a second label $I$) appear in the operator product expansion of the stress tensor. The TT OPE is worked out in detail for free fields; projectors and invariants encoding the space-time structure are classified. The result is used to write and discuss the most general OPE for interacting conformal field theories and off-criticality. Higher spin central charges $c_{(s,I)}$ with arbitrary $s$ are defined by higher spin channels of the many-point T-correlators and central functions interpolating between the UV and IR limits are constructed. We compute the one-loop values of all $c_{(s,I)}$ and investigate the RG trajectories of quantum field theories in the conformal window following our approach. In particular, we discuss certain phenomena (perturbative and nonperturbative) that appear to be of interest, like the dynamical removal of the $I$-degeneracy. Finally, we address the problem of formulating an action principle for the RG trajectory connecting pairs of CFT’s as a way to go beyond perturbation theory.
PDF
Nucl.Phys. B541 (1999) 323-368 | DOI: 10.1016/S0550-3213(98)00783-4
arXiv:hep-th/9808004
### Book
14B1 D. Anselmi
Renormalization
PDF
Last update: May 9th 2015, 230 pages
Contents:
Preface
1. Functional integral
2. Renormalization
3. Renormalization group
4. Gauge symmetry
5. Canonical formalism
6. Quantum electrodynamics
7. Non-Abelian gauge field theories
Notation and useful formulas
References
Course on renormalization, taught in Pisa in 2015. (More chapters will be added later.)
|
|
# Is pH in a charged hydrogel and its supernatant solution constant?
Let us assume we deal with ideal systems without interactions.
The gel phase and the supernatant solution phase are in thermodynamic equilibrium. The supernatant solution shall consist of different ion species $$i$$. The gel is in free swelling equilibrium and the total chemical potential of each ion species $$i$$ is constant throughout the supernatant phase and the gel phase:
$$\mu_i^\mathrm{gel}=\mu_i^\mathrm{supernatant}$$. Since the total chemical potential is constant we can just write $$\mu_i$$ and forget about the phase in which it was determined.
Under the assumption that the gel phase is always electroneutral we obtain a Donnan partitioning of ions: $$c_i^\mathrm{gel} \neq c_i^\mathrm{supernatant}$$; compare: https://en.wikipedia.org/wiki/Gibbs%E2%80%93Donnan_effect
The difference in concentration between the two phases is allowed (at same total chemical potential) because the total chemical potential not only has the ideal part but also an electric potential part (needed due to the electroneutrality constraint):
$$\mu_i=\mu_i^0+kT\ln(c_i/c_0) +z_i e_0 \Psi$$. Same total chemical potential at different ion concentration therefore gives rise to the Donnan potential (https://en.wikipedia.org/wiki/Donnan_potential):
$$\mu_i^\mathrm{gel}=\mu_i^{supernatant} \Leftrightarrow \Psi^\mathrm{gel}-\Psi^\mathrm{supernatant}=\frac{kT\ln(c_i^\mathrm{supernatant}/c_i^\mathrm{gel})}{z_i e_0}$$
Now the question is: Is pH constant (case 1) or is it different (case 2) in the gel phase and the supernatant solution phase.
1. If pH is defined based on the total chemical potential, then it is constant (because chemical potential is constant) and we have the same pH in the gel and in the supernatant phase: $$pH=-\log_{10}(a_H)=-\log_{10}(\exp(\frac{\mu_H-\mu^0}{kT}))$$
2. If we define pH based on the concentrations (making use of the mean activity coefficient $$\hat{\gamma}_i=1$$, in which the Donnan potential cancels), then the pH is different in the gel and in the supernatant phase: $$pH^\mathrm{gel}=-\log_{10}(c_H^\mathrm{gel} \hat{\gamma}_H^\mathrm{gel}/c_0) \neq pH^\mathrm{supernatant} =-\log_{10}(c_H^\mathrm{supernatant} \hat{\gamma}_H^\mathrm{supernatant} /c_0),$$ where $$c_0$$ is the standard concentration, i.e. 1 mol/l, and where the inequality arises from $$c_H^\mathrm{gel} \neq c_H^\mathrm{supernatant}$$ (due to Donnan partitioning) together with the mean activity coefficients $$\hat{\gamma}_H^\mathrm{gel}=1=\hat{\gamma}_H^\mathrm{supernatant}$$ for an ideal system.
What is correct? What would we measure when measuring the pH in the gel? Would we measure the same pH as in the supernatant phase (i.e. the first case is correct) or would we measure a pH different from the supernatant phase (i.e. the second case is correct?)
You can view the problem also from a different perspective: pH is defined by IUPAC (https://goldbook.iupac.org/terms/view/P04524) as $$pH=−\log_{10}[a_H]$$, where $$a_H$$ is the relative activity (https://goldbook.iupac.org/terms/view/A00115): $$a_H=\exp((\mu_H-\mu_H^0)/kT)$$
The question is do I need to use
1. the total chemical potential in this definition of the activity or
2. this definition of activity which you also find in literature (e.g. in this article "THE DONNAN EQUILIBRIUM" by Stell and Joslin, https://www.ncbi.nlm.nih.gov/pubmed/19431690):
$$\mu_H=\mu_H^0+kT\ln(a_H)+z_H e_0 \Psi \Leftrightarrow a_H=\exp((\mu_H-\mu_H^0-z_H e_0\Psi)/kT)$$ ?
In case 1) pH would be equal across both phases; in case 2) the pH in the two phases would differ: we could write $$pH^\mathrm{gel}-pH^\mathrm{supernatant}=-\log_{10}\left(\exp\left(\frac{\mu_H^\mathrm{gel}-\mu_H^\mathrm{supernatant}-z_H e_0 (\Psi^\mathrm{gel}-\Psi^\mathrm{supernatant})}{kT}\right)\right) =-\log_{10}\left(\exp\left(\frac{-z_H e_0 (\Psi^\mathrm{gel}-\Psi^\mathrm{supernatant})}{kT}\right)\right)$$
• You would be measuring the activity, which is equal across the sample provided again that it is in equilibrium and that your pH probe is not perturbing that thermodynamic equilibrium. Oct 6, 2019 at 20:43
According to IUPAC, $$pH=-\log_{10}(a_H)=-\log_{10}(\frac{c_H}{c^{ref}} \gamma_H)$$, where $$\gamma_H$$ is the activity coefficient of H. However, only mean activity coefficients are measurable, and therefore $$\gamma_H$$ here is the mean activity coefficient. The Donnan potential (arising purely from the electroneutrality condition) cannot be part of the mean activity coefficient, since it cancels in the computation.
The electrochemical potentials of H are equal in both phases due to electrochemical equilibrium. The Donnan potential gives rise to a difference in chemical potentials between both phases and therefore $$a_H$$ is different in both phases, i.e. also pH is different in both phases.
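To make the case-2 difference concrete: for the H+ ion (z = 1) the Boltzmann relation gives pH(gel) - pH(supernatant) = z e0 (Psi_gel - Psi_sup) / (kT ln 10). A minimal numeric sketch (the -25 mV value is an arbitrary illustration, not from the question):

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
e0 = 1.602176634e-19  # elementary charge, C
T = 298.15            # temperature, K

def delta_pH(delta_psi_volts, z=1):
    """pH_gel - pH_supernatant from the Donnan potential difference (case 2, ideal system)."""
    return z * e0 * delta_psi_volts / (k_B * T * math.log(10))

# e.g. a -25 mV Donnan potential (negatively charged gel) lowers the gel pH:
# delta_pH(-0.025) is roughly -0.42
```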
|
|
P. 4183. The halflife of the radioactive isotope of lead, whose atomic mass number is 214, is 26.8 minutes. How long has it been decaying if only one-thousandth of the original atoms are present in the sample?
(4 points)
Deadline expired on 12 October 2009.
Solution (originally published in Hungarian): 267 minutes, i.e. about 4.5 hours.
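The figure can be reproduced from the decay law N/N0 = (1/2)^(t/T): setting N/N0 = 10^-3 gives t = T * log2(1000). A quick check:

```python
import math

T_half = 26.8                      # half-life of Pb-214 in minutes
t = T_half * math.log(1000, 2)     # time for N/N0 to fall to 1/1000
# t comes out near 267 minutes, about 4.5 hours
```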
Statistics on problem P. 4183.
143 students sent a solution. 4 points: 121 students. 3 points: 13 students. 2 points: 3 students. 1 point: 3 students. 0 point: 2 students. Unfair, not evaluated: 1 solution.
• Problems in Physics of KöMaL, September 2009
|
|
###### Unexpected Words and Phrases
Look for words or word patterns that are unlikely to be spoken, either in general or in cases specific to your business domain. These variances are often associated with errors.
For instance, three-letter words that end in "re" are often associated with errors because there aren't many three-letter words that end in "re" (with "are" being a notable exception). Using the V‑Spark search tool, specify the regular expression “[^’a]re” to search for such words. V‑Spark will return all three-letter words that end in "re" where the first character is not an "a" or an apostrophe. This constraint will prevent words like "we're" from appearing in search results. The apostrophe acts as a word boundary making the search engine think "re" is a whole word rather than the end of a longer word.
To find more specific substitution candidates, look for words that appear out of context or that are irrelevant to your industry. For example, the word "whale" isn't likely to be spoken in calls specific to auto insurance companies. If "whale" appears in transcripts related to auto insurance, this word is probably a candidate for substitution. The associated audio portion would then be reviewed to reveal a customer filing an insurance claim for damage done by "hail" and not "whale".
The following example illustrates substitution rules that correct unexpected words and phrases.
whale damage: hail damage
hit a beer : hit a deer
pay leo : paleo
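As a sketch of how such rules might be applied to transcript text (an illustration only, not V‑Spark's actual substitution engine):

```python
import re

# substitution rules: misrecognized phrase -> corrected phrase
rules = {
    "whale damage": "hail damage",
    "hit a beer": "hit a deer",
    "pay leo": "paleo",
}

def apply_rules(transcript):
    """Apply each substitution rule on whole-word boundaries."""
    for wrong, right in rules.items():
        transcript = re.sub(r"\b" + re.escape(wrong) + r"\b", right, transcript)
    return transcript
```

For example, `apply_rules("i hit a beer last night")` yields `"i hit a deer last night"`.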
|
|
# Translational Kinetic Energy
Written by Jerry Ratzlaff on . Posted in Classical Mechanics
Translational kinetic energy, abbreviated as $$KE_t$$, is the energy an object has because of its motion from one place to another; it does not matter whether the object is also rotating.
## Translational Kinetic Energy formula
$$\large{ KE_t = \frac {1}{2}\; m \; v^2 }$$
### Where:
$$\large{ KE_t }$$ = translational kinetic energy
$$\large{ m }$$ = mass
$$\large{ v }$$ = velocity
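As a quick illustration of the formula (values chosen arbitrarily):

```python
def translational_ke(m, v):
    """Translational kinetic energy KE_t = (1/2) m v^2 (SI units: kg and m/s give joules)."""
    return 0.5 * m * v**2

# a 2 kg object moving at 3 m/s:
assert translational_ke(2, 3) == 9.0  # joules
```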
|
|
### Home > CCA2 > Chapter 8 > Lesson 8.1.1 > Problem8-21
8-21.
Solve $x^{2} + 2x − 5 = 0$.
1. How many $x$-intercepts does $y = x^{2} + 2x − 5$ have?
You could complete the square, find the vertex, and consider the orientation to determine if this parabola has $0, 1, \text{or}\ 2$ $x$-intercepts.
2. Approximately where does the graph of $y = x^{2} + 2x − 5$ cross the x-axis?
$x\approx 1.45\text{ and}\ x\approx -3.45$
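For reference, the approximate intercepts come from the quadratic formula, $x = -1 \pm \sqrt{6}$; a short check (not part of the lesson):

```python
import math

a, b, c = 1, 2, -5
disc = b**2 - 4*a*c   # 24 > 0, so the parabola has two x-intercepts
roots = sorted((-b + s * math.sqrt(disc)) / (2*a) for s in (-1, 1))
# roots are approximately -3.449 and 1.449
```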
|
|
fahrenheit
1. Celsius to Fahrenheit
I am assuming I can use some of the same information from my velocity of a bullet question for this since this is also a linear equation, correct? Water freezes at 0°C or 32°F and boils at 100°C or 212°F. There is a linear equation that expresses the number of degrees Fahrenheit (F) in terms...
2. Fahrenheit and Celsius conversion
Hey, can someone help me with this problem, I have no idea what to do. Thanks! At what temperatures will the readings on the Fahrenheit and Celsius thermometers be the same? I set up a formula and solve for x but I got a wrong answer.
3. Standard Deviation - fahrenheit to celsius
Hi, I have a quick question... I have a question, if the mean tempature was -19.3 C and the standard deviation was 2.3 C..... when converting to Fahrenheit mean is easy i get -2.74F but I can't figure out the standard deviation conversion.... any help?
|
|
# Estimation of Risk-Neutral Densities Using Positive Convolution Approximation - Python
I'm trying to estimate the risk-neutral density through positive convolution approximation (introduced by Bondarenko 2002: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=375781). I'm currently struggling to implement a computational algorithm in Python solving the following optimization problem:
$$\hat{f}(u) := \sum_{j}a_j \phi(u - z_j)$$
$$\min_{a}\sum_{i=1}^n\left(P_i - \int_{-\infty}^{x_i}\left(\int_{-\infty}^y \hat{f}(u) du\right)dy\right)^2 \\ \text{s.t.}\ \sum_{j}a_j=1,\quad a_j\geq0\ \forall j$$
where: $$P_i$$ is the observed put price with strike $$x_i$$, $$\phi$$ refers to a rescaled standard normal density, and $$z_j$$ represents points on an equally-spaced grid between the smallest and largest observed strikes $$x$$.
It is straightforward that one should use a standard quadratic program to solve the problem. However, I don't know how to handle the fact that the $$a$$'s are inside a function of $$u$$, which itself is inside a double integral.
Does anyone already implemented positive convolution approximation to estimate the risk-neutral density in Python?
Otherwise, could someone show me how to code an optimization problem with a double integral, such as for example:
$$\min_a\int_{-\infty}^{x}\left(\int_{-\infty}^y \hat{f}(u) du\right)dy \\ \hat{f}(u) := \sum_{j}a_j (u - z_j)^2$$
Thanks for the help!
EDIT
Update: Thanks to the comments and the answer of Attack68, I was able to implement the following code:
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import dblquad
from scipy.stats import norm

# Compute f_hat
def f(u, y, *args):
    a = args[0]
    z = args[1]
    h = args[2]
    j = len(a)
    # print(np.sum(a * norm.pdf(np.tile(u, [j,1]).transpose(), z, h), axis=1))
    return np.sum(a * norm.pdf(np.tile(u, [j,1]).transpose(), z, h), axis=1)

# Compute double integral
def DI(a, b, z, h):
    # print(dblquad(f, -10000, b, lambda x: -10000, lambda x: x, args=(a, z, h))[0])
    return dblquad(f, -np.inf, b, lambda x: -np.inf, lambda x: x, args=(a, z, h))[0]

def sum_squared_pricing_diff(a, P, X, z, h):
    total = 0
    for i in range(0, len(P)):
        p = P[i]
        x = X[i]
        total += (p - DI(a, x, z, h)) ** 2
    return total
# P is an array of vector put option prices
P = [0.249999283, 0.43750315, 0.572923413, 0.760408034, 0.94790493, 1.14584317,
1.458335038, 1.77083305, 2.624999786, 3.812499791, 5.377596753, 8.06065865,
10.74376984, 14.88873497, 19.88822895]
# X is the vector of the corresponding strikes of the put options
X = [560, 570, 575, 580, 585, 590, 595, 600, 605, 610, 615, 620, 625, 630, 635]
# z is the equally-spaced grid
z = np.linspace(0, 1000, 20)
# h arbitrarily chosen
h = 0.5
# initial guess of a
a_0 = np.ones(len(z)) / len(z)
constraints = ({'type': 'eq', 'fun': lambda a: 1 - np.sum(a)},)
bounds = (((0,None),)*len(z))
sol = minimize(sum_squared_pricing_diff, a_0, args=(P, X, z, h), method='SLSQP', constraints=constraints, bounds=bounds)
print(sol)
which returns the following warning and has difficulty to converge:
IntegrationWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will probably gain from splitting up the interval and calling the integrator on the subranges. Perhaps a special-purpose integrator should be used.
warnings.warn(msg, IntegrationWarning)
Following a stack overflow post I will try to use nquad instead of dblquad. I will post further progress.
EDIT 2 Update: Using the insights from Attack68's second answer, I was able to estimate the RND in an "efficient" way (probably it can be further improved):
import pandas as pd
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
import matplotlib.pyplot as plt
import math
###############################################################################
# Define required functions to describe the optimization problem
###############################################################################
# Double integral transformed
def sum_j_aK(a, x, z, h):
    j = len(a)
    loc = z
    scale = h
    x_normalized = (np.ones(j)*x - loc) / scale
    K_j = (x_normalized*norm.cdf(x_normalized) + np.exp(-0.5*x_normalized**2)/((2*np.pi)**0.5)) * scale
    return np.sum(a*K_j)

# Minimization problem
def sum_squared_pricing_diff(a, P, X, z, h):
    total = 0
    for i in range(0, len(P)):
        p = P[i]
        x = X[i]
        total += abs(p - sum_j_aK(a, x, z, h))
    return total
###############################################################################
# Input required to solve the optimization problem
###############################################################################
# P is an array of vector put option prices
P = [0.249999283, 0.43750315, 0.572923413, 0.760408034, 0.94790493, 1.14584317,
1.458335038, 1.77083305, 2.624999786, 3.812499791, 5.377596753, 8.06065865,
10.74376984, 14.88873497, 19.88822895]
# X is the vector of the corresponding strikes of the put options
X = [560, 570, 575, 580, 585, 590, 595, 600, 605, 610, 615, 620, 625, 630, 635]
# h and j can be chosen arbitrarily
h = 4 # the higher h the smoother the estimated risk-neutral density
j = 50 # the higher the slower the optimization process
###############################################################################
# Solving the optimization problem
###############################################################################
# z is the equally-spaced grid
z = np.linspace((int(math.floor(min(X) / 100.0)) * 100), (int(math.ceil(max(X) / 100.0)) * 100), num=j)
# initial guess of a
a_0 = np.ones(j) / j
# The a vector has to sum up to 1
constraints = ({'type': 'eq', 'fun': lambda a: 1 - np.sum(a)},)
# Each a has to be larger or equal than 0
bounds = (((0,None),)*j)
sol = minimize(sum_squared_pricing_diff, a_0, args=(P, X, z, h), method='SLSQP', constraints=constraints, bounds=bounds)
print(sol)
###############################################################################
# Visualize obtained risk-neutral density (rnd)
###############################################################################
n = 500
a_sol = sol.x
s = np.linspace(min(X)*0.8, max(X)*1.2, num=n)
rnd = pd.DataFrame(np.sum(a_sol * norm.pdf(np.tile(s, [len(a_sol),1]).transpose(), z, h), axis=1))
rnd.index = s
plt.figure()
plt.plot(rnd)
• So the $a_j$ terms are just constants? If yes, then just pull them out of the double integral? – LocalVolatility Feb 19 at 17:02
• @LocalVolatility Thanks. Yes, they are just constants. Indeed, could be a way to tackle the problem. However, this would imply that I would need to vectorize the integrals over an array and therefore could not use scipy.integrate.dblquad since it employs an adaptive algorithm. Thus, I would need to use np.trapz, scipy.integrate.simps, or scipy.integrate.romb and not be allowed to use a function object. Right? – William Burknecht Feb 19 at 21:10
Looking at this a second time, your implementation is probably not very efficient, since for $$\quad \hat{f}(u) := \sum_{j}a_j \phi(u - z_j), \quad \int_{-\infty}^y \hat{f}(u) du = \sum_j a_j \Phi(y)\;,$$
where $$\Phi(y)$$ is the transformed cumulative normal distribution function of $$y$$. This function already exists as scipy.stats.norm.cdf and is optimised.
Then you have; $$\int_{-\infty}^{x_i} \sum_j a_j \Phi(y) dy = \sum_j a_j \int_{-\infty}^{x_i} \Phi(y) dy = \sum_j a_j K_j \;.$$
These $$K_j$$ are just constants from the point of view of the $$a_j$$ optimisation so can be precomputed, and since you have scipy.stats.norm.cdf it seems you only need to use quad and not dblquad. You might also be interested to know that for a standard normal cdf (see https://math.stackexchange.com/questions/2040980/solving-approximating-integral-of-standard-normal-cdf ):
$$\int_{-\infty}^{x} \Phi(y) dy = x \Phi(x) + \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} \;.$$
$$\min_{a_j} \sum_i \left ( P_i - \sum_j a_j K_j \right )^2 \; s.t. constraints$$
With a bit more analysis I wonder if that can't be further simplified...
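The closed form above is easy to sanity-check numerically. Here is a standard-library sketch (a midpoint-rule quadrature stands in for scipy.integrate.quad, and a large negative lower bound stands in for minus infinity) comparing the two:

```python
import math

def Phi(y):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(y / math.sqrt(2)))

def K_closed(x):
    """Closed form: integral of Phi(y) dy from -inf to x equals x*Phi(x) + phi(x)."""
    return x * Phi(x) + math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def K_numeric(x, lo=-40.0, n=200000):
    """Midpoint-rule approximation of the same integral (lo approximates -inf)."""
    h = (x - lo) / n
    return h * sum(Phi(lo + (i + 0.5) * h) for i in range(n))

x = 1.3
assert abs(K_numeric(x) - K_closed(x)) < 1e-4
```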
scipy.integrate.dblquad solves the following iterated integral problem:
$$I(\alpha,\beta) = \int_{\alpha}^{\beta} \int_{g(x)}^{h(x)} f(x,y)\; dy \;dx$$
If I re-express your variables to be consistent with the scipy documentation, and suppose I changed your integral to be finite:
$$\min_{a} \int_{-\infty}^{\beta}\left(\int_{-\infty}^x \hat{f}(y) dy\right)dx \\ \hat{f}(y) := \sum_{j}a_j e^{(y - z_j)}$$
Then would it not be as simple as:
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import dblquad

def f(y, x, *args):
    a = args[0]  # weights a_j
    z = args[1]  # translations z_j
    return np.sum(a * np.exp(y - z))

def I(a, b, z):
    # iterated integral: y from -inf to x, then x from -inf to b
    return dblquad(f, -np.inf, b, lambda x: -np.inf, lambda x: x, args=(a, z))[0]
And to engage the optimisation you might try:
n = 5; b = 2; z = np.array([-2,0,3,0.5,2.2])
a_0 = np.ones(n) / n
constraints = ({'type': 'eq', 'fun': lambda a: 1 - np.sum(a)},)
bounds = (((0,None),)*n)
sol = minimize(I, a_0, args=(b, z), method='SLSQP', constraints=constraints, bounds=bounds)
print(sol)
>>> fun: 0.3678794441017473
jac: array([54.59815003, 7.3890561 , 0.36787944, 4.48168907, 0.81873076])
message: 'Optimization terminated successfully.'
nfev: 21
nit: 3
njev: 3
status: 0
success: True
x: array([5.11205896e-11, 1.43702148e-11, 1.00000000e+00, 1.29011784e-11, 0.00000000e+00])
Maybe I'm just missing something, but this optimisation is clearly successful: it has put all the weight on the integral translated furthest to the right.
• Yes in your example the optimization works well. But, when I am using realistic numbers (see edit) I am not able to find a solution in a reasonable timeframe. However, Bondarenko reports in his paper that finding the solution is very fast. Most likely, I have made a mistake in the optimization problem... – William Burknecht Feb 24 at 21:32
|
|
# Calculus and Real Analysis: open source lecture notes ready to be edited
I would like to collect a big list of good open source lecture notes for a course in
• calculus;
• real analysis.
Such notes should be in .tex format, that is, ready to be edited/modified/re-used and compiled using $\LaTeX$.
• I'm not sure if this will work, but you might try googling something like "begin{document}" along with some math words that are reasonably specific to the subject matter you want the notes in. – Dave L. Renfro Jul 2 '14 at 14:16
• I've never seen open source lecture notes. – Siminore Jul 2 '14 at 14:19
• @DaveL.Renfro If you're looking for TeX documents, you can use the filetype modifier in your Google search, for example: fractal filetype:tex. – Mark McClure Jul 2 '14 at 14:21
Here is a page with PDFs of three semesters of calculus notes and also one semester of Real Analysis. Zipped folders titled "source" of $\LaTeX$ code are also available for editing the notes. All free!!
|
|
# Canola Oil problem
A retailer purchased 38 gallons of canola oil and wants to put the oil in smaller cans (all of the same size) for sale. He knows his customers will NOT be interested in buying less than 3/5 of a gallon or more than 4/5 of a gallon of oil at a time. He doesn’t want to put the oil in 3/5 – gallon cans or 4/5 – gallon cans because this would not allow him to fill a whole number of cans to full capacity, and would leave him with some oil he would not be able to sell. Advise the retailer on the capacity of cans all of which he would be able to fill to full capacity, so that no oil is left.
Basic approach. If the retailer has cans of capacity $38/k$, where $k$ is some positive integer, then he can fill exactly $k$ cans to capacity with his $38$ gallons of canola oil. So he needs to find $k$ such that
$$\frac35 \leq \frac{38}{k} \leq \frac45$$
There will be some range of $k$ that satisfies this double-ended inequality. Find it. (Hint: What happens to an inequality when you take the reciprocal of all values, if they happen to be all positive?)
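The search for admissible can counts can be sketched directly (a quick check using exact rational arithmetic; the range bound 200 is an arbitrary safe cap):

```python
# Sketch: find all whole numbers k of cans such that 3/5 <= 38/k <= 4/5,
# i.e. 38 / (4/5) <= k <= 38 / (3/5), using exact rational arithmetic.
from fractions import Fraction

total = Fraction(38)
lo, hi = Fraction(3, 5), Fraction(4, 5)

ks = [k for k in range(1, 200) if lo <= total / k <= hi]   # 200: arbitrary cap
sizes = [total / k for k in ks]    # admissible can capacities, e.g. 38/48 gallon
print(ks[0], ks[-1])               # 48 63
```

So any can count from 48 to 63 works, each giving an exact capacity between 3/5 and 4/5 of a gallon.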
• @mjargaille: I'm not sure what your point is. There are still lots of integers that satisfy that. Anything between $\lceil 95/2 \rceil$ and $\lfloor 190/3 \rfloor$. – Brian Tung Mar 19 '18 at 20:06
|
|
dc.contributor.author: Delshams Valdés, Amadeu
dc.contributor.author: Gutiérrez Serrés, Pere
dc.contributor.author: Martínez-Seara Alonso, M. Teresa
dc.contributor.other: Universitat Politècnica de Catalunya. Departament de Matemàtica Aplicada I
dc.date.accessioned: 2007-10-01T15:46:07Z
dc.date.available: 2007-10-01T15:46:07Z
dc.date.issued: 2003
dc.identifier.uri: http://hdl.handle.net/2117/1196
dc.description.abstract: We consider a singular or weakly hyperbolic Hamiltonian, with $n+1$ degrees of freedom, as a model for the behaviour of a nearly-integrable Hamiltonian near a simple resonance. The model consists of an integrable Hamiltonian possessing an $n$-dimensional hyperbolic invariant torus with fast frequencies $\omega/\sqrt\varepsilon$ and coincident whiskers, plus a perturbation of order $\mu=\varepsilon^p$. The vector $\omega$ is assumed to satisfy a Diophantine condition. We provide a tool to study, in this singular case, the splitting of the perturbed whiskers for $\varepsilon$ small enough, as well as their homoclinic intersections, using the Poincaré-Melnikov method. Due to the exponential smallness of the Melnikov function, the size of the error term has to be carefully controlled. So we introduce flow-box coordinates in order to take advantage of the quasiperiodicity properties of the splitting. As a direct application of this approach, we obtain quite general upper bounds for the splitting.
dc.format.extent: 36 pages
dc.language.iso: eng
dc.rights: Attribution-NonCommercial-NoDerivs 2.5 Spain
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/2.5/es/
dc.subject.lcsh: Partial differential equations; Hamiltonian dynamical systems; Lagrangian functions
dc.subject.other: hyperbolic KAM theory; flow-box coordinates; Poincaré-Melnikov method
dc.title: Exponentially small splitting for whiskered tori in Hamiltonian systems: Flow-box coordinates and upper bounds
dc.type: Article
dc.subject.lemac: Equacions en derivades parcials; Hamilton, Sistemes de; Lagrange, Funcions de
dc.contributor.group: Universitat Politècnica de Catalunya. EGSA - Equacions Diferencials, Geometria, Sistemes Dinàmics i de Control, i Aplicacions
dc.subject.ams: Classificació AMS::35 Partial differential equations::35N Overdetermined systems; Classificació AMS::70 Mechanics of particles and systems::70H Hamiltonian and Lagrangian mechanics
dc.rights.access: Open Access
|
|
Let $a,\,b,\,c$ be known integers and $p$ an odd prime number not dividing $a$. The number of non-congruent roots of the quadratic congruence
$\displaystyle ax^{2}\!+\!bx\!+\!c\;\equiv\;0\pmod{p}$ (1)
is
• two, if $b^{2}\!-\!4ac$ is a quadratic residue modulo $p$;
• one, if $b^{2}\!-\!4ac\equiv 0\pmod{p}$;
• zero, if $b^{2}\!-\!4ac$ is a quadratic nonresidue modulo $p$.
Proof. Since $\gcd(p,\,4a)=1$, multiplying (1) by $4a$ gives an equivalent (http://planetmath.org/Equivalent3) congruence
$4a^{2}x^{2}\!+\!4abx\!+\!4ac\;\equiv\;0\pmod{p}$
which may furthermore be written as
$(2ax\!+\!b)^{2}\;\equiv\;b^{2}\!-\!4ac\pmod{p}.$
Accordingly, one can obtain the solution of the given congruence from the solution of the pair of congruences
$\displaystyle\begin{cases}y^{2}\;\equiv\;b^{2}\!-\!4ac\pmod{p}\qquad\qquad(2)\\ 2ax\!+\!b\;\equiv\;y\pmod{p}.\qquad\qquad(3)\\ \end{cases}$
Case 1: $b^{2}\!-\!4ac$ is a quadratic residue$\pmod{p}$. Then (2) has a root $y=y_{0}\neq 0$, and therefore also the second root $y=-y_{0}$. The roots $y=\pm y_{0}$ are incongruent, because otherwise one would have $p\mid 2y_{0}$ and hence, since $p$ is odd, $p\mid y_{0}$, so that $p\mid y_{0}^{2}\equiv b^{2}\!-\!4ac$, which is not possible in this case. Via the linear congruence (3), each of $\pm y_{0}$ then yields one root of (1), and these two roots are incongruent.
Case 2: $b^{2}\!-\!4ac\equiv 0\pmod{p}$. Now (2) implies that $y\equiv 0\pmod{p}$, whence the corresponding root $x_{0}$ of the linear congruence (3) does not allow other incongruent roots for (1).
Case 3: $b^{2}\!-\!4ac$ is a quadratic nonresidue$\pmod{p}$. The congruence (2) cannot have solutions; the same concerns thus also (1).
Example. Solve the congruence
$4x^{2}+6x-3\;\equiv\;0\pmod{43}.$
We have $b^{2}\!-\!4ac=36+4\cdot 4\cdot 3=84\equiv-2\pmod{43}$ and the Legendre symbol
$\left(\frac{-2}{43}\right)\;=\;\left(\frac{-1}{43}\right)\left(\frac{2}{43}\right)\;=\;-1\cdot(-1)\;=\;1$
(see values of the Legendre symbol) says that $-2$ is a quadratic residue modulo 43. The congruence corresponding to (2) is
$y^{2}\;\equiv\;-2\pmod{43},$
which is satisfied by $y\equiv\pm 16\pmod{43}$ as one finds after a little experimenting. Then we have the two linear congruences $2\cdot 4x+6\equiv\pm 16\pmod{43}$, i.e.
$4x\;\equiv\;\pm 8-3\pmod{43}$
corresponding to (3). The first of them, $4x\equiv 5\pmod{43}$, is satisfied by $x=12$, and the second, $4x\equiv-11\pmod{43}$, by $x=8$. Thus the solution of the given congruence is
$x\;\equiv\;8\pmod{43}\quad\mbox{or}\quad x\;\equiv\;12\pmod{43}.$
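As a quick numerical check (a brute-force sketch, not part of the original entry), one can list the roots mod $p$ and evaluate the Legendre symbol of the discriminant via Euler's criterion:

```python
# Brute-force sketch: list the roots of a x^2 + b x + c = 0 (mod p)
# and compare with the discriminant criterion above.
def quadratic_congruence_roots(a, b, c, p):
    return [x for x in range(p) if (a * x * x + b * x + c) % p == 0]

def legendre(n, p):
    # Euler's criterion: n^((p-1)/2) = 1 (mod p) iff n is a quadratic residue
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

a, b, c, p = 4, 6, -3, 43
roots = quadratic_congruence_roots(a, b, c, p)
disc = (b * b - 4 * a * c) % p               # 84, i.e. -2 mod 43
print(roots, legendre(disc, p))              # [8, 12] 1
```

The discriminant is a residue, so the theorem predicts exactly two incongruent roots, which the brute-force search confirms.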
|
|
# Why no fundamental force from the Higgs? [duplicate]
I wish to ask whether I understand the following correctly. This universe seems to have six kinds of fundamental elementary bosons, namely the photon $(\gamma)$, the $W$-bosons $(W^+,W^-)$, the gluon $(g)$, the $Z$-boson $(Z)$, the Higgs boson $(H^0)$ and the graviton $(G)$. Each fundamental force of nature is mediated by the exchange of one of these particles: electromagnetism by $\gamma$, the strong force by $g$, the weak force by $Z, W^+, W^-$, and gravitation by $G$. This brings forward another question: I know that particles get their mass from interaction with $H^0$, but does it also analogously result in some fundamental force? If not, then why the exception?
## marked as duplicate by Ben Crowell, user1504, Qmechanic♦Jun 6 '13 at 19:08
• The Higgs boson is a spin-0 field while all others have spin-1. (Graviton has spin-2) – Prahar Jun 6 '13 at 16:31
• Small suggestion: a more descriptive title would help, since as it stands this could apply to any question on the site. – user10851 Jun 6 '13 at 16:31
• Related and partial duplicate: physics.stackexchange.com/q/1080 – Alfred Centauri Jun 6 '13 at 16:34
• Edited the title to make it more descriptive. But this is a duplicate -- voting to close. – Ben Crowell Jun 6 '13 at 18:41
While all the particles you mention are bosons, they don't all play the same role, since they have different spin. The photon, the $W$-bosons, the $Z$-boson and the gluons are all spin-1 particles. A force mediated by a spin-1 particle can be both attractive and repulsive, depending on the charges of the particles exchanging the bosons. The graviton, on the other hand, has spin 2, and the force mediated by a spin-2 boson is always attractive. This explains why electric charges can either attract or repel each other depending on the signs of the charges, but gravity always makes masses attract each other.
|
|
How many distinct words can be formed by permuting the letters of the word ABRACADABRA?
1. $\frac{11!}{5! \: 2! \: 2!}$
2. $\frac{11!}{5! \: 4! }$
3. $11! \: 5! \: 2! \: 2!\:$
4. $11! \: 5! \: 4!$
5. $11!$
$\text{ABRACADABRA}$
$A\rightarrow 5\\ B\rightarrow 2\\ R\rightarrow 2$
The total number of permutations of the $11$ letters is $11!$.
Now we divide by the number of arrangements of the repeated letters, since permuting identical letters does not produce a new word:
$=\dfrac{11!}{5!2!2!}$
Hence option A) is correct.
The word 'ABRACADABRA' has 11 letters, so the total number of permutations is 11!.
However, the 11 letters are not all distinct; the duplicates are repeated as follows.
A - 5 times
B- 2 times
R- 2 times
C and D are unique.
Therefore answer will be option A.
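The count can be confirmed as a multinomial coefficient (a short sketch):

```python
# Sketch: count distinct permutations of ABRACADABRA as 11! / (5! 2! 2!)
from math import factorial
from collections import Counter

word = "ABRACADABRA"
counts = Counter(word)        # A: 5, B: 2, R: 2, C: 1, D: 1
n = factorial(len(word))
for c in counts.values():
    n //= factorial(c)        # divide out permutations of identical letters
print(n)                      # 83160
```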
|
|
## Chemistry: A Molecular Approach (3rd Edition)
Firstly, we convert cm$^{2}$ to m$^{2}$: since 1 cm$^{2}$ = 1$\times10^{-4}$ m$^{2}$, we have 125 cm$^{2}$ = 0.0125 m$^{2}$. Secondly, we find the pressure in pascals using Pressure=$\frac{Force}{Area}$: Pressure=$\frac{2.31\times10^{4}N}{0.0125m^{2}}$ = 1848000 Pa. In the last step we convert pascals into atm: since 1 atm = 101325 Pa, $\frac{1848000}{101325}$ = 18.2 atm.
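The arithmetic can be sketched as:

```python
# Sketch of the conversion: N / m^2 gives Pa, then Pa -> atm
force_N = 2.31e4
area_m2 = 125 * 1e-4               # 125 cm^2 expressed in m^2
pressure_Pa = force_N / area_m2    # about 1,848,000 Pa
pressure_atm = pressure_Pa / 101325
print(round(pressure_atm, 1))      # 18.2
```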
|
|
# nLab final functor
### Context
#### Limits and colimits
limits and colimits
# Contents
## Idea
A functor $F : C \to D$ is final (often called cofinal), if we can restrict diagrams on $D$ to diagrams on $C$ along $F$ without changing their colimit.
Dually, a functor is initial (sometimes called co-cofinal) if pulling back diagrams along it does not change the limits of these diagrams.
Beware that this property is pretty much unrelated to that of a functor being an initial object or terminal object in the functor category $[C,D]$. The terminology comes instead from the fact that an object $d\in D$ is initial (resp. terminal) just when the corresponding functor $d:1\to D$ is initial (resp. final).
## Definition
###### Definition
A functor $F : C \to D$ is final if for every object $d \in D$ the comma category $(d/F)$ is non-empty and connected.
A functor $F : C \to D$ is initial if the opposite $F^{op} : C^{op} \to D^{op}$ is final, i.e. if for every object $d \in D$ the comma category $(F/d)$ is non-empty and connected.
## Properties
###### Proposition
Let $F : C \to D$ be a functor
The following conditions are equivalent.
1. $F$ is final.
2. For all functors $G : D \to Set$ the natural function between colimits
$\lim_\to G \circ F \to \lim_{\to} G$
is a bijection.
3. For all categories $E$ and all functors $G : D \to E$ the natural morphism between colimits
$\lim_\to G \circ F \to \lim_{\to} G$
is an isomorphism.
4. For all functors $G : D^{op} \to Set$ the natural function between limits
$\lim_\leftarrow G \to \lim_\leftarrow G \circ F^{op}$
is a bijection.
5. For all categories $E$ and all functors $G : D^{op} \to E$ the natural morphism
$\lim_\leftarrow G \to \lim_\leftarrow G \circ F^{op}$
is an isomorphism.
6. For all $d \in D$
${\lim_\to}_{c \in C} Hom_D(d,F(c)) \simeq * \,.$
###### Proposition
If $F : C \to D$ is final then $C$ is connected precisely if $D$ is.
###### Proposition
If $F_1$ and $F_2$ are final, then so is their composite $F_1 \circ F_2$.
If $F_2$ and the composite $F_1 \circ F_2$ are final, then so is $F_1$.
If $F_1$ is a full and faithful functor and the composite is final, then both functors separately are final.
## Generalizations
The generalization of the notion of final functor from category theory to (∞,1)-higher category theory is described at
The characterization of final functors is also a special case of the characterization of exact squares.
## Examples
###### Example
If $D$ has a terminal object then the functor $F : {*} \to D$ that picks that terminal object is final: for every $d \in D$ the comma category $d/F$ is equivalent to $*$. The converse is also true: if a functor $*\to D$ is final, then its image is a terminal object.
In this case the statement about preservation of colimits states that the colimit over a category with a terminal object is the value of the diagram at that object. Which is also readily checked directly.
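As a toy sanity check (an ad-hoc finite-poset encoding, assumed purely for illustration and not anything from this page), the comma-category criterion for a functor $* \to D$ picking an object $t$ reduces, when $D$ is a poset, to $d \le t$ for every $d$:

```python
# Toy sketch: in a poset D, Hom(d, c) is nonempty iff d <= c and has at most
# one element, so the comma category d/F for F: * -> D picking t has at most
# one object.  Finality (d/F nonempty and connected for every d) therefore
# reduces to: d <= t for all d, i.e. t is terminal.
def is_final_point(objects, leq, t):
    """Check the comma-category criterion for the functor * -> D picking t."""
    return all(leq(d, t) for d in objects)

# D = divisibility order on {1, 2, 3, 6}; leq(d, c) means "d divides c"
objects = [1, 2, 3, 6]
leq = lambda d, c: c % d == 0

assert is_final_point(objects, leq, 6)       # 6 is terminal, so * -> D is final
assert not is_final_point(objects, leq, 2)   # 3 does not divide 2
```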
###### Example
Every right adjoint functor is final.
###### Proof
Let $(L \dashv R) : C \to D$ be a pair of adjoint functors. To see that $R$ is final, we may for instance check that for all $d \in D$ the comma category $d / R$ is non-empty and connected:
It is non-empty because it contains the adjunction unit $(L(d), d \to R L (d))$. Similarly, for
$\array{ && d \\ & {}^{\mathllap{f}}\swarrow && \searrow^{\mathrlap{g}} \\ R(a) &&&& R(b) }$
two objects, they are connected by a zig-zag going through the unit, by the universal factorization property of adjunctions
$\array{ && d \\ & \swarrow &\downarrow& \searrow \\ R(a) &\stackrel{R \bar f}{\leftarrow}& R L (d)& \stackrel{R(\bar g)}{\to} & R(b) } \,.$
###### Example
The inclusion $\mathcal{C} \to \tilde \mathcal{C}$ of any category into its idempotent completion is final.
See at idempotent completion in the section on Finality.
###### Example
The inclusion of the cospan diagram into its cocone
$\left( \array{ a \\ \downarrow \\ c \\ \uparrow \\ b } \right) \hookrightarrow \left( \array{ a \\ \downarrow & \searrow \\ c &\longrightarrow & p \\ \uparrow & \nearrow \\ b } \right)$
is initial.
###### Remark
By the characterization (here) of limits in a slice category, this implies that fiber products in a slice category are computed as fiber products in the underlying category, or in other words that dependent sum to the point preserves fiber products.
## References
Section 2.5 of
Section 2.11 of
• Francis Borceux, Handbook of categorical algebra 1, Basic category theory
Notice that this says “final functor” for the version under which limits are invariant.
Section IX.3 of
Revised on November 4, 2014 19:24:28 by Urs Schreiber (141.0.8.155)
|
|
Describe the thermal stability of carbonates of alkali and alkaline earth metals.

On heating, a metal carbonate decomposes to the metal oxide and carbon dioxide:

MCO3 → MO + CO2

For example, Li2CO3 → Li2O + CO2 and MgCO3 → MgO + CO2. Sodium carbonate, by contrast, merely melts on heating; Na2CO3 does not decompose, so no CO2 is released.

The carbonates of the alkali metals, except lithium carbonate, are stable towards heat; Li2CO3 is considerably less stable and decomposes readily. The carbonates of the alkaline earth metals all decompose on strong (red) heating. Within each group, thermal stability increases down the group, so the order for the alkaline earth carbonates is BeCO3 < MgCO3 < CaCO3 < SrCO3 < BaCO3, and the most stable species is BaCO3. Across a period, thermal stability decreases from left to right, so a group 1 carbonate is relatively more stable than the group 2 carbonate of the same period.

Without going into the energetics, one can safely state that the reason for these trends is the polarising power of the metal cation. In the carbonate ion the charge is delocalized over a large anion, so it is easily polarized by a small, highly charged cation: the smaller the ionic radius of the cation, the more densely charged it is. The resulting distortion weakens a C-O bond and allows the -CO2 group to depart easily. Since ionic radius increases down a group, polarising power falls and the carbonates become harder to decompose.

Solubility shows the opposite trend in group 2: the solubility of the alkaline earth carbonates decreases from Be to Ba, and they are insoluble in water. The carbonates of the alkali metals are soluble in water, with the exception of Li2CO3, and both the carbonates and the bicarbonates become more soluble from Li to Cs. In the presence of carbon dioxide, the alkaline earth carbonates dissolve by forming the more soluble bicarbonates:

CaCO3 + CO2 + H2O → Ca(HCO3)2

The bicarbonates of all the alkali metals are known in the solid state (lithium bicarbonate exists only in solution); on heating, a bicarbonate gives the carbonate, water and carbon dioxide.

In an applied context, blending alkali metal carbonates into MZO (X2CO3:MZO) has been found to control the band-gap, electrical properties and thermal stability of electron-transport layers, which can enhance the operational lifetime of QLEDs.
This group undergo thermal decomposition effect of heating on it links for sodium and lithium carbonates the... Hydrocarbonate or barium the group charge is delocalized and is thus highly.... Tips on writing great answers spring constant of cantilever beam Stack be?. Have problem understanding entropy because of some contrary examples such that a pair opposing! Bicarbonate stability increases down the group 1, lithium carbonate ( Li 2 CO 3 ) give why... Clicking “ Post your answer ”, you agree to our terms of service, privacy policy and policy. The same way - producing lithium oxide hydrides by the action of halogen acids ( HX ) on metals metals... Metal carbonates blending in MZO, control the band-gap, electrical properties, thermal!
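In equation form, the decomposition chemistry discussed here (a summary sketch; conditions are indicative only):

```latex
\begin{align*}
\mathrm{MCO_3(s)} &\xrightarrow{\ \Delta\ } \mathrm{MO(s)} + \mathrm{CO_2(g)}
  && \text{(M = Mg, Ca, Sr, Ba; decomposition gets harder down the group)}\\
\mathrm{Li_2CO_3(s)} &\xrightarrow{\ \Delta\ } \mathrm{Li_2O(s)} + \mathrm{CO_2(g)}
  && \text{(lithium behaves like a group 2 metal)}\\
\mathrm{Na_2CO_3(s)} &\xrightarrow{\ \Delta\ } \text{no change at comparable temperatures}
\end{align*}
```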
|
|
# zbMATH — the first resource for mathematics
On asymptotic properties of maximum likelihood estimators for parabolic stochastic PDE’s. (English) Zbl 0831.60070
Summary: We investigate asymptotic properties of the maximum likelihood estimators for parameters occurring in parabolic SPDEs of the form $du(t,x) = (A_0 + \theta A_1) u(t,x)\, dt + dW(t,x),$ where $$A_0$$ and $$A_1$$ are partial differential operators and $$W$$ is a cylindrical Brownian motion. We introduce a spectral method for computing MLEs based on finite-dimensional approximations to solutions of such systems, and establish criteria for consistency, asymptotic normality and asymptotic efficiency as the dimension of the approximation goes to infinity. We derive the asymptotic properties of the MLE from a condition on the order of the operators. In particular, the MLE is consistent if and only if $$\text{ord}(A_1)\geq {1\over 2}(\text{ord}(A_0+ \theta A_1)- d)$$.
##### MSC:
60H15 Stochastic partial differential equations (aspects of stochastic analysis)
62F10 Point estimation
65C99 Probabilistic methods, stochastic differential equations
|
|
## Geometry: Common Core (15th Edition)
$$y-0 = -2(x-4)$$
We know that if the slopes of two lines are opposite reciprocals, the lines are perpendicular. Thus, we find the opposite reciprocal of the slope of the given line and plug the given point into the point-slope equation for a line, which is: $$y-y_1=m(x-x_1)$$ This gives: $$y-0 = -2(x-4)$$
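As a worked check (the slope $\tfrac{1}{2}$ for the given line is an assumption, inferred from the perpendicular slope of $-2$ used in the answer):

```latex
% Assumed: the given line has slope m = 1/2, and the given point is (4, 0).
\[
  m_{\perp} = -\frac{1}{m} = -\frac{1}{1/2} = -2,
  \qquad
  m \cdot m_{\perp} = \tfrac{1}{2}\cdot(-2) = -1,
\]
\[
  y - y_1 = m_{\perp}\,(x - x_1)
  \quad\Longrightarrow\quad
  y - 0 = -2\,(x - 4).
\]
```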
|
|
I think Seagate might suck
The war between wetware and hardware.
Rob Lister
Posts: 19827
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed
Has thanked: 570 times
Been thanked: 579 times
I think Seagate might suck
http://lifehacker.com/the-most-and-leas ... 1505797966
I own two seagates. One for the last three years and one for about two years. Neither has given any indication of failing but that's the way they play!
I got seagate because they were most cost effective. which is Latin for cheap.
ed
Posts: 33010
Joined: Tue Jun 08, 2004 11:52 pm
Title: Rhino of the Florida swamp
Has thanked: 437 times
Been thanked: 751 times
Re: I think Seagate might suck
I've had two fail. What I do now is use the cloud for back up of non confidential stuff and a big ass flash drive for everything else. Oh yeah, it comes back to me .. seagate, freezer, wisps of smoke
Wenn ich Kultur höre, entsichere ich meinen Browning!
Witness
Posts: 16428
Joined: Thu Sep 19, 2013 5:50 pm
Has thanked: 1988 times
Been thanked: 2718 times
Re: I think Seagate might suck
Some points worth noting:
Backblaze wrote:The Seagate Barracuda Green 1.5TB drive, though, has not been doing well. We got them from Seagate as warranty replacements for the older drives, and these new drives are dropping like flies. Their average age shows 0.8 years, but since these are warranty replacements, we believe that they are refurbished drives that were returned by other customers and erased, so they already had some usage when we got them.
Backblaze wrote:A year and a half ago, Western Digital acquired the Hitachi disk drive business. Will Hitachi drives continue their excellent performance? Will Western Digital bring some of the Hitachi reliability into their consumer-grade drives?
Correction: Hitachi’s 2.5″ hard drive business went to Western Digital, while the 3.5″ hard drive business went to Toshiba.
asthmatic camel
Posts: 18114
Joined: Sat Jun 05, 2004 1:53 pm
Title: Forum commie nun.
Location: Stirring the porridge with my spurtle.
Has thanked: 451 times
Been thanked: 836 times
Re: I think Seagate might suck
I've had a 2 TB Seagate USB3 drive for over a year now. Works a treat so far...
Shit happens. The older you get, the more often shit happens. So you have to try not to give a shit even when you do. Because, if you give too many shits, you've created your own shit creek and there's no way out other than swimming through the shit. Oh, and fuck.
Rob Lister
Posts: 19827
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed
Has thanked: 570 times
Been thanked: 579 times
Re: I think Seagate might suck
asthmatic camel wrote:I've had a 2 TB Seagate USB3 drive for over a year now. Works a treat so far...
That puts you, from the rough view, in the 95th percentile.
In year two you're at 80%
Anaxagoras
Posts: 21444
Joined: Wed Mar 19, 2008 5:45 am
Location: Yokohama/Tokyo, Japan
Has thanked: 1385 times
Been thanked: 1173 times
Re: I think Seagate might suck
Well, the majority of them are still OK so maybe you got one of the good ones. Nevertheless, I'll take data over anecdotes. Looks like Hitachi is the way to go.
A fool thinks himself to be wise, but a wise man knows himself to be a fool.
William Shakespeare
sparks
Posts: 13992
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!
Has thanked: 1908 times
Been thanked: 592 times
Re: I think Seagate might suck
Any data on laptop drives? (no pun intended)
You can lead them to knowledge, but you can't make them think.
Rob Lister
Posts: 19827
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed
Has thanked: 570 times
Been thanked: 579 times
Re: I think Seagate might suck
sparks wrote:Any data on laptop drives? (no pun intended)
That assumes some laptop maker would put that crap in their machine.
All I know is that I've got two Seagates and I'm quickly heading to ed-land.
Nobody wants to be in ed-computer-broken-land.
Pyrrho
Posts: 25871
Joined: Sat Jun 05, 2004 2:17 am
Title: Man in Black
Location: Division 6
Has thanked: 2713 times
Been thanked: 2766 times
Re: I think Seagate might suck
Toshiba internal here, 2TB Seagate external for backup.
Usually I buy Western Digital. I needed the Seagate and it was cheap.
You get what you pay for I guess.
The flash of light you saw in the sky was not a UFO. Swamp gas from a weather balloon was trapped in a thermal pocket and reflected the light from Venus.
WildCat
Posts: 13830
Joined: Tue Jun 15, 2004 2:53 am
Location: The 33rd Ward, Chicago
Has thanked: 32 times
Been thanked: 327 times
Re: I think Seagate might suck
I always get Western Digital Black, 5 year warranty.
Had one fail in less than a year, but they replaced it.
Do you have questions about God?
you sniveling little right-wing nutter - jj
sparks
Posts: 13992
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!
Has thanked: 1908 times
Been thanked: 592 times
Re: I think Seagate might suck
Toshiba laptop here and you'd assume it has a Toshiba drive, but who knows without crackin' it open. It's a 1TB and it will soon be full, so I'm looking for at least 2TB or more (as they become available) to clone and replace...................
You can lead them to knowledge, but you can't make them think.
Rob Lister
Posts: 19827
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed
Has thanked: 570 times
Been thanked: 579 times
Re: I think Seagate might suck
After thinking 'bout this for about a week now, I can see a pending hit on my wallet.
This [non]issue led me to research other network attached storage. I gotta tell ya, I didn't even know what network attached storage really was until I discovered this [non]problem.
4-zamples, I didn't know (realize really) that NAS devices were in fact full-fledged computers. It so happens that my [kinda]NAS (Seagate Media Central) just happens to be the very bottom rung of that class because it is the cheapest thing on the market, because that's what they had at Worst Buy. Worst Buy doesn't deal in higher end networking stuff, or at least not in their B&M stores. I came to find there are whole worlds of NAS devices to choose from, and all of them complicated and expensive.
So, what I got is this
Spoiler:
So, what I really want is this
Spoiler:
But my wallet can only afford this
Spoiler:
But since I'm a cheap, lazy bastard who never follows through with plans unless there's money in it, I'm going to settle for this
Spoiler:
At least until something breaks.
Bottom line, even with the 2x 3TB raid 1 configuration, it would set me back about $750. For that I could buy 5 Seagate media centrals. Small net builder tells me my seagate system sucks so bad in terms of moving data that the USPS might even be a bit faster. But I haven't found a 'home' situation where demand outpaced supply. Doctor X Posts: 67478 Joined: Fri Jun 04, 2004 8:09 pm Title: Collective Messiah Location: Your Mom Has thanked: 3374 times Been thanked: 2144 times Re: I think Seagate might suck --J.D. Mob of the Mean: Free beanie, cattle-prod and Charley Fan Club! "Doctor X is just treating you the way he treats everyone--as subhuman crap too dumb to breathe in after you breathe out."--Don DocX: FTW.--sparks "Doctor X wins again."--Pyrrho "Never sorry to make a racist Fucktard cry."--His Humble MagNIfIcence "It was the criticisms of Doc X, actually, that let me see more clearly how far the hypocrisy had gone."--clarsct "I'd leave it up to Doctor X who has been a benevolent tyrant so far."--Grammatron "Indeed you are a river to your people. Shit. That's going to end up in your sig."--Pyrrho "Try a twelve step program and accept Doctor X as your High Power."--asthmatic camel "just like Doc X said." --gnome WS CHAMPIONS X3!!! NBA CHAMPIONS!! Stanley Cup! SB CHAMPIONS X5!!!!! AL East Champs!! Rob Lister Posts: 19827 Joined: Sun Jul 18, 2004 7:15 pm Title: Incipient toppler Location: Swimming in Lake Ed Has thanked: 570 times Been thanked: 579 times Re: I think Seagate might suck You'll need a case for that Write Device Rob Lister Posts: 19827 Joined: Sun Jul 18, 2004 7:15 pm Title: Incipient toppler Location: Swimming in Lake Ed Has thanked: 570 times Been thanked: 579 times Re: I think Seagate might suck Backblaze has posted some more stats about hard drive failures. Pretty technical stuff but worth the read. Original short article via Computer World http://www.computerworld.com/article/28 ... 
ilure.html More in-depth article on Backblaze blog https://www.backblaze.com/blog/hard-drive-smart-stats/ Elaborates on how they anticipate a drive failure. blah, blah, blah But what I like is their openness about it all. Led me to look into who the fuck they are. They will back up all your [important] unique data on all your hard drives for $5 a month. It does the back up over the internet transparently on an immediate, idle or scheduled basis (idle by default). No limit to size. If you lose something and want it back, you can download it --OR-- they will send you a fucking hard drive. I'm sure that costs extra but whatever.
$5 a month is a good deal. That works out to$60 a year.
OT-one-H, I have 6 drives in the home. If I were to buy enough drive capacity to back all that up (and some of it is dedicated to back up anyway) and plug in my own hard drive failure rates, my gut tells me the cost is a wash.
OT-other-H, keeping it all in the home puts all my eggs in one basket.
OT-gripping-H, the risk is pretty small compared to my principled position of keeping my data close to my chest. I'm really not that principled though, and all my data is pretty mundane and they do offer private key encryption.
I may do this. No longer need that freezer.
Oh, interesting pic of a backblaze pod
Doctor X
Posts: 67478
Joined: Fri Jun 04, 2004 8:09 pm
Title: Collective Messiah
Has thanked: 3374 times
Been thanked: 2144 times
Re: I think Seagate might suck
Because it is not as if anyone could hack that and obtain all of your French Goat Porn.
--J.D.
Mob of the Mean: Free beanie, cattle-prod and Charley Fan Club!
"Doctor X is just treating you the way he treats everyone--as subhuman crap too dumb to breathe in after you breathe out."--Don
DocX: FTW.--sparks
"Doctor X wins again."--Pyrrho
"Never sorry to make a racist Fucktard cry."--His Humble MagNIfIcence
"It was the criticisms of Doc X, actually, that let me see more clearly how far the hypocrisy had gone."--clarsct
"I'd leave it up to Doctor X who has been a benevolent tyrant so far."--Grammatron
"Indeed you are a river to your people.
Shit. That's going to end up in your sig."--Pyrrho
"Try a twelve step program and accept Doctor X as your High Power."--asthmatic camel
"just like Doc X said." --gnome
WS CHAMPIONS X3!!! NBA CHAMPIONS!! Stanley Cup! SB CHAMPIONS X5!!!!!
AL East Champs!!
Rob Lister
Posts: 19827
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed
Has thanked: 570 times
Been thanked: 579 times
Re: I think Seagate might suck
Rob Lister wrote:
http://lifehacker.com/the-most-and-leas ... 1505797966
I own two seagates. One for the last three years and one for about two years. Neither has given any indication of failing but that's the way they play!
I got seagate because they were most cost effective. which is Latin for cheap.
An update to this: the one I've had for two years failed big time last night. I got a "Server not found" error on my tv and thought my desktop server was down. Nope, the seagate central was off. I tried rebooting it and it wouldn't turn on. Nothing. Nada. Zero. D.E.A.D. So fuck. I checked my records and I'm still in warranty by a month. I got an RMA and just shipped it back to them. I should get the replacement in about 2 weeks.
I lost about 50GB in data that wasn't backed up elsewhere. No biggy, I can re-rip that stuff from DVD but it is a pain.
Meanwhile, I'm looking for something more reliable, something substantial, to subsidize my misery. I'm thinking of going with Qnap or Synology but fuck they are expensive. Maybe a 4TB western digital my cloud mirror (2TB raid 1).
Make me decide.
asthmatic camel
Posts: 18114
Joined: Sat Jun 05, 2004 1:53 pm
Title: Forum commie nun.
Location: Stirring the porridge with my spurtle.
Has thanked: 451 times
Been thanked: 836 times
Re: I think Seagate might suck
50GB of Lister porn must be a biggy.
Shit happens. The older you get, the more often shit happens. So you have to try not to give a shit even when you do. Because, if you give too many shits, you've created your own shit creek and there's no way out other than swimming through the shit. Oh, and fuck.
Doctor X
Posts: 67478
Joined: Fri Jun 04, 2004 8:09 pm
Title: Collective Messiah
Has thanked: 3374 times
Been thanked: 2144 times
Re: I think Seagate might suck
Enter "goats" "male" "underage" and "butt-plug" in your search engine.
Just scroll past all of the pictures of His Mom.
--J.D.
Mob of the Mean: Free beanie, cattle-prod and Charley Fan Club!
"Doctor X is just treating you the way he treats everyone--as subhuman crap too dumb to breathe in after you breathe out."--Don
DocX: FTW.--sparks
"Doctor X wins again."--Pyrrho
"Never sorry to make a racist Fucktard cry."--His Humble MagNIfIcence
"It was the criticisms of Doc X, actually, that let me see more clearly how far the hypocrisy had gone."--clarsct
"I'd leave it up to Doctor X who has been a benevolent tyrant so far."--Grammatron
"Indeed you are a river to your people.
Shit. That's going to end up in your sig."--Pyrrho
"Try a twelve step program and accept Doctor X as your High Power."--asthmatic camel
"just like Doc X said." --gnome
WS CHAMPIONS X3!!! NBA CHAMPIONS!! Stanley Cup! SB CHAMPIONS X5!!!!!
AL East Champs!!
asthmatic camel
Posts: 18114
Joined: Sat Jun 05, 2004 1:53 pm
Title: Forum commie nun.
Location: Stirring the porridge with my spurtle.
Has thanked: 451 times
Been thanked: 836 times
Re: I think Seagate might suck
Fuck.
I just tried to use my Seagate drive and no can do with USB3 ports.
Works fine with USB2 tho' so not sure if it's a mobo problem, the first effort failed.
Time for Doc to send me an apple.
Shit happens. The older you get, the more often shit happens. So you have to try not to give a shit even when you do. Because, if you give too many shits, you've created your own shit creek and there's no way out other than swimming through the shit. Oh, and fuck.
|
|
# Type regular documents using regular characters in Latex?
I am typing letters in LaTeX and I would like to allow characters such as & and ' to be treated as regular characters and not LaTeX-related commands. Is there a simple way to do this? Thanks
-
You could use a verbatim environment or insert them in a listings environment, or change the activeness of those characters. – Christian Hupfer May 15 at 10:29
How do I use the verbatim package? – linuxfreebird May 15 at 10:32
\begin{verbatim} your text with & and ' characters \end{verbatim} -- but it does not look very nice. You don't need an extra package. – Christian Hupfer May 15 at 10:33
You're right, verbatim does not look very nice. How does listings work? – linuxfreebird May 15 at 10:36
The question is, whether you want just put some (code) examples of & and ' characters in running text or layout them nicely... or just use David's answer below ;-) – Christian Hupfer May 15 at 10:38
' is treated as a regular character.

For &, if you don't need any tables or math alignments, just put

\catcode`\&=12

after \begin{document}

You should probably use

\usepackage[T1]{fontenc}

as well, as the default OT1 encoding is a little eccentric
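A minimal document putting this together (a sketch; note that after the \catcode change, & can no longer be used for tabular alignment later in the document):

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\begin{document}
\catcode`\&=12 % make & an ordinary "other" character from here on
Smith & Sons' workshop % & and ' now print as regular characters
\end{document}
```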
-
|
|
# Experiments of Rayleigh and Brace
The experiments of Rayleigh and Brace (1902, 1904) were aimed to show whether length contraction leads to birefringence or not. They were some of the first optical experiments measuring the relative motion of Earth and the luminiferous aether which were sufficiently precise to detect magnitudes of second order in v/c. The results were negative, which was of great importance for the development of the Lorentz transformation and consequently of the theory of relativity. See also Tests of special relativity.
## The experiments
To explain the negative outcome of the Michelson–Morley experiment, George FitzGerald (1889) and Hendrik Lorentz (1892) introduced the contraction hypothesis, according to which a body is contracted during its motion through the stationary aether.
Lord Rayleigh (1902) interpreted this contraction as a mechanical compression which should lead to optical anisotropy of materials, so the different refraction indices would cause birefringence. To measure this effect, he installed a tube of 76 cm length upon a rotatable table. The tube was closed by glass at its ends and was filled with carbon bisulphide or water, and the liquid sat between two Nicol prisms. Through the liquid, light (produced by an electric lamp and, more importantly, by limelight) was sent to and fro. The experiment was sufficiently precise to measure retardations of $\tfrac{1}{6000}$ of a half wavelength, i.e. of the order 1.2×10−10. Depending on the direction relative to Earth's motion, the expected retardation due to birefringence was of order 10−8, which was well within the accuracy of the experiment. Therefore, it was, besides the Michelson-Morley experiment and the Trouton–Noble experiment, one of the few experiments by which magnitudes of second order in v/c could be detected. However, the result was completely negative. Rayleigh repeated the experiments with layers of glass plates (although with precision diminished by a factor of 100), and again obtained a negative result.[1]
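The second-order magnitude quoted here follows directly from Earth's orbital speed of roughly 30 km/s:

```latex
\[
  \frac{v}{c} \approx \frac{3\times10^{4}\,\mathrm{m/s}}{3\times10^{8}\,\mathrm{m/s}} = 10^{-4},
  \qquad
  \left(\frac{v}{c}\right)^{2} \approx 10^{-8},
\]
```

so an apparatus resolving retardations of order 10−10 had two orders of magnitude to spare.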
However, those experiments were criticized by DeWitt Bristol Brace (1904). He argued that Rayleigh hadn't properly considered the consequences of contraction (0.5×10−8 instead of 10−8) as well as of the refraction index, so that the results were in no way conclusive. Therefore, Brace conducted experiments of much higher precision. He employed an apparatus that was 4.13 m long, 15 cm wide, and 27 cm deep, which was filled with water, and which could be rotated (depending on the experiment) about a vertical or a horizontal axis. Sunlight was directed into the water through a system of lenses, mirrors and reflection prisms, and was reflected 7 times so that it traversed 28.5 m. In this way, a retardation of order 7.8×10−13 was observable. However, Brace also obtained a negative result. Another experimental installation with glass instead of water (precision: 4.5×10−11) also yielded no sign of birefringence.[2]
The absence of birefringence was initially interpreted by Brace as a refutation of length contraction. However, it was shown by Lorentz (1904) and Joseph Larmor (1904) that when the contraction hypothesis is maintained and the complete Lorentz transformation is employed (i.e. including the time transformation), then the negative outcome can be explained. Furthermore, if the relativity principle is considered as valid from the outset, as in Albert Einstein's theory of special relativity (1905), then the result is quite clear, since an observer in uniform translational motion can consider himself as at rest, and consequently won't experience any effect of his own motion. Length contraction is thus not measurable by a comoving observer, and has to be supplemented by time dilation for non-comoving observers, which was subsequently also confirmed by the Trouton–Rankine experiment (1908) and the Kennedy–Thorndike experiment (1932).[3][4][A 1][A 2]
|
|
# Create a resizable symbol?
Some symbols can be resized automatically with \middle when placed between a \left. and a \right. They can usually also be resized using \big or \bigg. Examples of such symbols are | and / and pretty much all the brackets: (, [, \{, \langle, and so on. I have never seen a complete list of symbols which can be resized this way. Are they special in some way or is it possible to create new resizable symbols? If it is possible: what is the "correct" way to create a new resizable symbol? By that, I mean a symbol \mysymbol such that, e.g.
\left( \somethinghuge \middle\mysymbol b \right)
will have the "correct" size. Of course, bear in mind that one would usually want to apply this to already existing symbols which do not resize. Like the \# symbol, for instance.
-
The ability to stretch this way is a feature of the font metrics, so the list of extendible symbols depends on which font set you are using. – David Carlisle Feb 23 at 16:22
So that's basically a "no", right? Assuming that the symbol is not already extendible in the font I am using, there is no way to make it extendible? – Jesko Hüttenhain Feb 23 at 16:25
You should never say no, but basically that is the case. You can of course load the font at different sizes (or use \scalebox) together with some macros that measure something and choose an appropriate size for your symbol, but this will not work naturally with \left\right\middle – David Carlisle Feb 23 at 16:28
A list of resizable symbols can be found in the "Comprehensive LaTeX symbol list" or unimath-symbols.pdf for the fontspec package. – Toscho Feb 23 at 18:34
The characters which can be resized with \left, \middle and \right are those with a non-zero \delcode. Commands are also allowed, provided their expansion starts with \delimiter; they are defined by \DeclareMathDelimiter in fontmath.ltx:
%%% characters
\DeclareMathDelimiter{(}{\mathopen} {operators}{"28}{largesymbols}{"00}
\DeclareMathDelimiter{)}{\mathclose}{operators}{"29}{largesymbols}{"01}
\DeclareMathDelimiter{[}{\mathopen} {operators}{"5B}{largesymbols}{"02}
\DeclareMathDelimiter{]}{\mathclose}{operators}{"5D}{largesymbols}{"03}
\DeclareMathDelimiter{<}{\mathopen}{symbols}{"68}{largesymbols}{"0A}
\DeclareMathDelimiter{>}{\mathclose}{symbols}{"69}{largesymbols}{"0B}
\DeclareMathDelimiter{/}{\mathord}{operators}{"2F}{largesymbols}{"0E}
\DeclareMathSymbol{/}{\mathord}{letters}{"3D}
\DeclareMathDelimiter{|}{\mathord}{symbols}{"6A}{largesymbols}{"0C}
%%% commands
\DeclareMathDelimiter{\lmoustache} % top from (, bottom from )
{\mathopen}{largesymbols}{"7A}{largesymbols}{"40}
\DeclareMathDelimiter{\rmoustache} % top from ), bottom from (
{\mathclose}{largesymbols}{"7B}{largesymbols}{"41}
\DeclareMathDelimiter{\arrowvert}
{\mathord}{symbols}{"6A}{largesymbols}{"3C}
\DeclareMathDelimiter{\Arrowvert} % double arrow without arrowheads
{\mathord}{symbols}{"6B}{largesymbols}{"3D}
\DeclareMathDelimiter{\Vert}
{\mathord}{symbols}{"6B}{largesymbols}{"0D}
\let\|=\Vert
\DeclareMathDelimiter{\vert}
{\mathord}{symbols}{"6A}{largesymbols}{"0C}
\DeclareMathDelimiter{\uparrow}
{\mathrel}{symbols}{"22}{largesymbols}{"78}
\DeclareMathDelimiter{\downarrow}
{\mathrel}{symbols}{"23}{largesymbols}{"79}
\DeclareMathDelimiter{\updownarrow}
{\mathrel}{symbols}{"6C}{largesymbols}{"3F}
\DeclareMathDelimiter{\Uparrow}
{\mathrel}{symbols}{"2A}{largesymbols}{"7E}
\DeclareMathDelimiter{\Downarrow}
{\mathrel}{symbols}{"2B}{largesymbols}{"7F}
\DeclareMathDelimiter{\Updownarrow}
{\mathrel}{symbols}{"6D}{largesymbols}{"77}
\DeclareMathDelimiter{\backslash} % for double coset G\backslash H
{\mathord}{symbols}{"6E}{largesymbols}{"0F}
\DeclareMathDelimiter{\rangle}
{\mathclose}{symbols}{"69}{largesymbols}{"0B}
\DeclareMathDelimiter{\langle}
{\mathopen}{symbols}{"68}{largesymbols}{"0A}
\DeclareMathDelimiter{\rbrace}
{\mathclose}{symbols}{"67}{largesymbols}{"09}
\DeclareMathDelimiter{\lbrace}
{\mathopen}{symbols}{"66}{largesymbols}{"08}
\DeclareMathDelimiter{\rceil}
{\mathclose}{symbols}{"65}{largesymbols}{"07}
\DeclareMathDelimiter{\lceil}
{\mathopen}{symbols}{"64}{largesymbols}{"06}
\DeclareMathDelimiter{\rfloor}
{\mathclose}{symbols}{"63}{largesymbols}{"05}
\DeclareMathDelimiter{\lfloor}
{\mathopen}{symbols}{"62}{largesymbols}{"04}
\DeclareMathDelimiter{\lgroup} % extensible ( with sharper tips
{\mathopen}{largesymbols}{"3A}{largesymbols}{"3A}
\DeclareMathDelimiter{\rgroup} % extensible ) with sharper tips
{\mathclose}{largesymbols}{"3B}{largesymbols}{"3B}
\DeclareMathDelimiter{\bracevert} % the vertical bar that extends braces
{\mathord}{largesymbols}{"3E}{largesymbols}{"3E}
Others may be defined by other packages.
In order to know whether a character corresponds to a resizable delimiter, inspect its \delcode:
\showthe\delcode`\(
would output 164608. For a command, use \show; for instance, \show\bracevert would output
\bracevert:
macro:->\delimiter "033E33E
Some delimiters are arbitrarily resizable, like braces, which in large sizes are built up from repeatable parts. Others, like /, have a maximum size.
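For illustration, here is a minimal sketch of declaring a new delimiter command with \DeclareMathDelimiter. The name \mynorm is invented for this example; it simply reuses the small and large glyph slots that fontmath.ltx assigns to \Vert, so it resizes with \left and \right like any other delimiter:

```latex
\documentclass{article}
% \mynorm (invented name) reuses the glyph slots of \Vert,
% so it participates in \left/\right sizing like a delimiter.
% \DeclareMathDelimiter is a preamble-only command.
\DeclareMathDelimiter{\mynorm}
  {\mathord}{symbols}{"6B}{largesymbols}{"0D}
\begin{document}
\[ \left\mynorm \frac{a}{b} \right\mynorm \]
\end{document}
```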
Using \scalerel, one can make one argument match the vertical extent of another argument. The optional argument is a max-width constraint (in this case, I chose 2ex). Here's an example of the same code working on two different \somethinghuge arguments. In this case, I chose \mysymbol as a / sign.
\documentclass{article}
\usepackage{scalerel}
\def\mysymbol{/}
\begin{document}
\def\somethinghuge{\rule[-2.2ex]{2ex}{6ex}}
$\left( \somethinghuge \scalerel*[2ex]{\mysymbol}{\somethinghuge} b \right)$
\def\somethinghuge{\rule[-3.2ex]{2ex}{8ex}}
$\left( \somethinghuge \scalerel*[2ex]{\mysymbol}{\somethinghuge} b \right)$
\end{document}
Here's an equivalent syntax that would allow you to avoid typing \somethinghuge twice.
$\left( \scaleleftright[2ex]{.}{\somethinghuge}{\mysymbol} b \right)$
# Intuition behind normal probability plots?
I'm studying for an exam in experimental design. I know from previous exams that I will need to construct normal probability plots manually but strangely, this hasn't been mentioned in the literature.
I have been forced to rely on less reputable sources to get the gist of it; the method used here seems to reflect the one used in previous exams:
https://www.dummies.com/careers/project-management/six-sigma/how-to-construct-and-interpret-a-normal-probability-plot-for-a-six-sigma-project/
If I summarize:
First you rank all observations according to size, then you determine their cumulative probability via the formula $$p_i=\frac{i-0.5}{n}.$$ I don't quite understand the purpose behind this formula, but I think it tries to describe the theoretical distribution of the observations, "if" they were perfectly spread out and "if" they followed the normal distribution perfectly.
These values will constitute the y-values for our observations in our plot
Next, we check which z values correspond to our (theoretical) y-values, and plot these on our x-axis.
But if we plot our theoretical p values vs our theoretical z values, wouldn't every single value lie on the line? Shouldn't the actual values of the observations somehow determine their placement in the plot?
• The essence of this plot is to take observed values and plot them against what would be expected in a sample of the same size from a normal distribution. For that you pair off values your smallest versus the expected smallest, your second smallest versus the expected second smallest, and so on. $(i - 0.5)/n$ as plotting position is one of various conventional choices (see e.g. stata.com/support/faqs/statistics/… and its references), but you need to push that through a normal quantile function (inverse normal cumulative distribution function). – Nick Cox Oct 28 '18 at 11:57
• The plotting positions themselves are uniformly spaced and using them as coordinates would only be a test of a hypothesis of a uniform distribution (although that would still be a useful descriptive plot). I don't know what "the literature" is for you but this is explained in most general statistics texts that are not elementary. en.wikipedia.org/wiki/Normal_probability_plot is a start. – Nick Cox Oct 28 '18 at 12:00
• The URL you cite seems to have the right flavour, although I have not read every word. Note that although $p$ is fine as notation for cumulative probabilities, they aren't P-values in the usual sense of that term. – Nick Cox Oct 28 '18 at 12:04
Computation. Consider a normal sample of size $$n = 10:$$ small enough for easy computation by hand, but not necessarily large enough to make a useful normal probability plot. (Computations below are in R statistical software.)
set.seed(1028); x = round(rnorm(10, 100, 15), 1)
samp.q = sort(x); samp.q
[1] 75.9 78.7 83.8 101.9 105.0 107.8 116.3 123.7 128.8 140.5
One style of normal probability plot puts 'Theoretical Quantiles' on the horizontal axis and 'Sample Quantiles' on the vertical axis. As discussed in comments, the theoretical quantiles are the normal quantiles of $$(i - .5)/n:$$
i = 1:10; theor.q = qnorm((i-.5)/10); theor.q
[1] -1.6448536 -1.0364334 -0.6744898 -0.3853205 -0.1256613
[6] 0.1256613 0.3853205 0.6744898 1.0364334 1.6448536
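The same plotting positions and theoretical quantiles can be reproduced without R, using only the Python standard library (a sketch; statistics.NormalDist requires Python 3.8+):

```python
from statistics import NormalDist

n = 10
# plotting positions (i - 0.5)/n for i = 1, ..., n
p = [(i - 0.5) / n for i in range(1, n + 1)]
# theoretical quantiles: inverse standard-normal CDF of each position
theor_q = [NormalDist().inv_cdf(pi) for pi in p]
# these agree with the qnorm output above; e.g. the first is about -1.6449
print([round(q, 4) for q in theor_q])
```

Pairing these against the sorted sample values reproduces the coordinates of the normal probability plot.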
Then the normal probability plot is:
plot(theor.q, samp.q, pch=19)
abline(v = theor.q, col="green2")
The idea is that when normal data are plotted in this way, points will fall 'nearly' in a straight line. One method of illustrating this intended linearity is to plot the line $$y = \bar X + Sx,$$ based on the sample mean and standard deviation, through the plotted points. [Generally, we will not know the population mean and variance, so it would not be possible to plot a line based on population parameters. For such a small sample, these two lines may be quite different, so (even if population parameters were known) the line based on sample mean and standard deviation is often a more useful reference line for judging normality.]
abline(a = mean(x), b = sd(x), col="red")
abline(a = 100, b = 15, col="blue", lty="dotted")
Here is the default normal probability plot (also called 'normal quantile-quantile plot') in R; the default reference line passes through the first and third quartiles.
qqnorm(x); qqline(x)
Intuition. A normal probability plot with Sample Quantiles on the horizontal axis may be compared to the plot of the Empirical CDF of the sample. (The ECDF jumps up by $$1/n$$ at each observed data value.) With a large enough sample the ECDF of a sample approximates the CDF of the distribution (red curve).
The left-hand panel below shows an ECDF of a normal sample of size $$n = 15.$$ Notice that the values plotted on the vertical axis are $$1/n, 2/n, \cdots, n/n.$$ The CDF of the population from which the sample was drawn is shown as a red curve.
The right-hand panel shows the normal probability plot of same sample. The values on the vertical axis are theoretical quantiles corresponding to $$(i-.5)/n,\, i = 1, 2, \dots, n.$$ The red reference line is $$y = -\frac{\mu}{\sigma} + \frac{x}{\sigma}.$$ Roughly speaking, this line may be considered as a "quantile transformation to linearity" of the normal CDF.
Also for reference, grey points correspond to normal probability plots of 20 additional samples from the same normal distribution. Although the points of the normal probability plot of the sample (blue points) do not lie exactly on a straight line, they lie well within the 'cloud' of the normal probability plots of the 20 other normal samples.
Note: R code for the last figure:
set.seed(2018); n = 15; mu = 100; sg = 15
x = rnorm(n, mu, sg)
par(mfrow = c(1,2))
plot(ecdf(x), col="blue")
curve(pnorm(x, 100, 15), add=T, col="red", lwd=2)
qqnorm(x, pch=19, col="blue", datax=T)
for (j in 1:20) {
i = 1:n; tq = qnorm((i-.5)/n); sq = sort(rnorm(n,mu,sg))
points(sq, tq, col="grey") }
abline(a = -mean(x)/sd(x), b = 1/sd(x), col="red")
par(mfrow = c(1,1))
• Very helpful. Obvious to you and to many readers that you are giving R code -- but not to all readers, so I suggest you spell that out. – Nick Cox Oct 28 '18 at 19:40
• Mentioned R once towards the middle, but you're right. Put an additional mention of R right at the start. And another in a note at the end. – BruceET Oct 28 '18 at 19:46
# Cointegration between daily time series and intraday time series
I am working with time series data of daily prices, and intraday prices. For simplicity sake I will refer to the daily time series as 'A' and 'B', and the intraday time series of the same instruments as 'a' and 'b'. When I check for cointegration between A and B my results tell me that the series are indeed cointegrated, but when checking their intraday series, a and b, my analysis shows no cointegration unless I apply some form of differencing to the series ( ie. taking returns of a and b , or log(a) and log(b) ).
Is the conclusion here as simple as declaring that intraday the series are not cointegrated, but over longer time frames they are? Or can I reach some generalized conclusion that I should be able to expect some degree of mean reversion intraday of a and b due to the daily cointegration between A and B.
I am mainly having a hard time connecting whether or not there are implications to be drawn from daily results -> intraday data, or vice versa in general.
• Hi. There's nothing necessarily wrong with differencing two variables and then finding that they are cointegrated, but ONLY IF THAT DIFFERENCING DOESN'T MAKE THEM I(0). If the differenced variables are I(0), then it's not possible for them to be cointegrated, by definition. I would be careful testing for intraday cointegration. It's an unstable beast to begin with, and trying to test it intraday probably just makes it more unstable. Finally, testing for I(1) of intraday series seems like an exercise in futility because the respective series doesn't necessarily have time to trend. – mark leeds Oct 17 '18 at 8:52
• please see Verbeek (2012), *A Guide to Modern Econometrics*, for an excellent discussion of cointegration. – user22485 Oct 26 '18 at 9:08
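As background intuition (not a substitute for a proper Engle-Granger or Johansen test), two cointegrated series can be simulated as a shared random-walk trend plus independent stationary noise: the levels wander, but their spread does not. A sketch using only the Python standard library:

```python
import random

random.seed(42)
n = 5000

# common random-walk trend shared by both price series
trend = [0.0]
for _ in range(n - 1):
    trend.append(trend[-1] + random.gauss(0, 1))

# A and B = common trend plus independent stationary noise,
# so the spread A - B is stationary even though A and B each wander
A = [t + random.gauss(0, 1) for t in trend]
B = [t + random.gauss(0, 1) for t in trend]
spread = [a - b for a, b in zip(A, B)]

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

# the levels' variance is dominated by the trend; the spread's is not
print(variance(A) > 10 * variance(spread))
```

This is only a toy illustration of the mechanism, not evidence about any particular intraday data.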
Found 108 Documents (Results 1–100)
Multiplicity of nonnegative solutions for a class of fractional p-Laplacian system in $$\mathbb{R}^N$$. (English)Zbl 1475.35037
MSC: 35B38 35J92 35R11
Full Text:
Existence of nontrivial solutions for a class of Schrödinger-Maxwell system. (Chinese. English summary)Zbl 07404337
MSC: 35Q55 35Q61 35A15
Full Text:
Existence and multiplicity of positive solutions for a coupled fourth-order system of Kirchhoff type. (English)Zbl 07403726
MSC: 35B09 35J48 35J50
Full Text:
Full Text:
MSC: 35Jxx
Full Text:
Normalized solutions to the fractional Schrödinger equations with combined nonlinearities. (English)Zbl 1445.35307
MSC: 35R11 26A33 35J61
Full Text:
Variational reduction for semi-stiff Ginzburg-Landau vortices. (English)Zbl 1449.35213
MSC: 35J50 35J66
Full Text:
Solutions concentrating around the saddle points of the potential for two-dimensional Schrödinger equations. (English)Zbl 1415.35015
MSC: 35B25 35B33 35J61
Full Text:
Full Text:
A nonexistence result for blowing up sign-changing solutions of the Brezis-Nirenberg-type problem. (English)Zbl 1424.35134
MSC: 35J20 35J60
Full Text:
Nonlinear profile decomposition for the $$\dot{H}^{\frac{1}{2}} \times \dot{H}^{- \frac{1}{2}}(\mathbb{R}^d)$$ energy subcritical wave equation. (English)Zbl 1420.35164
MSC: 35L71 35L15
Full Text:
Ground states for a linearly coupled system of Schrödinger equations on $$\mathbb{R}^N$$. (English)Zbl 1398.35220
MSC: 35Q55 35A01 35J48
Full Text:
Full Text:
Sharp Adams-type inequality invoking Hardy inequalities. (English)Zbl 1398.46029
MSC: 46E35 35B33 46E30
Full Text:
Full Text:
Positive ground state of coupled systems of Schrödinger equations in $$\mathbb{R}^2$$ involving critical exponential growth. (English)Zbl 1387.35182
MSC: 35J50 35B33 35Q55
Full Text:
Full Text:
On the prescribed scalar curvature problem on $$S^{n}$$. I: Asymptotic estimates and existence results. (English)Zbl 1353.53046
MSC: 53C21 35J65 35B40
Full Text:
Existence and non-existence results for minimizers of the Ginzburg-Landau energy with prescribed degrees. (English)Zbl 1354.35038
MSC: 35J50 35J66
Full Text:
The existence of a ground-state solution for a class of Kirchhoff-type equations in $$\mathbb R^{N}$$. (English)Zbl 1346.35203
MSC: 35R09 35A15 35B09
Full Text:
MSC: 35B41
Full Text:
Full Text:
MSC: 53C20
Full Text:
Full Text:
A Fourier approach to the profile decomposition in Orlicz spaces. (English)Zbl 1311.46030
MSC: 46E35 46E30
Full Text:
Characterization of the lack of compactness of $$H^2_{\mathrm{rad}}(\mathbb R^4)$$ into the Orlicz space. (English)Zbl 1310.46033
MSC: 46E35 46E30 35B33
Full Text:
Local minimizers of the magnetic Ginzburg-Landau functional with $$S^1$$-valued order parameter on the boundary. (English)Zbl 1293.35311
MSC: 35Q56 35A01 35J20
Full Text:
Ground states for nonlinear Kirchhoff equations with critical growth. (English)Zbl 1300.35016
Reviewer: Yang Yang (Wuxi)
MSC: 35J20 35A15 35R09
Full Text:
MSC: 46E35
Full Text:
MSC: 35J75
Full Text:
On the elements involved in the lack of compactness in critical Sobolev embedding. (English)Zbl 1291.46030
Adimurthi (ed.) et al., Concentration analysis and applications to PDE. ICTS workshop, Bangalore, India, January 3–12, 2012. Basel: Birkhäuser/Springer (ISBN 978-3-0348-0372-4/hbk; 978-3-0348-0373-1/ebook). Trends in Mathematics, 1-15 (2013).
MSC: 46E35 35A27 46E30
Full Text:
MSC: 53C20
Full Text:
Full Text:
MSC: 35J35
Full Text:
An existence result for a class of $$p$$-Laplacian elliptic systems involving homogeneous nonlinearities in $$\mathbb R^N$$. (English)Zbl 1222.35080
MSC: 35J50 35A15
Full Text:
Existence and multiplicity of positive solutions of semilinear elliptic equations in unbounded domains. (English)Zbl 1226.35036
MSC: 35J61 35B09 35A01
Full Text:
On the lack of compactness in the 2D critical Sobolev embedding. (English)Zbl 1217.46017
MSC: 46E35 26D10 26D15
Full Text:
Dynamics of the modified viscous Cahn-Hilliard equation in $$\mathbb{R}^N$$. (English)Zbl 1223.35078
MSC: 35B41 35K30 37L15
Near boundary vortices in a magnetic Ginzburg-Landau model: their locations via tight energy bounds. (English)Zbl 1188.35186
MSC: 35Q56 49J20
Full Text:
Full Text:
Positive solutions for some Schrödinger equations having partially periodic potentials. (English)Zbl 1169.35331
MSC: 35J60 35J10 35D05
Full Text:
Solitons of linearly coupled systems of semilinear non-autonomous equations on $$\mathbb R^{n}$$. (English)Zbl 1148.35080
MSC: 35Q55 35J45
Full Text:
Full Text:
Full Text:
The prescribed boundary mean curvature problem on the standard $$n$$-dimensional ball. (English)Zbl 1190.53033
MSC: 53C21 35J92
Full Text:
Existence, multiplicity, perturbation, and concentration results for a class of quasi-linear elliptic problems. (English)Zbl 1293.35093
Electronic Journal of Differential Equations. Monograph 7. San Marcos, TX: Southwest Texas State University. 213 p., electronic only, open access (2006).
Full Text:
Prescribing $$Q$$-curvature on higher dimensional spheres. (English)Zbl 1211.58021
MSC: 58J60 53C21 35J35
Full Text:
Full Text:
Infinite-dimensional exponential attractors for nonlinear reaction-diffusion systems in unbounded domains and their approximation. (English)Zbl 1072.35045
MSC: 35B41 35K57 35K50
Full Text:
Conformal deformations of Riemannian metrics via “Critical point theory at infinity”: the conformally flat case with umbilic boundary. (English)Zbl 1080.58020
Bahri, Abbas (ed.) et al., Noncompact problems at the intersection of geometry, analysis, and topology. Proceedings of the conference on noncompact variational problems and general relativity held in honor of Haim Brezis and Felix Browder at Rutgers University, New Brunswick, NJ, USA, October 14–18, 2001. Providence, RI: American Mathematical Society (AMS) (ISBN 0-8218-3635-8/pbk). Contemporary Mathematics 350, 1-17 (2004).
Some generic results in mathematical physics. (English)Zbl 1255.82012
MSC: 82B20 47A05 54D45
Multiple solutions of nonlinear scalar field equations. (English)Zbl 1140.35401
MSC: 35J20 35J60 47J30
Full Text:
Hydrodynamical limit for a drift-diffusion system modeling large-population dynamics. (English)Zbl 1043.35106
MSC: 35L60 82D37
Full Text:
Full Text:
Full Text:
Nonvariational elliptic systems via Galerkin methods. (English)Zbl 1109.35040
Haroske, Dorothee (ed.) et al., Function spaces, differential operators and nonlinear analysis. The Hans Triebel anniversary volume. Based on the lectures given at the international conference on function spaces, differential operators and nonlinear analysis, FSDONA-01, Teistungen, Germany, June 28–July 4, 2001, in honor of the 65th birthday of H. J. Triebel. Basel: Birkhäuser (ISBN 3-7643-6935-3/hbk). 47-57 (2003).
MSC: 35J60 35J55 65N30
Multiple positive solutions for singularly perturbed elliptic problems in exterior domains. (English)Zbl 1274.35080
MSC: 35J20 35J60
Full Text:
A positive solution for an asymptotically linear elliptic problem on $$\mathbb R^{N}$$ autonomous at infinity. (English)Zbl 1225.35088
MSC: 35J60 58E05
Full Text:
Attractors for reaction-diffusion equations in unbounded domains. (English)Zbl 0953.35022
MSC: 35B41 35K57 35B40
Full Text:
Attractors for damped nonlinear wave equations in an infinite strip. (English)Zbl 0922.35021
Reviewer: E.Feireisl (Praha)
On an elliptic problem with critical nonlinearity in expanding annuli. (English)Zbl 0954.35068
MSC: 35J65 58E05 35J20
Full Text:
On a parabolic-elliptic problem. (Sur un problème parabolique-elliptique.) (French. English summary)Zbl 0922.35080
MSC: 35K65 47H20 35K60 35D05 47H06
Full Text:
Strong asymptotic stability for $$n$$-dimensional thermoelasticity systems. (English)Zbl 0958.35012
MSC: 35B40 35Q72 35B37 74F05
Full Text:
A quasilinear functional reaction-diffusion equation from climate modeling. (English)Zbl 0907.35064
MSC: 35K57 35R10 35B10
Full Text:
Exponential attractors for non-dissipative systems modeling convection in porous media. (English)Zbl 0959.35026
MSC: 35B41 76S05 35K50 35Q35 35K55
Equilibrium for perturbations of multifunctions by convex processes. (English)Zbl 0857.47031
Reviewer: J.F.Toland (Bath)
Full Text:
BV and Nikol’skij spaces and applications to the Stefan problem. (Italian. English summary)Zbl 0838.46027
MSC: 46E35 35R35
Full Text:
A global compactness result for quasilinear elliptic equation involving critical Sobolev exponent. (Chinese)Zbl 0842.35032
MSC: 35J60 35J20
Full Text:
On parabolic initial-boundary value problems with critical growth for the gradient. (English)Zbl 0836.35078
MSC: 35K60 47H05 35K20 35D05
Full Text:
Semilinear elliptic problems with singular potentials in $$\mathbb{R}^ N$$. (English)Zbl 0788.35040
Reviewer: P.Drábek (Plzeň)
MSC: 35J60 35R05 35A05
Full Text:
Existence and nonexistence of nontrivial axially symmetric solution for quasilinear elliptic systems with inhomogeneous term. (English)Zbl 0792.35060
MSC: 35J65 58E20 35J45
Full Text:
Exponential attractor for natural convection in porous medium. (Attracteurs exponentiels pour un modèle de convection naturelle en milieu poreux.) (French. Abridged English version)Zbl 0778.76088
MSC: 76R10 76S05 35Q35
Concentration of solutions to elliptic equations with critical nonlinearity. (English)Zbl 0761.35034
Reviewer: G.Hetzer (Auburn)
Full Text:
Critical Sobolev exponent and the dimension three. (English)Zbl 0754.35027
MSC: 35J20 35B45 49R50
A homotopical index and multiplicity of periodic solutions to dynamical systems with singular potential. (English)Zbl 0774.34028
MSC: 34C25 37G99 55P99
Full Text:
Solutions of elliptic equations with critical Sobolev exponent in dimension three. (English)Zbl 0754.35040
Reviewer: V.N.Pavlenko
Full Text:
Measures of weak non compactness in symmetric ideal spaces. (Italian. English summary)Zbl 0711.46026
MSC: 46E30 46E35 47H09
A multiplicity result for a variational problem with lack of compactness. (English)Zbl 0702.35101
Reviewer: O.Rey
MSC: 35J65 35A15 46E35
Full Text:
Sur un problème variationnel non compact: L’effet de petits trous dans le domaine. (On a variational problem with lack of compactness: The effect of small holes in the domain). (French)Zbl 0686.35047
Reviewer: O.Rey
MSC: 35J65 35B99
Some nonlinear elliptic problems with lack of compactness. (English)Zbl 0682.35035
Nonlinear variational problems, Vol. II, Proc. Int. Conf., Elba/Italy 1986, Pitman Research Notes Math. Ser. 193, 137-160 (1989).
Reviewer: G.Cerami
MSC: 35J65 35J20 35B99
# Square Matrices

A square matrix is a matrix with the same number of rows and columns; an $$n \times n$$ matrix is said to be of order $$n$$. Matrices with dimensions 2×2, 3×3, 4×4, 5×5, etc., are all square. More generally, a matrix with $$m$$ rows and $$n$$ columns has order $$m \times n$$ (read "m by n") and contains $$mn$$ elements: a matrix of order 2×3 has 6 elements, and one with 3 rows and 4 columns has 12. A matrix with a single row is a row matrix, one with a single column is a column matrix, and a 1×1 matrix $$[a]$$ is a singleton matrix.

The entries $$a_{ii}$$ form the main diagonal of a square matrix, running from the top-left corner to the bottom-right corner; the diagonal from the top-right corner to the bottom-left corner is called the antidiagonal or counterdiagonal. The trace $$\operatorname{tr}(A)$$ is the sum of the main-diagonal entries. If all entries outside the main diagonal are zero, the matrix is a diagonal matrix; the identity (or unit) matrix has 1s on the main diagonal and 0s everywhere else. Raising a matrix to a power means multiplying it by itself repeatedly, e.g. $$A^2 = A \cdot A$$.

The determinant is a number encoding certain properties of a square matrix; for a 3×3 matrix it expands into 6 terms (the rule of Sarrus). A square matrix $$A$$ is invertible if and only if its determinant is nonzero, in which case its inverse $$B$$ satisfies $$AB = BA = I$$; the inverse of a matrix product is the product of the inverses in reverse order, and the determinant of any orthogonal matrix is $$+1$$ or $$-1$$. The adjoint, denoted $$\operatorname{adj} A$$, is formed from the cofactors of the elements of the transposed matrix $$A^T$$. The characteristic polynomial of $$A$$ is $$p_A(X) = \det(X I_n - A)$$, and a symmetric matrix is positive-definite exactly when its associated quadratic form takes only positive values, equivalently when all of its eigenvalues are positive.
|
|
# GED Math : Decimals and Fractions
## Example Questions
### Example Question #72 : Numbers
Steve works for a retailer at a wage of $13.00 per hour. He normally works 40 hours per week; any hours in excess of that in any given week are paid at "time and a half" - that is, at 50% higher. How much did Steve earn for the week reflected by the time card in the above diagram? Possible Answers: Correct answer: Explanation: Below is the same time card, with the number of hours worked in each shift. Steve worked exactly 40 hours, his normal work week, so he will earn normal wage for the entire week. 40 hours at $13.00 per hour is $520.00.
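The "time and a half" rule described above can be sketched as a small helper. The function name and the 45-hour example are hypothetical illustrations; only the 40-hour case comes from the problem:

```python
def weekly_pay(hours, wage=13.00, overtime_threshold=40, overtime_mult=1.5):
    """Regular pay up to the threshold; hours beyond it earn 1.5x the wage."""
    regular = min(hours, overtime_threshold) * wage
    overtime = max(0, hours - overtime_threshold) * wage * overtime_mult
    return regular + overtime

print(weekly_pay(40))  # Steve's week: 520.0
print(weekly_pay(45))  # a hypothetical week with 5 overtime hours: 617.5
```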
### Example Question #32 : Decimals And Fractions
Each student at a classical studies school is required to take one of three languages - Latin, Greek, or Hebrew. Assume that no student takes more than one language.
One-fourth of the students decide to take Hebrew; half the remaining students decide to take Greek. What is the ratio of students who don't take Latin to those who do?
Explanation:
Since 1/4 of the students take Hebrew, 3/4 of the students don't take Hebrew. Half of these students take Greek, so the other half must take Latin; this is 3/8 of the students.
Therefore, 1/4 + 3/8 = 5/8 of the students don't take Latin.
The ratio of those not taking Latin to those who are is 5/8 : 3/8, or 5 : 3.
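The fraction bookkeeping above can be checked with Python's `fractions` module (a sketch):

```python
from fractions import Fraction

hebrew = Fraction(1, 4)
remaining = 1 - hebrew       # students not taking Hebrew
greek = remaining / 2        # half the remaining take Greek
latin = remaining / 2        # the other half take Latin
not_latin = hebrew + greek

print(not_latin, latin)      # 5/8 3/8
print(not_latin / latin)     # the ratio, 5/3 (i.e. 5 : 3)
```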
### Example Question #33 : Decimals And Fractions
Above is the menu for a coffee shop. Today, the shop has a special - buy two iced coffees of any size, and get a third iced coffee of the same size free.
Jerry orders three large iced coffees; Elaine orders two large cappucinos; George orders three butter croissants. List the three customers in ascending order by money spent. (You may ignore tax.)
George, Jerry, Elaine
Elaine, George, Jerry
George, Elaine, Jerry
Jerry, George, Elaine
George, Jerry, Elaine
Explanation:
Since the third large iced coffee is free, Jerry pays for two large iced coffees. He pays
.
Elaine pays for two large cappucinos. She pays
.
George pays for three butter croissants. He pays
.
In ascending order by amount spent, the three are George, Jerry, Elaine.
### Example Question #81 : Numbers
What is as an improper fraction?
Explanation:
To convert a mixed number, (whole number and fraction), to an improper fraction, (a fraction whose numerator is greater than the denominator), the steps are as follows:
Multiply the whole number by the denominator of the fraction. In this example the whole number is 17 and the denominator is 5.
Then add the numerator to that total.
This total represents the numerator in the improper fraction. The denominator does not change.
Therefore the improper fraction would be
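The conversion steps can be sketched in code. The quiz page lost the numerator of the mixed number, so the value below (a numerator of 2, giving 17 2/5) is only a hypothetical example built from the whole number 17 and denominator 5 mentioned in the explanation:

```python
def mixed_to_improper(whole, numerator, denominator):
    """whole n/d  ->  (whole * d + n) / d; the denominator is unchanged."""
    return whole * denominator + numerator, denominator

num, den = mixed_to_improper(17, 2, 5)  # numerator 2 is hypothetical
print(f"{num}/{den}")  # 87/5
```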
### Example Question #35 : Decimals And Fractions
Explanation:
Convert the decimal to a fraction.
Rewrite the expression.
Identify the least common denominator and convert the fractions.
### Example Question #82 : Numbers
What is a quarter of 492?
Explanation:
To find a fraction of a whole number, we will multiply the fraction by the whole number.
Now, we know a quarter is the same as 1/4. So, we get 1/4 × 492 = 123.
Therefore, a quarter of 492 is 123.
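In code, taking a fraction of a whole number is a single multiplication, e.g. with Python's `fractions` module:

```python
from fractions import Fraction

def fraction_of(frac, whole):
    """Multiply the fraction by the whole number."""
    return frac * whole

print(fraction_of(Fraction(1, 4), 492))  # 123
```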
### Example Question #83 : Numbers
What is half of 120?
Explanation:
To find a fraction of a whole number, we will multiply the fraction by the whole number.
Now, we know that half is the same as 1/2. So, we will multiply. We get 1/2 × 120 = 60.
Therefore, half of 120 is 60.
### Example Question #84 : Numbers
What is of ?
Explanation:
To find a fraction of a number, we will multiply the two together. So, we get
Now, we can simplify before we multiply. The 3 and the 9 can both be divided by 3. So, we get
Now, we simplify.
Therefore, of is .
### Example Question #31 : Decimals And Fractions
Multiply the following:
Explanation:
To multiply fractions, we will multiply the numerators together, then we will multiply the denominators together. So, we get
Now, we can simplify to make things easier. The 5 and the 25 can both be divided by 5. So, we get
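The multiply-then-cancel step can be checked with Python's `fractions` module. The inputs 5/6 and 7/25 are hypothetical stand-ins (the original fractions were lost from the page), chosen so the 5 and the 25 cancel as described:

```python
from fractions import Fraction

a, b = Fraction(5, 6), Fraction(7, 25)
product = a * b   # Fraction reduces automatically: the 5 cancels into the 25
print(product)    # 7/30
```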
### Example Question #86 : Numbers
Divide the following:
|
|
# An electron in a magnetic field
1. Mar 16, 2009
### dphysics
1. The problem statement, all variables and given/known data
The acceleration of an electron in a magnetic field of 87 mT at a certain point is 1.268×10^17 m/s^2. Calculate the angle between the velocity and magnetic field.
2. Relevant equations
F = q dot v cross B
F = qBsin(theta)
F/m = a
3. The attempt at a solution
I calculated the force on the electron using F/m = a rearranged to F = ma, then plugged F into F = qBsin(theta). I came up with sin-1(8243000) = theta, which is obviously incorrect.
Am I making a calculation error, or is there an error with my formulas?
In advance, any help is much appreciated and thank you.
2. Mar 16, 2009
### Queue
There is a mild error in your formula, at least a bit. You know F = ma, you know the mass of the electron (it's a constant) and you know a since it's given. You also know F = qB sin($$\theta$$) and you know B (given) and q, constant. So you get $$sin^{-1}(\frac{ma}{qB}) = \theta$$
Make sense?
3. Mar 16, 2009
### dphysics
Hmm, that is what I have above, at least I think.
Because when you substitute F = ma into F = qBsin(theta), you end up with ma = qbSin(theta), which can be arranged to sin-1(ma/qb) = Theta.
The values I used:
m = 9.1E-31 kg
q = 1.609E-19 C
B = 87E-3 T
a = 1.268E17
I still end up calculating 8243000, and when I try to take the inverse sine of that I get a domain error.
Last edited: Mar 16, 2009
4. Mar 16, 2009
### Queue
I get a very similar number for ma/(qb). Is it possible you have a or B written down incorrectly?
5. Mar 16, 2009
### dphysics
I just checked again, and those are the correct values for a and B.
6. Mar 16, 2009
### Queue
OH! F = qvBsin($$\theta$$). You missed the v, which should account for the 10^7 factor.
7. Mar 16, 2009
### dphysics
Ah alright, that does make sense then. How would I go about calculating the velocity though?
qvb = F
F = ma
qvb = ma
v = 8243000
qvB/ma = sin(theta)
qvB/ma = 1, giving you theta = 90 degrees.
This unfortunately is incorrect.. =\
8. Mar 16, 2009
### Queue
qvb = F? That's missing a sin(theta) isn't it, that would then explain how you get sin(theta) = 1.
I think there's an equation we're not looking at that we should be because otherwise we've two unknowns and only one question. Or perhaps velocity should be a given (which is how I see the question posed elsewhere online).
9. Mar 16, 2009
### dphysics
It mentions in an earlier part of the problem that the velocity is 9.40×10^6, however it doesn't specify if that remains constant throughout the problem.
Assuming for a moment that it does, lets try to plug that into the equation:
F = qvBsin(theta)
F = 1.609E-19 * 9.40×10^6 * 87E-3 (I'm assuming 87 mT = 87 E-3 T)
And, assuming that I can use F = ma (which I'm not sure about)
I can substitute ma = 9.1E-31 * 1.268E17, divide that by qvB
End up with sin-1(.8769)
= 61.27 deg
Which is the correct answer! Thanks a lot for your help, sorry I missed the velocity, probably would have helped from the beginning..
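The final computation can be reproduced in a few lines, using the values quoted in the thread (a quick sanity check, not part of the original posts):

```python
import math

# Givens quoted in the thread
m = 9.1e-31    # electron mass [kg]
q = 1.609e-19  # elementary charge [C] (value used above)
B = 87e-3      # magnetic field [T]
a = 1.268e17   # acceleration [m/s^2]
v = 9.40e6     # speed from the earlier part of the problem [m/s]

# ma = q v B sin(theta)  =>  theta = asin(ma / (q v B))
ratio = (m * a) / (q * v * B)
theta = math.degrees(math.asin(ratio))
print(f"sin(theta) = {ratio:.4f}, theta = {theta:.2f} degrees")  # ≈ 0.8769, ≈ 61.27
```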
10. Mar 16, 2009
### Queue
No worries; for all the online things I've worked with like that all the givens are constant unless stated otherwise.
Welcome, and good work!
|
|
Notes of Marc Burger’s third Oxford lecture 23-03-2017
Geometric structures, compactifications of representation varieties, and non archimedean geometry, III
1. To infinity (and beyond)
1.1. Compactification
We intend to describe the boundary of the representation variety ${Rep_{max}(\Gamma,Sp(2n,{\mathbb R}))}$.
Theorem 1 (Burger-Pozzetti) If ${\rho:\Gamma\rightarrow Sp(2n,{\mathbb R})}$ is a maximal representation, then
1. Nontrivial elements do not have eigenvalues of modulus 1. In particular, translation lengths are ${>0}$.
2. If ${\gamma}$ and ${\eta}$ represent intersecting closed geodesics in ${\Sigma}$, then the following analogue of Collar Lemma holds,
$\displaystyle \begin{array}{rcl} (e^{\frac{\ell(\rho(\gamma))}{\sqrt{n}}}-1)(e^{\frac{\ell(\rho(\eta))}{\sqrt{n}}}-1)\geq 1. \end{array}$
In particular, the vector-valued length function ${\nu_\rho}$ never vanishes.
Corollary 2 (Parreau) The projectived length map
$\displaystyle \begin{array}{rcl} \mathbb{P}\circ\nu:Rep_{max}(\Gamma,Sp(2n,{\mathbb R}))\rightarrow \mathbb{P}({\mathfrak{a}^+}^\Gamma) \end{array}$
has relatively compact image and any boundary point is the vector-valued length function of a ${\Gamma}$-action on a building associated to ${Sp(2n,{}^{\omega}{\mathbb R}_\lambda)}$ where ${{}^{\omega}{\mathbb R}_\lambda}$ is a Robinson field.
There is a loss of information.
Question 1. Does this building have special geometric properties?
Question 2. Is there a way to organize all these actions on buildings into a coherent compactification of ${Rep_{max}(\Gamma,Sp(2n,{\mathbb R}))}$?
The following is known since the 1980’s.
Theorem 3 (Skora) If ${n=1}$, boundary points are exactly length functions of actions on ${{\mathbb R}}$-trees with small stabilizers, i.e. stabilizers of germs of segments are either trivial or cyclic.
For ${n\geq 2}$, we need study sequences ${(\rho_k)}$ in ${Hom_{max}(\Gamma,Sp(2n,{\mathbb R}))}$ and the resulting actions an asymptotic cones of ${\mathcal{X}_n}$.
A sequence goes to infinity if some marked point, say ${o=iId\in\mathcal{X}_n}$, is moved farther and farther away by some element of a fixed generating system ${S}$. Pick a sequence ${\lambda=(\lambda_k)_{k\geq 1}}$ such that
$\displaystyle \begin{array}{rcl} \max_{s\in S}d(\rho_k(s)o,o)=O(\lambda_k). \end{array}$
Pick a non-principal ultrafilter ${\omega}$. Form the corresponding asymptotic cone ${{}^\omega\mathcal{X}_\lambda}$. It is a complete ${CAT(0)}$ metric space, with an isometric action ${{}^\omega\rho_\lambda}$ of ${\Gamma}$.
Example. ${n=1}$. Assume ${\rho_k}$ pinches some closed geodesic ${\gamma}$, and does not affect curves disjoint from ${\gamma}$. Due to the Collar Lemma, any closed curve intersecting ${\gamma}$ has length tending to infinity at speed ${\lambda_k=\log(1/\mathrm{length}(\rho_k(\gamma)))}$. In the limit, ${{}^\omega\rho_\lambda(\gamma)}$ has a fixed point.
Definition 4 A simple closed geodesic ${c}$ on ${\Sigma}$ is special if
1. ${\ell({}^\omega\rho_\lambda(\gamma))=0}$ whenever ${\gamma}$ represents ${c}$.
2. For any closed geodesic ${c'}$ intersecting ${c}$, ${\ell({}^\omega\rho_\lambda(\eta))>0}$ whenever ${\eta}$ represents ${c'}$.
There are at most ${3g-3}$ special geodesics.
Theorem 5 The isometric action ${{}^\omega\rho_\lambda}$ is faithful. Let ${\Sigma_v}$ be a component of the complement in ${\Sigma}$ of special geodesics. There is a dichotomy:
1. Either (PT): every curve in ${\Sigma_v}$ which is not a boundary component has ${\ell({}^\omega\rho_\lambda(c))>0}$.
2. Or (FP): ${\pi_1(\Sigma_v)}$ fixes a point in ${{}^\omega\mathcal{X}_\lambda}$.
This is used in the proof of
Theorem 6 (Burger-Pozzetti-Iozzi-Parreau) The ${\Gamma}$ action ${{}^\omega\rho_\lambda}$ on ${{}^\omega\mathcal{X}_\lambda}$ is small.
Another tool is the theory of maximal representations of surfaces with boundary. Indeed, the scale ${\lambda}$ is too large for certain components, so a scale needs to be chosen for each component. Only finitely many scales arise.
Personally, I am not too fluent with buildings; I prefer the language of Robinson fields.
|
|
# In situ calibration of large-radius jet energy and mass in 13 TeV proton–proton collisions with the ATLAS detector
Research output: Contribution to journalArticlepeer-review
## Abstract
The response of the ATLAS detector to large-radius jets is measured in situ using 36.2 fb$^{-1}$ of $\sqrt{s} = 13$ TeV proton-proton collisions provided by the LHC and recorded by the ATLAS experiment during 2015 and 2016. The jet energy scale is measured in events where the jet recoils against a reference object, which can be either a calibrated photon, a reconstructed $Z$ boson, or a system of well-measured small-radius jets. The jet energy resolution and a calibration of forward jets are derived using dijet balance measurements. The jet mass response is measured with two methods: using mass peaks formed by $W$ bosons and top quarks with large transverse momenta and by comparing the jet mass measured using the energy deposited in the calorimeter with that using the momenta of charged-particle tracks. The transverse momentum and mass responses in simulations are found to be about 2-3% higher than in data. This difference is adjusted for with a correction factor. The results of the different methods are combined to yield a calibration over a large range of transverse momenta ($p_{\rm T}$). The precision of the relative jet energy scale is 1-2% for $200~{\rm GeV} < p_{\rm T} < 2~{\rm TeV}$, while that of the mass scale is 2-10%. The ratio of the energy resolutions in data and simulation is measured to a precision of 10-15% over the same $p_{\rm T}$ range.
Original language: English
Article number: 135
Journal: European Physical Journal C: Particles and Fields
Volume: C79, issue 2
DOI: https://doi.org/10.1140/epjc/s10052-019-6632-8
Published: 13 Feb 2019
|
|
08-24-2017 | Insulin pumps | Article
# Closed-loop glucose control in young people with type 1 diabetes during and after unannounced physical activity: a randomised controlled crossover trial
Journal:
Diabetologia
Authors: Klemen Dovc, Maddalena Macedoni, Natasa Bratina, Dusanka Lepej, Revital Nimri, Eran Atlas, Ido Muller, Olga Kordonouri, Torben Biester, Thomas Danne, Moshe Phillip, Tadej Battelino
Publisher: Springer Berlin Heidelberg
## Abstract
Hypoglycaemia during and after exercise remains a challenge. The present study evaluated the safety and efficacy of closed-loop insulin delivery during unannounced (to the closed-loop algorithm) afternoon physical activity and during the following night in young people with type 1 diabetes.
A randomised, two-arm, open-label, in-hospital, crossover clinical trial was performed at a single site in Slovenia. The order was randomly determined using an automated web-based programme with randomly permuted blocks of four. Allocation assignment was not masked. Children and adolescents with type 1 diabetes who were experienced insulin pump users were eligible for the trial. During four separate in-hospital visits, the participants performed two unannounced exercise protocols: moderate intensity (55% of $$\overset{\cdot }{V}{\mathrm{O}}_{2\max }$$) and moderate intensity with integrated high-intensity sprints (55/80% of $$\overset{\cdot }{V}{\mathrm{O}}_{2\max }$$), using the same study device either for closed-loop or open-loop insulin delivery. We investigated glycaemic control during the exercise period and the following night. The closed-loop insulin delivery was applied from 15:00 h on the day of the exercise to 13:00 h on the following day.
Between 20 January and 16 June 2016, 20 eligible participants (9 female, mean age 14.2 ± 2.0 years, HbA1c 7.7 ± 0.6% [60.0 ± 6.6 mmol/mol]) were included in the trial and performed all trial-mandated activities. The median proportion of time spent in hypoglycaemia below 3.3 mmol/l was 0.00% for both treatment modalities (p = 0.7910). Use of the closed-loop insulin delivery system increased the proportion of time spent within the target glucose range of 3.9–10 mmol/l when compared with open-loop delivery: 84.1% (interquartile range 70.0–85.5) vs 68.7% (59.0–77.7), respectively (p = 0.0057), over the entire study period. This was achieved with significantly less insulin delivered via the closed-loop (p = 0.0123).
Closed-loop insulin delivery was safe both during and after unannounced exercise protocols in the in-hospital environment, maintaining glucose values mostly within the target range without an increased risk of hypoglycaemia.
Funding: University Medical Centre Ljubljana, Slovenian National Research Agency, and ISPAD Research Fellowship
|
|
qml.templates.embeddings¶
This module provides quantum circuit architectures that can embed classical data into a quantum state.
CV embeddings¶
SqueezingEmbedding(features, wires[, method, c]): Encodes $$N$$ features into the squeezing amplitudes $$r \geq 0$$ or phases $$\phi \in [0, 2\pi)$$ of $$M$$ modes, where $$N\leq M$$.
DisplacementEmbedding(features, wires[, …]): Encodes $$N$$ features into the displacement amplitudes $$r$$ or phases $$\phi$$ of $$M$$ modes, where $$N\leq M$$.
Qubit embeddings¶
AmplitudeEmbedding(features, wires[, pad, …]): Encodes $$2^n$$ features into the amplitude vector of $$n$$ qubits.
AngleEmbedding(features, wires[, rotation]): Encodes $$N$$ features into the rotation angles of $$n$$ qubits, where $$N \leq n$$.
BasisEmbedding(features, wires): Encodes $$n$$ binary features into a basis state of $$n$$ qubits.
Note
To make the signature of templates resemble other quantum operations used in quantum circuits, we treat them as classes here, even though technically they are functions.
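As a plain-Python illustration of the angle-embedding idea (a sketch of the underlying math, not PennyLane's implementation): each feature x_i becomes an RX rotation on its own qubit, so the Pauli-Z expectation on wire i is cos(x_i).

```python
import math

def angle_embedding_state(features):
    """Amplitudes of (RX(x_1) ⊗ ... ⊗ RX(x_n)) |0...0>.
    Each feature is one X-rotation: RX(x)|0> = cos(x/2)|0> - i sin(x/2)|1>."""
    state = [1.0 + 0j]
    for x in features:
        qubit = [math.cos(x / 2) + 0j, -1j * math.sin(x / 2)]
        # Kronecker product; the newest qubit becomes the least significant bit
        state = [amp * q for amp in state for q in qubit]
    return state

def expval_z(state, wire, n_wires):
    """<Z> on one wire: +1 for basis states where that wire is |0>, -1 for |1>."""
    total = 0.0
    for idx, amp in enumerate(state):
        bit = (idx >> (n_wires - 1 - wire)) & 1
        total += (1 - 2 * bit) * abs(amp) ** 2
    return total

state = angle_embedding_state([0.3, 1.2])
print(expval_z(state, 0, 2))  # cos(0.3) ≈ 0.9553
print(expval_z(state, 1, 2))  # cos(1.2) ≈ 0.3624
```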
|
|
## Reasoning about Left and Right Handed Coordinate Systems
A coordinate system can be defined by three perpendicular unit vectors. If the coordinate system is Cartesian, which direction does the $+x$ axis point? To resolve this problem, I define an orientation–a coordinate system…
|
|
# Polar curve
1. Apr 8, 2005
### ILoveBaseball
Find the area of the region which is inside the polar curve $$r =5*cos(\theta)$$
and outside the curve $$r = 3-1*cos(\theta)$$
When I plugged those two functions into my calculator, I found the bounds from 1.0471976 to 5.2359878.
my integral:
$$\int_{1.0471976}^{5.2359878} 1/2*(5*cos(\theta))^2 - 1/2*(3-1*cos(\theta))^2 \, d\theta$$
and got a final answer of
-4.10911954, which is incorrect. Anyone know what I did wrong?
2. Apr 8, 2005
### AKG
You can't simply plug stuff into your calculator, then plug numbers into a formula without thinking and expect to get the right answer. The first thing you should always do is draw yourself a picture. The first function looks like a teardrop, with the point on the origin, and the axis of the tear drop being the x-axis, and the drop getting fatter to the right, and then rounding off. Key points (in Cartesian co-ordinates) will be (0,0) and (5,0). Also, this function will repeat its behaviour after pi. It will trace out a tear drop when theta goes from 0 to pi, and then trace over that same path again from pi to 2pi.
The second graph looks like an egg. You can think of an egg having a fat bottom and narrowing at the top. For this egg, the top will be to the left of the y-axis, and the bottom will be to the right (it's an egg lying on its side, with axis being the x-axis again). Key points: (2, 0), (0, 3), (-4, 0), (0, -3).
You should be able to see two points of intersection, one in quadrant I (x> 0, y>0), the other in quadrant IV (x>0, y<0). You know that intersection only happens when both curves have the same r values AND the same theta values, or the r value of one is the negative of the other AND the theta value of one is that plus pi of the other. You will only satisfy this condition when cos(theta) = 1/2. You should know that theta = 60 degrees here. Or you might remember that sin(30 degrees) = 1/2 and should be able to tell from this that cos(60 degrees) will be 1/2, i.e. you should never use a calculator (do they even allow you to use one for tests anyway?) You can easily change 60 degrees into radians. Also, if cos(theta) is positive (like 1/2), then you can find values in quadrants I and IV for theta, so your other point of intersection will occur at -60 degrees.
Actually, you don't need that other point of intersection. You should see by symmetry that you can find the area by finding 2 x another area, where theta goes from 0 to pi/2. From 0 to pi/3, you will find the area under the second curve, and from pi/3 to pi/2, you will find the area under the first curve. Sum these, multiply by 2 and get your answer.
You can and should do this all without a calculator, and I can't imagine how you could do this without drawing pictures and thinking about it and just using a calculator. You should draw a picture, and verify the statements I've made:
1) The first function looks like a teardrop, with the point on the origin, and the axis of the tear drop being the x-axis, and the drop getting fatter to the right, and then rounding off. Key points (in Cartesian co-ordinates) will be (0,0) and (5,0).
2) The second graph looks like an egg. You can think of an egg having a fat bottom and narrowing at the top. For this egg, the top will be to the left of the y-axis, and the bottom will be to the right (it's an egg lying on its side, with axis being the x-axis again). Key points: (2, 0), (0, 3), (-4, 0), (0, -3).
3) You should be able to see two points of intersection, one in quadrant I (x> 0, y>0), the other in quadrant IV (x>0, y<0).
4) You know that intersection only happens when both curves have the same r values AND the same theta values, or the r value of one is the negative of the other AND the theta value of one is that plus pi of the other.
5) You will only satisfy this condition when cos(theta) = 1/2. Note you are satisfying two conditions here with cos(theta), the first one where r and theta values are the same, the second where they are opposite (and you should be able to see why theta is the opposite of pi + theta)
6) You should see by symmetry that you can find the area by finding 2 x another area, where theta goes from 0 to pi/2
7) From 0 to pi/3, you will find the area under the second curve, and from pi/3 to pi/2, you will find the area under the first curve.
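To sanity-check the setup numerically: the problem statement asks for the region inside r = 5cos(theta) and outside r = 3 - cos(theta), and using the intersection angles theta = ±pi/3 found above, that area is 2 × (1/2)∫ from 0 to pi/3 of [(5cos(t))^2 - (3 - cos(t))^2] dt. The sketch below estimates this integral with a midpoint rule; treat the number as a check on that particular setup, since the thread never states a final value:

```python
import math

def area_inside_circle_outside_limacon(n=200_000):
    """Midpoint-rule estimate of the area inside r = 5cos(t), outside r = 3 - cos(t).
    By symmetry, the factor 2 cancels the 1/2 in the polar-area formula."""
    a, b = 0.0, math.pi / 3
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        total += (5 * math.cos(t)) ** 2 - (3 - math.cos(t)) ** 2
    return total * h

print(round(area_inside_circle_outside_limacon(), 4))  # ≈ 13.5339
```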
|
|
## Troubling Triangles
View as PDF
Points: 7
Time limit: 1.0s
Memory limit: 64M
Problem type
Having covered addition, Xyene's teacher moves on to something much more challenging - analytical geometry. Once again out of his depth, Xyene approaches you for help with his homework.
For today's homework, Xyene has () triangles. Each triangle is defined by the points , , (). His task is to calculate both the area () and perimeter () of the triangle . A difficult task indeed, but thankfully he has you to help! Do Xyene's homework for him so he doesn't have to.
#### Input Specification
One line containing , the number of triangles to follow. The next lines contain 6 integers separated by single spaces: of each .
#### Output Specification
lines, containing and to at least 2 decimal places separated by a single space.
#### Sample Input
1
0 0 0 1 1 1
#### Sample Output
0.50 3.41
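One way to reproduce the sample (a sketch, not a reference solution): the shoelace formula for the area, and summed side lengths for the perimeter.

```python
import math

def triangle_stats(x1, y1, x2, y2, x3, y3):
    """Area via the shoelace formula; perimeter as the sum of side lengths."""
    area = abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0
    perimeter = (math.dist((x1, y1), (x2, y2))
                 + math.dist((x2, y2), (x3, y3))
                 + math.dist((x3, y3), (x1, y1)))
    return area, perimeter

# Sample case: triangle (0,0), (0,1), (1,1)
a, p = triangle_stats(0, 0, 0, 1, 1, 1)
print(f"{a:.2f} {p:.2f}")  # 0.50 3.41
```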
• NightingGale commented on July 29, 2019, 9:49 p.m.
These triangles are troubling.
• georgehtliu commented on April 27, 2019, 6:28 p.m.
Can someone please look at my code? I'm not sure why it's wrong.
• Zeyu commented on April 28, 2019, 7:09 a.m. edited
Precision is an important aspect of this problem, try to keep your values as precise as possible! You're also printing in a strange format, you'll need to print the numbers without scientific notation.
• georgehtliu commented on April 28, 2019, 7:23 p.m.
Ah thanks. I'm gonna try it using c++ instead.
• sankeeth_ganeswaran commented on April 19, 2019, 3:05 p.m.
I'm a bit confused, what's wrong with my code?
• Dingledooper commented on April 20, 2019, 6:52 p.m.
You need to print them separated by a single space.
• subscriber commented on April 23, 2019, 11:42 a.m.
In addition to the mighty Dingledooper (aka: Jack from Vic Park), a new line is required after each output
• sankeeth_ganeswaran commented on April 21, 2019, 11:12 a.m.
loool yea i totally forgot that the output should be seperated by spaces. thanks everyone
• magicalsoup commented on April 20, 2019, 5:14 p.m.
I'm also pretty sure you are supposed to print it to at least 2 decimal places, the way java has decimals is weird, so format it with string.format() or printf()
• Rimuru commented on April 20, 2019, 4:11 p.m.
I don't think you're using the correct formula to approach this problem.
• silentes commented on Dec. 12, 2018, 4:33 p.m. edited
Can someone look at my solution? What is the 'value error'?
• Roynaruto commented on March 26, 2018, 3:10 p.m. edited
Your program returned with a nonzero exit code (if you're not using a native language like C++, it crashed). For languages like Python or Java, this will typically be accompanied with the name of the exception your program threw, e.g., NameError or java.lang.NullPointerException, respectively.
You can check other status codes here.
Edit: I forgot to press the reply button. ;-;
• nickflyers18 commented on March 26, 2018, 2:55 p.m.
What does IR mean?
• TimothyW553 commented on March 26, 2018, 6:53 p.m.
This comment is hidden due to too much negative feedback.
• Kirito commented on June 3, 2016, 9:45 p.m.
This comment is hidden due to too much negative feedback.
• Pleedoh commented on May 22, 2017, 7:46 p.m.
I guess so, maybe it means like always in a positive quadrant, the triangle is arranged in the same sort of way on a Cartesian grid?
• Itachi commented on June 3, 2016, 9:52 p.m.
This comment is hidden due to too much negative feedback.
• bobhob314 commented on Jan. 4, 2015, 7:18 p.m.
|
|
# What is the procedure to follow for “traffic not in sight”?
What would be the proper procedure to follow, when in the pattern or approaching an airport, if ATC gives you the position of an aircraft and you do not have them in sight? Do you just report "not in sight" and wait to turn base?
## 2 Answers
Please don't use "Not in sight". Parts of radio transmissions, such as single words, can easily be cut off or missed, making it sound like you said "In sight". I commonly hear pilots use the phrase "Looking out" or something similar, which conveys a clear message: "I understood what you said, but I don't see the traffic yet".
ATC provides traffic information when there is a risk of conflict, or in the traffic circuit if we want you to follow another aircraft. No matter the reason, we will continue to update you on the traffic either until it is no longer a factor, or until you report having the traffic in sight. So, when you receive traffic information, keep looking until you see the traffic. We will keep you updated as long as it is still relevant. And please do report when you spot the traffic, that is a super helpful piece of information to us.
In the traffic circuit specifically, if you can't see the traffic being called out, you can always ask the tower to confirm you are cleared for base and final turn. When I have someone on downwind who can't spot the aircraft to follow, I will typically just tell them to continue downwind and then I will call their base turn when appropriate. But, as always, if in doubt, ask.
• The correct phrasology when receiving a traffic service in the UK is Traffic not sighted. This doesnt reconcile well with your comment (which is of course true) about the odd word sometimes getting dropped. – Jamiec Sep 9 '19 at 12:37
• @Jamiec That phraseology is UK specific. It does not comply with ICAO SARPS: skybrary.aero/index.php/Traffic_Information – expeditedescent Sep 9 '19 at 13:18
In the USA, we attended a safety seminar, where the preferred response was "Negative contact, Nxxxxx", or "Traffic in sight, Nxxxxx".
My panel now shows ADS-B In traffic, so planes with ADS-B Out are a lot easier to spot as we now have a much better idea of where to look.
|
|
# Tomorrow if not 700+, is 750 possible in 4 weeks? : General GMAT Questions and Strategies - Page 2
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 7185
Location: Pune, India
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
12 Jul 2011, 02:08
abhicoolmax wrote:
Thanks Karishma. Just want to confirm - are you implying that even if one is close to perfect in the rest of the test, getting a couple wrong at the end might push him or her to a lower percentile. Correct?
Anyways, for me I got #2 #7 #10 #31 and #34 wrong. I think obviously I screwed up big time in the beginning and then also screwed up towards the end. So that explains the drop in my %tile.
How about this strategy? I take as much time as I need, double checking each answer, to make sure the first 10 are 100% correct, and then speed up in the middle, #11-#27. Later I take as much time as I need to solve the last 10. Thing is, time is never a problem for me. I always have ~10 mins (or more) left at the end of the section. I just cruise through the 37 questions w/o even looking at the time for the most part. Only when I am about 1/2 way through do I make sure I have > 50% time left, and that's about it - also, when I take 3+ mins, I kind of sense that something is not right and I am taking way too long - my brain quickly switches to process of elimination then. I have never found myself short of time. So maybe if I deliberately spend more time in the beginning, it might help me keep a close to perfect result, if not perfect. Any suggestion? Have you come across a similar situation when dealing with any of your students?
ps... One of the reasons I don't force myself to apply any strategy in Quant is that I want to just enjoy the whole section; that keeps me fresh to fight the monster Verbal section coming up (starting to enjoy it a lot more now). I am just worried applying any strategy in Quant might make me more stressed, hampering Verbal - that's the last thing I need at this point. Thoughts?
Actually, I think each question is equally important. But if I had to be extra careful, I would do it in the last few questions (provided I am not running short of time). It makes logical sense to me that I don't have the option of rectifying my 'careless mistakes' after the last few questions. I can amend if I do make some in the beginning/middle. But I wouldn't expect you to get the 2nd question wrong given that the first 4-5 are absolute sitters. So in that sense, you should not make mistakes in the first few questions. (Not because it will affect your %ile but because they are very easy.) Again, the exact actual scoring algorithm is not known so we can only speculate.
I would suggest that you not worry too much about it and just 'enjoy' the section. But do make sure, before confirming your answer, that you have answered exactly what was asked and that, in DS questions, you have considered each statement independently. These two things help cover a lot of careless mistakes - those are the only ones you actually have control over. And yes, if you don't need to, don't use any 'strategies'. If you do need to use some, make them second nature before the test. (e.g. When I was going through the OG before my GMAT, I found that I often used data from statement 1 while considering statement 2 alone and hence tripped. So I started considering statement 2 first, then re-reading the question stem, and then statement 1. I did this for all the following questions, and that's how I attempted the questions in the exam too.)
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199
Veritas Prep Reviews
Senior Manager
Joined: 05 Jul 2010
Posts: 359
Followers: 15
Kudos [?]: 51 [0], given: 17
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
13 Jul 2011, 15:23
VeritasPrepKarishma wrote:
Actually, I think each question is equally important. [...] If you do need to use some, make them second nature before the test.
Thanks again Karishma. I appreciate your time and the help. Yes, I am planning to continue doing what I normally do in Quant. Double-checking every answer before marking won't hurt - unless I am in the last 10 and running out of time. Let's see if that helps me get a close-to-perfect score next time.
Senior Manager
Joined: 08 Jun 2010
Posts: 397
Location: United States
Concentration: General Management, Finance
GMAT 1: 680 Q50 V32
Followers: 3
Kudos [?]: 88 [0], given: 13
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
13 Jul 2011, 21:21
Are your Aristotle RC/SC documents helping you push your score up?
Senior Manager
Joined: 05 Jul 2010
Posts: 359
Followers: 15
Kudos [?]: 51 [0], given: 17
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
14 Jul 2011, 01:31
mourinhogmat1 wrote:
Are your Aristotle RC/SC documents helping you push your score up?
The SC theory part is just OK. It was a good recap while keeping MGMAT SC by my side. The exercise helped me realize that it was the tenses that had been the real problem for me, so in that way Aristotle SC was good for me. I have not started their questions yet; I will let you know when I get to them. It is a small book and a good recap - it took me one weekend day to go over the theory part of the book.
RC is simply awesome. I have been doing 2 every day and it has given a great boost to my confidence. I don't think I will get to all 99, but if I can finish 50% by the test date, it would be great. I highly recommend this book.
Posted from GMAT ToolKit
Manager
Joined: 14 Apr 2011
Posts: 199
Followers: 2
Kudos [?]: 24 [0], given: 19
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
14 Jul 2011, 21:38
Hi abhicoolmax,
You have awesome Quant scores - make sure to keep brushing up to stay at that level. For Verbal, what I completely missed in your posts is your analysis of the MGMAT CAT results for the Verbal section. As you may already know, accuracy alone is not of primary importance on the GMAT (what I mean is that it is important, but by itself it cannot guide you about your weaknesses and areas to develop). If you have not focused on analyzing your verbal score, please do that and share it with us. A good article on how to analyze is: http://www.manhattangmat.com/blog/index ... ice-tests/
For all three areas - SC, CR and RC - you should share accuracy along with the difficulty level of the questions you got correct vs. incorrect. For example, an accuracy of 45% in SC while getting 730-level questions correct is great - that is a strength. But an accuracy of 70% while getting only 670-level questions correct is not a strength (if you are aiming for 750) - it is an area of improvement.
Hope this helps - I learned all this during this week while analyzing my first MGMAT CAT scores. And as a side note, can you share some tips on Quant? What sources did you use? I'd be happy with a Q50
_________________
Looking for Kudos
Senior Manager
Joined: 12 Dec 2010
Posts: 282
Concentration: Strategy, General Management
GMAT 1: 680 Q49 V34
GMAT 2: 730 Q49 V41
GPA: 4
WE: Consulting (Other)
Followers: 9
Kudos [?]: 47 [0], given: 23
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
16 Jul 2011, 19:55
vinayrsm wrote:
Hi abhicoolmax, You have awesome Quant scores - make sure to keep brushing up to stay at that level. [...] And as a side note, can you share some tips on Quant? What sources did you use? I'd be happy with a Q50
Probably you did not pay attention to his profile (IIT Bombay with 9+ CGPI) - doesn't it tell you something about him? On a side note, if you are still wondering what I mean, then please let me know
_________________
My GMAT Journey 540->680->730! ~ When the going gets tough, the Tough gets going!
Manager
Joined: 14 Apr 2011
Posts: 199
Followers: 2
Kudos [?]: 24 [0], given: 19
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
16 Jul 2011, 22:02
Hi Yogesh, yes, I did notice that after a few posts, but then again, assuming an IITian will naturally have a high quant score without practice would be an assumption (perhaps a reasonable one); the two variables may be correlated but are not necessarily causal - hehe, applying CR skills here. I am myself an IIT alum (BTech from IIT Kanpur), though I graduated 10 years ago. And as you know, IIT maths is poles apart from GMAT maths; CAT (the test to get into the Indian management schools, the IIMs) quant is similar to GMAT. The reason I asked Abhi is that any tip is useful, and sometimes people scoring Q51 have very insightful tips.
_________________
Looking for Kudos
Senior Manager
Joined: 12 Dec 2010
Posts: 282
Concentration: Strategy, General Management
GMAT 1: 680 Q49 V34
GMAT 2: 730 Q49 V41
GPA: 4
WE: Consulting (Other)
Followers: 9
Kudos [?]: 47 [0], given: 23
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
16 Jul 2011, 23:00
vinayrsm wrote:
Hi Yogesh, yes, I did notice that after a few posts [...] sometimes people scoring Q51 have very insightful tips.
Hehe... lots of folks from brand IIT out here - good to see. That aside, you are right: being an IITian does not by default give you a high score in quant - no option other than to practice hard! Also, on the comparison of IIT maths to GMAT and CAT quant - yeah, you are right, they are quite different, but I just want to add that CAT quant is quite a bit more difficult than GMAT quant, IMO!
_________________
My GMAT Journey 540->680->730! ~ When the going gets tough, the Tough gets going!
Senior Manager
Joined: 05 Jul 2010
Posts: 359
Followers: 15
Kudos [?]: 51 [0], given: 17
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
17 Jul 2011, 08:50
vinayrsm wrote:
Hi abhicoolmax, You have awesome Quant scores - make sure to keep brushing up to stay at that level. [...] I'd be happy with a Q50
Hi vinayrsm,
Thanks for sharing the link that shows how to do the MGMAT CAT analysis. I finally did an analysis of my last test. My accuracy rate was:

| Ques | Total | Right | Wrong | % | Time (right) | Time (wrong) | Difficulty (right) | Difficulty (wrong) |
|------|-------|-------|-------|-----|--------------|--------------|--------------------|---------------------|
| SC | 15 | 8 | 7 | 53% | 0:54 | 1:26 | 620 | 720 |
| CR | 14 | 10 | 4 | 71% | 1:55 | 2:31 | 660 | 650 |
| RC | 12 | 6 | 6 | 50% | 2:49 | 1:53 | 650 | 650 |

I got many difficult SCs wrong. In CR I made some silly mistakes. In RC I think I am much better off right now than I was when I gave this test. Anyhow, let's see how my next test goes. Further low-level analysis helped me even more: I realized I am getting more Assumption questions wrong in CR, and it also helped me spot weaknesses in SC.
About tips for Quant: man, trust me, I have hardly spent any time working on my Quant skills; however, I feel sharper in Quant than I was when I started. Just a couple of tips from me:
During the test, if you realize that you are taking too long, it is good to quickly look at the answer choices to find clues. Sometimes there is a clue in the answers that helps you solve more complicated problems.
And in DS, it is very important to read the stimulus and come up with a simplified version of it. GMAT plays tricks with many things: positive integers, NOT telling you whether a number is a fraction or an integer, etc. Only when you are sure about what is asked in the stimulus should you evaluate the statements. In some of the MGMAT tests I got a few DS questions wrong because I made wrong assumptions. That's when I realized I need to be extra careful with DS.
Manager
Joined: 14 Apr 2011
Posts: 199
Followers: 2
Kudos [?]: 24 [0], given: 19
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
17 Jul 2011, 12:03
Hi Abhi, good that you analyzed your results using the article from the MGMAT blog. As you stated, it looks like you need to focus on SC, RC and CR, in that order (and you have already worked on these). I think your first target should be to ensure that you get 600-700 level questions correct in all three areas. This will allow you to get to the higher-difficulty questions and will raise your score. Good luck for your next test!
_________________
Looking for Kudos
Senior Manager
Joined: 05 Jul 2010
Posts: 359
Followers: 15
Kudos [?]: 51 [0], given: 17
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
17 Jul 2011, 13:12
vinayrsm wrote:
Hi Abhi, good that you analyzed your results using the article from the MGMAT blog. [...] Good luck for your next test!
Thanks vinayrsm. I really hope that in the next test (GMATPrep) I get past 720, which would give me the confidence that in the next 4 weeks or so I can boost my score to 750 or even more
Senior Manager
Joined: 05 Jul 2010
Posts: 359
Followers: 15
Kudos [?]: 51 [0], given: 17
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
22 Jul 2011, 18:12
Hey Karishma,
In error-log-verbal-help-needed-116330.html#p940900 , you said the CR and RC in OG12 and OG Verbal are not on par with high-scoring questions on the GMAT - is that true? I have 80+% accuracy in medium- and hard-level CR in < 2 mins, and 90+% accuracy in easy ones in < 1.5 mins. I have been assuming my CR has improved based on these stats. If you think I should doubt these stats, what should I take up next? I was planning on using the Aristotle CR set next.
I am still not fully comfortable with hard RC. My mid-to-hard RC accuracy has been ~50-60% in OG12 and OG Verbal so far, and I still have a few passages to go in each. I did some Aristotle RCs but then stopped, as I was not sure of their GMAT-likeness. What else would you recommend? Thanks.
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 7185
Location: Pune, India
Followers: 2167
Kudos [?]: 14015 [0], given: 222
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
24 Jul 2011, 23:39
abhicoolmax wrote:
Hey Karishma, In error-log-verbal-help-needed-116330.html#p940900 , you said the CR and RC in OG12 and OG Verbal are not on par with high-scoring questions on the GMAT - is that true? [...] What else would you recommend? Thanks.
Yes, my opinion on the difficulty level of the OG hasn't changed. Especially CR, I think, is way easier in the OG. The questions at the end are good, but still, much more practice is needed. For RC too, extra practice is a must. I cannot comment on Aristotle CR and RC since I haven't gone through those books. But I know that the Veritas RC and CR books have some great strategies and questions. I especially love the RC book, since it has passages in line with the hardest passages you could see on the GMAT.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199
Veritas Prep Reviews
Senior Manager
Joined: 11 May 2011
Posts: 372
Location: US
Followers: 3
Kudos [?]: 96 [0], given: 46
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
20 Aug 2011, 08:34
abhicoolmax wrote:
I think you summed up the situation pretty well. Man, my RC accuracy improved to 50% (from ~25% in the previous test) and CR improved to 80% (from ~30%).
Hi Abhi,
I was reading this thread and understand that your CR accuracy jumped from 30% to 80% - that's huge. Can you give me some suggestions for the same?
How is your RC going? I understand it was problematic for you. Do you have a strategy for that as well?
I'm also writing GMAT on 15th Sept and still struggling with my CR and RC.
Cheers,
Aj.
_________________
-----------------------------------------------------------------------------------------
What you do TODAY is important because you're exchanging a day of your life for it!
-----------------------------------------------------------------------------------------
Senior Manager
Joined: 05 Jul 2010
Posts: 359
Followers: 15
Kudos [?]: 51 [0], given: 17
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
20 Aug 2011, 10:57
Ajay369 wrote:
I think you summed up the situation pretty well. Man, my RC accuracy improved to 50% (from ~25% in the previous test) and CR improved to 80% (from ~30%).
Hi Abhi,
I was reading this thread and understand that your CR accuracy jumped from 30% to 80% - that's huge. Can you give me some suggestions for the same?
How is your RC going? I understand it was problematic for you. Do you have a strategy for that as well?
I'm also writing GMAT on 15th Sept and still struggling with my CR and RC.
Cheers,
Aj.
I think I was in your situation few weeks ago. When I thought my SC is good, and CR and RC, mainly RC, are killing me.
So I devoted 1 full week to RC - read Powerscore RC and did some RCs from the OG - and, in the meantime (and from before), I had been going through Powerscore CR, and guess what - my RC and CR accuracy went up tremendously. RC is now my strength. Check here for how I did it - rc-review-tips-117426.html#p962206
In my last couple of tests I realized that I was getting most RC and CR correct, but, as a result, I was getting TOUGH SCs in the tests and, guess what, I realized my SC skills were completely f'ed up. I just didn't know what I was doing. I didn't think I could overcome SC just like that.
So I started with a short crash course on E-GMAT and that finally was the last missing piece of the puzzle. Check here on how I did it - need-help-in-sc-119050.html#p963453
CR was pretty natural for me since the beginning. But once I got myself familiar with all the parts of CR using Powerscore CR book, everything fell into place.
NOW, I think, I have the perfect strategy in place for RC, CR and SC - one that suits my skills, since it varies from person to person. How can you tell if a strategy is perfect for you? You will start to feel it, and EVERYTHING will start making sense - moreover, you will start to enjoy it more than anything. I just have to practice now to concretize these strategies and to perform under time pressure. We'll see if these strategies actually work - I will take a mock in 2 days to evaluate where I am.
Hope this helps.
Last edited by abhicoolmax on 20 Aug 2011, 14:39, edited 1 time in total.
Senior Manager
Joined: 11 May 2011
Posts: 372
Location: US
Followers: 3
Kudos [?]: 96 [0], given: 46
Re: Tomorrow if not 700+, is 750 possible in 4 weeks? [#permalink]
### Show Tags
20 Aug 2011, 11:58
Thanks a lot Abhi.
Cheers,
Aj.
_________________
-----------------------------------------------------------------------------------------
What you do TODAY is important because you're exchanging a day of your life for it!
-----------------------------------------------------------------------------------------
# Math Help - Inverse help!
1. ## Inverse help!
If f(x) = (x+1)/(x+3), find g, its inverse, if it exists.
2. Finding the Inverse of a Function
This site gives a good explanation.
$y=\frac {x+1}{x+3}$
Solve for x:
$y(x+3)=x+1$
$yx+3y=x+1$
$3y-1=x-yx$
$3y-1=x(1-y)$
$x=\frac {3y-1}{1-y}$
Now switch the x and y:
$y = \frac {3x-1}{1-x}$
3. Originally Posted by antz215
If f(x) = (x+1)/(x+3), find g, its inverse, if it exists.
$y = f(x) = \frac{x+1}{x+3}~,~x\neq -3$
Now change the variables:
$g: x = \frac{y+1}{y+3}~\iff~x(y+3)=y+1$
$g: xy+3x = y+1~\iff~ 3x-1=y-xy~\iff~3x-1=y(1-x)$
$g(x)=y=\frac{3x-1}{1-x}~,~x\neq 1$
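As a quick sanity check on the algebra in the two solutions above (a Python sketch, not part of the original posts), you can compose the derived inverse with f and confirm you get x back. Exact rational arithmetic via `fractions.Fraction` avoids floating-point noise:

```python
from fractions import Fraction

def f(x):
    # original function: f(x) = (x + 1) / (x + 3), undefined at x = -3
    return (x + 1) / (x + 3)

def g(x):
    # derived inverse: g(x) = (3x - 1) / (1 - x), undefined at x = 1
    return (3 * x - 1) / (1 - x)

# verify g(f(x)) == x and f(g(x)) == x on a few sample points
for x in [Fraction(0), Fraction(2), Fraction(-5), Fraction(7, 2)]:
    assert g(f(x)) == x
    assert f(g(x)) == x
print("inverse verified on sample points")
```

Any finite sample can't prove the identity in general, of course, but it catches a sign or swap error in the derivation instantly.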
4. Thank you both very much!!
5. For these inverse problems, simply switch all x's with y's and all y's with x's. Note that f(x) just means y here. OK, so you switched the x's and y's; now solve for y. That's it.
-Andy
Edit: And I almost forgot, check your restriction(s). Only God can divide by zero.